Great news for the AI providers: it turns out they are automatically turning their audience into captives who end up increasingly dependent on their product to get anything done.
It's probably even worse, because knowledge is being extracted out of all communities, and eventually the rug pull comes where you're denied access one way or another.
Not only do students now graduate only because of ChatGPT, but 10-year-old kids never build up an education while using AI to do their homework.
This might actually be a point for the need of sophisticated local AI.
mom: we have AI at home
the AI at home:
Right, soon we will be like those weirdos still running their own websites while everyone else is on Facebook, Reddit, and X (making low-effort comments complaining about a standardized set of topics, like who got banned and why).
Or like the devs still on IRC.
Of course they will take the www and Google away from us by replacing everything with AI slop.
SO answers will be like, did you ask Macro Banana 42?
"Hey kid, wanna try an LLM? First session is free"
It feels like every other convenience in modern life: we trade away some human ability for the value. Should you drive, or walk, or bike? In the US, most people drive and sit all day. Now we have fenced off part of our week for dedicated physical exercise to counteract the physical atrophy.
I agree in principle, although I personally consider mental atrophy to be far more serious than physical atrophy (and I already value physical fitness very highly!).
> It feels like every other convenience in modern life: we trade away some human ability for the value. Should you drive, or walk, or bike? In the US, most people drive and sit all day. Now we have fenced off part of our week for dedicated physical exercise to counteract the physical atrophy.
And arguably, our society has made a lot of bad choices about many "convenience[s] in modern life." For instance, cities should probably be designed to make you walk more by default, so healthy physical activity isn't turned into a chore you then have to have the discipline to do consistently.
Basically, collectively, we're stupid and unwise, picking short term convenience and neglecting the medium and long term, and we need to get better at that.
Link to the preprint paper: https://arxiv.org/pdf/2604.04721
Worth reading the conclusion - it makes a good point or two regarding the cumulative effect of using AI: not only the loss of learning through struggle and time, but also the loss of a reference point for how long tasks should take without AI (e.g. we are no longer willing to afford the time to learn the hard way, which will notably impact the younger generation).
> "People’s persistence drops."
Has anyone else noticed this, as they've scaled up their AI coding use? I've found it harder to stay on task, and it's affected a broad range of my personal activities. I'm able to make incredible things happen with AI tools, but do worry about the personal costs.
I have, absolutely, as I'm trying to learn the fully agentic style of development to keep up with the pace that a couple colleagues are setting.
In that style of working, spinning up multiple parallel workstreams appears to be the highest-output strategy. So now I'm practicing rapid context switching: jumping from virtual desktop to virtual desktop, and even adding monitors to my desk to keep tabs on more workstreams.
In my home life, I've observed myself wandering off mid-task (reminder to self: the eggs on the stove DO NOT have the ability to wait idly for your next input), or pausing to make an unrelated voice note mid-conversation with a loved one (which does NOT feel good to anyone involved...)
I suspect I can get better as I learn more skills and practice. For example, there are people great at both the hours long tournament chess format, and the 2 minute bullet chess format.
But the fact that I went so quickly from being top-tier at long-term focus to not very good at focusing on anything gives me real pause...
I think I'm more able to stay on task - when there is something hard I don't want to do I just tell the AI to figure it out. Previously I would find any excuse to procrastinate. For that matter while the AI is "thinking" I can read a book (unrelated fiction), but I'm still on task because the work is getting done.
It predates LLMs though. It's after work and you're hanging out with friends, and someone asked about that one actress from that one thing. Do you struggle and think real hard and pull a name out of your brain with a bunch of effort, or do you just look it up in IMDB?
Working with AI just feels like having a team of junior employees.
Is this the same effect that causes managers and people in power to sometimes become... (for lack of a better phrase) stupid and crazy?
edit: Everyone is responding to the "junior" part of my comment without addressing the actual question I'm asking. I should have just said "employees" -- Sorry.
It doesn't. Juniors are generally SLOW because they are soaking up information and constantly learning. However, that slowness is what lets them learn how to work through difficult problems, and how to communicate when they can't achieve their goals.
I think LLMs are a big problem for the development of junior devs (pun intended).
I train up beginners pretty regularly and this is not a good analogy.
I honestly detest the junior employee analogy, AI is not and will never be like working with actual humans.
Agreed, and I feel like it was pretty rare to distinguish junior devs before LLMs; we just used to talk about devs and senior devs. Then we needed a way to make sure it's understood that WE understand how dumb an LLM can be, so "junior" smashed its way into the discourse.
If anything, it's more like an over-enthusiastic intern who'll go way down a rabbit hole of self-doubt and overengineering while you're away at a conference for 3 days.
I guess-- it feels like a junior dev in the sense that it has terrible self-direction, but is fully capable at the actual act of coding.
Right. Working with junior devs should include teaching, which reinforces thinking and problem-solving fundamentals.
How about "Working with AI just feels like having a team of junior employees who are completely unscrupulous, sycophantic, and sometimes profoundly stupid psychopathic liars"?
A team of fresh slaves.
That you can be utterly awful to, and they won't quit or call in sick. They'll never show up to work hung over, or have a relative who needs surgery so they need an advance on their pay, and they're never emotional because their partner of seven years broke up with them and their dog and cat and pet rabbit died. They'll never go to HR because you sexually harassed them. They'll work on your schedule and are available - in your house, in your bed, at 4 am when inspiration hits and you pull out your laptop.
So what if they lie every once in a while?
I already had the impression that auto-complete was bad for programmers, since I've seen coders many times brute-force it until they found something that looked like it would do.
With AI I've also witnessed people go crazy going back and forth without even looking carefully at the code (or the compile messages) to figure out what was missing.
I'm pretty sure nobody will read the docs now.
This is why everyone needs to implement "Rawdog Thursdays" as I call it, in which you write code without the assistance of AI (i.e., you are "rawdogging" your professional output).
How about you take it even further and implement "Rawdog Weeks", where every day of the week you write code without the assistance of LLMs, and you repeat that every week? That way you won't be able to develop any kind of dependence.
I use ChatGPT instead of googling. I honestly don't think this is necessary at all. The job has changed; we have a new tool in our arsenal.
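The idea could even be automated. A minimal sketch, assuming a hypothetical `AI_ASSIST` environment variable that your editor or tooling respects (neither the variable name nor any particular plugin comes from this thread - adapt it to whatever switch your setup actually has):

```shell
#!/bin/sh
# "Rawdog Thursday": decide whether AI assistance should be on,
# given an ISO weekday number (1 = Monday ... 7 = Sunday).
ai_assist_for_day() {
  if [ "$1" -eq 4 ]; then
    echo "off"   # Thursday: no AI assistance
  else
    echo "on"
  fi
}

# Export the setting for today; `date +%u` prints the ISO weekday.
export AI_ASSIST=$(ai_assist_for_day "$(date +%u)")
echo "AI assistance is $AI_ASSIST today"
```

Sourcing something like this from your shell profile would flip the switch automatically, without relying on Thursday willpower.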
I love coding for problem solving, and can do problems in my spare time. However, lately code is just work for me. It pays the bills.
I'm taking the radical approach of starting with the problem and finding a solution rather than start with a solution and hit all your problems with it.
LLMs have yet to feature.
Did people forget that practice makes perfect? The best way for someone to level up is to go get their hands dirty and dive into everything themselves.
One of my math teachers said that practice doesn't make perfect. Practice makes permanent. You can practice and reinforce the wrong thing until that's all you know.
I do it for around 20% of my PRs. However my employer is complaining that my numbers are below their 100% target. So I am being penalised for trying to keep my skills up.
How are they measuring it? My boss gave me a hard time because I wasn't using enough of my token budget. How do they know what percentage of your pull request was AI written?
Your employer is a fucking moron.
Tell me something I don’t know
Businesses can remain irrational for longer than you can stay solvent.
My life has mostly been making that as not true as possible :)
I can't wait to be one of the last thinking humans.
"you are the 10 people you talk to the most" -- not always true, but broadly
now imagine if most of them are using AI
So I guess when employers force AI use on their developers, those developers progress toward worthlessness: they will produce wrong code, not know the difference, not care about the resulting harm, and finally not even try to course-correct if the AI is removed.
This sounds like something I have seen before: jQuery, Angular, React.
What the article misses is the consequence of destroyed persistence. Once persistence is destroyed people tend to become paranoid, hostile, and irrationally defensive to maintain access to the tool, as in addiction withdrawal.
I personally find that LLMs help me conserve my mental energy to later put into more (personally) fruitful endeavors. Instead of being too tired to contribute to OSS, write, or do other things at the end of the day, I find I can leave more juice for after work hours. Or, just at work, I can move faster, so I put that extra time and energy into stuff like Anki, upskilling, etc.
As with anything, I believe the dose makes the poison. I still find myself thinking about the high-level design and decisions, but I spend less cognitive load on library and implementation specifics, which I can offload elsewhere.
Source: https://arxiv.org/abs/2604.04721 (https://news.ycombinator.com/item?id=47682908)
Reminder: Human cognition is complex and determining whether something is "good" or "bad" won't come from 1 or 2 studies.
Point for discussion: We know that task and context switching imposes substantial cognitive costs, leading to lower and slower performance for a time. I think it may be reasonable to hypothesize that interacting with an LLM to solve tasks tends to focus the brain at a more strategic level: What do I want to solve? What is my goal? Actually solving individual problems is very different - it is more concrete and mechanistic, requiring a different mode of thought. Switching from the former to the latter is a cognitive task switch, where the context changes, and resetting into the new context takes time and imposes costs. Unless they had a control arm that imposed a task-switching cost...
Interesting. Seems analogous to the atrophy of navigation abilities caused by over-reliance on GPS. I wonder if there's a common underlying mechanism.
I sent the study to ChatGPT for analysis and it told me not to worry about it so I'm not gonna.
And the amount of people that can recite Homer by heart has collapsed since writing came along.
And now the number of people who can read Homer and study it is dropping to zero. They just want the summary notes, without any deep thought or the reward that comes with it.
All fun and games until the first time someone successfully sues an employer who mandated it and wins a mental health claim.
The moment that happens, insurance flips tables, OSHA starts asking if they need exposure controls, and employers back down.
And that’s the good scenario! The bad scenario is an employer mandated it, and someone mentally declined to the point they committed a public act of violence.
And the last piece of the remaining work moves to a place with less strict mandates.
Related: https://metr.org/blog/2026-02-24-uplift-update/

> Unfortunately, given participant feedback and surveys, we believe that the data from our new experiment gives us an unreliable signal of the current productivity effect of AI tools. The primary reason is that we have observed a significant increase in developers choosing not to participate in the study because they do not wish to work without AI, which likely biases downwards our estimate of AI-assisted speedup.

This was a huge red flag! Within a year, a large majority of devs became so whiny and lazy that METR couldn't fill the "no AI" bucket for their study - it's not like this was a full-time job, just a quick gig, and it was still too much effort for their poor LLM-addled brains. At the time I thought it was a terrible psychological omen.
I am so glad I don't use this stuff.
The next study will be just to ask and measure how fast they run away.