This was my experience as well trying to buy a charger. You can't trust anything. For brands that have their own online store, some offer such a bad experience that it's easier and less stressful to go to a physical store and buy directly from there.
I think it's clear to me that AI will be both things:
1) as in the article, a contraction of work: industrialization getting rid of hand-made work, or the contraction of all things horse-related when the internal combustion engine came around
but it will also be
2) new technologies and ideas enabled by a completely new set of capabilities
The real question is whether the economic boost from the latter outpaces the losses from the former. History says these transitions aren't easy on society.
But also, the AI pessimism is hard to understand in this context: do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?
> do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?
The cost cutting is the only revenue-producing model for the AI companies so far. It's being pitched as a way for corporations to fire a lot of employees and save money.
Revenue for the consumer facing products is not very impressive. Consumers are mostly satisfied with the free versions and very resistant to adding yet another channel to shove advertising at them.
Well this is HN so a lot of us are pretty terrified of your 1). We went from 'you have a good job for the next couple of decades' to 'your job is at extreme risk of disruption from AI' in the space of like 5 years. Personally I have a family, I'm a bit old to retrain, and I never worked at a high-comp FAANG or anything, so I can't just focus on painting unless my government helps me (note - not US/China). That's extremely anxiety-inducing, and a vague promise of novel new things does not come close to compensating for it.
I'm 33 and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared that within the next 5 years or so (and potentially much less) I'll probably need to retrain into a trade or something to stay relevant in any sort of field.
Many people claim it's going to become a tool we use alongside our daily work, but it's clear to me that's not how anybody managing a company sees it, and even the AI labs that previously tried to emphasize how much it's going to augment existing workforces are now pushing being able to do more with less.
Most companies are holding onto their workforce only begrudgingly while the tools advance and they still need humans for "something", not because they're doing us some sort of favor.
The way I see it unless you have specialized knowledge, you are at risk of replacement within the next few years.
I also have contemplated just retraining now to try and get ahead of the curve, but I'm not confident that trades can absorb the shock of this - both in terms of supply (more unemployment) and demand (anything non-commercial will be hit by capital flight on the customer-side). I figure I will just try and make as much money on a higher wage as I can and hope for the best...
Well, it really isn’t. First, this entire post makes two assumptions: 1) that AI adds more value to the process than it removes and 2) that it’s sustainable.
It’s not pessimism to want to validate these first.
Are AI “gains” really transformative or simply random opportunities for automation which we can achieve by other means anyway?
Can the world continue to afford “AI as a service” long enough for the gains to result in improvements that make it sustainable? Are we dooming our kids to a hellishly warm planet with no clear plan how to fix it?
It’s not pessimism, just simple project management if you ask me.
Hard to understand, when essential human nature is so predictable? Sure, we will do novel things with it. But society in the main will use it to exploit labor. Same as it ever was.
That's a false dichotomy. Capitalism was good for artisanal workers before the industrial revolution, and then it became pretty goddamn bad for them. We're worried we're staring down the barrel of that right now - just saying 'well it was even worse before capitalism' does nothing for us.
Cost cutting has less uncertainty than making something new, so they do that first. If something else comes along, then great.
This is also why people should make the transition as difficult as possible for companies doing layoffs, when those companies are paying proportionally very little in taxes compared to the people they are laying off.
> do people really believe no novel things will be unlocked with this tech?
Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist.
Give your best example of something that is novel, i.e. isn't just replacing existing processes at scale.
It's been 3 and a half years now since the initial hype wave. Maybe I genuinely missed the novel trillion dollar use case that isn't just labor disruption.
I think that most people are pretty short-sighted about the utility cases right now (which is understandable given the negative feelings about a lot of what's currently going on).
There are a lot of really useful things that were impossible before. But none of these use cases are "easy," and they all take years of engineering to implement. So, all we see right now are trashy, vibe-code style "startups" rather than the actual useful stuff that will come over the years from experienced architects and engineers who can properly utilize this technology to build real products.
I'm someone who feels very frustrated with most of the chatter around AI - especially the CEOs desperate to devalue human labor and replace it - but I am personally building something utilizing AI that would have been impossible without it. But yeah, it's no walk in the park, and I've been working on it for three years and will likely be working on it for another year before it's remotely ready for the public.
When I started, the inference was too slow, the costs were too high, and the thinking-power was too poor to actually pull it off. I just hypothesized that it would all be ready by the time I launch the product. Which it finally is, as of a few months ago.
With this said, a lot of people are likely worried about being eaten by whales when it comes to doing things with AI.
It's kind of like dealing with Amazon, or any other company that has both compute and the ability to sell the kind of product you make.
Said AI providers can sell you the compute to make the product, or they can make the product themselves with discounted compute and eat all the profits you'd make.
The most obvious thing is bio-tech, protein folding, drug discovery, etc. As in, things that have an actual positive effect on humanity (not just dollars).
I don't really get people who are dismissive about this aspect of AI: my original question wasn't about the cost-efficiency of developing these things, but about the fact that the technology itself is creating things that wouldn't have been possible before. It seems hard to refute.
Whether or not it's worth the cost is a different debate entirely- about how tech trees are developed and what the second order effects of technology are. There are so many examples- the computer itself, nuclear power, etc. I think AI is probably on the same order as these.
Correct me if I'm off base but these things (protein folding and drug discovery) both existed before AI, no?
The implication of your comment seemed to be that this was going to be so much more than replacing people. But I fail to see how any of the items you listed are anything other than that.
These things have always been possible. Just slow and limited by labor. Which is the primary and novel "unlock" of AI.
You can argue it's a good thing, and in many areas I'd probably agree. I'm directly responding to your skepticism and implied absurdity that replacement is the main unlock here. It absolutely is.
If you're implying that hand-spun cotton is better, that's an easy question to answer: people used to spend a huge amount of their income on clothing, and also a huge amount of time washing it. Industrialization made clothing so much cheaper that it's now completely disposable. There are plenty of reasons why that's not a bad thing.
One reason people forget that "good quality" shoes existed is that you could only afford to buy one pair, ever - not necessarily that things were made better. (Or it could be both, but replacing a pair of shoes was a financial hardship, because hand-made things, even back then, were expensive.)
Even if you're against fast fashion, I don't think anyone wants a pair of shoes to cost $10,000.
Define better. Fast fashion sucks, but hand-spun cotton won't give you Kevlar or modern wind-resistant clothing or fireproof materials for your furniture or... <insert half thousand different things adjacent to modern textile production>.
It's always win some, lose some with the economy, but technology itself opens previously impossible capabilities.
It used to open them to most of the population - at least that was the ideal for a couple of decades - but now it seems to be opening them to oligarchs more than workers.
It's essentially a political energy source. It heats everything up.
Eventually it either explodes, goes through a phase change to a new (meta)stable state, or collapses back to a previous state.
I view this post as primarily pattern-matching and storytelling. But I think there’s a buried truth there, and that they were nibbling at the edges of it when they started talking about the overlapping stages.
There are some very interesting information network theories that present information growth as a continually evolving and expanding graph, something like a virus inherent to the universe’s structure, as a natural counterpoint to entropy. And in that view, atomic bonds and cells and towns and railroads and network connections and model weights are all the same sort of thing, the same phenomenon, manifesting in different substrates at different levels of the shared graph.
To me, that’s a much better and deeper explanation that connects the dots, and offers more predictive power about what’s next.
Highly recommend the book Why Information Grows to anyone whose interest is piqued by this.
The lack of robotics mention somewhat undermines this article.
I don't think it's intrinsically wrong, we are in a late stage of a transformation. Software is eating the world and AI is (so far) most profitably an automation of software.
There is plenty of money to be made along the way. I don't really buy the article's seeming confusion about where the money is going to come from. Anthropic is making billions and signing up prodigious amounts of recurring revenue every month.
The question is whether robotics will look like a some number of platforms with little development to adapt to different scenarios, or a million types of machines that are highly fit for purpose.
Because the first situation won't create that many jobs. The second one might.
I expect hybrids. Something general has to be adaptable for what will be an expensive capital purchase.
The human form factor - torso up anyway - is probably easier to bootstrap on a general basis; keyed off of human data. But I don't like the failure modes of bipedal robots - imagine a robot flailing around trying to regain balance, in any setting with humans around.
Anthropic today, who next week? If locally run models ever get to the point where they can reliably solve... 85% of what the frontier cloud models can do, I think many would be willing to accept slightly less problem solving ability and just run the thing locally.
All hypothetical, but if compute + AI research continues at pace, in 5 years we should see extremely good local models.
tangentially related, but as someone who built multiple internet businesses -- mostly unsuccessful, some mildly successful -- I barely have any new ideas to work on.
I don't know if this is the effect of relying on AI too much in my day-to-day work or leading a more monotonous life as of late, but I'm sure I'm not the only one. Lots of ideas that I could have built before LLMs took over now seem trivial to build with Claude & friends.
I can relate to this, in the past I felt like I could write down pages of projects to try if only I had time. Now my mind immediately goes towards "do I want to manage this long term after the initial spark".
It seems really premature to talk about AI being the end of anything. What’s at an end stage is adoption of smart phones and monetizing human attention. That’s been the fuel that powered the last quarter century of tech gains, and while still huge in absolute terms it has been running out of steam as a growth engine and facing cultural pushback (eg. Social media lawsuits) for a while.
AI so far has really only shown massive utility for programming. It has broad potential across almost all knowledge work, but it’s unclear how much of that can be fulfilled in practice. There are huge technical, UX and social hurdles. Integrating middle brow chatbots everywhere is not the end game.
The question it raises: if this is the fake surge, the one we see, what is the real one we don't see? Renewable energy comes to mind. Robotics too, but maybe that's too tied up with AI.
Eh, robotics is going through explosive growth right now with the same computing power that's being used on LLMs. You can take human motion capture of a task, dump it in a robotics simulator for a few hours and get a model that can operate autonomously better than something that would have taken half a year to teach just a few years back.
Space (SpaceX showed that reusable rockets are feasible), programmable health (the Covid vaccine, and remember that mRNA treatment curing that dog?), etc.
Sadly, I think there's a risk we might also be heading towards a dark age with few advances, since fundamental research has been squeezed out for being unprofitable or hobbled by an industrialized publishing/review system for a while now, and we've been coasting along on profitable applications rather than (expensive) breakthroughs in the basics.
I firmly believe that renewable energy, the solar+battery+EV stack, not LLMs, really is the biggest technology transformation of our times. Renewable energy really is surging; it's just on a longer timeline, and unlike LLMs it doesn't benefit venture capitalists to hype it. In fact many existing sectors deliberately downplay it. But we are in the middle of it.
Robotics? lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing".
mRNA vaccines? Sure, they're a huge medical advance. With great potential, in that area. But it's just an area.
Space? Maybe, if we get past LEO, find something useful to do there, and don't succumb to Kessler syndrome.
>Robotics? lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing".
Eh, I do think this is kind of underestimating the changes in robotics that are occurring. LLMs incorporated with other ML kernels extend the capabilities a long way. That and the amount of computing power now usable to train robotics is far far larger.
If this lasts to the point where AI has actual automation ability, it's not a tool for humans anymore. It could have an identity and start to evolve, literally.
I don't understand why some people consider AI to be just a tech revolution.
Maybe I'm into SF, but AI can be something other than just a tool.
I sort of agree with the premise of the article. I ask myself: did more non-technical people pick up AI chatbots when they were invented than picked up personal computers in the late 70s/early 80s? I think probably, judging from my conversations with others.
The very first personal computers came out in 1972. In 1978, we got several. The PC came out in 1981. The computer boom didn't begin until 1992.
My wife is absolutely not technical, and she began using ChatGPT before me.
This is to say, I believe you to be correct here. The LLM adoption rate is many times the computer adoption rate. Non-technical people are immediately seeing the benefit of LLMs where they did not with computers in the 1970s.
Part of this is because we aren’t paying the actual cost of these chatbots. If ChatGPT wasn’t essentially free for casual users then we’d definitely see a much smaller/slower adoption rate. I wonder if a single person using them, even paying for tokens, isn’t substantially subsidized. Probably not but I’m speculating.
If 3D printers could’ve given usage away for years directly in our homes then I bet we would’ve seen wider adoption there too.
This Perez model thing completely misses the communications revolutions of the telegraph, radio and television, not to mention the demonopolization of Bell.
> Then came AI, revealing new dynamics. ChatGPT’s breakthrough didn’t come from a garage startup but from OpenAI,
I thought the transformer and large language models came from Google Research.
> There’s also social pushback—in the UK the campaigns against big ringroad schemes started in the late 1960s and early 1970s. And perhaps we’re seeing some of that about AI. The U.S. map of local pushback against data centres from Data Center Watch covers the whole of the country, in red states and blue. People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.
The US had the highway revolts. In most cities where the revolts succeeded, it is widely heralded today as a success.
The data center hate is interesting. I think many people are just learning what data centers are. But that said, they've come to represent something different in recent years: previously they were part of the infrastructure that made industry hum, but now public messaging from tech leaders and academics is along the lines of "this is how your livelihood is going to be replaced", while the institutions that are supposed to provide any sort of backstop are being dismantled or slashed to pieces by crazypants Trumpist politics. I think focusing the energy on something tangible like mundane buildings is interesting, but the hate makes a lot of sense.
Addressing the core thesis, I'd argue that AI is not the next step in the 70s digital technological wave (especially considering the future of AI compute is probably hybrid digital-analog systems), but rather something fundamentally new that also changes how technology interacts with society and how economics itself will function.
Previous systems helped; these systems can do. That's a fundamental change, and one that may not be compatible with our existing economic systems of social sorting and mobility. The big question in my mind is: if it succeeds, will we desperately try to hold onto the old system (which would essentially be a disaster that freezes everyone in place and creates a permanent underclass), or will we evolve to a new, yet-to-be-defined system? And if so, how will the transition look?
Every time I see these I think to myself: is Microsoft Copilot a problem of implementation, or of the capability of the models?
I have ZERO doubt that if you put people who have never used a computer in front of one, with Copilot everywhere (and I mean not the way it is now, but a chatbox in the middle of the screen where you just ask the computer what you want), 99.99% of them would prefer that chatbox to trying to figure out how to use a computer. Which is why I am not quick to discredit "microslop"; they're most likely pivoting Windows toward how it will look in the future.
Obviously, the strongest argument here is that it should have been an entirely different product, such as a "Windows AI" where the entire system is designed around it. But if you look at their current implementation it's more of a copilot which is just there, letting you know it exists. Obviously not all of these features were thought through, Recall being one; that should have been dead and buried, since it doesn't offer much real value compared to a magical box that takes in English sentences and does roughly what you want.
At the end of the day it's a question of whether AI will do, or is doing, more harm than good. AI has really only existed in this form for a little more than 3 years and really started shining with the advent of Opus 4.5. We went from models producing more security vulnerabilities than one can count to fixing obscure human-made ones, and the capabilities will keep increasing (if Anthropic is to be believed). We will enter an era where it will have 95%+ accuracy in doing what a typical computer user would want from AI, and there's really nothing anyone can do to stop it.
So my opinion is that AI will be the next big thing and it might spread way beyond what we can even imagine.
I think we will have things like non-technical people just talking on the phone with an AI agent to get a website done: register a domain and have the site up within a one-hour phone call, all for pennies, while the AI has access to their financials, mail and other things. All of that is relatively possible today with the simple caveat of security, and I do believe we have enough smart people in the world who can figure out how to make AI better at rejecting social engineering than 99% of humans.
> I have ZERO doubt that if you put people that haven't used a computer in front of one ... presented with a chatbox in the middle of the screen and you just ask the computer what you want I am 99.99% sure that everyone would prefer to use that chatbox
I don't know. We've been telling ourselves things like that about user interfaces for a long time. For decades, it was pretty much universally understood that everyone would prefer to talk to their computer instead of using a keyboard. Now that you can, no one really wants to. In fact, now that we can text / email / IM other people, we don't talk to them as much as before.
One obvious problem with the interface you're proposing is that sometimes, it's easier to do the thing than to explain precisely what you want. For example, it takes much longer to ask ChatGPT what's the weather forecast for this week, and then read the flowery response, than to press Ctrl-N, "wea", enter, and see it at a glance in a consistent format with pictograms.
You already know how to use a computer or a phone, but take someone who has never seen or used a smartphone, computer or a laptop. I think the story will be very different.
I don't know. In a vacuum, if we prevent them from ever finding out that there's a faster way with less cognitive overhead? Sure. Until they have to explain to an agent precisely which shoes they want the AI agent to buy them...
In any case, in practice, people pick up stuff from each other. I'm old enough that learning to use the computer mouse needed to be a deliberate effort on my end. I never really had to "teach" that to my kids, they just picked it up naturally. So you might even have a difficulty producing that "computer-naive" subject in the first place.
It's better to look at these things statistically rather than anecdotally. And statistically the Xennial group seems to have the highest penetration of computer skills, even more so than the generations that followed them. Simply put, the new tablet generation is more apt to use apps and not understand the premises of how they work.
If you find yourself going to an actual computer to make 'large' purchases you're part of a group that is not growing in size.
The theory doesn't seem to make much sense to me - like why can't there be simultaneous technological revolutions? And why would they last an arbitrary 50-60 years?
> People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.
That could do with a solid citation tbh. The anti-AI people are really vocal on social media but personally I like having the AI results given how awful navigating the modern internet has become with all the cookie banners and anti-Ad Blocker popups etc.
Honestly, the LLMs seem like the most transformative technology we've had since the release of the iPhone.
50-60 years is far from arbitrary: it's very roughly two generations (plus a bit of extra time, to ensure the process takes). 50-60 years gives enough time for a generation to grow up and reach adulthood who have never known anything other than the post-revolution state.
AI is destroying the economic premise that has drawn so much investment into Silicon Valley. It's going from a capital-light business model with network-driven moats that allow market domination, to a capital-heavy, high burn-rate model with the potential not only to offer ZERO moat protection but to destroy the moats that already exist. Cloud infrastructure + vibe coding now make it possible to quickly replace existing apps with custom-fit alternatives. Open source + cheap Chinese LLMs may not be as good as Opus, but maybe good enough turns out to be good enough (Sun Microsystems vs. Linux is a good example). Currently AI has just as much potential to destroy Silicon Valley as it does to build it up.
I could totally see it; recently a social club opened near me and it has 100+ people attending weekly, all younger, 20-30 year olds early in their careers.
Separately, there's a local camera repair shop, and my friend told me it's a two-month backlog to get your film-based camera worked on.
Ultimately, if the deal we get online is infinite tracking, infinite scrolling and infinite enshittification, real life starts to sound a whole lot better.
Going to the local movie rental shop with my kids is the highlight of my week. What a bizarre sentence to write in 2026 but it’s absolutely 1000% better than modern streaming (outside of my Plex setup).
I gladly pay the (modest/token) late fees to help keep them open at this point. If someone set up a local arcade man…I’d be in heaven ha
> I gladly pay the (modest/token) late fees to help keep them open at this point
Keeping movies longer and paying late fees may be hurting them more than helping them. It's entirely possible that the late fees are underpriced to avoid scaring away customers. New customers going away disappointed that the movie they want wasn't returned on time hurts them more than your late fees help.
Not keeping them on purpose, I’m just not sweating the fee because I’m happy to pay them.
Additionally, the odds that my kids are holding on to exactly what somebody else wants in that timeframe is very small. It’s a small shop within a larger co-op situation with a modest following and pretty substantial stock. I know for instance we’ve never had an issue of wanting something that was rented.
Has it happened? Maybe. But the fees I’ve paid probably net positive against that rare instance. They aren’t open half the week so I can’t return them once Monday passes for several days anyway. Owner certainly hasn’t expressed concern and has even waived the fee before because clearly it’s of little consequence.
Introduction of new mass production techniques often has an initial wave of high profit when early adopters have an initial advantage... existing workers are more efficient... but this will be followed by a long-term decline in the rate of profit as margins aggressively fall ...
e.g. if every software company uses AI to double its coding speed, the price of software will eventually drop by half.
As "AI" becomes a required and common commodity input, competition will drive prices down until the productivity gains are entirely captured by customers, leading to margin compression across the sector.
Also... firms will be forced to invest in using AI just to stay in the same place. If you don't adopt it aggressively, you'll be priced out; if you do, your margins still shrink because everyone else did too.
So... yeah, I don't think this is the next part of a "digital wave", if that means a giant increase in new startup investments and SaaS companies etc; it's actually probably the start of a margin collapse and consolidation in our industry, I think.
If it's 2x easier to build e.g. a CRM, we’ll end up with 10x more CRMs, leading to a "race to the bottom" on pricing.
The last 15 years of investment by people like YC etc seems to have been in businesses that were "like Uber but for <X>". Service businesses on which a small layer of software automated things, and drove some sort of explosion of customers. I don't really see how VCs are going to separate wheat from chaff on this front anymore? If anybody can do it.... what's the value of any particular approach over the others? I'd think the result would be consolidation?
So I suppose if you're selling "the means of production" in the form of GPUs you're in a good spot, but even that is likely to be subject to aggressive downward pricing.
I had to code something on a plane today. It used to be that you couldn't get your packages or check Stack Overflow. But now, I'm useless. My mind has turned to pudding. I cannot remember basic boilerplate stuff. Crazy how fast that goes.
All skills degrade with disuse. For example, here in Canada we have observed a literacy and numeracy skills curve that peaks with post-secondary education and declines with retirement.[0]
Use it or lose it, as it were.
0: https://www150.statcan.gc.ca/n1/daily-quotidien/241210/dq241...
In my 7th year of professionally programming Node, not once have I remembered the Express or HTML boilerplate, nor the router definitions or middleware. Yet I can code normally provided there's internet access. It's simply not worth remembering; logic and architecture are worth more IMO.
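(For reference, a minimal sketch of the kind of Express boilerplate being described, assuming a stock Express setup; the route path, port and handler are placeholders, not anything from the comment above.)

    // app.js - minimal Express boilerplate (illustrative only)
    const express = require('express');
    const app = express();

    // middleware: parse JSON bodies, then log each request
    app.use(express.json());
    app.use((req, res, next) => {
      console.log(`${req.method} ${req.url}`);
      next();
    });

    // router definition mounted under a path prefix
    const router = express.Router();
    router.get('/hello', (req, res) => res.json({ msg: 'hello' }));
    app.use('/api', router);

    app.listen(3000, () => console.log('listening on 3000'));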
Einstein famously refused to learn people's phone numbers, stating that he could look them up in the phonebook whenever he needed it.
I don't think there is that much value in memorizing rarely used, easily looked up information.
Agreed, it interests me how much some people emphasise knowing facts - like dates in history or dictionary definitions of words.
Facts alone are like pebbles on a beach, far better (IMO) to have a few stones mortared with understanding to make a building of knowledge. A fanciful metaphor but you know ...
I thought this comment was going the opposite way - previously no internet/googling but now you can run a local model and figure things out without the need for internet at all
Mine as well. 2 years ago my mind was blown that I could code in a language I didn't know (Scala) while on a long train ride with no internet (Amtrak), using a local model on a laptop. Couldn't believe it.
The staggeringly effective compression of LLMs is still underappreciated, I think.
2 years ago you had downloaded onto your laptop an effective and useful summary of all of the information on the Internet, that could be used to generate computer programs in an arbitrarily selected programming language.
Yes! Continuing on thoughts of LLM compression, I'm now convinced and amazed that economics will dictate that all devices contain a copy of all information on the Internet.
I wrote a post about it: Your toaster will know Mesopotamian history because it's more expensive not to.
https://wanderingstan.com/2026-03-01/your-toaster-will-know-...
It was a long time ago, but I attended a session by IBM at an OO conference. The speaker's claim was that the half-life of programming language knowledge was 6 months, i.e. if not reinforced, that's how fast it goes.
I learned the Q array language five years ago and then didn't touch it for six months. I was surprised how little I remembered when I tried to resume.
Maybe it's my memory issues, but I personally could never remember basic boilerplate. 30 years ago I would spend half of my time in Borland's help menu coupled with grepping through man pages. These days I use LLMs, including ollama when on a plane. I don't feel worse off.
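(A sketch of that offline setup, assuming a locally running Ollama server on its default HTTP endpoint and a model that was pulled before the flight; the model name and prompt are placeholders.)

    // Query local Ollama; works without internet once the model is on disk.
    async function askLocal(prompt) {
      const res = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ model: 'llama3', prompt, stream: false }),
      });
      const data = await res.json();
      return data.response; // the generated text
    }

    askLocal('Show me a minimal Express server.').then(console.log);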
Will you do anything differently knowing this? Does the risk of LLMs being unaffordable to you in the near future make you wary about losing the skills?
Open models are currently within reach for most of the kind of writing I do. I still decide what and why it generates what it does, I just don't do it manually.
I'm not super worried; either I still do the last leg of the work, or I go back an abstraction level with my prompts and work there.
I don't think it would take very long to regain those skills either.
This conversation keeps missing me because I don't think I've typed out boilerplate in like 20 years.
Were people actually physically typing every character of the software they were writing before a couple of years ago?
I haven't written complex code for so long I forgot how I used to type && on my keyboard. Wild times.
Soon everyone will run local models for simple stuff like that.
Others have addressed other aspects of this, but I want to address this:
> I cannot remember basic boilerplate stuff.
I don't know exactly what you mean by boilerplate stuff, but honestly, that's stuff we should have automated away prior to AI. We should not be writing boilerplate.
I'd highly encourage you to take the time to automate this stuff away. Not even with AI, but with scripts you can run to automate boilerplate generation. (Assuming you can't move it to a library/framework).
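(As a hedged illustration of that kind of non-AI automation: a tiny Node scaffold script that stamps the boilerplate out once, so nobody has to remember it. The file names and generated contents are just placeholders.)

    // scaffold.js - generate boilerplate with a plain script, no AI involved
    const fs = require('fs');

    const name = process.argv[2] || 'my-app';
    fs.mkdirSync(name, { recursive: true });

    // the boilerplate lives here, in the generator, not in anyone's head
    const appJs = [
      "const express = require('express');",
      "const app = express();",
      "app.use(express.json());",
      "app.listen(3000);",
      '',
    ].join('\n');

    fs.writeFileSync(`${name}/app.js`, appJs);
    console.log(`scaffolded ${name}/app.js`);

Run it as `node scaffold.js my-app` whenever a new project needs the same starting point.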
So many use cases for LLMs I've read leave me asking "did none of you have a working text editor?"
Jeez, I never remembered boilerplate stuff anyway. Losing grasp of your commonly used, slightly more involved code idioms in your key languages would probably be where I’d draw the ‘be concerned’ line. Like if I get into a car after years of only using public transit, I wouldn’t be too worried if I couldn’t immediately use a standard transmission smoothly. If I no longer could intuitively interact with urban traffic or merge onto a highway, I’d be a lot more concerned.
Lisp macros pretty much solved the boilerplate problem decades ago.
I read the "boilerplate" in that comment as "basic" meaning "I don't know how to center a div" or "I do not know how to remove duplicates from a collection"
Does anyone know how to centre a div?
Last time I looked there were at least seven ways to do it.
Well, both of them are easily retrieved from web search; it's not a problem if you forget one or two. I'll probably need a refresher if I want to implement bubble sort again.
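(For anyone who also needs the refresher, a quick sketch of both examples mentioned above: dedup with a Set, plus the bubble sort.)

    // remove duplicates from a collection
    const unique = [...new Set([3, 1, 3, 2, 1])]; // [3, 1, 2]

    // bubble sort refresher: repeatedly swap adjacent out-of-order pairs
    function bubbleSort(arr) {
      const a = [...arr];
      for (let i = 0; i < a.length - 1; i++) {
        for (let j = 0; j < a.length - 1 - i; j++) {
          if (a[j] > a[j + 1]) {
            [a[j], a[j + 1]] = [a[j + 1], a[j]];
          }
        }
      }
      return a;
    }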
If this was me you couldn't waterboard this info out of me.
Why? Is this because of shame or fear of losing your job?
Because the info is no longer in their brain.
Because it's incredibly embarrassing to admit you can no longer do very basic programming tasks as a "professional" in that field.
Really? How long have you been a developer? I've been almost exclusively doing "agent coding" for the last year plus some months, and been a professional developer for a decade or something. I tried just now to write some random JavaScript, C#, Java, Rust and Clojure "manually", and it seems my muscle memory works just as well as two years ago.
I'm wondering if this is something that hits new developers faster than more experienced ones?
Probably depends on the individual. Senior developer here and I've always offloaded boilerplate and other "easy to google" things to search engines and now AI. Just how my brain and memory work. Anything I haven't used recently isn't worth keeping (in my subconscious mind's opinion anyway).
Yeah, having to look up the "basic boilerplate" stuff is not worse for me after starting to use AI than it was beforehand.
Experience isn't the problem. I have 20+ years of C++ development, built commercial software in Java, Rust, Python, played with assembly, Erlang, Prolog, Basic.
Played with these coding agents for the last couple weeks and instantly noticed the brainrot when I was staring at an empty vim screen trying to type a skeleton helloworld in C.
Luckily the right idioms came back after a couple of hours, but the experience gave me a big scare.
Same for me. Been fully agentic for half a year or so, still remember the myriad of programming languages and things just as well if there's no AI present at all. Hard to shake 15 years of experience that quick, unless maybe that experience never fully cemented?
Maybe the difference between actually knowing stuff vs surface level? I know a lot of devs just know how to glue stuff together, not really how to make anything, so I'd imagine those devs lose their skills much faster.
> I'm wondering if this is something that hits new developers faster than more experienced ones?
Almost certainly, at least according to Ebbinghaus' forgetting curve.
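(For reference, the usual simplified form of that curve is exponential decay: retention R(t) = e^(-t/S), where S is memory stability. Repeated reinforcement raises S, which is roughly why newer, less-reinforced knowledge fades faster than long-practised knowledge.)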
> random JavaScript, C#, Java, Rust and Clojure "manually"
Right, sounds very credible to me. What did you write, an addition function in each of those?
I can tell you that I can still code Python and Haskell just fine (I did those in vim without bothering to set up any language assistance), but Rust I only ever did with AI and IDE and compiler assistance.
It's a side effect of using AI.
People using AI for tasks (essay writing in the MIT study linked below) showed lower ownership, brain connectivity, and ability to quote their work accurately.
> https://arxiv.org/abs/2506.08872
There was a MSFT and Carnegie Mellon study that saw a link between AI use, confidence in one's skills, confidence in AI, and critical thinking. The takeaway for me is that people are getting into "AI take the wheel" scenarios when using GenAI and not thinking about the task. This affects novices more than experts.
If you managed to do the critical thinking, and had committed sufficient code to muscle memory, perhaps you aren't as impacted.
It's probably too much inside baseball to merit a study, but I'm curious if the results would change for part-time coders. When I'm not coding, I'm writing patents, doing technical competitive analysis, team building, etc.
My theory is that if you're not full-time coding, it's harder to remember the boilerplate and obligatory code entailed by different SDKs for different modules. That's where the documentation reading time goes, and what slows down debugging. That's where agent-assisted coding helps me the most.
SDKs and binary format descriptors are where I see agents failing the most; they are typically acceptable for the happy path but fail at the edge cases.
As an example, I have been fighting with agents rewriting or removing guard clauses and structs when dealing with Mach-O fat archives this week. I finally had to break the parsing out into an external module and completely remove their ability to see anything inside that code.
I get the convenience for prototyping and throwaway code, but the problem is when you don’t have enough experience with the quirks to know something is wrong.
It will be code debt if one doesn’t understand the core domain. That is the problem with the confidence and surface level competence of these models that we need to develop methods for controlling.
Writing code is rarely the problem with programming in general, correctness and domain needs are the hard parts.
I hope we find a balance between gaining value from these tools and not just producing a pile of fragile abandonware.
> [...] and ability to quote their work accurately.
I guess that's an advantage? People shouldn't have to burden their memory with boilerplate and CRUD code.
The task was essay writing, and the three groups were no tools, search, and ChatGPT.
The people who used ChatGPT had the most difficulty quoting their own work. So not boilerplate or CRUD - but yes, the advantage is clear for those types of tasks.
There were definite time and cognitive effort savings. I think they measured time saved, and it was ~60% time saved and a ~32% reduction in cognitive effort.
So it's pretty clear, people are going to use this all over the place.
I think your environment plays a big role. With AI you can kind of code first, understand second. Without AI, if you don't fully understand something then you haven't finished coding it, and the task is not complete. If the deadline is too aggressive you push back and ask for more time. With AI, that becomes harder to do: you move on to the next thing before you are able to take the time to understand what it has done.
I don't think it is entirely a case of voluntary outsourcing of critical thinking. I think it's a problem of 1) total time devoted to the task decreasing, and 2) it's like trying to teach yourself puzzle-solving skills when the puzzles are all solved for you quickly. You can stare at the answer and try to think about how you would have arrived at it, and maybe you convince yourself of it, but it should be relatively common sense that the learning value of a puzzle disappears if you are given the answer.
Honestly, you shouldn't be working on a plane. This thing where people are plugged in all the time is just insane.
Yes, you lost some abilities. Install a local model so you have someone to talk to while you are on the plane ;)
Probably a junior/semi-senior developer?
I guess writing code is now like creating punch cards for old computers. Or, more recently, like writing ASM instead of using a higher-level language like C. Now we simply write our "code" in a higher language, natural language, and the LLM is the compiler.
> Now we simply write our "code" in a higher language, natural language, and the LLM is the compiler.
No we don't and we never should actually, compilers need to be deterministic.
It needs to be something stronger than just deterministic.
With the right settings, a LLM is deterministic. But even then, small variations in input can cause very unforeseen changes in output, sometimes drastic, sometimes minor. Knowing that I'm likely misusing the vocabulary, I would go with saying that this counts as the output being chaotic so we need compilers to be non-chaotic (and deterministic, I think you might be able to have something that is non-deterministic and non-chaotic). I'm not sure that a non-chaotic LLM could ever exist.
(Thinking on it a bit more, there are some esoteric languages that might be chaotic, so this might be more difficult to pin down than I thought.)
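(A sketch of the "right settings" being referred to, assuming an OpenAI-style chat API via the official `openai` npm client; the model name and prompt are placeholders. Greedy decoding plus a fixed seed makes repeat runs mostly reproducible, but, as the comment notes, small input changes can still swing the output.)

    // Pin down sampling for (near-)deterministic output.
    import OpenAI from 'openai';

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
    const completion = await client.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: 'Write a haiku about compilers.' }],
      temperature: 0, // greedy decoding: no sampling randomness
      seed: 1234,     // best-effort reproducibility across runs
    });
    console.log(completion.choices[0].message.content);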
I cringe every time I read this "punch card" narrative. We are not at that stage at all. You are comparing deterministic tools with LLMs, which are not deterministic and may or may not give you what you want. In fact I personally barely use autonomous agents in my brownfield codebase because they generate so much unmaintainable slop.
Except that this compiler is a non-deterministic pull of a slot-machine handle. No thanks, I'll keep my programming skills; COBOL programmers command a huge salary in 2026, and soon all competent programmers will.
This is not what a compiler is in any sense.
The Perez model contains a falsification test the article doesn't apply to its own thesis. In Perez's framework, the installation phase is characterized by financialization, frothy infrastructure bets, and capital rushing toward uncertain new technology—exactly the behavior we see with US AI investment (hyperscalers committing $500B+ to uncertain infrastructure, speculative valuations). Deployment phases look like industrial efficiency gains and normal returns. By those criteria, US AI investment is behaving like an installation-phase bet, not late-deployment optimization.
The article's US-China comparison quietly reveals the prediction that would follow from the thesis: if the Perez 'late deployment' framing is right, then the Chinese model—lean, industrial, healthcare and education application, grounded in near-term ROI—is betting correctly on where we are in the curve and should outperform over the next decade. That's a concrete, testable claim that would validate or falsify the argument independently of whether AI constitutes a 'new surge.'
I'm currently looking for somewhat niche clothes for an event, and it's the first time I've had to give up on buying online because of the sheer amount of AI-generated pictures: almost all sellers on Etsy are using AI for their photos. Going to a physical store was just a much better experience. I can't recall the last time this happened.
We're racing to build hell.
A hell that’s been widely documented in fiction as well. That’s the part that’s so wild to me about this. None of this was unforeseen. Across every medium, the extreme commercialization and general collapse of the social contract due to AI has been described, and a lot of the authors have been largely prophetic.
In the US this is due to the overall failure of trust in our institutions.
No one trusts Congress or the US government to effectively regulate AI for the greater good of the population. Each party believes regulations proposed by the other party will be used to discriminate against and control their party.
I do woodworking for a hobby and wanted to find a nice "intro to routers" article. After skimming past the obvious SEO crap on google I clicked the first likely-seeming link and was greeted by an AI slop image of two misshapen routers being operated by three disembodied hands with seventeen fingers each. I immediately threw my laptop out the window, watched it shatter into five hundred pieces, walked across the street to the library, and checked out a goddamn book.
I was already getting disillusioned with the Internet as a learning resource during the SEO spam era, but the AI era has completely destroyed it.
For questions like this you can ask an AI directly instead of getting herded through the clickbait.
Education and targeted summary searches are among the best uses. I literally found the location of the criminal who embezzled thousands of euros from my condominium with an AI search. It took me around fifteen minutes. Other people had been looking for years. (True story...)
Full disclosure: I work at Whatnot, but that sort of thing is a large part of its appeal to me - people showing off the stuff live on stream, and you can ask questions about it.
This sounds like a really unpleasant shopping experience to me.
AI is in spitting distance of being able to do that too.
I sometimes wonder if the random people sitting there hawking a pile of Amazon goods that pops up after every Amazon purchase are already AI.
This was my experience as well trying to buy a charger. You can't trust anything. For brands that have their own store, some have such a bad experience that it's easier and less stressful to go to the store and buy directly from there.
It's clear to me that AI will be both things:
1) as in the article, a contraction of work: industrialization getting rid of hand-made work, or the contraction of all things horse-related when the internal combustion engine came around
but it will also be
2) new technologies and ideas enabled by a completely new set of capabilities
The real question is if the economic boost from the latter outpaces the losses of the former. History says these transitions aren't easy on society.
But also, the AI pessimism is hard to understand in this context- do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?
> do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?
The cost cutting is the only revenue-producing model for the AI companies so far. It's being pitched as a way for corporations to fire a lot of employees and save money.
Revenue for the consumer facing products is not very impressive. Consumers are mostly satisfied with the free versions and very resistant to adding yet another channel to shove advertising at them.
Well this is HN so a lot of us are pretty terrified of your 1). We went from 'you have a good job for the next couple of decades' to 'your job is at extreme risk for disruption from AI' in the space of like 5 years. Personally I have a family, I'm a bit old to retrain, and I never worked at a high-comp FAANG or anything, so I can't just focus on painting unless my government helps me (note: not US/China). That's extremely anxiety-inducing, and a vague promise of novel new things does not come close to compensating.
I'm 33 and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared for the likelihood that within the next 5 years or so (and potentially much less) I'll need to retrain into a trade or something to stay relevant in any sort of field.
Many people claim it's going to become a tool we use alongside our daily work, but it's clear to me that's not how anybody managing a company sees it, and even the AI labs that previously emphasized how much it's going to augment existing workforces are now pushing being able to do more with less.
Most companies are holding onto their workforce only begrudgingly while the tools advance and they still need humans for "something", not because they're doing us some sort of favor.
The way I see it unless you have specialized knowledge, you are at risk of replacement within the next few years.
I also have contemplated just retraining now to try and get ahead of the curve, but I'm not confident that trades can absorb the shock of this - both in terms of supply (more unemployment) and demand (anything non-commercial will be hit by capital flight on the customer-side). I figure I will just try and make as much money on a higher wage as I can and hope for the best...
> AI pessimism is hard to understand
Well, it really isn’t. First, this entire post makes two assumptions: 1) that AI adds more value to the process than it removes and 2) that it’s sustainable.
It’s not pessimism to want to validate these first.
Are AI “gains” really transformative or simply random opportunities for automation which we can achieve by other means anyway?
Can the world continue to afford “AI as a service” long enough for the gains to result in improvements that make it sustainable? Are we dooming our kids to a hellishly warm planet with no clear plan how to fix it?
It’s not pessimism, just simple project management if you ask me.
Hard to understand, when essential human nature is so predictable? Sure, we will do novel things with it. But society in the main will use it to exploit labor. Same as it ever was.
Are you under the impression life was better before capitalism?
That's a false dichotomy. Capitalism was good for artisanal workers before the industrial revolution, and then it became pretty goddamn bad for them. We're worried we're staring down the barrel of that right now; just saying 'well, it was even worse before capitalism' does nothing for us.
>That it's all about cost-cutting?
Cost cutting has less uncertainty than making something new, so they do that first. If something else comes along, then great.
This is also why people should make the transition as difficult as possible for companies doing layoffs, when those companies pay proportionally very little in taxes compared to the people they are laying off.
> do people really believe no novel things will be unlocked with this tech?
Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist.
Give your best example of something that is novel, ie isn't just replacing existing processes at scale.
It's been 3 and a half years now since the initial hype wave. Maybe I genuinely missed the novel trillion dollar use case that isn't just labor disruption.
I think that most people are pretty short-sighted about the utility cases right now (which is understandable given the negative feelings about a lot of what's currently going on).
There are a lot of really useful things that were impossible before. But none of these use cases are "easy," and they all take years of engineering to implement. So, all we see right now are trashy, vibe-code style "startups" rather than the actual useful stuff that will come over the years from experienced architects and engineers who can properly utilize this technology to build real products.
I'm someone who feels very frustrated with most of the chatter around AI - especially the CEOs desperate to devalue human labor and replace it - but I am personally building something utilizing AI that would have been impossible without it. But yeah, it's no walk in the park, and I've been working on it for three years and will likely be working on it for another year before it's remotely ready for the public.
When I started, the inference was too slow, the costs were too high, and the thinking-power was too poor to actually pull it off. I just hypothesized that it would all be ready by the time I launch the product. Which it finally is, as of a few months ago.
With this said, a lot of people are likely worried about being eaten by whales when it comes to doing things with AI.
It's kind of like dealing with Amazon, or any other company that has both compute and the ability to sell the kind of product you make.
Said AI providers can sell you the compute to make the product, or they can make the product themselves with discounted compute and eat all the profits you'd make.
The most obvious thing is bio-tech, protein folding, drug discovery, etc. As in, things that have an actual positive effect on humanity (not just dollars).
I don't really get people who are dismissive about this aspect of AI- my original question wasn't about cost-efficiency of developing these things, but just that the technology itself is creating things that wouldn't have been possible before. It seems hard to refute.
Whether or not it's worth the cost is a different debate entirely- about how tech trees are developed and what the second order effects of technology are. There are so many examples- the computer itself, nuclear power, etc. I think AI is probably on the same order as these.
Correct me if I'm off base but these things (protein folding and drug discovery) both existed before AI, no?
The implication of your comment seemed to be that this was going to be so much more than replacing people. But I fail to see how any of the items you listed are anything other than that.
These things have always been possible. Just slow and limited by labor. Which is the primary and novel "unlock" of AI.
You can argue it's a good thing, and in many areas I'd probably agree. I'm directly responding to your skepticism and implied absurdity that replacement is the main unlock here. It absolutely is.
It’s pretty decent for natural language -> query language tasks
But also you don’t need SOTA frontier models for that!
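As a rough sketch of the "you don't need a frontier model" point, here's what natural language -> SQL might look like against a small local model via the ollama Python client (the llama3 model name and the toy schema below are just assumptions for illustration):

    # Rough sketch: natural language -> SQL with a small local model.
    # Assumes the `ollama` Python client and a small instruct model
    # (here "llama3", purely illustrative) already pulled locally.
    import ollama

    SCHEMA = "orders(id, customer_id, total, created_at), customers(id, name, country)"

    def nl_to_sql(question: str) -> str:
        resp = ollama.chat(
            model="llama3",
            messages=[
                {"role": "system",
                 "content": f"Translate the user's question into SQL for this schema: {SCHEMA}. "
                            "Reply with a single SQL statement and nothing else."},
                {"role": "user", "content": question},
            ],
        )
        # The chat response exposes the generated text under message.content.
        return resp["message"]["content"].strip()

    print(nl_to_sql("Total revenue from Canadian customers last month?"))

Whether the generated SQL is actually correct still has to be checked by someone who understands the schema, which loops back to correctness being the hard part.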
"Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist."
Wouldn't that apply to most technological advances? Cars, computers, cell phones.
Yes, but I'm not the one who introduced the "novel" constraint to the argument.
e: Also I don't know that I'd strictly bucket these specific examples you gave as shittier versions, though I guess that's a matter of perspective.
So now the ancillary question from your example is: "Is hand-spun cotton better than industrialized polyester?"
If you're implying that hand-spun cotton is better, that's an easy question to answer- people used to spend a huge amount of their income on clothing, also spending a huge amount of time washing it. Industrialization made clothing so much cheaper that it's now completely disposable. There's plenty of reasons why that's not a bad thing.
One reason people forget that "good quality" shoes existed is that you could only afford to buy one pair, ever; it's not necessarily that things were made better. (Or it could be both, but replacing a pair of shoes was a financial hardship, because hand-made things, even back then, were expensive.)
Even if you're against fast fashion, I don't think anyone wants a pair of shoes to cost $10,000.
Define better. Fast fashion sucks, but hand-spun cotton won't give you Kevlar or modern wind-resistant clothing or fireproof materials for your furniture or... <insert half thousand different things adjacent to modern textile production>.
It's always win some, lose some with the economy, but technology itself opens previously impossible capabilities.
It used to open them to most of the population - at least that was the ideal for a couple of decades - but now it seems to be opening them to oligarchs more than workers.
It's essentially a political energy source. It heats everything up.
Eventually it either explodes, goes through a phase change to a new (meta)stable state, or collapses back to a previous state.
It's been a few years and I have yet to see a single novel thing come out of it. Even chatbots weren't novel when ChatGPT came out.
I view this post as primarily pattern-matching and storytelling. But I think there’s a buried truth there, and that they were nibbling at the edges of it when they started talking about the overlapping stages.
There are some very interesting information network theories that present information growth as a continually evolving and expanding graph, something like a virus inherent to the universe’s structure, as a natural counterpoint to entropy. And in that view, atomic bonds and cells and towns and railroads and network connections and model weights are all the same sort of thing, the same phenomenon, manifesting in different substrates at different levels of the shared graph.
To me, that’s a much better and deeper explanation that connects the dots, and offers more predictive power about what’s next.
Highly recommend the book Why Information Grows to anyone whose interest is piqued by this.
The lack of robotics mention somewhat undermines this article.
I don't think it's intrinsically wrong, we are in a late stage of a transformation. Software is eating the world and AI is (so far) most profitably an automation of software.
There is plenty of money to be made along the way. I don't really buy the article's seeming confusion about where the money is going to come from. Anthropic is making billions and signing up prodigious amounts of recurring revenue every month.
The question is whether robotics will look like a small number of platforms with little development needed to adapt to different scenarios, or a million types of machines that are highly fit for purpose.
Because the first situation won't create that many jobs. The second one might.
I expect hybrids. Something general has to be adaptable for what will be an expensive capital purchase.
The human form factor - torso up anyway - is probably easier to bootstrap on a general basis; keyed off of human data. But I don't like the failure modes of bipedal robots - imagine a robot flailing around trying to regain balance, in any setting with humans around.
I'm no expert of course, just pontificating.
Anthropic today, who next week? If locally run models ever get to the point where they can reliably solve... 85% of what the frontier cloud models can do, I think many would be willing to accept slightly less problem solving ability and just run the thing locally.
All hypothetical, but if compute + AI research continues at pace, in 5 years we should see extremely good local models.
As far as I know, Anthropic still bleeds money, as OpenAI also does.
They will keep bleeding money, by the way.
I don't believe the marginal customer of Claude Code is loss-making.
Tangentially related, but as someone who has built multiple internet businesses -- mostly unsuccessful, some mildly successful -- I barely have any new ideas to work on.
I don't know if this is the effect of relying on AI too much in my day-to-day work or leading a more monotonous life as of late, but I'm sure I'm not the only one. Lots of ideas that I could have built before LLMs took over now seem trivial to build with Claude & friends.
I can relate to this, in the past I felt like I could write down pages of projects to try if only I had time. Now my mind immediately goes towards "do I want to manage this long term after the initial spark".
That made me wonder, honestly, if AI can build it, could AI manage it too?
Wait, I just deleted prod. You're absolutely right, that shouldn't have happened. My mistake.
It seems really premature to talk about AI being the end of anything. What’s at an end stage is adoption of smart phones and monetizing human attention. That’s been the fuel that powered the last quarter century of tech gains, and while still huge in absolute terms it has been running out of steam as a growth engine and facing cultural pushback (eg. Social media lawsuits) for a while.
AI so far has really only shown massive utility for programming. It has broad potential across almost all knowledge work, but it’s unclear how much of that can be fulfilled in practice. There are huge technical, UX and social hurdles. Integrating middle brow chatbots everywhere is not the end game.
The question it raises is: if this is the fake surge, the one we see, then what is the real one we don't see? Renewable energy comes to mind. Robotics too, but maybe that's too tied up with AI.
I think robotics will be the next surge for sure. But I don't think it's really tied up with the LLM stuff either and it could be decades away.
In the end, it'll probably require something like model-based RL like Yann LeCun talks about and that's totally different to the LLMs.
Eh, robotics is going through explosive growth right now with the same computing power that's being used on LLMs. You can take human motion capture of a task, dump it in a robotics simulator for a few hours, and get a model that can operate autonomously better than something that would have taken half a year to teach just a few years back.
Space (SpaceX showed that reusable rockets are feasible), programmable health (the Covid vaccine, and remember that mRNA therapy curing that dog?), etc.
Sadly, I think there's a risk we might also be heading towards a dark age with few advances, since fundamental research has been squeezed out for being unprofitable or hobbled by an industrialized publishing/review system for a while now, and we've been coasting along on profitable applications rather than (expensive) breakthroughs in the basics.
I firmly believe that renewable energy, the solar + battery + EV stack, not LLMs, really is the biggest technology transformation of our times. Renewable energy really is surging; it's just on a longer timeline, and unlike LLMs it doesn't benefit venture capitalists to hype it. In fact many existing sectors deliberately downplay it. But we are in the middle of it.
Robotics? lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing".
mRNA vaccines? Sure, they're a huge medical advance. With great potential, in that area. But it's just an area.
Space? Maybe, if we get past LEO, find something useful to do there, and don't succumb to Kessler syndrome.
>Robotics? lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing".
Eh, I do think this is kind of underestimating the changes in robotics that are occurring. LLMs incorporated with other ML kernels extend the capabilities a long way. That and the amount of computing power now usable to train robotics is far far larger.
I don't really understand why they call it the end of the digital revolution.
If this lasts to the point where AI has actual automation ability, it's not a tool for humans anymore. It could have an identity and literally start to evolve. I don't understand why some people consider AI just another tech revolution. Maybe I'm too into SF, but AI can be something other than just a tool.
A surprising number of people here and in tech in general lack any imagination.
I sort of agree with the premise of the article. I ask myself: did more non-technical people pick up AI chatbots when they were invented than picked up personal computers in the late 70s/early 80s? From my conversations with others, I think probably.
The very first personal computers came out in 1972. In 1978, we got several. The PC came out in 1981. The computer boom didn't begin until 1992.
My wife is absolutely not technical, and she began using ChatGPT before me.
This is to say, I believe you to be correct here. The LLM adoption rate is many times the computer adoption rate. Non-technical people are immediately seeing the benefit of LLMs where they did not with computers in the 1970s.
Personal computers in the 70s/80s were a considerable investment for little to no gain, and there was no force-pushed FOMO.
It costs you nothing to install/adopt an AI chatbot, and it's being force-fed to everyone at a head-turning loss in order to justify the push.
Part of this is because we aren’t paying the actual cost of these chatbots. If ChatGPT wasn’t essentially free for casual users then we’d definitely see a much smaller/slower adoption rate. I wonder if a single person using them, even paying for tokens, isn’t substantially subsidized. Probably not but I’m speculating.
If 3D printers could’ve given usage away for years directly in our homes then I bet we would’ve seen wider adoption there too.
Well, we are not paying for Gmail, YouTube, or TikTok either, or for all sorts of other services that are free as well.
Well, we are paying for it, but not directly with cash.
Chat bots can run on your local hardware these days, even mobile phone hardware. That's effectively free.
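For what it's worth, "effectively free" here just means inference on hardware you already own. A minimal offline sketch, assuming the llama-cpp-python package and some small GGUF model file you've already downloaded (the path below is hypothetical):

    # Minimal offline chat sketch, assuming llama-cpp-python and a local GGUF
    # model file (the path is hypothetical; any small instruct model will do).
    from llama_cpp import Llama

    llm = Llama(model_path="./models/small-instruct.gguf", n_ctx=2048)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Write an Express route that returns JSON."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])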
This Perez model thing completely misses the communications revolutions of the telegraph, radio, and television, not to mention the demonopolization of Bell.
> Then came AI, revealing new dynamics. ChatGPT’s breakthrough didn’t come from a garage startup but from OpenAI,
I thought the transformer and large language models came from Google Research.
> There’s also social pushback—in the UK the campaigns against big ringroad schemes started in the late 1960s and early 1970s. And perhaps we’re seeing some of that about AI. The U.S. map of local pushback against data centres from Data Center Watch covers the whole of the country, in red states and blue. People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.
The US had the highway revolts. In most cities where the revolts succeeded, it is widely heralded today as a success.
The data center hate is interesting. I think many people are just learning what data centers are, but that said, they've come to represent something different in recent years. Previously they were part of the infrastructure that made industry hum; now public messaging from tech leaders and academics is along the lines of "this is how your livelihood is going to be replaced," while the institutions that are supposed to provide any sort of backstop are being dismantled or slashed to pieces by crazypants Trumpist politics. Focusing the energy on something tangible like mundane buildings is interesting, but the hate makes a lot of sense.
Addressing the core thesis, I'd argue that AI is not the next step in the 70s digital technological wave (especially considering the future of AI compute is probably hybrid digital-analog systems), but rather something fundamentally new that also changes how technology interacts with society and how economics itself will function.
Previous systems helped; these systems can do. That's a fundamental change, and one that may not be compatible with our existing economic systems of social sorting and mobility. The big question in my mind is: if it succeeds, will we desperately try to hold onto the old system (which would essentially be a disaster that freezes everyone in place and creates a permanent underclass), or will we evolve to a new, yet-to-be-defined system? And if so, how will the transition look?
Every time I see these, I think to myself: is Microsoft Copilot a problem of implementation or of the capability of the models?
I have ZERO doubt that if you put people that haven't used a computer in front of one, with Copilot everywhere - and I mean not the way it is now, but instead you're presented with a chatbox in the middle of the screen and you just ask the computer what you want - I am 99.99% sure that everyone would prefer to use that chatbox rather than trying to figure out how to use a computer. Which is why I am not quick to discredit "microslop"; they're most likely pivoting Windows toward how it will look in the future.
Obviously, the strongest argument here is that it should have been an entirely different product, something like "Windows AI" where the entire system is designed around it. But if you look at their current implementation, it's more of a copilot which is just there, letting you know it exists. Obviously not all of these features were thought through, such as Recall, which should have been dead and buried, since it doesn't offer that much real value compared to a magical box that takes in English sentences and does roughly what you want.
At the end of the day it's a question of whether AI will do (or is doing) more harm than good. AI has really only existed in this form for a little more than 3 years and really started shining with the advent of Opus 4.5. We went from models producing more security vulnerabilities than one can count to fixing obscure human-made ones, and the capabilities will keep increasing (if Anthropic is to be believed). We will enter an era where it will have 95%+ accuracy in doing what a typical computer user would want from AI, and there's really nothing anyone can do to stop it.
So my opinion is that AI will be the next big thing and it might spread way beyond what we can even imagine.
I think we will have things like non-technical people who just talk on the phone with an AI agent to get a website done: register a domain and have the site up within a one-hour phone call, all for pennies, while the AI has access to their financials, mail, and other things. All of that is relatively possible today, with the simple caveat of security, and I do believe we have enough smart people in the world to figure out how to make AI better at rejecting social engineering than 99% of humans.
> I have ZERO doubt that if you put people that haven't used a computer in front of one ... presented with a chatbox in the middle of the screen and you just ask the computer what you want I am 99.99% sure that everyone would prefer to use that chatbox
I don't know. We've been telling ourselves things like that about user interfaces for a long time. For decades, it was pretty much universally understood that everyone would prefer to talk to their computer instead of using a keyboard. Now that you can, no one really wants to. In fact, now that we can text / email / IM other people, we don't talk to them as much as before.
One obvious problem with the interface you're proposing is that sometimes, it's easier to do the thing than to explain precisely what you want. For example, it takes much longer to ask ChatGPT what's the weather forecast for this week, and then read the flowery response, than to press Ctrl-N, "wea", enter, and see it at a glance in a consistent format with pictograms.
You already know how to use a computer or a phone, but take someone who has never seen or used a smartphone, computer or a laptop. I think the story will be very different.
I don't know. In a vacuum, if we prevent them from ever finding out that there's a faster way with less cognitive overhead? Sure. Until they have to explain to an agent precisely which shoes they want the AI agent to buy them...
In any case, in practice, people pick up stuff from each other. I'm old enough that learning to use the computer mouse needed to be a deliberate effort on my end. I never really had to "teach" that to my kids, they just picked it up naturally. So you might even have a difficulty producing that "computer-naive" subject in the first place.
> I never really had to "teach" that to my kids,
It's better to look at these things statistically rather than anecdotally. And statistically the Xennial group seems to have the highest penetration of computer skills, even more so than the generations that followed them. Simply put, the new tablet generation is more apt to use apps without understanding the premises of how they work.
If you find yourself going to an actual computer to make 'large' purchases you're part of a group that is not growing in size.
The theory doesn't seem to make much sense to me - like why can't there be simultaneous technological revolutions? And why would they last an arbitrary 50-60 years?
> People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.
That could do with a solid citation tbh. The anti-AI people are really vocal on social media but personally I like having the AI results given how awful navigating the modern internet has become with all the cookie banners and anti-Ad Blocker popups etc.
Honestly, the LLMs seem like the most transformative technology we've had since the release of the iPhone.
50-60 years is far from arbitrary: it's very roughly two generations (plus a bit of extra time, to ensure the process takes). 50-60 years gives enough time for a generation to grow up and reach adulthood who have never known anything other than the post-revolution state.
Not unrelated: https://blog.gardeviance.org/2015/03/on-pioneers-settlers-to...
I mean when I needed to look up something I used to just google it.
Now, with the advent of LLMs I've had to pull out my old textbooks from storage.
AI is destroying the economic premise that has drawn so much investment into Silicon Valley. It's going from a capital-light business model with network-driven moats that allow market domination, to a capital-heavy, high-burn-rate model with the potential not only to offer ZERO moat protection but to destroy the moats that already exist. Cloud infrastructure plus vibe coding now make it possible to quickly replace existing apps with custom-fit alternatives. Open source plus cheap Chinese LLMs may not be as good as Opus, but maybe good enough turns out to be good enough (Sun Microsystems vs. Linux is a good example). Currently AI has just as much potential to destroy Silicon Valley as it does to build it up.
That sounds like Silicon Valley's fault for taking the actual silicon out of the valley.
Sounds like it's best to be the shovel manufacturer now.
It'll be the end of the paradigm myth, and eventually, the Anthropocene
It'll be the beginning of vast and infinite potentia spreading out beyond us
And with robots, this also applies to the physical world.
These economic frameworks sure look like pareidolia to me
I could totally see it. Recently a social club opened near me, and it has 100+ people attending weekly, all younger, 20-30 year olds early in their careers.
Separately, I have a local camera repair shop, and my friend told me it's a two-month backlog to get your film camera worked on.
Ultimately, if the deal we get online is infinite tracking, infinite scrolling, and infinite enshittification, real life starts to sound a whole lot better.
Going to the local movie rental shop with my kids is the highlight of my week. What a bizarre sentence to write in 2026 but it’s absolutely 1000% better than modern streaming (outside of my Plex setup).
I gladly pay the (modest/token) late fees to help keep them open at this point. If someone set up a local arcade man…I’d be in heaven ha
> I gladly pay the (modest/token) late fees to help keep them open at this point
Keeping movies longer and paying late fees may be hurting them more than helping them. It's entirely possible that the late fees are underpriced to avoid scaring away customers. New customers going away disappointed because the movie they want wasn't returned on time hurts them more than your late fees help.
Not keeping them on purpose, I’m just not sweating the fee because I’m happy to pay them.
Additionally, the odds that my kids are holding on to exactly what somebody else wants in that timeframe are very small. It's a small shop within a larger co-op, with a modest following and pretty substantial stock. I know, for instance, that we've never had an issue of wanting something that was already rented out.
Has it happened? Maybe. But the fees I've paid probably net positive against that rare instance. They aren't open half the week, so once Monday passes I can't return them for several days anyway. The owner certainly hasn't expressed concern and has even waived the fee before, because clearly it's of little consequence.
Yeah, I agree with TFA I think.
Introduction of new mass-production techniques often has an initial wave of high profit while early adopters have an advantage (existing workers are more efficient), but this will be followed by a long-term decline in the rate of profit as margins aggressively fall...
e.g. if every software company uses AI to double its coding speed, the price of software will eventually drop by half.
As "AI" becomes a required and common commodity input, competition will drive prices down until the productivity gains are entirely captured by customers, leading to margin compression across the sector.
Also... firms will be forced to invest in using AI just to stay in the same place. If you don't adopt it aggressively, you'll be priced out; if you do, your margins still shrink because everyone else did too.
So... yeah, I don't think this is the next part of a "digital wave" if that means giant increase in new startup investments and SaaS companies etc, it's actually probably the start of I think a margin collapse and consolidation in our industry.
If it's 2x easier to build e.g. a CRM, we’ll end up with 10x more CRMs, leading to a "race to the bottom" on pricing.
The last 15 years of investment by people like YC etc seems to have been in businesses that were "like Uber but for <X>". Service businesses on which a small layer of software automated things, and drove some sort of explosion of customers. I don't really see how VCs are going to separate wheat from chaff on this front anymore? If anybody can do it.... what's the value of any particular approach over the others? I'd think the result would be consolidation?
So I suppose if you're selling "the means of production" in the form of GPUs you're in a good spot, but even that is likely to be subject to aggressive downward pricing.
Humanity has industrialized the production of intelligence. We're nowhere near the end of what this leads to.
It could also be a huge bubble like everyone seems to agree about.
ItS thE eND of ThE InTeRwEbS
Could