This has mirrored what I've seen in my company. People in the data science/ML part of the company are super excited about AI and are always giving presentations on it and evangelizing it. Most engineers in other areas, though, are generally underwhelmed every time they try using it. It's being heavily pushed by AI "experts" and senior leaders, but the enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises that the "experts" keep making. Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle. You can only fool people for so long.
> Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle.
According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again. What data source are you looking at?
Flat at 60% of pre-covid hiring while the number of graduates continue to increase and there's still a backlog of people who were laid off. That's not a particularly optimism inducing hiring market.
Do not with a straight face act like pre-COVID hiring levels were a Good Thing. They weren’t. They were a symptom of a broken economy that you personally happened to pretty directly benefit from.
Thing is, the companies doing these layoffs rarely actually end up losing money from overhiring. They’re still profitable. Just not profitable enough for the people on top.
That’s a bit perverse. In democracies, corporations ultimately exist to serve society, not shareholders.
The plutocracy is forgetting that a working and productive populace - with fair wages and representation - is their end of the deal for disproportionally benefitting from the fruits of labor from others; and directly prevents violence against the status quo. See: The top articles in the last 3 days.
Sure, but all they have to do is not hold up their end of the bargain. Who enforces that? These are just norms from 60 years ago that the rich decided they no longer have to follow.
They’ve started treating incorporation like a modern day papal indulgence, something that absolves whatever they do in the name of profit. It doesn’t. Limited liability buys you forgiveness in court but it doesn’t buy you forgiveness in the court of public opinion. Doing harm for a company is still doing harm.
I think you are correct in asserting the mercy-disciple of market forces.
I also think that counter points on the inhumanity of firms, misses that economies are an objective way to structure incentives to achieve subjective ends.
If you want more money to travel to other parts of the pyramid, or you want to disincentivize certain behavior, then economic incentives can be set up to achieve those goals.
Expecting firms to do charity is pointless. Expecting firms to optimize under constraints is not.
At societal scale hiring people is self-interest, not charity. Otherwise you'll get to exactly where the US is heading now: large parts of the consumer market are mostly dead because people have no discretionary spending power left, and the only way to make money as a business is to become a monopolist.
There have been a lot of headlines the past couple years about companies stating they are doing layoffs or slowing hiring because of AI. I would bet the average adult pays way more attention to news headlines than FRED reports.
I also don't see why everyone would dismiss the statements of large company CEOs about why they are making hiring/firing decisions, regardless of what some statistics say.
The companies doing the layoffs are themselves stating AI as a reason; that’s the news people are responding to. The parent didn’t claim that it’s based on reality, but it informs public opinion.
Whether or not the CEOs' statements are true, they affect public opinion.
You have CEOs claiming that AI is driving layoffs alongside CEOs of Anthropic and OpenAI talking about the end of white collar work. All this is then amplified by tech journalists like Casey Newton and Kevin Roose. The biggest public proponents of AI keep telling people that it will take their jobs.
What comes after the end of jobs? Who knows. Sam Altman occasionlly making vague statements about curing cancer. There are vague hand-waving notions of a Star Trek utopia.
But to be honest it feels more like a Cyberpunk future, where the Altmans and Musks get to live cancer-free and the rest of us eek out an existence without jobs or any prospect for a better life. Or maybe it looks more like Star Trek, but we're all red shirts.
Anything Musk or Altman say is just about raising money. Nothing they say can be taken at face value. There’s a funny interview with Mark Anderseen, where he talks about how he never looks backwards and doesn’t have any sense of introspection and then gets into a rambling and completely wrong history lesson. That’s what these guys do.
The better question to ask is what happens after the end of OpenAI/Tesla/etc? AI may take your job away, but not because of robots replicating your labor, just good old-fashioned economic collapse.
Blame then then. Simple as that. Lying to "just raise money" is one of the most harmful ways of lying. It distorts the whole economy.
> There’s a funny interview with Mark Anderseen, where he talks about how he never looks backwards and doesn’t have any sense of introspection and then gets into a rambling and completely wrong history lesson. That’s what these guys do.
Yes, we know they are psychopaths and assholes. The blame is on them.
>According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again.
None of this contradicts OP's claim, because at least anecdotally, juniors/interns are getting disproportionately squeezed by AI. Why hire an intern to write random scripts/tests for you, when claude code does the same thing? Therefore overall job posting could be flat or slightly rising, but that's only because everyone is rushing to hire senior/principals staff to wrangle all the AI agents, offsetting the junior losses.
That is the value of other companies doing that and you going to poach those new seniors. With the money you saved not training those juniors you can offer better salaries and still have higher profits.
We haven't hired for about 6 months, but the value of a junior is that they eventually become not junior, and if you value them, you pay them what they're worth and they stay.
My guess is that AI can now assist juniors just as it can assist seniors, and they'll become competent in the correct skillset needed for the future, just as everyone before them has.
They are increasing, but the level is still lower than it's been since Oct 2020. In my experience at two different companies since 2020, hiring more or less stopped sometime in 2022 to early 2023. In early 2025, some hiring started again but it's still a very low rate compared to pre-COVID, particularly for new college grads. While I don't believe that AI has actually taken any significant number of jobs in the software field, I do think it's being used as a convenient excuse by executives to lay people off. Regardless of the actual numbers though, the general perception in tech is "lots of layoffs are happening with not so much hiring" and "AI has something to do with it (either directly or as an excuse)."
Software job openings are mostly bullshit. Companies post ghost jobs en masse, while refusing to hire people. You can ask anyone that's had to look for a job recently and see how bad the market is.
Is that data useful at all? Indeed postings are a poor proxy for how many people actually get hired. One of the major problems we have is that employment statistics are largely just estimates, and don’t reflect reality on the ground. Factor in the Trump admin firing most of the BLS and other agencies for not giving him the numbers he wants, and there really is no reliable data.
I feel like the junior problem contributes more heavily than people might think. The people on top see juniors as replaceable since they view them as cheap menial labor, whereas most seniors at least acknowledge the human element as part of the benefit
Today's juniors are tomorrow's seniors, or more importantly today's seniors' yesterday.
They do the dirty, repetitive work, learn the systems inside out, take note of the flaws, and fix them if they are motivated and the system/process allows.
Thinking them as replaceable, worthless gears is allowing your organization rot from inside. I can't believe people can't see it.
Plenty of people see it - but, to a hiring team, a junior is an extremely risky investment. They demand a high cost relative to when they can start contributing actual value, may not work out, or may hop ship the moment they become competent. It is rational for a business to want to eliminate this risk. It's possible that everyone is acting rationally here, knowing it will lead to a result that is not favorable down the line - because the immediate benefit is too great to consider the latter.
In other words the gamble of hiring expensive juniors with shiny degrees is greater to them than the gamble of not having competent seniors a few years down the line. And that risk may be overblown - people are still hiring some juniors, it's not like it has stopped entirely - so future seniors will likely just be worth more than they are currently. To some, that may be worth the risk, especially if you believe AI will continue to get stronger.
I am not saying I agree with this decision making, more pointing out the thought process. We have had to have similar discussions where I am but are still hiring juniors, FYI. That's basically all we're hiring right now, actually, because the market for strong juniors is very good right now.
It's not an economic decision, it's a cultural one. Are you investing to build something useful and sustainable? Or are you exploiting for a profitable quarter?
I read someone compare the mindset to that of a drug-dealer. In any given neighborhood, a handful of people get very wealthy, at the expense of the stability and potential of everyone else. Our elite are drug-dealers - literally, in some cases. And conditions are deteriorating about how you'd expect.
Besides the “its not x, its y” llm smell here, no, comments like this are also part of the hype, just the other side of it. The fact that LLM tooling can replace a lot of tedium typically set aside for junior’s is hardly disputable at this point.
Okay. And you could also still hire the juniors and have them oversee the LLMs, interrogate them about how well they understand the principles and technical details of what they're having the LLM do, correct them when they're wrong, try to get them to explore other approaches or extend the rote approach or synergize with some other task, etc. You know, training. Like companies used to do (or so I hear, such initiatives having been long gone by the time I hit the workforce).
The fact that you won't isn't a productivity, bottom-line decision, as we've already established that the business is trading efficiency now for incompetence later; the financials are a wash, at best. It's a cultural decision to throw your youth under the bus for seniors and shareholders' short-term interests. The best you could say is, "Well, of course. This has been a common narrative across the American economic landscape for the past 30ish years. 'F* them kids,' is the rule."
and what happens in half a generation or so when those seniors start retiring? the only way software production will meet demand is if the fewer seniors out there are propped up by way more competent ai than we have now. that also means the work will fundamentally change from being massively nerdy to moderately nerdy with the ability to work with ai. many of the people in the computer industry now just wont be attracted to that type of work.. what will they do? become physicists or mathematicians? and what type of person is tomorrows senior software developer?
edit: maybe todays computer nerds will become tomorrows backyard hackers, the only ones able to beat the ai.
> In other words the gamble of hiring expensive juniors with shiny degrees is greater to them than the gamble of not having competent seniors a few years down the line.
I mean, writing the code which makes mon^H^H^H^H provides value for minimum cost is the ultimate goal of a software company, but any competent CS grad or anyone with basic algorithms knowledge knows that greedy algorithms can't solve all problems. Sometimes the company needs to look ahead, try, fail and backtrack.
Nerdy analogies aside, self-sabotaging whole sector with greedy shortsightedness is a pretty monumental misstep. It's painful yet unbelievably hilarious at the same time. Pure dark comedy.
The problem is that it's systemic. The entire system rewards the short term thinking, so that even people with some awareness of what's happening tend to contribute to it all. People are fantastically good at finding reasons to work at places like OpenAI, Anthropic, Google, Meta, Palantir, X, etc. And once they're there, they similarly figure out how to justify the actions they're taking.
If people need to be retained for many years, is the solution to give a bunch of stock that vests over many years? It would be interesting if such incentives (the need to hang onto talent that has been incubated for many years) could bring about a return to one-company careers.
not just fast and loose though… I bet the number of companies who are not drowning in “AI strategies” meetings is roughly equal to number of ICE Agents who are Democrats
This would have been a great revelation for decision-makers across the economy to have had about 20 years ago. Instead, they took every opportunity to turn the job market into the Hunger Games. Congrats to the people who survived the Cornucopia; the rest of us have been bleeding out for well over a decade.
It's not that juniors are replaceable, but that hiring them is a high variance move. Few, if any, know whether a candidate is just memorizing leetcode and is going to be a dud, costing you effort before they get a PIP, or you are hiring a very talented individual that will be contributing in 2 weeks
.
With seniors, you risk less, just because the track record makes the very worst candidates unlikely.
Using claude and friends takes all the fun out of the job, so I'm not surprised engineers are not enthusiastic. It's cool for 1 month then you realize we went from solving problems and implementing algos and optimizing slow code and fixing security issues and other fun stuff, to writing prompts all day long.
Not really managers, I would put the new role more in the senior engineer / architect category. Those still have to deal with deeply technical things like design, architecture, problem decomposition, research, domain expertise, code review, collaborating with technical peers -- all of which (people) managers don't typically do.
If you ever wanted to climb the senior technical ladder, this is now the quickest way to experience it. Except instead of other people you get to work with agents which, while a very different experience, requires largely the same skills.
So yes, your job is not what it was before, but with career growth it typically was not anyway.
The use-cases for data science and other engineers are different. AI is not uniformly good at all kinds of development.
There is an issue with execs pushing it though. You have people at the top of the company with little to no idea how people work attempting to micromanage tool usage. It is as if you had a group of execs determining what IDEs people could use.
No-one is getting fired because of AI. The start of this year is the start of companies beginning to use AI. The reason layoffs are happening is because of the massive overhiring after Covid.
How long after COVID are we going to be able to keep using this excuse? This is starting to feel like the politician blaming his predecessor even though he's been in office for years. In the year 2033, Company X lays off another 10,000, just as it did each year since 2023, again blaming massive over-hiring during COVID, ten years ago.
> How long after COVID are we going to be able to keep using this excuse?
I am with you but if you look what happened after COVID it is a big line going waaaaay up. COVID was a significant event and there is no way around it, no? the OPs comment is invalid because we below the pre-COVID (by miles) but COVID should be taken into account (everyone seems to use it to further some agenda by looking at just one particular aspect of what happened post-COVID)
> It is as if you had a group of execs determining what IDEs people could use.
its worse than that; its more like determining what ide you use and also mandating how much time you spend in it, and then chewing you out at review time because you used jira and confluence too much instead of writing md files in the blessed ide of their choice
I have a similar experience. I seldom use it to test to see its current state, and it generally (85% of the time) gives wrong answers. Then I discuss this with a couple of friends:
Me: I tried $AI recently, I asked $question, it hallucinated.
Them: But it sucks at that.
Me: Then what's good at? It's useful if it helps me out of a ditch.
Them: It depends on the domain...
These guys are not evangelists or anything, but colleagues who want to reduce their workloads. If it can't help with what I need, then how can it help me at all?
At the end of the day, I don't plan to use this at daily capacity, but with all the resources poured into this, it's still underwhelming.
A friend of mine has copilot integrated with his storage appliance that all the business docs are hosted on for his firm. He says it's amazing.
My company uses Sharepoint, and can digest all of the documents I have access to on that, one drive, teams, outlooks, etc. across my tenant. Most of the time, it's pretty useless.
There must be some reason for these two disparate experiences. It's the same product offering. I couldn't tell you.
Reminds me of a bounty I received recently. Someone essentially exposed a bedrock agent that had access to the companies internal documents to the internet unauthenticated. They actually had the reports and notes for other bug bounties that had been reported to them as well.
Tell claude what you do and ask it where it can be the most helpful. It is true that the tool has to be learned, and won't help everywhere. If you are doing web dev just to make a tool, it is purely magical. I've found it to be mostly useless in making geed helm charts.
I generally use them for researching things which I was unable to find anywhere else. For example, for Gemini I have two extreme examples:
I asked for a concept in Tango music, with a long prompt explaining what I'm looking for. It brought me back a single, Spanish YouTube Video explaining it perfectly alongside its slightly wrong summary, but the video was spot on, and I got what I needed.
Then I asked for something else about a musical instrument, again with a very detailed prompt, and it gave me a very confident answer suggesting that mine is broken and needs to be serviced. After an e-mail to the maker of the said instrument, giving the same model number (and providing a serial) and asking the same question, I got a reply saying that it's supposed to that and it's perfectly fine, it turned out that Gemini hallucinated pretty wildly.
For programming I don't use AI at all. I have a habit of reading library references and writing code directly by RTFM'ing the official docs of what I'm working with. It provides more depth, and I do nail the correct usage in less time.
The opposite happened to me. I asked Gemini about a type of Vietnamese dance called "nhảy sạp" and it returned a good sounding summary along with a video it claimed to explain the dance and how it worked. The video was from the Knowledge Academy and titled, "What is SAP?"
Funny, I was supposed to be the expert in my company, but I was run over by the demo folks, while I was uselessly preaching about evaluation, safeguards, guardrails, observability.
For mine it’s worse because we have new leadership who believes in it to a far larger extent than it can deliver. Now a massive amount of our workforce is building up proofs of concepts and spitting out tons of effectively useless output to look good because of how strongly they’ve signaled it’s good for careers here to fully embrace it. It’s a massive mess and there’s nobody to clean it up, and the voices advocating for rigor or good engineering practices are being sidelined.
It’s full out mania. As someone raised in and who escaped a cult, I am having to use every tool in my very large toolbox to stay sane while I wait for this to pass and die down or make my move towards a place that still cares whether their product works.
If the majority of engineers decide to rot their brains and abandon best practices, the industry will eventually implode. Stay true to your beliefs and use the bare minimum of AI to keep your job.
We’re in what I would call the “dark ages” of tech. There will be a new renaissance led by those who used this as an opportunity to build skills and tools that are genuinely useful and ingenious.
If you keep a long-term horizon this is the perfect opportunity to work on a solo project in stealth mode. Or build professional connections with others who see things the way you do.
When people talk about one’s salary being an imperative to them understanding something, they are talking about exactly you. “This’ll all wash over and we’ll be back to the good old days that I’m used to” has never happened. Ever.
Well actually it did happen. Greco-Roman intellectual tradition was lost when Rome collapsed and institutions of knowledge with it. Islamic scholars preserved much of this knowledge during the dark ages but in the western world Christian religious dogma reigned supreme.
During the renaissance western thinkers pieced together lost information and we got the scientific revolution.
Kind of wild that you completely ignored the example I gave of exactly this happening in my original comment.
And speaking of people whose salary dictates their understanding of something, let’s talk about Sam Altman and the rest of SV currently spinning a fairytale about AI which just so happens to justify astronomical valuations for their companies.
At my company everyone’s salary and career ladder are determined by exactly how much they dive into AI and show enthusiasm for it, regardless of whether they’re using it for something useful or they’re just competing for how much money they can burn
AI isn't going away, but leadership expectations to (say) increase "efficiency" by 50% in the next 6 months through "AI" will. Eventually. After lots of fudging of numbers and general reluctance to admit that the Emperor's clothes are looking awfully translucent.
If LOC and tokenmaxxing is the future, nobody will have a job.
I use AI all day every day, I’m not a luddite, I’m someone who has seen people take the same shitty shortcuts to working systems they are now. They’re wasting tons of money and smarter competitors who can actually think clearly about the benefits and costs are gonna eat their lunch.
Early stages of any major disruptive technology will have hype due to get-rich-quick folks. Dot-com boom & bust of 2000 is similar. But the underlying technology (internet) defined our lives forever.
I don't know why people are comparing the Day-1 of one technology with the Day-1000 of another. Yes, AI is useless in many fields - NOW. But you can't imagine doing any work without in a couple years.
Like the kids used to ask - 'How did they build Google without Google?'
Now their kids will ask - 'How did they build chatGPT without chatGPT'?
ChatGPT has been around for 4 years at this point. Not very long, but I’ve heard of the ‘imagine what it’ll do in one year’ spiel quite a few times by now.
The “Internet” was a DARPA-funded research curiosity initially. It was not crammed down people’s throats like a roll of Oreos while advocates screamed that, “You like this, right‽ This is the future! You have to like this, what is wrong with you?”
Transformers were treated like any other ML technique until Sutskever decided to just go big on training it. That it can look like a compelling simulacrum, I am not arguing, but this thing left the ivory tower of research prematurely and recklessly. We are all going to pay for it.
2 things - it’s not day 1 for AI, and it’s also not dot-com (which dropped the nasdaq 80% btw). It’s the entire American economy right now. When it can’t deliver anything approaching its hype, just like all the data centers that can’t deliver on power, the profit margins that can’t deliver, and the promises of massive 500% revenue increases this fiscal year… sorry, I was raised in a cult and know what the fuck I’m seeing, sadly among a lot of otherwise intelligent people here.
I expect I’ll be using LLMs now and in the future, but the public is far more right about the companies and the people running them than the tech “insiders” here.
You replace half of a team with AI. Salary cost go immediately down, but team output can keep up for some time. You don't see the technical debt, the security issues and the prompt injection which will result in wrong invoices being sent. In six months suddenly there will be a big problem, but this quarter a lot of shareholders are happy about the cost-cutting. You may even be promoted by the time shit hits the fan, and it won't even be your problem anymore.
On the other hand there probably also is a general correction in the market after the covid hiring spree.
The reality is most of them are so divorced from reality that they think they are infallible and AI will pick up the slack because they want it to be true.
And specifically, their expectations as to what will positively impact the stock price. Shareholder value this quarter is more important than keeping the company afloat next quarter.
It can be both if for the majority of layoffs, AI is just a scapegoat to act as cover for cuts made for financial reasons or offshoring and not the actual cause.
From what I've seen many efforts to replace roles such as customer service with AI are being rolled back or downscaled due to intolerably high error rates and general incapability. While these segments won't come out unscathed I don't think the actual impact will end up being as severe as feared.
You're apparently assuming that AI related layoffs are rational, based on those making the decisions having good information about what their own organizations are achieving with AI.
I think this is far from the truth. In many companies AI has become a religion, not a new technology to be evaluated and judged. Employees are told to use AI, and report how much they are using, and all understand the consequences of giving the wrong answer. The CEO hears the tales of rampant AI use and productivity that he is demanding to hear, then pats himself on the back and initiates another layoff. Meanwhile in the trenches little if anything has actually changed.
OK, but your post reads as if you think that AI being the cause of layoffs can't be true if AI is "worthless" (less capable than they are assuming), which is false.
CEOs are laying off because of AI because they think it will save them money, but are doing so based on misinformation, largely due their own insistence that everyone uses AI, and report how much they are using - they are just hearing what they asked to hear (just like Mao hearing about impossible levels of rice production during the "Great Leap Forward"). I'm not making this up - I've seen it first hand.
You can see the proof of this - companies laying of because of what they mistakenly believe AI can do - in companies like Salesforce, forced to do an embarassing U-turn and hire people back when the reality sets in. At least Salesforce were quick to correct - most big companies are not so nimble or ready to admit their own mistakes.
We seem to have reached mania-like levels of rice-production reporting, with companies like Meta now taking AI token usage as a proxy for productivity and/or a measure of something positive, and apparently having a huge leaderboard displaying who is using the most (i.e. spending the most money!). The only guaranteed outcome of this is that they will indeed see massive use of tokens, and a massive AI bill, and then in a year or so will likely be left scratching their heads wondering why nothing much appears to have changed.
AI could be a huge net benefit, and justify large layoffs.
AI could be a huge short-term benefit, justify layoffs now, so long as you (the exec doing the laying off) don't have to worry about the long term
AI could have middling net benefit, but be a great excuse to justify layoffs now. In this scenario, the people laid off and those that remain bear the cost (one, losing their jobs; those that remain, burning out with the extra workload)
etc etc, many scenarios to consider...
can happen at the same time when businesses speculatively fire workers to replace with AI. The lack of results might bite them in the ass and the bubble might pop. Or not, but they are going long on their AI position
I totally understand where you are coming from and my personal take is LLMs are to "stuff" as a drill driver is to a screw driver. They are a tool, just a tool. ... bear with ...
I over floored several rooms in my house (UK, '20s build) with plywood before laying insulation, heating mats and laminate floor boards for the final finish. I don't have a staple gun so I screwed the boards down at roughly 600mm c/c across the floorboards and 300mm along them.
What the blazes has that got to do with LLMs?
Well, I used a nearly inappropriate method for a job and blasted through it nearly as fast as the best method! If I had used a manual screwdriver I would have been at it nearly forever and ended up with a very limp wrist. I do own an old school ratchet screwdriver and that would have speeded things up but still been slow. I did use yellow passivated screws with sharp threads and a notch to initiate biting into the wood - rather more expensive than a staple or a nail.
So I burned through my tokens (screws instead of nails/staples) faster than if I had used a pneumatic nail/staple gun.
Anyway. LLMs are tools. They can be good tools in the right hands or rip your fingers off in the wrong hands.
Running with this analogy, the two sides of the AI argument are the people who think they can fire their plumber and electrician now that they have a drill driver, and the people who know it doesn't work that way...
Quite. My larger drill driver will wrench your wrist unless you know how to set the speed/mode/etc correctly and know how to brace yourself correctly.
At the moment, I think that a LLM needs skilled hands too. Have a casual chat - that's fine but for work ... be aware.
I recently dumped a wikimedia (our knowledge base is a wiki) formatted table into a LLM (on prem) and asked it to sort the list on the first column. It lost a few rows for some reason. No problem - I know how my tools work but it was a bit odd!
Your statement is a bit contradictory. That is, the article about "the growing disconnect between AI insiders and everyone else" pretty clearly states that "everyone else" is scared about job losses and the extreme inequality they see advanced AI causing. This is in line with your second to last sentence.
But the first part of your comment is basically saying "AI insiders think the tech is super awesome and powerful, while other engineers think it doesn't stand up to the hype." Well, if the AI is indeed not as good a tech as its boosters are saying, well, this would be great news for everyone scared about job losses and widening inequality if AI turned out to be a nothing burger.
No and it has been said already elsewhere in this thread: decision makers are not entirely rational, they might fire entire departments even if the AI revolution isn’t here quite yet
That's likely because it takes an entirely different approach to make it work. Augmenting your existing flow with "sophisticated auto complete" isn't as interesting and isn't actually using the tools how they were designed to be used.
I'm not going to pass judgement either way; we'll see how it all shakes out.
I just know for me, personally, I love computers and making them do what I want and in the AI era I am somehow using them even more and doing even more.
Smart guy phoning it in now - realized a few weeks ago that he “notices” something interesting to share, but is really paraphrasing a recently released paper that found it - without giving paper credit.
Wasn't Karpathy the guy who used to work for tesla and that tried to convince everyone that you only need cameras for self-driving and that by 2025 there wouldn't be anymore cars without self-driving capabilities to sell?
Underwhelmed is the absolute correct word to use here.
Absolutely everyone raves about this but other than a few basic computer related tasks I’ve not seen compelling use cases that justify the billions being lit on fire trying to pursue it.
My cynical take is the crypto bro’s needed something to do with their useless GPU’s after the crash and found the perfect answer in LLM’s.
It’s primarily about confidence and motivations. People with high confidence at what they do are supremely unmotivated to use something like AI to solve problems they don’t have.
People with low confidence will be super excited for AI because it solves problems they weren’t even thinking about.
Executives that don’t write code are super excited about AI because hopefully it means they can continue to high low confidence people, which are plentiful and cost less.
I am sitting on the sidelines watching in disbelief. I don’t use AI and don’t plan to. I used to write JavaScript for a living and still get JavaScript job alerts from a lot of job boards. The compensation for JavaScript work is starting to shoot through the roof as employers are moving away from garbage like React and Angular. The recent jobs are becoming fewer and are more reliant upon people with tons of experience that can actually program. Clearly AI is not replacing positions for higher talent with greater than 8-12 years experience.
"I refuse to pick up the magic hammer that nails things in by me just thinking about it while holding it in my hand; nosiree, give me that old fashioned hammer so I can sit here and nail some nails into a 2x4 while the guy using the other tool is building whole slop neighborhoods. Ha, that guy is so dumb and I'm too cool because I won't ever use that hammer."
I don't get it. Proudly saying you don't plan to use better tools is not some 'cool' look or the brag you think it is. You're just making yourself less valuable and being ignorant on purpose.
Yeah, I'm sure I'm the one who doesn't get it, not the guy refusing to use the paradigm changing tools, "because".
You sound like my Grandmother who refused to even look at a computer screen. Literally the same. It's amazing it's coming from so called tech-literate people in the field.
I have heard this same logic numerous times through my career and its a bias void of evidence, a loud indication of low confidence.
People would lose their minds when they discovered I did not pray at the cult of jQuery and then later React and so forth. I didn't need them. I was more productive without them and still managed to produce applications that executed dramatically faster with substantially less code. AI tools fall into this same camp. What could they provide me that I cannot do better myself at this point in my career? That is a serious question, by the way.
AI tools might work well for you. I am not you. Affirming your bias with baseless assumptions void of evidence will not make you a better programmer. Real developers write their own original code and/or architecture plans. Real engineers measure things and live or die by those measurements.
AI is just another tool. It is not a skill and will not compensate for skills. Perhaps I will use AT later to write test automation because that is something that is very simple to validate and likewise something I really don't want to bother with.
"Real developers write their own original code and/or architecture plans."
Nice tired no true Scotsman argument. You literally sound like a stubborn boomer refusing to see the change. You sound ignorant, not wise.
I've created several multi-modal AI applications deployed in production; fully created through Codex and Claude code. SOC2 compliant and created in a month, not years.
You can be an old man, but you don't need to have an old man mentality.
You wouldn't hire someone who only knows how to use a typewriter in a world of computers. No big deal, right? "A computer is just 'another tool', why should I not be fine with my typewriter?"
Literally fucking blows my mind I'm even having this discussion on this website, with people who should know better.
Your frequent astonishment indicates you have not been writing code very long. I also suspect you also may have some combination of ADHD or ASD, which would explain both your necessary dependence on a tool for relevance as well as your hostile defensiveness. Either way, in the long run, you will have trouble sticking around because tools eventually get replaced.
If after 20 years you still have not figured out how this industry works, hope that tool evangelism will save your career, and cannot measure things you almost certainly have autism. If you are not already diagnosed I strongly recommend seeking an evaluation.
The reality of success in this industry has nothing to do with tools. Its all about KPIs (however your organization defines them), superior planning/communication skills, and leading people. The people that produce the most with the least maintenance overhead are the people most well rewarded. Simply just not getting fired is not a metric of success.
I literally won a corporate Innovation award last year at my company (that does $30B in revenue yearly) for some of the previous applications I put into production that I mentioned. Even got posted on LinkedIn where thousands of people liked it. I am also in a technical role that is not far removed from the C-Suite, reporting to a VP.
Your analysis couldn't be further from reality, except the ADHD part, lmao.
I have the industry figured out buddy, it's you who thought they did, but doesn't anymore.
Totally right! The folks who were very recently telling us we were all going to be trading NFTs in the metaverse are the clear eyed optimists not motivated by anything but rational consideration for the truth.
It seems like you get personally offended by people using their critical reasoning abilities.
I know a folk who did a PhD in the area, and work at one of those frontier labs as a researcher, and privately he is as sceptical as the most "stubborn" HN denizen you mention.
Unbounded enthusiasm for AI without any reservations is something that can only be born out of minds utterly deprived of imagination and creativity.
As a senior dev who has been using these tools to their fullest effectiveness in production environments, until AI can reduce the entropy of a codebase while still adding capability I will continue to be underwhelmed.
When you use the term "luddite" in the way you do, you reveal that you aren't aware of who the Luddites actually were. Luddites weren't anti-technology; many of them were experts at using advanced machinery. What they opposed was the poor quality output of automated factories and the use of machinery to circumvent apprenticeships and decent wages.
As for your promise of a great leap at some vague point in the future, that's such a widely-mocked AI industry trope at this point that it's a little embarrassing you went there.
The only thing that will be embarrassing is how badly your comments, and those like yours will age.
I don't know what happened to this place, but it went from actual young people sharing information on the newest things in tech, tech philosophy, interesting stuff; to now old men yelling at the clouds about the new tech.
I agree with your basic point, but it’s not just an age thing. There are plenty of older people enthusiastically using AI for software development now. Just as an example, Steve Yegge, who vibe-coded the Beads and Gas Town AI projects, is around 57. I’m a bit older than him, and I’m working with Claude, Gemini, and Codex on a daily basis, having great fun and learning tons.
What we seem to be seeing with AI is that the prospect of completely changing the way you work is threatening for a lot of people, and of course so is the prospect of losing your job. When people are faced with something threatening, a common reaction is to criticize it in every possible way - you can’t admit anything about it is good because that risks encouraging the threat. It’s not exactly rational, but it’s what people often do.
HN has never been exempt from that, it’s just that AI is a big change that brings out this instinct in many more people.
"Yes, it sucks now, but believe me it won't be for long" spiel has been hyped for several years now.
Oh, don't get me wrong, these tools are amazing. But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer [1].
So yeah, senior engineers especially use these tools daily, and keep being completely honest about their issues and shortcomings. Unlike the hype and scam artists.
[1] Oh, sorry. I meant to say skills, context engineering and management, memory, prompt engineering.
" But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer."
And even staying within the comfort of AI enthusiasm: Google wasn't exactly leading in this race. If you have this much confidence in what those presenters and engineers at Google told you, you now have some opportunities to make a lot of money.
Anyone here who is currently 'underwhelmed'; please get through all 5 levels here and then say the same thing.
This is just the beginning. I seriously can't believe this place turned into neo-boomerism ideology on tech. I honestly don't get it, just makes me think everyone here talking about being seniors and architecture and blah blah; don't actually know shit, and aren't actually good at what they do.
That is the completed instructions for the fifth level, I leave it as an exercise to the reader to actually read more and find the rest of the steps on their own.
I spent some time chatting with Google engineer who put this together, Ayo Adedeji, at UCLA's SAIRS conference.
You asked about Google and what impressed me so much, going through this exercise, while not exactly helpful for me and my work directly (I'm doing similar things but completely in the Azure ecosystem), it is definitely a great display of how agents are more than just an 'LLM' that everyone here seems to think is equivalent to AI.
It's seriously the opposite feeling of imposter syndrome at this point, I'm in my 30's, a senior data engineer myself at a F200 company; I can't believe so many of my peers are so behind and ignorant of what is going on, confident enough to makes publicly lasting comments about how 'unreliable', 'bad', 'slop'; 'AI will never this or that'.
Even SOTA models when used in agents in simple NLP tasks such as text classification still fail more times than acceptable when evaluated against a realistic evaluation dataset with sufficient example variety and with some adversarial prompts included.
Improving such uses cases is mostly an artisanal endeavor, sometimes a few-shot prompt improves things, sometimes it improves things at the expense of kind of overfitting it, sometimes structured reasoning works, sometimes it doesn't, or sometimes it works and then the latency and token explodes, etc etc....
And yet a lot of teams don't see this problem because they don't care much about evaluations, and will only find this issues in production a few months after deployment.
"AI-insiders" are trying to market their tools to you. See Anthropic's continuous lithany of "all programmers will be replaced in 6 months" while they struggle to make their TUI API wrapper consume less than 2-4 GB of RAM (they brought it down from 68 GB[1]), or have a decent uptime.
> When did Hacker news start becoming a luddite, bad takes everywhere I look, feels like everyone is '50 year old burnt out guy' that has no idea what is going on vibe?
Much to the opposite, I think healthy skepticism is a sign of maturity. The overeager embracing of hype cycles is extremely cringe.
> I just got back from a SAIRS conference at UCLA and talked directly with some of the presenters and engineers at Google.
Cringe, as I was saying.
Conferences are just mutual fart smelling, swagger, and expensing trips on company momey. I am not against it, but treating your participation in some conference as a sign of the future is very silly.
Every conference I participated always overhyped every current bullshit.
There is emotional bias and stubbornness in nearly all of your responses in this thread, the very same traits you lambasted HN broadly for in another comment. Rather than calling people, "stupid and wrong", why don't you make your case?
If you don't want to be bothered to argue your points, and this place truly chaps your ass to the degree it does, why even waste your time commenting at all when, according to you, there's a more fun place with bigger brains that-a-way, as far as you're concerned? points
I mean, it takes more energy and effort to be angry and annoyed than to just move on and leave us luddites in the dust.
It is pretty emotional seeing a place with people you respected and learned from for so long, where you could rely on for the place to find the newest and most interesting things happening in tech, where people in the know discussed the technical aspects; to now neo-luddites everywhere bashing shit they don't understand, ON A FUCKING TECH FORUM; like THE tech forum.
I feel like I'm living in some kind of bizzarro world now when I read anything AI related on HN. It's insane.
"Or: they actually understand the tech, and see its limitations. Unlike wide-eyed neophytes and zealots."
I'm sure you super qualified rando's on HN know and understand the tech and know its limitations better than the Google engineers actually making the stuff.
Real ripe coming from a guy who can't even refactor a few lines of code correctly with an AI.
If you hate everyone here so much, why did you come back today? Further, why did you come back just to spew more negative, unhelpful comments that just parrot what you've ranted about already, rather than attempt to foster the "smart" dialogue that you wax poetic about?
Yes, we know, you've said that a few times already.
You could foster that high-level dialogue you seem to value so much by trying to better articulate your view so that the plebs understand, kinda like I suggested just there. Ya know, "be the change you want to see in the world" and all that, but okay...
This place actually hates all technology after the invention of Lisp. And there's the common online incentive to dunk on things that also exists here. Hence the infamous Dropbox comment and others.
But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
I would therefore not in a million years expect it to be pro-LLM, and this is so obvious to me that I'm a bit suspicious of your motives for acting confused about it, as if it was ever any different.
> But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
It was never any of these things, and you're misremembering if you think it was. There's never been a mono-opinion held by some all-encompassing hivemind.
I'm not misremembering. You can easily find monoculturey threads about all of these things. Just because there's a small slice of counter views, doesn't mean the average HN positions on these things isn't or wasn't decidedly negative.
It's literally unbearable now. I don't know how the place that once used to be exciting and deep in the know; is now old-man-yells-at-clouds ignorant of what is happening. It's actually really sad. /g/ and /r/accelerate seem like the last bastions of actual intelligent people discussing these things.
Shocking that people who are in data science/ML are excited about data science/ML, and people in jobs not interested in that area are not interested in it.
It's like a programmer being surprised that a worker in $random_job wants to keep doing their job, and not learn how to be a programmer instead.
There's this weird unspoken assumption in a lot of these HN posts that any layoffs or lack of hiring is due to companies shirking on providing the cushy jobs they owe software engineers. Actually, they hire engineers to get stuff done. If it's true that AI is just a big 'ol scam and it doesn't even work, then I guess we'll see the companies that insist on nothing but the finest artisanily hand-typed organic code rocket to the top of the charts on app downloads, sales, revenue, and market cap.
This is basically how most engineers talk to their managers, politely implying - "can you see how this decision has a short term payoff but a long term consequence?"
Before LLMs I only worked at one place that "only hired seniors and above" and now its the most commonplace thing in the world.
Nobody owes me anything, I already have the skills I need, where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp, we need balance.
> Nobody owes me anything, I already have the skills I need, where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp, we need balance.
Seems a bit like asking where the bread will come from, if no-one is forced to bake it.
Yes, this is what hysteria about bread looks like. People have been saying a disaster of the kids not knowing how to bake is coming since the 1800's. Yet, we still have bread.
How exactly will the knowledge of creating software be lost when the claim is that an ubiquitous software creation tool is going to take over the world? Is it going to refuse to emit anything less complex than a todo app?
I've never baked bread in my life and yet, with the right motivation, I'm sure I could learn from the literature and some trial and error alone. In the hypothetical world where bread demand massively exceeds supply, we'd form a guild and incrementally improve from there. Same way we learned it in the first place. Breadmaking wasn't gifted to us by aliens.
Well, that is the point :) we don't fret about where the bread comes from too much, or talk about how we need to act now lest we never have bread again. People want bread, and the price goes up until someone is willing to make bread.
> If it's true that AI is just a big 'ol scam and it doesn't even work, then I guess we'll see the companies that insist on nothing but the finest artisanily hand-typed organic code rocket to the top of the charts on app downloads, sales, revenue, and market cap.
AI works fine to get a vibe coded BS version of the app. No doubt there. But eventually, especially once scale hits your app, it will devolve into an unholy mess of low performance and (extremely) high cost if you do not have a bunch of senior talent able and willing to clean up after the AI mess.
Unfortunately, our capitalist economy only rewards the metrics you mentioned... but by the time the house of cards collapses, either from financial issues stemming from the above or because the tech debt explodes, it's too late to turn the ship around.
And I've even heard rumors of software engineers that don't even write apps or write code that runs on the internet at all. They say some of them don't even use javascript or python! The horror.
I get it, but as a "AI expert and senior leader" myself in my 1,000 people organization (in relative terms), the disconnect I have is:
A lot of what non-believers say matches "enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises". They would then say they need 2 weeks to work on a specific project, the good old way, maybe with some light AI use along the way.
But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
So overall, I think the lack of enthusiasm is largely a skill issue. Not having the skill is fine, but not being willing to learn the skill is the real issue.
I see things changing, as "non-believers" eventually start to realize that they need to evolve or be toast. But it's slower than I imagined.
I am a strong believer and selected as power user because of AI usage metrics, but I also see perverse incentives -- a colleague was desperately searching for me on the Claude token usage leaderboard (I was part of a different group he did not have access to) -- it was clear he was actively trying to climb that leaderboard.
Meanehile our average PR loc balooned to ~2000loc -- generated with Claude, reviewed with copilot but colleagues also review it with Claude because it gives valid nitpicks that bump up your github stats, while missing glaring functional/architectural issues, overenginerring issues.
No way this doesn't blow up down the road with the massive bloat we're creating while getting high on the "good progress" we're making.
Yes, your 3 minutes prompt got merged.
So was my friends(ex-programmer now manager) non-ai generated PR that a technical TL got stuckstuck on for 2 weeks.
Different perspective? Survivor bias? High authority?
Blame your engineering culture not AI if metrics such as Github stats, number of nitpick reviews and token usage is what is used to judge one's performance.
In a sane engineering culture, actual customer-visible impact is what is measured, and AI is just a tool to improve that metric, but to improve it massively.
> But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
this is a nice anecdote but i think the real issue is the forcing and kpi-nization of llms top-down for nearly everything
there are still code-quality issues, prompting issues for long-running tasks, some things are just faster and more deterministic with normal code generators or just find-and-replace etc
people are annoyed at the force-feeding of llms/ai into everything even when its not needed
somethings can be one-shotted and some things cant, and that is fine and perfectly normal but execs don't like that because its not the new hotness
> somethings can be one-shotted and some things cant
True but my point is that people vastly underestimate what is one-shottable.
In my experience, 80% of the times an average "non-believer" SW engineer with 7 years experience says something is not one-shottable, I, with my 15 years of experience, think it is fact one-shottable. And 20% of the time, I do verify that by one-shotting it on my free time.
I believe that this has happened in some cases but am very skeptical that it is widespread and generalizeable at this point. My own experience is that software engineers thinking they can easily solve a problem in a domain they know nothing about overrate their ability to do so ~99% of the time.
Well "non-believers" don't see any gain from being faster, right? That'll just set expectations of "do a lot more for same". Fear of being "toast" will get you the loyalty you'd expect from fear.
the best way I found to deal with non-believers is to have claude run code reviews on their own work. I’ll point it to an older commit and get like 3-page markdown file :) works really, really well.
on one-shotting 3 minute prompt in 30 minutes though, software is a living organism and early gains can (and often result) in later pains. I do not use this type of argument as it relates to AI as the follow-up as the organism spreads its wings to production seldom makes its way to HN (if this 30 minute one-shot results in a huge security breach I doubt you would be back here with a follow-up, you will quietly handle it…)
You can get it to generate a 3-page markdown file for any random code, or its own code it just generated. If requested it will produce a seemingly plausible looking review with recommendations and possible issues.
How impressed someone get from that will depend on the recipient.
output, not recipient. try it on your own code. not everything on the example 3-page markdown you'll agree (much like you push back on the PR) but in significant number of occasions code changes were made based on the provided output
Recipient, as in the person who the output is intended for.
And I have seen what an AI do when it provides a code review, and it is very much like something that plausible looks like a code review. A lot of suggestion and nitpicks that at surface looks like plausible comments, but without any understanding. How much value a programmer get from that depend on the programmer. For me it reminds me of the value that teddy bears has on a support desk, or why some users are actually helped by being forced to go through layers of faq/ai suggested solutions before they are allowed to talk to a real person. Sometimes all that a person need to improve something is time to think about the code from a new perspective, and an AI code review can help the person find that time by throwing a bunch of shallow comments at them.
Unsure of this really tracks tho. How are you evaluating for the bias that they’re not merging it because you’re “their leader of 1000 people org” and not because you’re actually an engineer deep in the trenches that knows the second or third order effects of slop?
This is a genuine question btw, I see plenty of instances of this in my own org.
1. I am also on the receiving end of this. My boss often codes and vibecodes, and no one feels like they have to merge their stuff. We only merge it if it meets the high quality standard we have. And there is no drama for blocking a PR in our culture.
2. I am fairly deep in the trenches myself and I know when my PRs are high quality and when they are not. And that does not correlate with use of AI in my experience.
I've been on this ride about three or four times over decades. Every new major wave of technology takes a surprisingly long time to be adopted, despite advantages that seem obvious to the evangelists.
I had the exact same experience with, for example, rolling out fully virtualized infrastructure (VMware ESXi) when that was a new concept.
The resistance was just incredible!
"That's not secure!" was the most common push-back, despite all evidence being that VM-level isolation combined with VLANs was much better isolation than huge consolidated servers running dozens of apps.
"It's slower!" was another common complaint, pointing at the 20% overheads that were the norm at the time (before CPU hardware offload features such as nested page tables). Sure, sure, in benchmarks, but in practice putting a small VM on a big host meant that it inherited the fast network and fibre adapters and hence could burst far above the performance you'd get from a low end "pizza box" with a pair of mechanical drives in a RAID10.
I see the same kind of naive, uninformed push-back against AI. And that's from people that are at least aware of it. I regularly talk to developers that have never even heard of tools like Codex, Gemini CLI, or whatever! This just hasn't percolated through the wider industry to the level that it has in Silicon Valley.
Speaking of security, the scenarios are oddly similar. Sure, prompt injection is a thing, but modern LLMs are vastly "more secure" in a certain sense than traditional solutions.
Consider Data Loss Prevention (DLP) policy engines. Most use nothing more than simple regular expression patterns looking for things like credit card numbers, social security numbers, etc... Similarly, there are policy engines that look for swearwords, internal project code names being sent to third-parties, etc...
All of those are trivially bypassed even by accident! Simply screenshot a spreadsheet and attach the PNG. Swear at the customer in a language other than English. Put spaces in between the characters in each s w e a r word. Whatever.
None of those tricks work against a modern AI. Even if you very carefully phrase a hurtful statement while avoiding the banned word list, the AI will know that's hurtful and flag it. Even if you use an obscure language. Even if you embed it into a meme picture. It doesn't matter, it'll flag it!
This is a true step change in capability.
It'll take a while for people to be dragged into the future, kicking and screaming the whole way there.
You're not forced to use only an LLM for data loss prevention! You can combine it with regex. You can also feed the output of the regex matches to the LLM as extra "context".
Similarly, I was just flipping through the SQL Server 2025 docs on vector indexes. One of their demos was a "hybrid" search that combined exact text match with semantic vector embedding proximity match.
I think people are really underestimating how poorly today's tweens think of AI. "That looks like chatgpt" is an insult. Kids avoid things because they heard somewhere that AI might have been involved and have a sense that means it is bad or immoral or illegal or cheating in some nebulous way, and it's reinforced by their teachers telling them that using AI for homework is cheating.
I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.
> I think people are really underestimating how poorly today's tweens think of AI.
I think you might be really underestimating how poorly today's adults think of AI. Whenever I see a blog post that starts with an obvious AI hero image, when it has the "It's not X, it's Y" framing, when it has anything that smells like AI, I immediately discount what that person is saying as I assume they are unable to think for themselves.
> Whenever I see a blog post that starts with an obvious AI hero image, when it has the "It's not X, it's Y" framing, when it has anything that smells like AI
yes, n=1 (ok n=2 i guess) but noticing that is an immediate back button press for me but its getting harder and harder to avoid as search results become inundated with this stuff
Far as AI gen images they still make me nauseous due to uncanny-valley stuff. Still see a lot of non-standard number of fingers; so much content elicits a weird double-take and gut dropping feel.
The kids are smarter than most people give them credit for. They see their future being destroyed in real time, and AI is only accelerating it and largely being celebrated/promoted/used by the same people currently destroying their future. To them, there are few benefits beyond being able to cheat on their homework, and an enormous amount of downsides.
I think it's only a matter of time before we see some more serious, organized opposition to AI (and perhaps even the internet and other technologies) by these young people.
For some kids, they see their parents get themselves in a mountain of college debt, work for 50 years and struggle to afford necessities, and decide maybe trying to be a streamer/tiktokker is worth a gamble and could set them up for life instead.
Makes sense. I think it’s hard to argue against someone that uses the platform and others as an example of entrepreneurial pursuits. Not “all social media is bad” when to use different lens types.
It's the modern day "I'm going to be a hollywood actor." Every one of my kid's friends has said at some point they were going to grow up to be a famous YouTube or TikTok streamer. The vast majority are not serious, and of those who are serious, the vast majority won't make it.
You might be surprised by how many of them are
aware of the harms of social media, while acknowledging that it’s impossible not to engage with it. It’s not their fault we built the toxic slot machine world for them that we have. And besides, I’m pretty sure my boomer parents spend about as much time scrolling slop on Facebook as kids do on TikTok.
When I get a message from a co-worker that seems to have been written by an LLM, I am incredibly turned off and instantly think less of the person. It can be easy to spot: key words bolded, acknowledging that I'm right, longer and with a different tone than their typical messages, with neat bullet points.
It feels a little disrespectful. It feels a little pointless (why am I bothering talking to you if I can get the same result from the AI). I have no idea whether you've given the problem any actual thought, or if you're just copy-pasting an answer. I have no idea if you actually believe what you're telling me (or if you've even read it or understand it).
pr comments from a human that is generated by ai has got me feeling the same... like why this person even here? its totally disrespectful; i want a person to interact with not a machine with a meatsuit.
My partner was working at an event and a co-worker had prepared a poster using AI - a teenage kid at the event pointed out how the poster "has AI smudges".
You know how your parents are weirdly shitty at recognizing obvious photoshops? Kids are constantly surprised that we adults can't recognize obvious AI images.
In the 80s, 90s and 00s that's what they thought about coding.
Then when the salaries got good every pretended to have always been a nerd and really into everything nerd. With the result that they kicked all the nerds out.
If you consider what assemblers and compilers do programming, sure.
But men didn't kick them out, technology did. Von Numan famously forbid the Eniac from ever being used for assembly when you had a perfectly cheap secretary pool to do the assembly by hand.
Low creativity repetitive work requiring great attention to detail is what the early female programmers did and what was automated first.
If we ever get deterministic AI the same will happen up the chain. I'm not holding my breath for the current generation of models, or the upcoming ones I've seen in papers.
That's underselling their role. One of those ladies doing the assembling for Von Numan was Grace Hopper, who then used that expertise to develop the first compilers.
I can't recall a piece of technology in which the age distribution of the people embracing it was similar to what we're seeing with AI. In the past, this stuff has almost always been picked up by the young first and foremost, but the embrace of AI seems mostly to be coming from elder millennials through boomers (I'll admit this is anecdotal, so it's possible this is an observation of my own bubble).
Understandably, the people currently above water want to be able to sleep at night and believe things are just going to continue being acceptable from their perspective, so they may go to unknown lengths to convince themselves of this no matter how unrealistic it is. Then one day reality hits them with a layoff followed by a seemingly endless and fruitless job search.
I have noticed similar sentiments among some teenagers. It's not a universal sentiment but those who hate AIs really hate them with a passion.
In the meanwhile there is a rising tide of feel good AI content targeted at old people on Facebook. My mother has been sharing with me many "funny videos" that are very obviously AI generated. She evidently does not care, and according what I hear from others she is far from the only old person who gets sucked into "slop." I hesitate to use this word but it captures the feeling too well for me to pass it up.
I don't have data but I sense there is an inverse correlation between age and disgust towards AI generated content.
I'm guessing there's a sizable portion of the HN crowd that are millenials. Millenials who have paid the costs of the Boomer/GenX generations absolute destruction of the "American Dream" for their own benefit. They climbed the ladder and pulled it up behind them leaving millenials holding the bag.
That same set of millenials are now visiting that treatment upon Gen Z. We are building AI that will eviscerate the remaining middle class, raise electricity rates to a level where many people will not be able to power their homes, and poisoning the air and water so that portions of the world will become unliveable.
Gen Z is justified in being upset with millenials. We used to be the victims, but we've become the abusers.
[X] Tweets and instagram comments presented as "what society is thinking"
[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).
[X] Statistics being used to support the title with little to no regards to continuity: "those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period" => percentage was 52% in 2023, 50% in 2024 and 52% in 2025, seems mostly flat to me, with the real jump being in 2022-2023 with 39%.
I didn't say it was devoid of substance, the poll part is actually interesting (and worth discussing!) it's just that it actually appears *after* the sloppy tweets and "someone pretended to shoot at Sam Altman's house" screenshot as if that was somehow relevant.
Good catch on the 52→50→52 "growth." The actual Stanford report has more interesting data than TechCrunch pulled out - the gap between industry practitioners and academic researchers on safety concerns is arguably the more striking finding, but that doesn't make as good a headline as "public vs elites."
I was talking recently to someone who teaches AI-adjacent courses at a US university (not in a computer science department) and they said that enrollment in their class is lower than expected, which they think is likely due to the severity of the AI backlash among students on campus.
AI applications that would help normal people in a significant way are pretty lacking, so I'm not surprised. So much conversation about AI products is cycles of "this tech will change everything" without material backup outside of coding agents.
How much of the workforce is organising and other information dissemination or transformation?
I'm more on the skeptical side than the evangelist, but I can see how large parts of such things could theoretically be shifted away from humans. Planning someone's agenda, preparing relevant documents, arranging and coordinating things, translations (speech or text), narration, grammar checking.... AI is a whole lot of hot air when considering the "second 80%" of the work involved in any of these tasks, but that's still a lot of jobs that may make little sense to start studying these years, until you have some idea how the field will develop or if there's a giant surplus of, say, French-native Spanish language experts. At least for those for whom a given study is not a real passion and they might as well choose something else
> Planning someone's agenda, preparing relevant documents, arranging and coordinating things, translations (speech or text), narration, grammar checking
the issue is, these things "lie" subtly and not so subtly (they make up issues, rename agendas, forget questions and change meanings all the time) and for me that is a deal-breaker for a business tool that i need to rely on
Yes, for me as well, but large chunks of these tasks seem within the realm of what they can do when you break it up into small enough bits and control the prompt very tightly
Particularly machine translations are no worse than what an untrained native speaker would come up with, and much better than traditional translators (due to some level of context "understanding" - or simulation thereof, at least). At 50x human speed, the energy consumption is also lower than keeping a human alive for that time. There is no scenario in which this capability goes unused
Or grammar checking, if you catch 98% (as even some of the weaker models seem to achieve), the editor who'd otherwise do this can do more intellectually stimulating things
It's not that there's no downsides but it also seems silly to dismiss it altogether
> Particularly machine translations are no worse than what an untrained native speaker would come up with, and much better than traditional translators
Sometimes. I use Google Translate (literally the same architecture, last I heard), and when it works, great. Every single time I've tried demonstrating that it can't do Chinese by quoting the output it gives me from English-to-Chinese, someone replies to tell me that the translated text is gibberish*.
Even with an easier pair, English <-> German, sometimes I get duplicate paragraphs. And there's definitely still cases where even the context-comprehension fails, as you should be able to see from going to a random German website e.g. https://www.bahn.de/ in e.g. Chrome and translating it into English and noticing the out-of-place words like how destination is "goal", the tickets are "1st grade" and "2nd grade" instead of class.
* I'm curious if this is still true, so let's see:
I'm not sure if we're on the same page. I mean LLMs right? Not whatever Google Translate and DeepL use. The latter was better than gtrans when it launched, nowadays it's probably similar idk, and both are machine learning clearly, but the products(' quality) predates LLMs. They're not LLMs. They haven't noticeably improved since LLMs. Asking an LLM produces better output (so long as the LLM doesn't get sidetracked by the text's contents). Presumably also orders of magnitude higher energy consumption per word, even if you ignore training
I agree that Google Translate, now on par with DeepL's free product afaik (but I'm not a gtrans user so I don't know), is decent but not a full replacement for humans, and that LLMs aren't as good as human translations either (not just for attention reasons), but it's another big step forwards right?
I'm not sure what DeepL uses, but Google invented the Transformer architecture, the T in GPT, for Google Translate.
IIRC, the original difference between them was about the attention mask, which is akin to how the Mandelbrot and Julia fractals are the same formula but the variables mean different things; so I'd argue they're basically still the same thing, and you can model what an LLM does as translating a prompt into a response.
I didn't know that! I had heard they made transformers and (then-Open)AI used it in GPT, but that explains how come Google wasn't then first to market with an LLM product when the intended application was translation
> It's not that there's no downsides but it also seems silly to dismiss it altogether
definitely silly to dismiss them all together, but the issue is using it for everything where its not appropriate or unreliable; so in the context of my posting, i cant rely on it for the things i outlined, thats all
I assume that's just a manner of speaking, like a judgmental form of hallucination
I remember HN piling on me for saying something along the lines of evolution causing a property (am I stupid, do I not understand that it's not intelligently chosen) rather than some unwieldy statement about a property having a positive selection pressure. I'm also much more familiar with the English phraseology of this non-tech topic now (so I can actually say that in the few words I just used), do we even have that vocabulary for LLMs?
You make it sound as if "coding" was a distinct thing with clear boundaries in the technical world. But this critically misses the fact that coding agents dramatically lowered the barrier to controlling everything with a microchip. The only thing that exists "outside [the reach] of coding agents" is purely the analog world and that boundary will get fuzzier than it is perceived to be.
If it's fundamentals of ML, I'm surprised to hear that.
If it's "how to use ChatGPT for creative writing" then I'm not surprised. Why would someone take a class from a teacher who has had only just as much experience with these tools as their students have?
Agree… OP said “not CS” so doesn’t seem surprising. If we’re going by anecdotes, AI classes in the CS dept have risen in popularity in the past few years.
I actually feel the opposite. I don't think people from outside CS will have that much interest into the very basics of AI because there is usually a huge gap between "this is how back propagation works" to any AI model that is remotely useful. And if you are interested in the fundamentals themselves you would probably be majoring CS anyway.
A course on how to use existing AI tools will be pointless, but if there is anything I know about college students is they love taking easy courses for easy credits.
Students don't enroll in a class for various reasons, but most likely because it's useless (or at least people perceive it as useless). At top universities, even notoriously challenging courses have a decent class size.
The biggest visible AI impact, for me, is vibe coding. For that, I am convinced that the hype will collapse and will throw back the most enthusiast companies by years.
On the downside we have untrustworthy, doom or glory praising CEOs, companies slashing jobs, AI companies going into military business, hacks, spam, psychosis, general anxiety and uncertainty.
Even if you don't believe the hype and know that AI is just statistics, there is nothing to be positive about. I can't blame anyone to dismiss it. Maybe it's even the best thing that can happen, big tech won't take a sane route without civic supervision and calibration.
From what I know there was progress in AI cancer detection before the hype. I consider the big tech advancements is a side show for them. I may be wrong.
I heard nothing about the other stories. AI can code and write generic texts, can pull off a lot of knowledge. But the frontier models are general purpose idiots and any interesting specialization/innovation has probably nothing to do with them.
a person can have full faith in the potential value of ai science and simultaneously have zero faith in the current crop of business stewards of that science.
no one is questioning the underlying model mathematics, they are questioning deceptive & reckless stewards.
I think most people oustide the area do not care and do not know about who's on top, and the negative perception is much more related to how the tech will enable users to misuse it (replacing phone lines/support, AI art, things losing quality, etc) than about the companies themselves.
Yes I believe we're quickly approaching crypto territory, where distributed ledgers certainly have their valuable use cases, but the overwhelming _mindshare_ is active scamming and/or monkey jpegs.
There needs to be a concerted focus on real value for end users and less "yeah the terminator will take your job and raise your kids in your absence"
I think there is a lot of truth to what you say, particularly when it comes to caring rather than parroting; however as part of my personal and civil life I interact with a lot of non-tech people in non-tech capacities, and a surprising number of them raise unprompted complaints about people like Sam Altman and Elon Musk. Musk I understand everyone knowing about; between Tesla, SpaceX, the Thai boys football team, a very public inclination to raise his hand, and a position in the US government he is meaningfully famous. However how Sam Altman has managed to get his name out there in the wrong way very quickly to a bunch of Brits I don't know.
AI continues to be a stupidly vague term, and the example I keep going back to is present in this article
Meaningful advances in medical diagnosis are not coming from chatbot companies. Some are coming from machine learning methods. Perhaps measuring public sentiment about such a vagary is not a very productive way to quantify anything
That said, I continue to also be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world and what they do with that power, which is - as many in the comments already point out - is what the vast majority of people are actually mad about, and right to be
> Right now, as I'm writing this comment, AI = LLMs and image generation. That's it. It's as simple as that
I think agentic harnesses add a lot to LLMs, even if many are just simple loops. They are a separate thing from LLMs, are they not?
I get the feeling that even if we stopped shipping new models today, new far more useful products would be getting shipped for years, just with harness improvements. Or, am I way off base here?
Okay, then why is one of the questions these surveys ask folks about medical diagnosis? Do they mean to imply that advances in this field come from transformer-based chatbots and image generation? Because that framing is used in the "clear benefits of AI" section of every damn article about public opinion and controversy surrounding "AI". If you're right about the public perception of the term, this implies that people who write these articles - "journalists" and tech PR people and surveyors alike - are either ignorant of this general usage or deliberately being deceptive
I think it's not that difficult to see why a technology that will likely trigger widespread unemployment during a cost of living crisis, an arms race with China, along with all the alignment concerns, might not be hugely popular with the public.
Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.
Pretty simple: The centaur of big-tech/government will pay people not to eat them. (i.e. UBI)
The incentives are, how you say, aligned.
The deeper issue I see is the psychological crisis for a species who believes it doesn't deserve to live if it isn't performing economically valuable activity, entering a world where it is unprofitable for it to be employed. (If I were the AI, I'd come up with some kind of fake jobs to keep the humans sane.)
UBI is just a massive extension of the welfare state. Governments can’t afford the current welfare spending, so where is the money going to come from? What do you think is going to happen to the markets when a large amount of the middle classes get laid off and can’t afford to pay their mortgages? What do you think is going to happen to the tech companies built on advertising to consumers when no-one has disposable income?
This assumes costs won't drop. I'm not an economist but the theory I hear is that there will be massive cost savings at every single point in the supply chain. So the same way your money is now amplified by AI in code, eventually with robotics that is the case in every field.
This sounds a lot like UBI is an replacement for salary for many jobs.
UBI funding doesn't come from thin air, every job has to pay for itself, even if it's just UBI. Mixed costing wont hold up because every market, every company, every worker acts on it's own. So companies must pay extra plus UBI, which will only lower prices if the overall salary gets lower at the end of the day.
In my world UBI should be an psychological tool that empowers people. The way UBI is usually discussed, it's a magical solution to a very hard, incomprehensible problem and the simplicity of it just throws 70% of humanity under the bus. It's literally the same we have now, the only difference is that now everyone can claim that everything is fair because of UBI.
The current group of oligarchs pretty clearly disagrees with your perspective on their incentives. The big tech era has made people like Elon and Bezos some of the richest people in history and they have used their power for negative wealth redistribution. They give essentially none of their money away to the masses and instead use their power to weaken existing social programs and wealth distribution systems. I can't see those people suddenly doing a complete 180 as they amass even more wealth and power.
Agreed, this article seems to be dancing around the point: WHY are the Gen Z hating AI? We have a political ruling class that is all too willing to throw everyone under the bus if they aren't living up to some expectation, and the political class is being driven by an economic ruling class that largely seems to have the same opinion.
Gen Z would likely have a very different opinion if their basic living necessities were available to them.
> a realistic economic scenario for how we're going to transition into our utopian abundant future
One aspect almost certainly has to be data centers being run as utilities. That forces transparency, resists monopolization and gives public commissions a say in e.g. expansion.
My wife has a very serious health issue, that has caused more suffering then words could describe. o1-preview was the first ai that actually proved useful. From there on, each improvement on ai caused an incremental improvement in her situation. Even recently we were able to pinpoint exactly what was causing her flare, and solve the situation the same day, just by prompting a claude opus conversation where i’ve shared all her health notes. But if i weren’t a data freak and haven’t been collecting data about her issues (what she does/takes and how she feels) for so long i dont think we would had been able to get this far. So i think ai appeals to people with problems that can be solved by finding patterns in data. People that say ai makes mistakes don’t understand that the power is in finding patterns, not in finding THE right answer. You need to prompt from that prespective
It is worth pointing out that we got here despite all of the “alignment” research and safetyism surrounding the models. As it turns out, the models don’t wake up and start destroying things. We knew this all along, but every time a new article came along and anthropomorphized and exaggerated another experiment it fed the clickbait machine.
The fundamental alignment issue is aligning the companies themselves with society, not the models with the companies. Widespread unemployment is not aligned with society, but it is aligned with Anthropic and OpenAI if it makes them rich.
Therefore the only “harms” the companies will take seriously are those which also harm the company. For example reputational harms from enabling scams aren’t allowed.
Perhaps all of this isn’t fair, since companies actively subverted safety research for profitability. But then I would go back to my earlier point of over-indexing on unintended behaviors and under-indexing on intended ones.
It is just obnoxious the gap between thought leaders and everyone else.
I was at a panel last week. The most pro-AI person was an account executive from a big fintech company.
EVERYONE else - a data scientist that works in AI, regulatory compliance, cybersec, and marketing, took the position of "hey this is great and will change things, but let's pump the brakes... a lot."
Random people cure cancer for their dog, every business can vibe code an app to make their operations more efficient, anyone can launch a business with 10% of the effort it used to take.
The AI companies are only capturing like 5% of the value produced with this tech right now.
In case you're wondering who they mean by "AI experts", I checked the Pew poll:
> Note: “AI experts” refer to individuals whose work or research relates to AI. The AI experts surveyed are those who were authors or presenters at an AI-related conference in 2023 or 2024 and live in the U.S. Expert views are only representative of those who responded.
I don't know how many times I've seen some Google AI summary or ChatGPT with references that, when I checked, did not say what what the AI summary said. If a high school student falsified references in a paper like this, they would get a bad or failing grade. This is bad, not acceptable, the teacher would say.
But we have been sold to use these constantly falsified AI summaries as the go-to source of "truth" by all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.
Always has been since the ZIRP era. The ‘make something people want’ phrase was coined by a famous Silicon Valley investor. I heard he runs a popular forum.
Agreed. As a kid it felt there was so much energy to make things better, to fight the system. So depressing growing up and seeing so many peers and idols becoming the same inward-looking grey old farts they used to mock.
There is certainly some logic behind the old joke about young people with no heart and old people with no brain. It's natural to become a bit more conservative as you age. Though I would clarify that I think it is natural to become more of a normal conservative; the current conservative party in the US is ... not.
I'm not seeing that. Trump support in 2024 was pretty strong across the board. The born-in-1960s edged out the other decades, but it was not by a wide margin (and I consider GenX more of a 1970s phenomenon than 1960s anyway).
If you want to pick a generation to complain about, look how hard the younger folks swung in favor of Trump in 2020 and then even more in 2024.
I work with LLMs extensively and daily and they are very useful. BUT dear god, absolutely nothing about them is intelligent.
If you work at the edge of context you know what I mean. Even within context, if the system was truly intelligent, the way that Euclid was intelligent, why do I need /superpowers and 50 cycles to get a certain implementation right?
Why is the AI not one-shotting obscure but simple business logic cases with optimal code? Whoops pattern never seen before! There is no thought to it, zero. The LLM is just shotgunning token prediction and context management until something sticks. The amount of complexity you get out of language is certainly fascinating and surprising at times but it's not intelligence - maybe part of it?
Sell it as skills or whatever, but all you do every day is fancy ways of context management to guardrail the token predictor algorithm into predicting the tokens that you want.
I think it's pretty clear that the problems with AI are:
1. Overhyped. Try writing a blog post that doesn't sound like it. Everyone is sick of reading it now.
2. Affecting the wrong people. It used to be the rich got richer and the poor got poorer. But now a lot of the middle class will get poorer
3. Severely damages the work hard way out. Competition will become brutal if there's almost no barrier to entry. This will drive down profit, affect hiring and will become a conveyor belt of people trying to win the business lottery. This will make moats even more essential.
4. The obvious theft of creative works which destroys dreams and livelihoods.
No wonder the younger generation are against it. Those of us in the middle are still just hoping at least we can get through somehow. At least we have hope.
Automation can free humanity from toil, but automation in the hand of billionaires that does the work of white collar, educated people in a period of economic and cultural turmoil, with no plan to employ them all than hoping UBI descends from heaven unto the world, is the recipe for societal disaster on a massive scale.
People are anti AI for obvious and valid reasons, but I think we should focus on where the profit goes and not on hating the technology itself.
Of course, if people are fired and only capital owners / AI experts get to earn anything then this is wrong and a revolution is obviously needed and unavoidable.
But for me, the best outcome would be if it was AI that did all the jobs so people could focus on doing what they want, not that we'd go back to pre-AI era..
Initially however we need to balance between full wealth redistribution and keeping the incentive to develop AI further.
Of course by AI I mean really useful AI, the real part, not the marketing part.
Been saying this for a bit but the things I’ve seen associated with AI seem to be the things that it’s pretty mid at. Coding, automated actions etc. I wholeheartedly believe adoption and perception would be better if the things it was amazing at were pushed more.
Take log review for example. Whether it’s admin or security LLMs are incredible at reading awfully formatted logs and even using those to pull meaning from other logs as well. Like turn an hour long log review into a 10 minute log review type thing.
My experience has been that the disconnect is between the Bay Area and everywhere else. The engineers at my company are split 50% in the Bay Area and 50% elsewhere. The engineers in the Bay treat it as a borderline religion. They evangelize it, and do not allow any form of criticism. It reminds me of the hippie movement: idealistic and not grounded in reality.
They’ve gotta be feigning it right? I just don’t understand how you could be so out of touch with what happens when wealth becomes this concentrated. This isn’t the first go around at this.
> The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31%.
It seems US citizens are really against the current administration, just using the fact that AI investment is intrinsecally connected to it to voice their opposition.
> Country-level expectations follow similar patterns to the earlier sentiment trends.
Nigeria, Japan, Mexico, the United Arab Emirates, South Korea, and India all expected AI to create more jobs than it eliminates, with shares above 60%. The United States and Canada sat at the opposite end, where 67% and 68% of respondents expected AI to eliminate jobs and disrupt industries.
Globally, the disconnect is not growing. It's really just an U.S. problem (spilling to neighbouring Canada too).
So, no luddites in sight, again. It's just a public perception over a polemic topic being leveraged for ideological reasons sinking AI on US only.
I think that identifies an issue that is going to cause a real problem for the US in the future. The society is deeply politicised and polarised to the extent that essentially inanimate objects are regarded as having deep political and social significance. When there is political change, it is going to sweep back in the other direction.
It also seems like people on all sides within the AI debate have been fanning those flames thinking is will work in the short-term...and it won't. Big tech played that game in many countries in the early 2010s and it didn't end well.
It must be noted that the U.S. does allow inanimate object makers to fund politicians and such practices are widespread.
If all is well, then it's all good: no need to blame anyone, campaings get funded, etc. If one major crisis occours though, the country self-immolates by design.
Corporate contributions to Federal politicians and candidates are illegal in the US.
The New York Times is allowed to spend money like anyone else praising or slagging politicians, but that’s the First Amendment, not funding candidates.
> Corporate contributions to Federal politicians and candidates are illegal in the US.
And that's why the whole system is divided into two parties that both, each, funnel all their support to the presidential campaign (and then to taking over seats to guarantee more lobbying).
This whole thing would fall apart without lobbying.
The lack of federal permitting standards for AI data centers is really going to bite the industry in the ass. We also probably need something akin to the WARN Act for AI-related layoffs. (Possibly with multi-year benefits for large companies.)
This AI rollout has been fundamentally rushed and fucked from the very beginning and I think the people who are responsible for doing it this way have done more irreparable damage to society than any single group of humans in the entire history of the species, and I mean it.
It’s always only ever about how the new model is faster, better, smarter. Or how the tech will be bringing ruin to the job market and someone should probably do something about that some time soon. Zero efforts to create any sort of educational content - how it even works, how to vet its output, how to have an eye for confabulation, how to use it as thinking enhancement rather than replacement, to keep in mind that it’s trained to please and will literally generate anything to cause users to click the thumbs up button. Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
And then everyone’s surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that’s trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it’s more than it is! Boy, who’d’ve thunk; isn’t the world complex?!
This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.
> This AI rollout has been fundamentally rushed and fucked from the very beginning
fake it till you make it has been modus operandi for tech for almost as long as i've been alive... i feel like this is the apotheosis of this kind of thinking...
> Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
"use at your own risk" and "no guarantees warranted or expressed" is basically in every single eula from tech as well... its not a new trend sadly...
Giant leaps in innovation almost always have a reaction like this.
It's new, people fear it. Sometimes justified, usually not.
People greatly feared the car because of the number of horse-related jobs it would displace.
President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much they refused to operate light switches to avoid being shocked. They had staff turn lights on/off for them.
Looking back at these we might laugh.
We're largely in the same boat now.
It's possible AI will destroy us all, but judging from history, the irrational reactions to something new isn't exactly unprecedented.
Many innovations are also on the refuse pile of history. Indoor gas lighting[1] is one. People were quite justifiably skeptical of electricity, when its relatively short-lived predecessor frequently killed people in explosions, carbon monoxide poisoning, etc.
> when its relatively short-lived predecessor frequently killed people
If only it were this obvious when the polluted air isn't your home but the entire planet, killing not your grandma but taking a few healthy years of life from everyone simultaneously. Maybe people would feel like we need to reverse priorities rather than go full steam ahead on newly created energy demand and see about cleaning it up later
Every invention is touted as the next electricity, or the next internet (crypto scams anyone?)
Meanwhile not every invention is. Electricity and internet are electricity and internet, and very few inventions come even close to that. Meanwhile LLMs have had arguably a net negative effect on the world at large.
Is it irrational to wonder how large swathes of the population will earn a living if their employable skills vanish in a couple of years, with little prospect for retraining into something else that AI hasn't replaced? Is it irrational to wonder what effect an influx of the AI-replaced will have on remaining AI-free fields? Is it irrational to wonder about the psychological impact of work where one simply operates the AI instead of thinking, creating, growing? Is it irrational to wonder if wealth inequality will spiral when these essentially-unobtainable resources are used by a select few to enact the above scenarios?
I can only assume you have easy answers for all of these questions given your casual dismissal of such concerns, likening them to being scared of a light switch.
I don't think the disconnect is very surprising to the "insiders".
Your Dario's and Sam's know exactly what they are doing. They know it's going to cause a lot of job displacement, even if the technology isn't perfect. They are trying to get the C-suite elite hyped up about it, and the hyperscalers are along for the ride as well. There's so much money to be made.
They could not care less about what joe schmoe on the street thinks about it.
Well we can easily see that the "abundance" people are wrong(for example everyone can't have a penthouse apartment overlooking Central Park, no matter how capable the robots become).
An alternative possibility that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.
A silicon savior to finally free capital from the dependence on labor with all its pesky demands like sick leave or a living wage.
You can see this in the literal deification going on in VC circles. AGI is the capitalist version of the Second Coming, God coming down to earth to redeem them by finally solving the contradictions in their world view.
Unfortunately for them and fortunately for the rest of us, it's not all they hope it to be.
My own anecdotal experience is yes, there is a real visceral hatred of AI among Gen-Z. You have to look at it through a lens where they already feel like there's been a massive amount of intergenerational theft against them - particularly with the housing market putting owning a home out of reach, along with the evaporation of the concept of a stable career. Now they are going through education learning skills that they are incessantly hearing will have no purpose and there will not be jobs for them.
It's hard not to see that they have a point. If AI is so great and going to save so much money - how about starting by paying some of that forward? Suddenly when you ask the billionaires or AI tech elite to share any of the wealth they are so confident they will generate, everyone backs away fast and starts to behave like it is all a speculative venture. So which one is it?
Singapore has a modular approach for regulation where they regulate sector by sector, e. g. financial, medical, educational etc. In the financial sector, for example, the board of the institution is responsible for risk assessment as well as implementation. I couldn't find out whether they are liable in case of damage, but I would assume so. They update their regulations frequently, sector by sector.
What the tech elite fail to understand is that we are at historic levels of wealth and income inequality. Access to healthcare is determined by one’s employment which makes what I’m about to explain a matter of life and death.
It doesn’t matter if you think it’s all going to work out and AI will bring an unprecedented era of abundance. That is not the current state.
Now what do you think happens when we dramatically expand productivity with AI? Well, we’re already seeing unprecedented layoffs in tech. And it’s easy to draw the conclusion that unless something structural changes all of the productivity gains from AI will go to investors not workers. Leaving said workers without access to healthcare or housing.
And of course let’s not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low income / unemployed population. This isn’t fucking rocket science guys.
Regardless, I think we are going to see an acceleration of AI research.
I just wish my wife is more serious about camping and learning survival skills. I think Shit is going to hit the fan in the next 5-10 years but she thinks that’s crazy. Oh well maybe I am crazy.
Thinking about AI-induced(or perceived)-layoffs that triggers another depression which then triggers riots in the city, or something like a future war triggering oil going up crazily which in turn triggering the shortage of fertilizer and every other oil products, which further triggers China to put a stop on exporting some key chemical products, which then triggers more shortages and then what not, I think it’s a perfect sane possibility to live in the wilderness for at least a couple of weeks.
You might have better luck in suburbia, growing vegetables in your yard, trading with neighbors, and taking turns patrolling at night than trying to rough it in the wild.
Haven't we learned anything from The Walking Dead?
One of the most hilarious AI-vangelical posts I've seen recently is from Steve Yegg through Simon Willison [0]....
> The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too... [0]
Ummmm... Steve. You think Google might be able to figure out a super huge awesome new thing from 1 out of 5 of their employees. Or, given this is a consistent curve across the industry (even at Google)... Maybe AI is only about a fifth as cool and helpful as you and the enthusiasts think it is?
The tone deafness of the tech community is so unbearable. Either too on the spectrum, too ambitious (the world is fine cause I’m getting mine), or too isolated from non-tech people, to realise most people despise what they’re creating.
There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.
In 2022 the world was open arms, welcoming AI advancements.
However, since 2022, OpenAI and all of its original founding researchers, had their dramatic fallout, and began screaming in public saying crazy people things like "the end is coming."
Why did they insist on force launching ChatGPT? Google at the time refused to launch their own version (it was their own research that gave birth to LLMs) based chat because they knew all of the negative outcomes and unreliability of it all was just a poor product experience.
Instead of launch quietly like DALL-E and keep it fun and experiemental, nope, they threw it up online and moved full-steam ahead.
"THE END IS COMING" Sam Altman said. "AI WILL TAKE YOUR JOBS WITHIN 5 YEARS" Dario said. "AGI IS ALMOST HERE" Elon Musk said.
The disconnect is because these specific men, making those specific bold crazy person claims, with zealous cult following employees (including many of us here in this forum), kept marching ahead. Not only that, no one asked the rest of the world if they even wanted this technology EVERYWHERE.
This technology could have been so cool if it were given the breathing room to find usecases for it. Natural Language programming has been tried for a half a century, and it finally arrives.
Yet, it's so tainted by all the crazy person speak, and doomsday messaging, it's also thrown out there in such a haphazard way that have burned so many bridges, this technology is truely toxic. The fact that Gen-A and Gen-Z now have to waste brain power speculating if something is AI generated, is such a waste, but here we are. Welcome to the shit storm that was entirely made by those men.
I have seen this shift myself. A year ago everyone was super excited by AI. Now, if you exit the tech ecosystem, most people have become decidedly “meh” about the tech.
“Is that some nonsense ChatGPT told you?” Has turned into an almost cynical mocking in response to someone commenting about an issue.
The hype seems to have run its course. I’m a fan and use it constantly, but it’s also clear there’s serious storm clouds and headwinds on the horizon.
Paraphrasing the classic, it's not AI that people are unhappy with, it's their life around AI. The world generally appears to have become a harsher and more dangerous place - even though it hasn't. But people and especially tabloid press like finding scapegoats and participating in mass hysteria. The anti-AI hysteria is going to go away soon while AI isn't. It's just another tool, like cars or factories. Granted, it brings some danger, but at the same time it brings overwhelmingly more good.
This reads like such a cope. The only people who are hysterical about AI are the people pushing it, pushing the investments, pushing the AGI risk, pushing the marketing and promising to push workers out of their jobs. Listen to Sam or Dario for 10 minutes and tell me they’re not hysterical themselves. Sam compares himself with Oppenheimer, making direct nuclear weapon analogies, and warns of the dangers of what he is producing, yet the people who are concerned about this are hysterical?
You are in a massive bubble my colleague, and I hope you have held some small doubts in your mind so when it pops you will have something to hold onto.
The minor benefits of vibecoding unusable prototypes or lazy cretins "writing" blogs with AI can't quite compare to the benefits of cars and factories, don't you think?
If "AI" was just free local and open models running on consumer hardware, fewer people would have an issue with it. Which highlights that the issue is with the hyper scalers, the rhetoric, the corporations, the marketing, etc etc.
We are ever so close to nearing the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.
Fewer, sure, but maybe less than you suggest. Plenty of harms are just as easy under a regime of open models only. Job losses, spamming, scraping the internet, data centers, scams, hacking etc are all possible with open weight models now.
Nah, the issue is more who controls access to these tools. People (rightfully) don't like billionaires or the elite ruling class very much. Without all the hype and investments it wouldn't be seen as such a big deal - just a neat technology.
Free open models are still capable of flooding art communities with slop images, which is worth sympathy, and is not included in your "Which highlights that the issue is with the hyper scalers, the rhetoric, the corporations, the marketing, etc etc".
Without the hoards of grifters who latched onto the AI bubble there would be less slop, and the community would find a way to deal with the bad slop, and would be far more accepting of the good slop.
This has mirrored what I've seen in my company. People in the data science/ML part of the company are super excited about AI and are always giving presentations on it and evangelizing it. Most engineers in other areas, though, are generally underwhelmed every time they try using it. It's being heavily pushed by AI "experts" and senior leaders, but the enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises that the "experts" keep making. Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle. You can only fool people for so long.
> Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle.
According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again. What data source are you looking at?
[1] https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE
Flat at 60% of pre-covid hiring while the number of graduates continue to increase and there's still a backlog of people who were laid off. That's not a particularly optimism inducing hiring market.
Do not with a straight face act like pre-COVID hiring levels were a Good Thing. They weren’t. They were a symptom of a broken economy that you personally happened to pretty directly benefit from.
I think it's much better for society for companies to overhire than underhire, especially when they can easily afford it.
This is an idealistic view and makes hiring seem like charity.
There will always be steep corrections when they overhire driven by economic cycles or otherwise (and we're living through an otherwise).
Thing is, the companies doing these layoffs rarely actually end up losing money from overhiring. They’re still profitable. Just not profitable enough for the people on top.
That’s a bit perverse. In democracies, corporations ultimately exist to serve society, not shareholders.
The plutocracy is forgetting that a working and productive populace - with fair wages and representation - is their end of the deal for disproportionally benefitting from the fruits of labor from others; and directly prevents violence against the status quo. See: The top articles in the last 3 days.
Sure, but all they have to do is not hold up their end of the bargain. Who enforces that? These are just norms from 60 years ago that the rich decided they no longer have to follow.
They’ve started treating incorporation like a modern day papal indulgence, something that absolves whatever they do in the name of profit. It doesn’t. Limited liability buys you forgiveness in court but it doesn’t buy you forgiveness in the court of public opinion. Doing harm for a company is still doing harm.
I think you are correct in asserting the mercy-disciple of market forces.
I also think that counter points on the inhumanity of firms, misses that economies are an objective way to structure incentives to achieve subjective ends.
If you want more money to travel to other parts of the pyramid, or you want to disincentivize certain behavior, then economic incentives can be set up to achieve those goals.
Expecting firms to do charity is pointless. Expecting firms to optimize under constraints is not.
At societal scale hiring people is self-interest, not charity. Otherwise you'll get to exactly where the US is heading now: large parts of the consumer market are mostly dead because people have no discretionary spending power left, and the only way to make money as a business is to become a monopolist.
That's the problem, they can't afford it
They can afford it. They just want to make even more profit.
There have been a lot of headlines the past couple years about companies stating they are doing layoffs or slowing hiring because of AI. I would bet the average adult pays way more attention to news headlines than FRED reports.
I also don't see why everyone would dismiss the statements of large company CEOs about why they are making hiring/firing decisions, regardless of what some statistics say.
Because the messaging of the CEOs is intentional, to both do 'damage control' and influence stock price / valuation. It's not neutral messaging.
Yet, that is what the markets are operating on.
Dismissing their words, just brings us back to the issue of what is really going on.
And lest it is forgotten - AI is a huge part of the US economy at this point. It is highly dependent on firms spending tokens.
Saying it’s just CEO market speak means we have an AI bubble that is more worrisome.
The companies doing the layoffs are themselves stating AI as a reason; that’s the news people are responding to. The parent didn’t claim that it’s based on reality, but it informs public opinion.
Quite a convenient excuse, isn't it? I hope no one figures out that AI is still just kinda meh.
“I’ll take a CEO’s very calculated word for something if it supports my existing worldview” is intellectually dishonest.
Whether or not the CEOs' statements are true, they affect public opinion.
You have CEOs claiming that AI is driving layoffs alongside CEOs of Anthropic and OpenAI talking about the end of white collar work. All this is then amplified by tech journalists like Casey Newton and Kevin Roose. The biggest public proponents of AI keep telling people that it will take their jobs.
What comes after the end of jobs? Who knows. Sam Altman occasionlly making vague statements about curing cancer. There are vague hand-waving notions of a Star Trek utopia.
But to be honest it feels more like a Cyberpunk future, where the Altmans and Musks get to live cancer-free and the rest of us eek out an existence without jobs or any prospect for a better life. Or maybe it looks more like Star Trek, but we're all red shirts.
Can you blame people for hating this?
Anything Musk or Altman say is just about raising money. Nothing they say can be taken at face value. There’s a funny interview with Mark Anderseen, where he talks about how he never looks backwards and doesn’t have any sense of introspection and then gets into a rambling and completely wrong history lesson. That’s what these guys do.
The better question to ask is what happens after the end of OpenAI/Tesla/etc? AI may take your job away, but not because of robots replicating your labor, just good old-fashioned economic collapse.
Blame then then. Simple as that. Lying to "just raise money" is one of the most harmful ways of lying. It distorts the whole economy.
> There’s a funny interview with Mark Anderseen, where he talks about how he never looks backwards and doesn’t have any sense of introspection and then gets into a rambling and completely wrong history lesson. That’s what these guys do.
Yes, we know they are psychopaths and assholes. The blame is on them.
>According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again.
None of this contradicts OP's claim, because at least anecdotally, juniors/interns are getting disproportionately squeezed by AI. Why hire an intern to write random scripts/tests for you, when claude code does the same thing? Therefore overall job posting could be flat or slightly rising, but that's only because everyone is rushing to hire senior/principals staff to wrangle all the AI agents, offsetting the junior losses.
I thought the main value of juniors was that you grow them into seniors, not really the random scripts they write?
That is the value of other companies doing that and you going to poach those new seniors. With the money you saved not training those juniors you can offer better salaries and still have higher profits.
Their main value is in being cheap before they realize that they're underpaid and hop jobs.
They tend to catch on quicker these days, making companies more reluctant to hire them. It has little to do with AI.
I want to agree with you but when was this ever true?
It was true when I started to work in this industry ~8 years ago. But of course YMMV especially depending on country and company
We haven't hired for about 6 months, but the value of a junior is that they eventually become not junior, and if you value them, you pay them what they're worth and they stay.
My guess is that AI can now assist juniors just as it can assist seniors, and they'll become competent in the correct skillset needed for the future, just as everyone before them has.
They are increasing, but the level is still lower than it's been since Oct 2020. In my experience at two different companies since 2020, hiring more or less stopped sometime in 2022 to early 2023. In early 2025, some hiring started again but it's still a very low rate compared to pre-COVID, particularly for new college grads. While I don't believe that AI has actually taken any significant number of jobs in the software field, I do think it's being used as a convenient excuse by executives to lay people off. Regardless of the actual numbers though, the general perception in tech is "lots of layoffs are happening with not so much hiring" and "AI has something to do with it (either directly or as an excuse)."
"Opening" doesn't mean anything. An actual job where someone is working and is being paid a salary means something.
Separately, I'm curious how that URL (IHLIDXUSTPSOFTDEVE) is encoded.
IHL (Indeed Hiring Lab) + IDX (Index) + US (country) + TP (topic) + SOFTDEVE (software development)
[dead]
Software job openings are mostly bullshit. Companies post ghost jobs en masse, while refusing to hire people. You can ask anyone that's had to look for a job recently and see how bad the market is.
Is that data useful at all? Indeed postings are a poor proxy for how many people actually get hired. One of the major problems we have is that employment statistics are largely just estimates, and don’t reflect reality on the ground. Factor in the Trump admin firing most of the BLS and other agencies for not giving him the numbers he wants, and there really is no reliable data.
Complete crap.
[dead]
I feel like the junior problem contributes more heavily than people might think. The people on top see juniors as replaceable since they view them as cheap menial labor, whereas most seniors at least acknowledge the human element as part of the benefit
Today's juniors are tomorrow's seniors, or more importantly today's seniors' yesterday.
They do the dirty, repetitive work, learn the systems inside out, take note of the flaws, and fix them if they are motivated and the system/process allows.
Thinking them as replaceable, worthless gears is allowing your organization rot from inside. I can't believe people can't see it.
Plenty of people see it - but, to a hiring team, a junior is an extremely risky investment. They demand a high cost relative to when they can start contributing actual value, may not work out, or may hop ship the moment they become competent. It is rational for a business to want to eliminate this risk. It's possible that everyone is acting rationally here, knowing it will lead to a result that is not favorable down the line - because the immediate benefit is too great to consider the latter.
In other words the gamble of hiring expensive juniors with shiny degrees is greater to them than the gamble of not having competent seniors a few years down the line. And that risk may be overblown - people are still hiring some juniors, it's not like it has stopped entirely - so future seniors will likely just be worth more than they are currently. To some, that may be worth the risk, especially if you believe AI will continue to get stronger.
I am not saying I agree with this decision making, more pointing out the thought process. We have had to have similar discussions where I am but are still hiring juniors, FYI. That's basically all we're hiring right now, actually, because the market for strong juniors is very good right now.
It's not an economic decision, it's a cultural one. Are you investing to build something useful and sustainable? Or are you exploiting for a profitable quarter?
I read someone compare the mindset to that of a drug-dealer. In any given neighborhood, a handful of people get very wealthy, at the expense of the stability and potential of everyone else. Our elite are drug-dealers - literally, in some cases. And conditions are deteriorating about how you'd expect.
Besides the “its not x, its y” llm smell here, no, comments like this are also part of the hype, just the other side of it. The fact that LLM tooling can replace a lot of tedium typically set aside for junior’s is hardly disputable at this point.
Okay. And you could also still hire the juniors and have them oversee the LLMs, interrogate them about how well they understand the principles and technical details of what they're having the LLM do, correct them when they're wrong, try to get them to explore other approaches or extend the rote approach or synergize with some other task, etc. You know, training. Like companies used to do (or so I hear, such initiatives having been long gone by the time I hit the workforce).
The fact that you won't isn't a productivity, bottom-line decision, as we've already established that the business is trading efficiency now for incompetence later; the financials are a wash, at best. It's a cultural decision to throw your youth under the bus for seniors and shareholders' short-term interests. The best you could say is, "Well, of course. This has been a common narrative across the American economic landscape for the past 30ish years. 'F* them kids,' is the rule."
>Besides the “its not x, its y” llm smell here
Kindly fuzakenna off plx.
Benefitting from creating externalities and harming the commons.
and what happens in half a generation or so when those seniors start retiring? the only way software production will meet demand is if the fewer seniors out there are propped up by way more competent ai than we have now. that also means the work will fundamentally change from being massively nerdy to moderately nerdy with the ability to work with ai. many of the people in the computer industry now just wont be attracted to that type of work.. what will they do? become physicists or mathematicians? and what type of person is tomorrows senior software developer?
edit: maybe todays computer nerds will become tomorrows backyard hackers, the only ones able to beat the ai.
> In other words the gamble of hiring expensive juniors with shiny degrees is greater to them than the gamble of not having competent seniors a few years down the line.
I mean, writing the code which makes mon^H^H^H^H provides value for minimum cost is the ultimate goal of a software company, but any competent CS grad or anyone with basic algorithms knowledge knows that greedy algorithms can't solve all problems. Sometimes the company needs to look ahead, try, fail and backtrack.
Nerdy analogies aside, self-sabotaging whole sector with greedy shortsightedness is a pretty monumental misstep. It's painful yet unbelievably hilarious at the same time. Pure dark comedy.
The problem is that it's systemic. The entire system rewards the short term thinking, so that even people with some awareness of what's happening tend to contribute to it all. People are fantastically good at finding reasons to work at places like OpenAI, Anthropic, Google, Meta, Palantir, X, etc. And once they're there, they similarly figure out how to justify the actions they're taking.
If people need to be retained for many years, is the solution to give a bunch of stock that vests over many years? It would be interesting if such incentives (the need to hang onto talent that has been incubated for many years) could bring about a return to one-company careers.
> Thinking them as replaceable, worthless gears
This is how companies see all of us though, for all ic levels
For today's fast and loose companies that's true, but not all companies are greedy and grind people for money.
Yes, they're not the norm, many of them are not glamorous, but it's not all black and white.
not just fast and loose though… I bet the number of companies who are not drowning in “AI strategies” meetings is roughly equal to number of ICE Agents who are Democrats
This would have been a great revelation for decision-makers across the economy to have had about 20 years ago. Instead, they took every opportunity to turn the job market into the Hunger Games. Congrats to the people who survived the Cornucopia; the rest of us have been bleeding out for well over a decade.
It's not that juniors are replaceable, but that hiring them is a high variance move. Few, if any, know whether a candidate is just memorizing leetcode and is going to be a dud, costing you effort before they get a PIP, or you are hiring a very talented individual that will be contributing in 2 weeks . With seniors, you risk less, just because the track record makes the very worst candidates unlikely.
maybe they should have seniors choose juniors, they tend to be good at recognizing the worth someone could bring
> but the enthusiasm on the ground is lacking
Using claude and friends takes all the fun out of the job, so I'm not surprised engineers are not enthusiastic. It's cool for 1 month then you realize we went from solving problems and implementing algos and optimizing slow code and fixing security issues and other fun stuff, to writing prompts all day long.
Not a universal view: Claude has added all the fun back into my job.
I am having fun as well. Using AI tools in a good and productive way is definitely a skill that is not a given, you have to learn it too.
Many engineers want nothing more than to eventually become managers. So this is not surprising. But your job is not what it was before.
Not really managers, I would put the new role more in the senior engineer / architect category. Those still have to deal with deeply technical things like design, architecture, problem decomposition, research, domain expertise, code review, collaborating with technical peers -- all of which (people) managers don't typically do.
If you ever wanted to climb the senior technical ladder, this is now the quickest way to experience it. Except instead of other people you get to work with agents which, while a very different experience, requires largely the same skills.
So yes, your job is not what it was before, but with career growth it typically was not anyway.
... but a very common one. There are always exceptions to every rule
Oh yeah? The way that you feel is the norm, and if anyone feels differently, they’re the exception? And that’s just based on… vibes?
The use-cases for data science and other engineers are different. AI is not uniformly good at all kinds of development.
There is an issue with execs pushing it though. You have people at the top of the company with little to no idea how people work attempting to micromanage tool usage. It is as if you had a group of execs determining what IDEs people could use.
No-one is getting fired because of AI. The start of this year is the start of companies beginning to use AI. The reason layoffs are happening is because of the massive overhiring after Covid.
> overhiring after Covid
How long after COVID are we going to be able to keep using this excuse? This is starting to feel like the politician blaming his predecessor even though he's been in office for years. In the year 2033, Company X lays off another 10,000, just as it did each year since 2023, again blaming massive over-hiring during COVID, ten years ago.
> How long after COVID are we going to be able to keep using this excuse?
I am with you but if you look what happened after COVID it is a big line going waaaaay up. COVID was a significant event and there is no way around it, no? the OPs comment is invalid because we below the pre-COVID (by miles) but COVID should be taken into account (everyone seems to use it to further some agenda by looking at just one particular aspect of what happened post-COVID)
Covid was six years ago man. Don't insult people's intelligence.
I have a similar experience. I seldom use it to test to see its current state, and it generally (85% of the time) gives wrong answers. Then I discuss this with a couple of friends:
These guys are not evangelists or anything, but colleagues who want to reduce their workloads. If it can't help with what I need, then how can it help me at all?At the end of the day, I don't plan to use this at daily capacity, but with all the resources poured into this, it's still underwhelming.
A friend of mine has copilot integrated with his storage appliance that all the business docs are hosted on for his firm. He says it's amazing.
My company uses Sharepoint, and can digest all of the documents I have access to on that, one drive, teams, outlooks, etc. across my tenant. Most of the time, it's pretty useless.
There must be some reason for these two disparate experiences. It's the same product offering. I couldn't tell you.
Reminds me of a bounty I received recently. Someone essentially exposed a bedrock agent that had access to the companies internal documents to the internet unauthenticated. They actually had the reports and notes for other bug bounties that had been reported to them as well.
I mean, anything with Sharepoint will be terrible. No amount of AI can fix that mess.
I too feel this way.
Tell claude what you do and ask it where it can be the most helpful. It is true that the tool has to be learned, and won't help everywhere. If you are doing web dev just to make a tool, it is purely magical. I've found it to be mostly useless in making geed helm charts.
I generally use them for researching things which I was unable to find anywhere else. For example, for Gemini I have two extreme examples:
I asked for a concept in Tango music, with a long prompt explaining what I'm looking for. It brought me back a single, Spanish YouTube Video explaining it perfectly alongside its slightly wrong summary, but the video was spot on, and I got what I needed.
Then I asked for something else about a musical instrument, again with a very detailed prompt, and it gave me a very confident answer suggesting that mine is broken and needs to be serviced. After an e-mail to the maker of the said instrument, giving the same model number (and providing a serial) and asking the same question, I got a reply saying that it's supposed to that and it's perfectly fine, it turned out that Gemini hallucinated pretty wildly.
For programming I don't use AI at all. I have a habit of reading library references and writing code directly by RTFM'ing the official docs of what I'm working with. It provides more depth, and I do nail the correct usage in less time.
The opposite happened to me. I asked Gemini about a type of Vietnamese dance called "nhảy sạp" and it returned a good sounding summary along with a video it claimed to explain the dance and how it worked. The video was from the Knowledge Academy and titled, "What is SAP?"
The second example I have given is no different than yours, actually.
Funny, I was supposed to be the expert in my company, but I was run over by the demo folks, while I was uselessly preaching about evaluation, safeguards, guardrails, observability.
For mine it’s worse because we have new leadership who believes in it to a far larger extent than it can deliver. Now a massive amount of our workforce is building up proofs of concepts and spitting out tons of effectively useless output to look good because of how strongly they’ve signaled it’s good for careers here to fully embrace it. It’s a massive mess and there’s nobody to clean it up, and the voices advocating for rigor or good engineering practices are being sidelined.
It’s full out mania. As someone raised in and who escaped a cult, I am having to use every tool in my very large toolbox to stay sane while I wait for this to pass and die down or make my move towards a place that still cares whether their product works.
If the majority of engineers decide to rot their brains and abandon best practices, the industry will eventually implode. Stay true to your beliefs and use the bare minimum of AI to keep your job.
We’re in what I would call the “dark ages” of tech. There will be a new renaissance led by those who used this as an opportunity to build skills and tools that are genuinely useful and ingenious.
If you keep a long-term horizon this is the perfect opportunity to work on a solo project in stealth mode. Or build professional connections with others who see things the way you do.
“What I would call […]”.
When people talk about one’s salary being an imperative to them understanding something, they are talking about exactly you. “This’ll all wash over and we’ll be back to the good old days that I’m used to” has never happened. Ever.
Well actually it did happen. Greco-Roman intellectual tradition was lost when Rome collapsed and institutions of knowledge with it. Islamic scholars preserved much of this knowledge during the dark ages but in the western world Christian religious dogma reigned supreme.
During the renaissance western thinkers pieced together lost information and we got the scientific revolution.
Kind of wild that you completely ignored the example I gave of exactly this happening in my original comment.
And speaking of people whose salary dictates their understanding of something, let’s talk about Sam Altman and the rest of SV currently spinning a fairytale about AI which just so happens to justify astronomical valuations for their companies.
At my company everyone’s salary and career ladder are determined by exactly how much they dive into AI and show enthusiasm for it, regardless of whether they’re using it for something useful or they’re just competing for how much money they can burn
[flagged]
AI isn't going away, but leadership expectations to (say) increase "efficiency" by 50% in the next 6 months through "AI" will. Eventually. After lots of fudging of numbers and general reluctance to admit that the Emperor's clothes are looking awfully translucent.
I've been waiting so long for it to become commonplace for people to equate AI with The Emperor's New Clothes. Hopefully it gains steam.
If LOC and tokenmaxxing is the future, nobody will have a job.
I use AI all day every day, I’m not a luddite, I’m someone who has seen people take the same shitty shortcuts to working systems they are now. They’re wasting tons of money and smarter competitors who can actually think clearly about the benefits and costs are gonna eat their lunch.
Early stages of any major disruptive technology will have hype due to get-rich-quick folks. Dot-com boom & bust of 2000 is similar. But the underlying technology (internet) defined our lives forever.
I don't know why people are comparing the Day-1 of one technology with the Day-1000 of another. Yes, AI is useless in many fields - NOW. But you can't imagine doing any work without in a couple years.
Like the kids used to ask - 'How did they build Google without Google?'
Now their kids will ask - 'How did they build chatGPT without chatGPT'?
ChatGPT has been around for 4 years at this point. Not very long, but I’ve heard of the ‘imagine what it’ll do in one year’ spiel quite a few times by now.
How long was the internet around before it became essential for every day life?
The “Internet” was a DARPA-funded research curiosity initially. It was not crammed down people’s throats like a roll of Oreos while advocates screamed that, “You like this, right‽ This is the future! You have to like this, what is wrong with you?”
Transformers were treated like any other ML technique until Sutskever decided to just go big on training it. That it can look like a compelling simulacrum, I am not arguing, but this thing left the ivory tower of research prematurely and recklessly. We are all going to pay for it.
2 things - it’s not day 1 for AI, and it’s also not dot-com (which dropped the nasdaq 80% btw). It’s the entire American economy right now. When it can’t deliver anything approaching its hype, just like all the data centers that can’t deliver on power, the profit margins that can’t deliver, and the promises of massive 500% revenue increases this fiscal year… sorry, I was raised in a cult and know what the fuck I’m seeing, sadly among a lot of otherwise intelligent people here.
I expect I’ll be using LLMs now and in the future, but the public is far more right about the companies and the people running them than the tech “insiders” here.
How can the AI be lacking in results and also at the same time responsible for layoffs and slowing of hiring?
Wouldn't it be one or the other?
You replace half of a team with AI. Salary cost go immediately down, but team output can keep up for some time. You don't see the technical debt, the security issues and the prompt injection which will result in wrong invoices being sent. In six months suddenly there will be a big problem, but this quarter a lot of shareholders are happy about the cost-cutting. You may even be promoted by the time shit hits the fan, and it won't even be your problem anymore.
On the other hand there probably also is a general correction in the market after the covid hiring spree.
Almost sounds like the "walking ghost phase" during radiation poisoning...
https://www.reddit.com/r/askscience/comments/1975oj/whats_ha...
This is such a good metaphor.
Fantastic analogy. I dare say it applies to our current economy as well.
You assume CEOs are completely rational actors.
The reality is most of them are so divorced from reality that they think they are infallible and AI will pick up the slack because they want it to be true.
No, layoffs happen due to leadership’s expectations. It’s never been about reality, layoffs and both massive hiring to ramp teams are based on vibes.
The lack of results is felt by those using it to assist with their work daily.
And specifically, their expectations as to what will positively impact the stock price. Shareholder value this quarter is more important than keeping the company afloat next quarter.
It can be both if for the majority of layoffs, AI is just a scapegoat to act as cover for cuts made for financial reasons or offshoring and not the actual cause.
But then you’d expect the trend to self correct in the long run. AI actually does seem to replacing customer-service and CS jobs effectively.
From what I've seen many efforts to replace roles such as customer service with AI are being rolled back or downscaled due to intolerably high error rates and general incapability. While these segments won't come out unscathed I don't think the actual impact will end up being as severe as feared.
I believe that too. Broadly, I’m agreeing with the parent comment—AI can’t be causing long-run layoffs and be worthless.
You're apparently assuming that AI related layoffs are rational, based on those making the decisions having good information about what their own organizations are achieving with AI.
I think this is far from the truth. In many companies AI has become a religion, not a new technology to be evaluated and judged. Employees are told to use AI, and report how much they are using, and all understand the consequences of giving the wrong answer. The CEO hears the tales of rampant AI use and productivity that he is demanding to hear, then pats himself on the back and initiates another layoff. Meanwhile in the trenches little if anything has actually changed.
> assuming that AI related layoffs are rational
Nope. I’m saying if firms lay off on the assumption of AI gains that never come, they’ll be beaten by firms who don’t.
OK, but your post reads as if you think that AI being the cause of layoffs can't be true if AI is "worthless" (less capable than they are assuming), which is false.
CEOs are laying off because of AI because they think it will save them money, but are doing so based on misinformation, largely due their own insistence that everyone uses AI, and report how much they are using - they are just hearing what they asked to hear (just like Mao hearing about impossible levels of rice production during the "Great Leap Forward"). I'm not making this up - I've seen it first hand.
You can see the proof of this - companies laying of because of what they mistakenly believe AI can do - in companies like Salesforce, forced to do an embarassing U-turn and hire people back when the reality sets in. At least Salesforce were quick to correct - most big companies are not so nimble or ready to admit their own mistakes.
We seem to have reached mania-like levels of rice-production reporting, with companies like Meta now taking AI token usage as a proxy for productivity and/or a measure of something positive, and apparently having a huge leaderboard displaying who is using the most (i.e. spending the most money!). The only guaranteed outcome of this is that they will indeed see massive use of tokens, and a massive AI bill, and then in a year or so will likely be left scratching their heads wondering why nothing much appears to have changed.
> your post reads as if you think that AI being the cause of layoffs can't be true
Sorry, I was unclear. Those statements can’t both be true in the long run. They can absolutely be true in the short run.
Might be true, but unfortunately, we need to pay for rent/mortgage/groceries in the short run.
Executives don't need actual results or data to lay people off.
AI could be a huge net benefit, and justify large layoffs.
AI could be a huge short-term benefit, justify layoffs now, so long as you (the exec doing the laying off) don't have to worry about the long term
AI could have middling net benefit, but be a great excuse to justify layoffs now. In this scenario, the people laid off and those that remain bear the cost (one, losing their jobs; those that remain, burning out with the extra workload) etc etc, many scenarios to consider...
When Block laid off 40% of their staff Jack Dorsey said it was because of AI. Whether or not you believe him is a different question.
He went all in on block chain, so it would be consistent with previous hyped tech. In this case I would believe him.
Not if the actual vs stated reasons for the layoffs have nothing to do with AI.
Organisations happy to reduce costs, perhaps with greater output but with lower quality?
and surely, all of those companies are being entirely truthful about the reason for the layoffs.
can happen at the same time when businesses speculatively fire workers to replace with AI. The lack of results might bite them in the ass and the bubble might pop. Or not, but they are going long on their AI position
AI hype -> layoffs -> AI underperforms -> ????
Hilariously, it's the exact same playbook as the big third-world-country-outsourcing hype from a few years ago.
I totally understand where you are coming from and my personal take is LLMs are to "stuff" as a drill driver is to a screw driver. They are a tool, just a tool. ... bear with ...
I over floored several rooms in my house (UK, '20s build) with plywood before laying insulation, heating mats and laminate floor boards for the final finish. I don't have a staple gun so I screwed the boards down at roughly 600mm c/c across the floorboards and 300mm along them.
What the blazes has that got to do with LLMs?
Well, I used a nearly inappropriate method for a job and blasted through it nearly as fast as the best method! If I had used a manual screwdriver I would have been at it nearly forever and ended up with a very limp wrist. I do own an old school ratchet screwdriver and that would have speeded things up but still been slow. I did use yellow passivated screws with sharp threads and a notch to initiate biting into the wood - rather more expensive than a staple or a nail.
So I burned through my tokens (screws instead of nails/staples) faster than if I had used a pneumatic nail/staple gun.
Anyway. LLMs are tools. They can be good tools in the right hands or rip your fingers off in the wrong hands.
Running with this analogy, the two sides of the AI argument are the people who think they can fire their plumber and electrician now that they have a drill driver, and the people who know it doesn't work that way...
Quite. My larger drill driver will wrench your wrist unless you know how to set the speed/mode/etc correctly and know how to brace yourself correctly.
At the moment, I think that a LLM needs skilled hands too. Have a casual chat - that's fine but for work ... be aware.
I recently dumped a wikimedia (our knowledge base is a wiki) formatted table into a LLM (on prem) and asked it to sort the list on the first column. It lost a few rows for some reason. No problem - I know how my tools work but it was a bit odd!
Your statement is a bit contradictory. That is, the article about "the growing disconnect between AI insiders and everyone else" pretty clearly states that "everyone else" is scared about job losses and the extreme inequality they see advanced AI causing. This is in line with your second to last sentence.
But the first part of your comment is basically saying "AI insiders think the tech is super awesome and powerful, while other engineers think it doesn't stand up to the hype." Well, if the AI is indeed not as good a tech as its boosters are saying, well, this would be great news for everyone scared about job losses and widening inequality if AI turned out to be a nothing burger.
No and it has been said already elsewhere in this thread: decision makers are not entirely rational, they might fire entire departments even if the AI revolution isn’t here quite yet
That's likely because it takes an entirely different approach to make it work. Augmenting your existing flow with "sophisticated auto complete" isn't as interesting and isn't actually using the tools how they were designed to be used.
I'm not going to pass judgement either way; we'll see how it all shakes out.
I just know for me, personally, I love computers and making them do what I want and in the AI era I am somehow using them even more and doing even more.
I feel like this will change as people move around. It’s definitely a skill.
What about karpathy though?
Smart guy phoning it in now - realized a few weeks ago that he “notices” something interesting to share, but is really paraphrasing a recently released paper that found it - without giving paper credit.
Wasn't Karpathy the guy who used to work for tesla and that tried to convince everyone that you only need cameras for self-driving and that by 2025 there wouldn't be anymore cars without self-driving capabilities to sell?
What's that?
Villain from ghost busters 2
Underwhelmed is the absolute correct word to use here.
Absolutely everyone raves about this but other than a few basic computer related tasks I’ve not seen compelling use cases that justify the billions being lit on fire trying to pursue it.
My cynical take is the crypto bro’s needed something to do with their useless GPU’s after the crash and found the perfect answer in LLM’s.
[flagged]
It’s primarily about confidence and motivations. People with high confidence at what they do are supremely unmotivated to use something like AI to solve problems they don’t have.
People with low confidence will be super excited for AI because it solves problems they weren’t even thinking about.
Executives that don’t write code are super excited about AI because hopefully it means they can continue to high low confidence people, which are plentiful and cost less.
I am sitting on the sidelines watching in disbelief. I don’t use AI and don’t plan to. I used to write JavaScript for a living and still get JavaScript job alerts from a lot of job boards. The compensation for JavaScript work is starting to shoot through the roof as employers are moving away from garbage like React and Angular. The recent jobs are becoming fewer and are more reliant upon people with tons of experience that can actually program. Clearly AI is not replacing positions for higher talent with greater than 8-12 years experience.
"I refuse to pick up the magic hammer that nails things in by me just thinking about it while holding it in my hand; nosiree, give me that old fashioned hammer so I can sit here and nail some nails into a 2x4 while the guy using the other tool is building whole slop neighborhoods. Ha, that guy is so dumb and I'm too cool because I won't ever use that hammer."
I don't get it. Proudly saying you don't plan to use better tools is not some 'cool' look or the brag you think it is. You're just making yourself less valuable and being ignorant on purpose.
Cool.
Yes. You don’t get it. I am not seeking your affirmation. There is no vanity in this for me. I am not bragging to you or anyone else.
Yeah, I'm sure I'm the one who doesn't get it, not the guy refusing to use the paradigm changing tools, "because".
You sound like my Grandmother who refused to even look at a computer screen. Literally the same. It's amazing it's coming from so called tech-literate people in the field.
Sad.
I have heard this same logic numerous times through my career and its a bias void of evidence, a loud indication of low confidence.
People would lose their minds when they discovered I did not pray at the cult of jQuery and then later React and so forth. I didn't need them. I was more productive without them and still managed to produce applications that executed dramatically faster with substantially less code. AI tools fall into this same camp. What could they provide me that I cannot do better myself at this point in my career? That is a serious question, by the way.
AI tools might work well for you. I am not you. Affirming your bias with baseless assumptions void of evidence will not make you a better programmer. Real developers write their own original code and/or architecture plans. Real engineers measure things and live or die by those measurements.
AI is just another tool. It is not a skill and will not compensate for skills. Perhaps I will use AT later to write test automation because that is something that is very simple to validate and likewise something I really don't want to bother with.
"Real developers write their own original code and/or architecture plans."
Nice tired no true Scotsman argument. You literally sound like a stubborn boomer refusing to see the change. You sound ignorant, not wise.
I've created several multi-modal AI applications deployed in production; fully created through Codex and Claude code. SOC2 compliant and created in a month, not years.
You can be an old man, but you don't need to have an old man mentality.
You wouldn't hire someone who only knows how to use a typewriter in a world of computers. No big deal, right? "A computer is just 'another tool', why should I not be fine with my typewriter?"
Literally fucking blows my mind I'm even having this discussion on this website, with people who should know better.
Good luck, you will need it.
I am already in management. I will be just fine.
That explains why you know nothing. Good luck to whoever you manage, I do not envy them.
Would suck to have someone be your manager who refuses to even understand the new tools, much less use them; you will make a GREAT manager.
I'm honestly in disbelief that people in the field can be this purposefully ignorant, and proudly at that.
Your frequent astonishment indicates you have not been writing code very long. I also suspect you also may have some combination of ADHD or ASD, which would explain both your necessary dependence on a tool for relevance as well as your hostile defensiveness. Either way, in the long run, you will have trouble sticking around because tools eventually get replaced.
You will be gone faster than your head can spin my guy.
Execs will not put up with managers who are not going to use the best tools. I expect soon you will be forced to, or you're out.
Also, I've been writing code for over 20 years. I started programming LUA and ActionScript back in 2000 when I was 10 years old.
Your days are numbered old man.
If after 20 years you still have not figured out how this industry works, hope that tool evangelism will save your career, and cannot measure things you almost certainly have autism. If you are not already diagnosed I strongly recommend seeking an evaluation.
The reality of success in this industry has nothing to do with tools. Its all about KPIs (however your organization defines them), superior planning/communication skills, and leading people. The people that produce the most with the least maintenance overhead are the people most well rewarded. Simply just not getting fired is not a metric of success.
I literally won a corporate Innovation award last year at my company (that does $30B in revenue yearly) for some of the previous applications I put into production that I mentioned. Even got posted on LinkedIn where thousands of people liked it. I am also in a technical role that is not far removed from the C-Suite, reporting to a VP.
Your analysis couldn't be further from reality, except the ADHD part, lmao.
I have the industry figured out buddy, it's you who thought they did, but doesn't anymore.
Ah, yes, because money and internet likes are the most important factors in judging whether or not someone is right.
Blindly dismissing everyone not impressed by the AI hype only serves to further delegitimize the AI hype.
[flagged]
You've been complaining about Hacker News for years.
Totally right! The folks who were very recently telling us we were all going to be trading NFTs in the metaverse are the clear eyed optimists not motivated by anything but rational consideration for the truth.
Finding people on HN that think NFTs are a joke and don't understand their utility, mind blowing.
This place is fucking dead.
>This place is fucking dead.
The door is right over there, feel free to see yourself out at any time. :)
It seems like you get personally offended by people using their critical reasoning abilities.
I know a folk who did a PhD in the area, and work at one of those frontier labs as a researcher, and privately he is as sceptical as the most "stubborn" HN denizen you mention.
Unbounded enthusiasm for AI without any reservations is something that can only be born out of minds utterly deprived of imagination and creativity.
"It seems like you get personally offended by people not using their critical reasoning abilities."
ftfy
As a senior dev who has been using these tools to their fullest effectiveness in production environments, until AI can reduce the entropy of a codebase while still adding capability I will continue to be underwhelmed.
Did you ever ask it to do that?
Did one ever ask if AI can reduce entropy of a growing code base? You mustn’t have read that famous short story by Asimov.
"They Don't Think It Be Like It Is But It Do", is all I can think.
The simple truth is, these senior devs have no idea how to use these new tools to their capabilities.
It's a simple case of PEBCAK; ironic considering most of them would be the ones throwing that term around 20 years ago.
Or more likely you don't have the knowledge and the skills they do. They are judging things in a level that you don't have any idea it exists.
Sure, I hear that all the time here; I used to think that myself. It's all bullshit.
You're all a bunch of ignorant old men yelling at literal cloud servers.
I'm done with this place. /g/ and /r/accelerate are the only places left not filled with idiots anymore.
I sometimes wonder if AI overly enthusiastic cheerleaders are all suffering some kind of LLM induced psychosis.
Or more likely you don't have the knowledge and skills they do. They are judging things in a level that you don't have any idea it exists.
4chan and accelerationists, what a lovely collection of well-adjusted and rational people. Good bye.
When you use the term "luddite" in the way you do, you reveal that you aren't aware of who the Luddites actually were. Luddites weren't anti-technology; many of them were experts at using advanced machinery. What they opposed was the poor quality output of automated factories and the use of machinery to circumvent apprenticeships and decent wages.
As for your promise of a great leap at some vague point in the future, that's such a widely-mocked AI industry trope at this point that it's a little embarrassing you went there.
The only thing that will be embarrassing is how badly your comments, and those like yours will age.
I don't know what happened to this place, but it went from actual young people sharing information on the newest things in tech, tech philosophy, interesting stuff; to now old men yelling at the clouds about the new tech.
I agree with your basic point, but it’s not just an age thing. There are plenty of older people enthusiastically using AI for software development now. Just as an example, Steve Yegge, who vibe-coded the Beads and Gas Town AI projects, is around 57. I’m a bit older than him, and I’m working with Claude, Gemini, and Codex on a daily basis, having great fun and learning tons.
What we seem to be seeing with AI is that the prospect of completely changing the way you work is threatening for a lot of people, and of course so is the prospect of losing your job. When people are faced with something threatening, a common reaction is to criticize it in every possible way - you can’t admit anything about it is good because that risks encouraging the threat. It’s not exactly rational, but it’s what people often do.
HN has never been exempt from that, it’s just that AI is a big change that brings out this instinct in many more people.
>The only thing that will be embarrassing is how badly your comments, and those like yours will age
Hubris.
> You won't be 'underwhelmed' long.
This has been your constant mantra for 3 years now and is part of the reason people are underwhelmed.
Always fun to see the article happen live in the comments section.
> You won't be 'underwhelmed' long.
"Yes, it sucks now, but believe me it won't be for long" spiel has been hyped for several years now.
Oh, don't get me wrong, these tools are amazing. But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer [1].
So yeah, senior engineers especially use these tools daily, and keep being completely honest about their issues and shortcomings. Unlike the hype and scam artists.
[1] Oh, sorry. I meant to say skills, context engineering and management, memory, prompt engineering.
" But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer."
Get better rituals. PEBCAK.
Yeah yeah. There exists some secret ritual known only to the selected elite that make these tools perfect, no mistakes.
(The only people who say that are scammers and peddlers, and the only people who believe that are juniors or don't know how to code at all)
I'm a senior at a F200, I'm actually tired of listening to you assholes here. I always thought people here knew more than me; it's not fucking true.
I'm not some starry eyed junior; I have built multi-modal agents being used in production on government contract work in the field, right this second.
You all can go fuck yourselves, you neo-luddites will be the first out the door. Good riddance.
Least deranged AI booster.
Extraordinary claims require extraordinary evidence.
And even staying within the comfort of AI enthusiasm: Google wasn't exactly leading in this race. If you have this much confidence in what those presenters and engineers at Google told you, you now have some opportunities to make a lot of money.
https://github.com/gca-americas/way-back-home
Anyone here who is currently 'underwhelmed'; please get through all 5 levels here and then say the same thing.
This is just the beginning. I seriously can't believe this place turned into neo-boomerism ideology on tech. I honestly don't get it, just makes me think everyone here talking about being seniors and architecture and blah blah; don't actually know shit, and aren't actually good at what they do.
Levels 2,3,4,5 all say coming soon.
Did you have AI Agent summarise this for you?
Funny how it says that, yet I finished the whole thing.
Maybe you should try reading a bit more.
https://codelabs.developers.google.com/way-back-home-level-5...
That is the completed instructions for the fifth level, I leave it as an exercise to the reader to actually read more and find the rest of the steps on their own.
I spent some time chatting with Google engineer who put this together, Ayo Adedeji, at UCLA's SAIRS conference.
Cool story brother.
It is; thanks.
You asked about Google and what impressed me so much, going through this exercise, while not exactly helpful for me and my work directly (I'm doing similar things but completely in the Azure ecosystem), it is definitely a great display of how agents are more than just an 'LLM' that everyone here seems to think is equivalent to AI.
It's seriously the opposite feeling of imposter syndrome at this point, I'm in my 30's, a senior data engineer myself at a F200 company; I can't believe so many of my peers are so behind and ignorant of what is going on, confident enough to makes publicly lasting comments about how 'unreliable', 'bad', 'slop'; 'AI will never this or that'.
It's incredible what is going on.
I didn’t ask any of that.
I’m not sure why you’re still here, you’re just fueling your own unhealthy grievance at this point.
The same could have been said to someone who had yet to encounter generative AI. "Wait until you try it, you won't be underwhelmed for long".
But here we are.
Over time and usage the limitations of a thing become apparent.
Even SOTA models when used in agents in simple NLP tasks such as text classification still fail more times than acceptable when evaluated against a realistic evaluation dataset with sufficient example variety and with some adversarial prompts included.
Improving such uses cases is mostly an artisanal endeavor, sometimes a few-shot prompt improves things, sometimes it improves things at the expense of kind of overfitting it, sometimes structured reasoning works, sometimes it doesn't, or sometimes it works and then the latency and token explodes, etc etc....
And yet a lot of teams don't see this problem because they don't care much about evaluations, and will only find this issues in production a few months after deployment.
Are those who care about evaluation luddites?
It’s always been this way. It’s an online community.
Is it just me or is there a growing disconnect between AI insiders and everyone…
There are no "AI-insiders".
"AI-insiders" are trying to market their tools to you. See Anthropic's continuous lithany of "all programmers will be replaced in 6 months" while they struggle to make their TUI API wrapper consume less than 2-4 GB of RAM (they brought it down from 68 GB[1]), or have a decent uptime.
[1] Yes, you read it right. They had to buy a team of actual engineers to do the job: https://x.com/jarredsumner/status/2026497606575398987
> When did Hacker news start becoming a luddite, bad takes everywhere I look, feels like everyone is '50 year old burnt out guy' that has no idea what is going on vibe?
Much to the opposite, I think healthy skepticism is a sign of maturity. The overeager embracing of hype cycles is extremely cringe.
> I just got back from a SAIRS conference at UCLA and talked directly with some of the presenters and engineers at Google.
Cringe, as I was saying.
Conferences are just mutual fart smelling, swagger, and expensing trips on company momey. I am not against it, but treating your participation in some conference as a sign of the future is very silly.
Every conference I participated always overhyped every current bullshit.
tl;dr - "I will dismiss this because of the time I've been spending in a pro-AI bubble".
I never needed to be in a pro-AI bubble to dismiss bullshit; I wrote my capstone Philosophy paper on AI and Existentialism back in 2014.
I am dismissing the neo-luddites because they are stupid and wrong, not because I am in a pro-AI bubble.
There is emotional bias and stubbornness in nearly all of your responses in this thread, the very same traits you lambasted HN broadly for in another comment. Rather than calling people, "stupid and wrong", why don't you make your case?
If you don't want to be bothered to argue your points, and this place truly chaps your ass to the degree it does, why even waste your time commenting at all when, according to you, there's a more fun place with bigger brains that-a-way, as far as you're concerned? points
I mean, it takes more energy and effort to be angry and annoyed than to just move on and leave us luddites in the dust.
It is pretty emotional seeing a place with people you respected and learned from for so long, where you could rely on for the place to find the newest and most interesting things happening in tech, where people in the know discussed the technical aspects; to now neo-luddites everywhere bashing shit they don't understand, ON A FUCKING TECH FORUM; like THE tech forum.
I feel like I'm living in some kind of bizzarro world now when I read anything AI related on HN. It's insane.
> bashing shit they don't understand, ON A FUCKING TECH FORUM; like THE tech forum.
Or: they actually understand the tech, and see its limitations. Unlike wide-eyed neophytes and zealots.
Tech forum doesn't mean "uncritically accept and love any and all technology".
Oh, and they don't find the need to sling insults like monkey sling feces just because someone doesn't agree with them.
"Or: they actually understand the tech, and see its limitations. Unlike wide-eyed neophytes and zealots."
I'm sure you super qualified rando's on HN know and understand the tech and know its limitations better than the Google engineers actually making the stuff.
Real ripe coming from a guy who can't even refactor a few lines of code correctly with an AI.
I couldn't roll my eyes any harder.
If you hate everyone here so much, why did you come back today? Further, why did you come back just to spew more negative, unhelpful comments that just parrot what you've ranted about already, rather than attempt to foster the "smart" dialogue that you wax poetic about?
Yes, we know, you've said that a few times already.
You could foster that high-level dialogue you seem to value so much by trying to better articulate your view so that the plebs understand, kinda like I suggested just there. Ya know, "be the change you want to see in the world" and all that, but okay...
This place actually hates all technology after the invention of Lisp. And there's the common online incentive to dunk on things that also exists here. Hence the infamous Dropbox comment and others.
But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
I would therefore not in a million years expect it to be pro-LLM, and this is so obvious to me that I'm a bit suspicious of your motives for acting confused about it, as if it was ever any different.
> But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
It was never any of these things, and you're misremembering if you think it was. There's never been a mono-opinion held by some all-encompassing hivemind.
I'm not misremembering. You can easily find monoculturey threads about all of these things. Just because there's a small slice of counter views, doesn't mean the average HN positions on these things isn't or wasn't decidedly negative.
It's literally unbearable now. I don't know how the place that once used to be exciting and deep in the know; is now old-man-yells-at-clouds ignorant of what is happening. It's actually really sad. /g/ and /r/accelerate seem like the last bastions of actual intelligent people discussing these things.
[dead]
It's the same as the sad drunk man talking bad about the King at a dirty bar. It makes them feel better to compare or say they're better in some way.
Shocking that people who are in data science/ML are excited about data science/ML, and people in jobs not interested in that area are not interested in it.
It's like a programmer being surprised that a worker in $random_job wants to keep doing their job, and not learn how to be a programmer instead.
There's this weird unspoken assumption in a lot of these HN posts that any layoffs or lack of hiring is due to companies shirking on providing the cushy jobs they owe software engineers. Actually, they hire engineers to get stuff done. If it's true that AI is just a big 'ol scam and it doesn't even work, then I guess we'll see the companies that insist on nothing but the finest artisanily hand-typed organic code rocket to the top of the charts on app downloads, sales, revenue, and market cap.
This is basically how most engineers talk to their managers, politely implying - "can you see how this decision has a short term payoff but a long term consequence?"
Before LLMs I only worked at one place that "only hired seniors and above" and now its the most commonplace thing in the world.
Nobody owes me anything, I already have the skills I need, where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp, we need balance.
> Nobody owes me anything, I already have the skills I need, where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp, we need balance.
Seems a bit like asking where the bread will come from, if no-one is forced to bake it.
More like where the bread will come from if nobody learns how to bake it and the knowledge of how to bake it is lost.
Yes, this is what hysteria about bread looks like. People have been saying a disaster of the kids not knowing how to bake is coming since the 1800's. Yet, we still have bread.
How exactly will the knowledge of creating software be lost when the claim is that an ubiquitous software creation tool is going to take over the world? Is it going to refuse to emit anything less complex than a todo app?
I've never baked bread in my life and yet, with the right motivation, I'm sure I could learn from the literature and some trial and error alone. In the hypothetical world where bread demand massively exceeds supply, we'd form a guild and incrementally improve from there. Same way we learned it in the first place. Breadmaking wasn't gifted to us by aliens.
Well, that is the point :) we don't fret about where the bread comes from too much, or talk about how we need to act now lest we never have bread again. People want bread, and the price goes up until someone is willing to make bread.
I imagine the reality lies somewhere in between the two extreme takes that you present here.
> If it's true that AI is just a big 'ol scam and it doesn't even work, then I guess we'll see the companies that insist on nothing but the finest artisanily hand-typed organic code rocket to the top of the charts on app downloads, sales, revenue, and market cap.
AI works fine to get a vibe coded BS version of the app. No doubt there. But eventually, especially once scale hits your app, it will devolve into an unholy mess of low performance and (extremely) high cost if you do not have a bunch of senior talent able and willing to clean up after the AI mess.
Unfortunately, our capitalist economy only rewards the metrics you mentioned... but by the time the house of cards collapses, either from financial issues stemming from the above or because the tech debt explodes, it's too late to turn the ship around.
And I've even heard rumors of software engineers that don't even write apps or write code that runs on the internet at all. They say some of them don't even use javascript or python! The horror.
I get it, but as a "AI expert and senior leader" myself in my 1,000 people organization (in relative terms), the disconnect I have is:
A lot of what non-believers say matches "enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises". They would then say they need 2 weeks to work on a specific project, the good old way, maybe with some light AI use along the way.
But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
So overall, I think the lack of enthusiasm is largely a skill issue. Not having the skill is fine, but not being willing to learn the skill is the real issue.
I see things changing, as "non-believers" eventually start to realize that they need to evolve or be toast. But it's slower than I imagined.
I am a strong believer and selected as power user because of AI usage metrics, but I also see perverse incentives -- a colleague was desperately searching for me on the Claude token usage leaderboard (I was part of a different group he did not have access to) -- it was clear he was actively trying to climb that leaderboard.
Meanehile our average PR loc balooned to ~2000loc -- generated with Claude, reviewed with copilot but colleagues also review it with Claude because it gives valid nitpicks that bump up your github stats, while missing glaring functional/architectural issues, overenginerring issues.
No way this doesn't blow up down the road with the massive bloat we're creating while getting high on the "good progress" we're making.
Yes, your 3 minutes prompt got merged. So was my friends(ex-programmer now manager) non-ai generated PR that a technical TL got stuckstuck on for 2 weeks. Different perspective? Survivor bias? High authority?
Blame your engineering culture not AI if metrics such as Github stats, number of nitpick reviews and token usage is what is used to judge one's performance.
In a sane engineering culture, actual customer-visible impact is what is measured, and AI is just a tool to improve that metric, but to improve it massively.
there are still code-quality issues, prompting issues for long-running tasks, some things are just faster and more deterministic with normal code generators or just find-and-replace etc
people are annoyed at the force-feeding of llms/ai into everything even when its not needed
somethings can be one-shotted and some things cant, and that is fine and perfectly normal but execs don't like that because its not the new hotness
> somethings can be one-shotted and some things cant
True but my point is that people vastly underestimate what is one-shottable.
In my experience, 80% of the times an average "non-believer" SW engineer with 7 years experience says something is not one-shottable, I, with my 15 years of experience, think it is fact one-shottable. And 20% of the time, I do verify that by one-shotting it on my free time.
I believe that this has happened in some cases but am very skeptical that it is widespread and generalizeable at this point. My own experience is that software engineers thinking they can easily solve a problem in a domain they know nothing about overrate their ability to do so ~99% of the time.
I'm not talking about coding in domains I know nothing about. I'm talking about coding in domains I've worked in for 15 years
Well "non-believers" don't see any gain from being faster, right? That'll just set expectations of "do a lot more for same". Fear of being "toast" will get you the loyalty you'd expect from fear.
Are you European by any chance? I left Europe to avoid your mentality
the best way I found to deal with non-believers is to have claude run code reviews on their own work. I’ll point it to an older commit and get like 3-page markdown file :) works really, really well.
on one-shotting 3 minute prompt in 30 minutes though, software is a living organism and early gains can (and often result) in later pains. I do not use this type of argument as it relates to AI as the follow-up as the organism spreads its wings to production seldom makes its way to HN (if this 30 minute one-shot results in a huge security breach I doubt you would be back here with a follow-up, you will quietly handle it…)
You can get it to generate a 3-page markdown file for any random code, or its own code it just generated. If requested it will produce a seemingly plausible looking review with recommendations and possible issues.
How impressed someone get from that will depend on the recipient.
output, not recipient. try it on your own code. not everything on the example 3-page markdown you'll agree (much like you push back on the PR) but in significant number of occasions code changes were made based on the provided output
Recipient, as in the person who the output is intended for.
And I have seen what an AI do when it provides a code review, and it is very much like something that plausible looks like a code review. A lot of suggestion and nitpicks that at surface looks like plausible comments, but without any understanding. How much value a programmer get from that depend on the programmer. For me it reminds me of the value that teddy bears has on a support desk, or why some users are actually helped by being forced to go through layers of faq/ai suggested solutions before they are allowed to talk to a real person. Sometimes all that a person need to improve something is time to think about the code from a new perspective, and an AI code review can help the person find that time by throwing a bunch of shallow comments at them.
Unsure of this really tracks tho. How are you evaluating for the bias that they’re not merging it because you’re “their leader of 1000 people org” and not because you’re actually an engineer deep in the trenches that knows the second or third order effects of slop?
This is a genuine question btw, I see plenty of instances of this in my own org.
I see your point, but
1. I am also on the receiving end of this. My boss often codes and vibecodes, and no one feels like they have to merge their stuff. We only merge it if it meets the high quality standard we have. And there is no drama for blocking a PR in our culture. 2. I am fairly deep in the trenches myself and I know when my PRs are high quality and when they are not. And that does not correlate with use of AI in my experience.
I've been on this ride about three or four times over decades. Every new major wave of technology takes a surprisingly long time to be adopted, despite advantages that seem obvious to the evangelists.
I had the exact same experience with, for example, rolling out fully virtualized infrastructure (VMware ESXi) when that was a new concept.
The resistance was just incredible!
"That's not secure!" was the most common push-back, despite all evidence being that VM-level isolation combined with VLANs was much better isolation than huge consolidated servers running dozens of apps.
"It's slower!" was another common complaint, pointing at the 20% overheads that were the norm at the time (before CPU hardware offload features such as nested page tables). Sure, sure, in benchmarks, but in practice putting a small VM on a big host meant that it inherited the fast network and fibre adapters and hence could burst far above the performance you'd get from a low end "pizza box" with a pair of mechanical drives in a RAID10.
I see the same kind of naive, uninformed push-back against AI. And that's from people that are at least aware of it. I regularly talk to developers that have never even heard of tools like Codex, Gemini CLI, or whatever! This just hasn't percolated through the wider industry to the level that it has in Silicon Valley.
Speaking of security, the scenarios are oddly similar. Sure, prompt injection is a thing, but modern LLMs are vastly "more secure" in a certain sense than traditional solutions.
Consider Data Loss Prevention (DLP) policy engines. Most use nothing more than simple regular expression patterns looking for things like credit card numbers, social security numbers, etc... Similarly, there are policy engines that look for swearwords, internal project code names being sent to third-parties, etc...
All of those are trivially bypassed even by accident! Simply screenshot a spreadsheet and attach the PNG. Swear at the customer in a language other than English. Put spaces in between the characters in each s w e a r word. Whatever.
None of those tricks work against a modern AI. Even if you very carefully phrase a hurtful statement while avoiding the banned word list, the AI will know that's hurtful and flag it. Even if you use an obscure language. Even if you embed it into a meme picture. It doesn't matter, it'll flag it!
This is a true step change in capability.
It'll take a while for people to be dragged into the future, kicking and screaming the whole way there.
Would you trust an LLM to recognize a credit card number more reliably than a regular expression can?
You're not forced to use only an LLM for data loss prevention! You can combine it with regex. You can also feed the output of the regex matches to the LLM as extra "context".
Similarly, I was just flipping through the SQL Server 2025 docs on vector indexes. One of their demos was a "hybrid" search that combined exact text match with semantic vector embedding proximity match.
I think people are really underestimating how poorly today's tweens think of AI. "That looks like chatgpt" is an insult. Kids avoid things because they heard somewhere that AI might have been involved and have a sense that means it is bad or immoral or illegal or cheating in some nebulous way, and it's reinforced by their teachers telling them that using AI for homework is cheating.
I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.
> I think people are really underestimating how poorly today's tweens think of AI.
I think you might be really underestimating how poorly today's adults think of AI. Whenever I see a blog post that starts with an obvious AI hero image, when it has the "It's not X, it's Y" framing, when it has anything that smells like AI, I immediately discount what that person is saying as I assume they are unable to think for themselves.
Far as AI gen images they still make me nauseous due to uncanny-valley stuff. Still see a lot of non-standard number of fingers; so much content elicits a weird double-take and gut dropping feel.
The kids are smarter than most people give them credit for. They see their future being destroyed in real time, and AI is only accelerating it and largely being celebrated/promoted/used by the same people currently destroying their future. To them, there are few benefits beyond being able to cheat on their homework, and an enormous amount of downsides.
I think it's only a matter of time before we see some more serious, organized opposition to AI (and perhaps even the internet and other technologies) by these young people.
> The kids are smarter than most people give them credit for
When they aren't consumed by TikTok?
For some kids, they see their parents get themselves in a mountain of college debt, work for 50 years and struggle to afford necessities, and decide maybe trying to be a streamer/tiktokker is worth a gamble and could set them up for life instead.
Makes sense. I think it’s hard to argue against someone that uses the platform and others as an example of entrepreneurial pursuits. Not “all social media is bad” when to use different lens types.
It's the modern day "I'm going to be a hollywood actor." Every one of my kid's friends has said at some point they were going to grow up to be a famous YouTube or TikTok streamer. The vast majority are not serious, and of those who are serious, the vast majority won't make it.
You might be surprised by how many of them are aware of the harms of social media, while acknowledging that it’s impossible not to engage with it. It’s not their fault we built the toxic slot machine world for them that we have. And besides, I’m pretty sure my boomer parents spend about as much time scrolling slop on Facebook as kids do on TikTok.
Oh, parents no better. I have friends that do nothing other than check TikTok in down time. I do what I can to block and gate my own usage.
I really hope it doesn't end in some Butlerian Jihad-esque scenario. I like computers, but they're tools. Nothing more than ethical slaves at most
Like all technology, it's not question about it itself but rather how it's used.
When I get a message from a co-worker that seems to have been written by an LLM, I am incredibly turned off and instantly think less of the person. It can be easy to spot: key words bolded, acknowledging that I'm right, longer and with a different tone than their typical messages, with neat bullet points.
It feels a little disrespectful. It feels a little pointless (why am I bothering talking to you if I can get the same result from the AI). I have no idea whether you've given the problem any actual thought, or if you're just copy-pasting an answer. I have no idea if you actually believe what you're telling me (or if you've even read it or understand it).
pr comments from a human that is generated by ai has got me feeling the same... like why this person even here? its totally disrespectful; i want a person to interact with not a machine with a meatsuit.
My partner was working at an event and a co-worker had prepared a poster using AI - a teenage kid at the event pointed out how the poster "has AI smudges".
Gotta love that - the teenage AI scold.
You know how your parents are weirdly shitty at recognizing obvious photoshops? Kids are constantly surprised that we adults can't recognize obvious AI images.
I've been calling it "AI sheen"; I like "AI smudges" better.
In the 80s, 90s and 00s that's what they thought about coding.
Then when the salaries got good every pretended to have always been a nerd and really into everything nerd. With the result that they kicked all the nerds out.
That was the second iteration of that. Most of the programmers were women until the mid-70s when the nerdy men kicked the women out.
If you consider what assemblers and compilers do programming, sure.
But men didn't kick them out, technology did. Von Numan famously forbid the Eniac from ever being used for assembly when you had a perfectly cheap secretary pool to do the assembly by hand.
Low creativity repetitive work requiring great attention to detail is what the early female programmers did and what was automated first.
If we ever get deterministic AI the same will happen up the chain. I'm not holding my breath for the current generation of models, or the upcoming ones I've seen in papers.
That's underselling their role. One of those ladies doing the assembling for Von Numan was Grace Hopper, who then used that expertise to develop the first compilers.
And the other 100 weren't and didn't.
I'm an adult and I'm beyond tired of AI. All of these posts on linkedin etc. make me sick. The people using this stuff don't know how obvious it is.
God, those self-indulging posts on LI are the worst. Sometimes it feels like half of the world's compute is wasted on that.
I can't recall a piece of technology in which the age distribution of the people embracing it was similar to what we're seeing with AI. In the past, this stuff has almost always been picked up by the young first and foremost, but the embrace of AI seems mostly to be coming from elder millennials through boomers (I'll admit this is anecdotal, so it's possible this is an observation of my own bubble).
Stated another way: Its picked up by all the people who already have jobs and stability (even if only perceived stability until they get affected).
Exactly.
Understandably, the people currently above water want to be able to sleep at night and believe things are just going to continue being acceptable from their perspective, so they may go to unknown lengths to convince themselves of this no matter how unrealistic it is. Then one day reality hits them with a layoff followed by a seemingly endless and fruitless job search.
1000%.
AI is "fuck you; got mine" technology.
I have noticed similar sentiments among some teenagers. It's not a universal sentiment but those who hate AIs really hate them with a passion.
In the meanwhile there is a rising tide of feel good AI content targeted at old people on Facebook. My mother has been sharing with me many "funny videos" that are very obviously AI generated. She evidently does not care, and according what I hear from others she is far from the only old person who gets sucked into "slop." I hesitate to use this word but it captures the feeling too well for me to pass it up.
I don't have data but I sense there is an inverse correlation between age and disgust towards AI generated content.
I'm guessing there's a sizable portion of the HN crowd that are millenials. Millenials who have paid the costs of the Boomer/GenX generations absolute destruction of the "American Dream" for their own benefit. They climbed the ladder and pulled it up behind them leaving millenials holding the bag.
That same set of millenials are now visiting that treatment upon Gen Z. We are building AI that will eviscerate the remaining middle class, raise electricity rates to a level where many people will not be able to power their homes, and poisoning the air and water so that portions of the world will become unliveable.
Gen Z is justified in being upset with millenials. We used to be the victims, but we've become the abusers.
This is poor reporting, almost needs a checklist:
[X] Tweets and instagram comments presented as "what society is thinking"
[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).
[X] Statistics being used to support the title with little to no regards to continuity: "those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period" => percentage was 52% in 2023, 50% in 2024 and 52% in 2025, seems mostly flat to me, with the real jump being in 2022-2023 with 39%.
They cite a report and a Gallup poll. That’s not just tweets.
I didn't say it was devoid of substance, the poll part is actually interesting (and worth discussing!) it's just that it actually appears *after* the sloppy tweets and "someone pretended to shoot at Sam Altman's house" screenshot as if that was somehow relevant.
That might be rage porn. Whatever. Highlighting despicable people on Twitter gets clicks.
"whatever" is exactly the "bad reporting" complaint GP is making
Fair enough. It would be better if they interviewed actual people. Barring that, the tweets add colour to the data.
Good catch on the 52→50→52 "growth." The actual Stanford report has more interesting data than TechCrunch pulled out - the gap between industry practitioners and academic researchers on safety concerns is arguably the more striking finding, but that doesn't make as good a headline as "public vs elites."
I was talking recently to someone who teaches AI-adjacent courses at a US university (not in a computer science department) and they said that enrollment in their class is lower than expected, which they think is likely due to the severity of the AI backlash among students on campus.
AI applications that would help normal people in a significant way are pretty lacking, so I'm not surprised. So much conversation about AI products is cycles of "this tech will change everything" without material backup outside of coding agents.
How much of the workforce is organising and other information dissemination or transformation?
I'm more on the skeptical side than the evangelist, but I can see how large parts of such things could theoretically be shifted away from humans. Planning someone's agenda, preparing relevant documents, arranging and coordinating things, translations (speech or text), narration, grammar checking.... AI is a whole lot of hot air when considering the "second 80%" of the work involved in any of these tasks, but that's still a lot of jobs that may make little sense to start studying these years, until you have some idea how the field will develop or if there's a giant surplus of, say, French-native Spanish language experts. At least for those for whom a given study is not a real passion and they might as well choose something else
Yes, for me as well, but large chunks of these tasks seem within the realm of what they can do when you break it up into small enough bits and control the prompt very tightly
Particularly machine translations are no worse than what an untrained native speaker would come up with, and much better than traditional translators (due to some level of context "understanding" - or simulation thereof, at least). At 50x human speed, the energy consumption is also lower than keeping a human alive for that time. There is no scenario in which this capability goes unused
Or grammar checking, if you catch 98% (as even some of the weaker models seem to achieve), the editor who'd otherwise do this can do more intellectually stimulating things
It's not that there's no downsides but it also seems silly to dismiss it altogether
> Particularly machine translations are no worse than what an untrained native speaker would come up with, and much better than traditional translators
Sometimes. I use Google Translate (literally the same architecture, last I heard), and when it works, great. Every single time I've tried demonstrating that it can't do Chinese by quoting the output it gives me from English-to-Chinese, someone replies to tell me that the translated text is gibberish*.
Even with an easier pair, English <-> German, sometimes I get duplicate paragraphs. And there's definitely still cases where even the context-comprehension fails, as you should be able to see from going to a random German website e.g. https://www.bahn.de/ in e.g. Chrome and translating it into English and noticing the out-of-place words like how destination is "goal", the tickets are "1st grade" and "2nd grade" instead of class.
* I'm curious if this is still true, so let's see:
这是一个简单的英文句子,需要翻译成中文。上次我翻译的时候,有人告诉我译文几乎无法理解。
我不懂中文,所以需要懂中文的人告诉我现在是否仍然如此。
(not the downvoter)
I'm not sure if we're on the same page. I mean LLMs right? Not whatever Google Translate and DeepL use. The latter was better than gtrans when it launched, nowadays it's probably similar idk, and both are machine learning clearly, but the products(' quality) predates LLMs. They're not LLMs. They haven't noticeably improved since LLMs. Asking an LLM produces better output (so long as the LLM doesn't get sidetracked by the text's contents). Presumably also orders of magnitude higher energy consumption per word, even if you ignore training
I agree that Google Translate, now on par with DeepL's free product afaik (but I'm not a gtrans user so I don't know), is decent but not a full replacement for humans, and that LLMs aren't as good as human translations either (not just for attention reasons), but it's another big step forwards right?
I'm not sure what DeepL uses, but Google invented the Transformer architecture, the T in GPT, for Google Translate.
IIRC, the original difference between them was about the attention mask, which is akin to how the Mandelbrot and Julia fractals are the same formula but the variables mean different things; so I'd argue they're basically still the same thing, and you can model what an LLM does as translating a prompt into a response.
I didn't know that! I had heard they made transformers and (then-Open)AI used it in GPT, but that explains how come Google wasn't then first to market with an LLM product when the intended application was translation
> these things "lie" subtly
Do you think they have intent?
I assume that's just a manner of speaking, like a judgmental form of hallucination
I remember HN piling on me for saying something along the lines of evolution causing a property (am I stupid, do I not understand that it's not intelligently chosen) rather than some unwieldy statement about a property having a positive selection pressure. I'm also much more familiar with the English phraseology of this non-tech topic now (so I can actually say that in the few words I just used), do we even have that vocabulary for LLMs?
You make it sound as if "coding" was a distinct thing with clear boundaries in the technical world. But this critically misses the fact that coding agents dramatically lowered the barrier to controlling everything with a microchip. The only thing that exists "outside [the reach] of coding agents" is purely the analog world and that boundary will get fuzzier than it is perceived to be.
What kind of AI-adjacent?
If it's fundamentals of ML, I'm surprised to hear that.
If it's "how to use ChatGPT for creative writing" then I'm not surprised. Why would someone take a class from a teacher who has had only just as much experience with these tools as their students have?
Agree… OP said “not CS” so doesn’t seem surprising. If we’re going by anecdotes, AI classes in the CS dept have risen in popularity in the past few years.
I actually feel the opposite. I don't think people from outside CS will have that much interest into the very basics of AI because there is usually a huge gap between "this is how back propagation works" to any AI model that is remotely useful. And if you are interested in the fundamentals themselves you would probably be majoring CS anyway.
A course on how to use existing AI tools will be pointless, but if there is anything I know about college students is they love taking easy courses for easy credits.
Meanwhile Stanford's CS336 (Language Modeling from Scratch) requires an application for enrollment.
https://web.archive.org/web/20260316042004/https://cs336.sta...
Students don't enroll in a class for various reasons, but most likely because it's useless (or at least people perceive it as useless). At top universities, even notoriously challenging courses have a decent class size.
The biggest visible AI impact, for me, is vibe coding. For that, I am convinced that the hype will collapse and will throw back the most enthusiast companies by years. On the downside we have untrustworthy, doom or glory praising CEOs, companies slashing jobs, AI companies going into military business, hacks, spam, psychosis, general anxiety and uncertainty.
Even if you don't believe the hype and know that AI is just statistics, there is nothing to be positive about. I can't blame anyone to dismiss it. Maybe it's even the best thing that can happen, big tech won't take a sane route without civic supervision and calibration.
> there is nothing to be positive about
Even though i'm quite anti AI, recycling in Taiwan, killing weeds with lasers and detecting cancer beg to differ.
From what I know there was progress in AI cancer detection before the hype. I consider the big tech advancements is a side show for them. I may be wrong.
I heard nothing about the other stories. AI can code and write generic texts, can pull off a lot of knowledge. But the frontier models are general purpose idiots and any interesting specialization/innovation has probably nothing to do with them.
> they think
is key
a person can have full faith in the potential value of ai science and simultaneously have zero faith in the current crop of business stewards of that science.
no one is questioning the underlying model mathematics, they are questioning deceptive & reckless stewards.
I think most people oustide the area do not care and do not know about who's on top, and the negative perception is much more related to how the tech will enable users to misuse it (replacing phone lines/support, AI art, things losing quality, etc) than about the companies themselves.
Yes I believe we're quickly approaching crypto territory, where distributed ledgers certainly have their valuable use cases, but the overwhelming _mindshare_ is active scamming and/or monkey jpegs.
There needs to be a concerted focus on real value for end users and less "yeah the terminator will take your job and raise your kids in your absence"
I think there is a lot of truth to what you say, particularly when it comes to caring rather than parroting; however as part of my personal and civil life I interact with a lot of non-tech people in non-tech capacities, and a surprising number of them raise unprompted complaints about people like Sam Altman and Elon Musk. Musk I understand everyone knowing about; between Tesla, SpaceX, the Thai boys football team, a very public inclination to raise his hand, and a position in the US government he is meaningfully famous. However how Sam Altman has managed to get his name out there in the wrong way very quickly to a bunch of Brits I don't know.
It is clear AI has value in the pursuit further knowledge.
It is also clear AI will bring even worse poverty levels and skew the wealth disparity even further.
The latter isn't the fault of AI itself, its the fault of the humans who will control it.
AI continues to be a stupidly vague term, and the example I keep going back to is present in this article
Meaningful advances in medical diagnosis are not coming from chatbot companies. Some are coming from machine learning methods. Perhaps measuring public sentiment about such a vagary is not a very productive way to quantify anything
That said, I continue to also be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world and what they do with that power, which is - as many in the comments already point out - is what the vast majority of people are actually mad about, and right to be
Right now, as I'm writing this comment, AI = LLMs and image generation. That's it. It's as simple as that.
> Right now, as I'm writing this comment, AI = LLMs and image generation. That's it. It's as simple as that
I think agentic harnesses add a lot to LLMs, even if many are just simple loops. They are a separate thing from LLMs, are they not?
I get the feeling that even if we stopped shipping new models today, new far more useful products would be getting shipped for years, just with harness improvements. Or, am I way off base here?
You're already overcomplicating it. A normie that says "AI" isn't thinking about "agentic harnesses".
Yeah, fair. I misread the intent of your comment.
Okay, then why is one of the questions these surveys ask folks about medical diagnosis? Do they mean to imply that advances in this field come from transformer-based chatbots and image generation? Because that framing is used in the "clear benefits of AI" section of every damn article about public opinion and controversy surrounding "AI". If you're right about the public perception of the term, this implies that people who write these articles - "journalists" and tech PR people and surveyors alike - are either ignorant of this general usage or deliberately being deceptive
I think it's not that difficult to see why a technology that will likely trigger widespread unemployment during a cost of living crisis, an arms race with China, along with all the alignment concerns, might not be hugely popular with the public.
Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.
Pretty simple: The centaur of big-tech/government will pay people not to eat them. (i.e. UBI)
The incentives are, how you say, aligned.
The deeper issue I see is the psychological crisis for a species who believes it doesn't deserve to live if it isn't performing economically valuable activity, entering a world where it is unprofitable for it to be employed. (If I were the AI, I'd come up with some kind of fake jobs to keep the humans sane.)
UBI is just a massive extension of the welfare state. Governments can’t afford the current welfare spending, so where is the money going to come from? What do you think is going to happen to the markets when a large amount of the middle classes get laid off and can’t afford to pay their mortgages? What do you think is going to happen to the tech companies built on advertising to consumers when no-one has disposable income?
UBI ain’t gonna be enough for most white collar types to maintain their current lifestyles.
This assumes costs won't drop. I'm not an economist but the theory I hear is that there will be massive cost savings at every single point in the supply chain. So the same way your money is now amplified by AI in code, eventually with robotics that is the case in every field.
This sounds a lot like UBI is an replacement for salary for many jobs.
UBI funding doesn't come from thin air, every job has to pay for itself, even if it's just UBI. Mixed costing wont hold up because every market, every company, every worker acts on it's own. So companies must pay extra plus UBI, which will only lower prices if the overall salary gets lower at the end of the day.
In my world UBI should be an psychological tool that empowers people. The way UBI is usually discussed, it's a magical solution to a very hard, incomprehensible problem and the simplicity of it just throws 70% of humanity under the bus. It's literally the same we have now, the only difference is that now everyone can claim that everything is fair because of UBI.
Also UBI will inevitably become as fubared as current tax law.
The current group of oligarchs pretty clearly disagrees with your perspective on their incentives. The big tech era has made people like Elon and Bezos some of the richest people in history and they have used their power for negative wealth redistribution. They give essentially none of their money away to the masses and instead use their power to weaken existing social programs and wealth distribution systems. I can't see those people suddenly doing a complete 180 as they amass even more wealth and power.
Agreed, this article seems to be dancing around the point: WHY are the Gen Z hating AI? We have a political ruling class that is all too willing to throw everyone under the bus if they aren't living up to some expectation, and the political class is being driven by an economic ruling class that largely seems to have the same opinion.
Gen Z would likely have a very different opinion if their basic living necessities were available to them.
> a realistic economic scenario for how we're going to transition into our utopian abundant future
One aspect almost certainly has to be data centers being run as utilities. That forces transparency, resists monopolization and gives public commissions a say in e.g. expansion.
Hell no, the current state of centralized AI is bad enough, socializing it won't make it better.
We need to let the AI as a service businesses fail.
But in the meantime you prefer privately-controlled monopsony datacenters?
Yes I'd much rather big investment firms waste their money instead of government.
My wife has a very serious health issue, that has caused more suffering then words could describe. o1-preview was the first ai that actually proved useful. From there on, each improvement on ai caused an incremental improvement in her situation. Even recently we were able to pinpoint exactly what was causing her flare, and solve the situation the same day, just by prompting a claude opus conversation where i’ve shared all her health notes. But if i weren’t a data freak and haven’t been collecting data about her issues (what she does/takes and how she feels) for so long i dont think we would had been able to get this far. So i think ai appeals to people with problems that can be solved by finding patterns in data. People that say ai makes mistakes don’t understand that the power is in finding patterns, not in finding THE right answer. You need to prompt from that prespective
It is worth pointing out that we got here despite all of the “alignment” research and safetyism surrounding the models. As it turns out, the models don’t wake up and start destroying things. We knew this all along, but every time a new article came along and anthropomorphized and exaggerated another experiment it fed the clickbait machine.
The fundamental alignment issue is aligning the companies themselves with society, not the models with the companies. Widespread unemployment is not aligned with society, but it is aligned with Anthropic and OpenAI if it makes them rich.
Therefore the only “harms” the companies will take seriously are those which also harm the company. For example reputational harms from enabling scams aren’t allowed.
Perhaps all of this isn’t fair, since companies actively subverted safety research for profitability. But then I would go back to my earlier point of over-indexing on unintended behaviors and under-indexing on intended ones.
> Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.
Imagine choosing to be an expert in something that you think is a coin flip away from making the world worse.
I don't think you can just invert it like that. There's probably a significant percentage of respondents who think it might not have much impact.
You can be concerned and optimistic at the same time.
Isn't this an extremely reasonable thing to do? To take an extreme example, consider people working on gain-of-function virology research.
I can't imagine most people working on gain-of-function virology expect it to make the world worse.
It doesn't help that AI "thought leaders" can't articulate a vision by which our lives will improve rather than be made worse.
It looks like:
1. They take billions in investment
2. They spend trillions
3. They and their investors profit in the quadrillions from all the "labor saving"
4. ???
5. Everyone's needs are met.
It is just obnoxious the gap between thought leaders and everyone else.
I was at a panel last week. The most pro-AI person was an account executive from a big fintech company.
EVERYONE else - a data scientist that works in AI, regulatory compliance, cybersec, and marketing, took the position of "hey this is great and will change things, but let's pump the brakes... a lot."
Random people cure cancer for their dog, every business can vibe code an app to make their operations more efficient, anyone can launch a business with 10% of the effort it used to take.
The AI companies are only capturing like 5% of the value produced with this tech right now.
Did you miss the article?
https://news.unsw.edu.au/en/meet-the-man-who-designed-a-canc...
In case you're wondering who they mean by "AI experts", I checked the Pew poll:
> Note: “AI experts” refer to individuals whose work or research relates to AI. The AI experts surveyed are those who were authors or presenters at an AI-related conference in 2023 or 2024 and live in the U.S. Expert views are only representative of those who responded.
I don't know how many times I've seen some Google AI summary or ChatGPT with references that, when I checked, did not say what what the AI summary said. If a high school student falsified references in a paper like this, they would get a bad or failing grade. This is bad, not acceptable, the teacher would say.
But we have been sold to use these constantly falsified AI summaries as the go-to source of "truth" by all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.
You should be grateful to have got back working links.
"Make something people want" seems so quaint now.
"Make something investors want" is the name of the game now and the reason for the disconnect.
Always has been since the ZIRP era. The ‘make something people want’ phrase was coined by a famous Silicon Valley investor. I heard he runs a popular forum.
“Make people want something, and sell it to them”.
Ignore all environmental, political and social problems, and invest everything in a purely antisocial technology.
Yeah but drugs are illegal.
"Make something, then bribe bosses to shove it down people's throats."
>... with Gen Z reportedly leading the way...
The kids are alright.
Shattered dreams
They've been saying that since the Boomers were kids, look where that led us
I'm biased, but I think Gen X turned out okay ;-).
I'm also biased, but I think millenials turned out okay ;).
As a geriatric millenial[1] myself, I approve this message :)
[1] https://fortune.com/2024/04/23/four-types-millennials-geriat...
In all seriousness, I agree. Millenials got a lot of crap, but by the numbers they look pretty successful to me.
> I'm biased, but I think Gen X turned out okay
As a Gen Xer myself (1973) I disagree.
The widest margin of Trump voters by generation was Gen X.
Gen X has largely morphed into the boomers they used to despise.
Agreed. As a kid it felt there was so much energy to make things better, to fight the system. So depressing growing up and seeing so many peers and idols becoming the same inward-looking grey old farts they used to mock.
Perhaps this is inevitable.
> Perhaps this is inevitable.
There is certainly some logic behind the old joke about young people with no heart and old people with no brain. It's natural to become a bit more conservative as you age. Though I would clarify that I think it is natural to become more of a normal conservative; the current conservative party in the US is ... not.
I'm not seeing that. Trump support in 2024 was pretty strong across the board. The born-in-1960s edged out the other decades, but it was not by a wide margin (and I consider GenX more of a 1970s phenomenon than 1960s anyway).
If you want to pick a generation to complain about, look how hard the younger folks swung in favor of Trump in 2020 and then even more in 2024.
https://www.pewresearch.org/politics/2025/06/26/voting-patte...
I work with LLMs extensively and daily and they are very useful. BUT dear god, absolutely nothing about them is intelligent.
If you work at the edge of context you know what I mean. Even within context, if the system was truly intelligent, the way that Euclid was intelligent, why do I need /superpowers and 50 cycles to get a certain implementation right?
Why is the AI not one-shotting obscure but simple business logic cases with optimal code? Whoops pattern never seen before! There is no thought to it, zero. The LLM is just shotgunning token prediction and context management until something sticks. The amount of complexity you get out of language is certainly fascinating and surprising at times but it's not intelligence - maybe part of it?
Sell it as skills or whatever, but all you do every day is fancy ways of context management to guardrail the token predictor algorithm into predicting the tokens that you want.
I think it's pretty clear that the problems with AI are:
1. Overhyped. Try writing a blog post that doesn't sound like it. Everyone is sick of reading it now.
2. Affecting the wrong people. It used to be the rich got richer and the poor got poorer. But now a lot of the middle class will get poorer
3. Severely damages the work hard way out. Competition will become brutal if there's almost no barrier to entry. This will drive down profit, affect hiring and will become a conveyor belt of people trying to win the business lottery. This will make moats even more essential.
4. The obvious theft of creative works which destroys dreams and livelihoods.
No wonder the younger generation are against it. Those of us in the middle are still just hoping at least we can get through somehow. At least we have hope.
Also 5. Not what the world needs.
Automation can free humanity from toil, but automation in the hand of billionaires that does the work of white collar, educated people in a period of economic and cultural turmoil, with no plan to employ them all than hoping UBI descends from heaven unto the world, is the recipe for societal disaster on a massive scale.
People are anti AI for obvious and valid reasons, but I think we should focus on where the profit goes and not on hating the technology itself.
Of course, if people are fired and only capital owners / AI experts get to earn anything then this is wrong and a revolution is obviously needed and unavoidable.
But for me, the best outcome would be if it was AI that did all the jobs so people could focus on doing what they want, not that we'd go back to pre-AI era..
Initially however we need to balance between full wealth redistribution and keeping the incentive to develop AI further.
Of course by AI I mean really useful AI, the real part, not the marketing part.
People need purpose. They may want money, etc but you see it all the time. People get bored and want something to channel their energy into.
So even universal income won't solve everything, not that it's ever likely.
Been saying this for a bit but the things I’ve seen associated with AI seem to be the things that it’s pretty mid at. Coding, automated actions etc. I wholeheartedly believe adoption and perception would be better if the things it was amazing at were pushed more.
Take log review for example. Whether it’s admin or security LLMs are incredible at reading awfully formatted logs and even using those to pull meaning from other logs as well. Like turn an hour long log review into a 10 minute log review type thing.
Yeah but the other guy says his AI is going to cure cancer and mine minerals on asteroids. Who do you think the investors are going to fund?
My experience has been that the disconnect is between the Bay Area and everywhere else. The engineers at my company are split 50% in the Bay Area and 50% elsewhere. The engineers in the Bay treat it as a borderline religion. They evangelize it, and do not allow any form of criticism. It reminds me of the hippie movement: idealistic and not grounded in reality.
My only surprise is that the AI "elite" is surprised.
"You think that AI will take your job, disrupt society, and has a 25% chance of being an EXISTENTIAL threat?! Who told you that?!"
They’ve gotta be feigning it right? I just don’t understand how you could be so out of touch with what happens when wealth becomes this concentrated. This isn’t the first go around at this.
Wealth concentration has been happening for a century. You don't need AI for that.
https://hai.stanford.edu/assets/files/ai_index_report_2026.p...
> The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31%.
It seems US citizens are really against the current administration, just using the fact that AI investment is intrinsecally connected to it to voice their opposition.
> Country-level expectations follow similar patterns to the earlier sentiment trends. Nigeria, Japan, Mexico, the United Arab Emirates, South Korea, and India all expected AI to create more jobs than it eliminates, with shares above 60%. The United States and Canada sat at the opposite end, where 67% and 68% of respondents expected AI to eliminate jobs and disrupt industries.
Globally, the disconnect is not growing. It's really just an U.S. problem (spilling to neighbouring Canada too).
So, no luddites in sight, again. It's just a public perception over a polemic topic being leveraged for ideological reasons sinking AI on US only.
I think that identifies an issue that is going to cause a real problem for the US in the future. The society is deeply politicised and polarised to the extent that essentially inanimate objects are regarded as having deep political and social significance. When there is political change, it is going to sweep back in the other direction.
It also seems like people on all sides within the AI debate have been fanning those flames thinking is will work in the short-term...and it won't. Big tech played that game in many countries in the early 2010s and it didn't end well.
It must be noted that the U.S. does allow inanimate object makers to fund politicians and such practices are widespread.
If all is well, then it's all good: no need to blame anyone, campaings get funded, etc. If one major crisis occours though, the country self-immolates by design.
Corporate contributions to Federal politicians and candidates are illegal in the US.
The New York Times is allowed to spend money like anyone else praising or slagging politicians, but that’s the First Amendment, not funding candidates.
> Corporate contributions to Federal politicians and candidates are illegal in the US.
And that's why the whole system is divided into two parties that both, each, funnel all their support to the presidential campaign (and then to taking over seats to guarantee more lobbying).
This whole thing would fall apart without lobbying.
Source: https://hai.stanford.edu/ai-index/2026-ai-index-report (https://news.ycombinator.com/item?id=47758120)
The lack of federal permitting standards for AI data centers is really going to bite the industry in the ass. We also probably need something akin to the WARN Act for AI-related layoffs. (Possibly with multi-year benefits for large companies.)
This AI rollout has been fundamentally rushed and fucked from the very beginning and I think the people who are responsible for doing it this way have done more irreparable damage to society than any single group of humans in the entire history of the species, and I mean it.
It’s always only ever about how the new model is faster, better, smarter. Or how the tech will be bringing ruin to the job market and someone should probably do something about that some time soon. Zero efforts to create any sort of educational content - how it even works, how to vet its output, how to have an eye for confabulation, how to use it as thinking enhancement rather than replacement, to keep in mind that it’s trained to please and will literally generate anything to cause users to click the thumbs up button. Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
And then everyone’s surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that’s trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it’s more than it is! Boy, who’d’ve thunk; isn’t the world complex?!
This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.
Giant leaps in innovation almost always have a reaction like this.
It's new, people fear it. Sometimes justified, usually not.
People greatly feared the car because of the number of horse-related jobs it would displace.
President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much they refused to operate light switches to avoid being shocked. They had staff turn lights on/off for them.
Looking back at these we might laugh.
We're largely in the same boat now.
It's possible AI will destroy us all, but judging from history, the irrational reactions to something new isn't exactly unprecedented.
Many innovations are also on the refuse pile of history. Indoor gas lighting[1] is one. People were quite justifiably skeptical of electricity, when its relatively short-lived predecessor frequently killed people in explosions, carbon monoxide poisoning, etc.
[1] https://en.wikipedia.org/wiki/Gas_lighting
> when its relatively short-lived predecessor frequently killed people
If only it were this obvious when the polluted air isn't your home but the entire planet, killing not your grandma but taking a few healthy years of life from everyone simultaneously. Maybe people would feel like we need to reverse priorities rather than go full steam ahead on newly created energy demand and see about cleaning it up later
Zeppelins are another notable one
None of the previous innovations haven’t similarly replaced the human itself.
https://en.wikipedia.org/wiki/Power_loom#Social_and_economic...
Industrial Revolution? We're still here.
Nope. Did not replace the brain in general level.
Every invention is touted as the next electricity, or the next internet (crypto scams anyone?)
Meanwhile not every invention is. Electricity and internet are electricity and internet, and very few inventions come even close to that. Meanwhile LLMs have had arguably a net negative effect on the world at large.
Is it irrational to wonder how large swathes of the population will earn a living if their employable skills vanish in a couple of years, with little prospect for retraining into something else that AI hasn't replaced? Is it irrational to wonder what effect an influx of the AI-replaced will have on remaining AI-free fields? Is it irrational to wonder about the psychological impact of work where one simply operates the AI instead of thinking, creating, growing? Is it irrational to wonder if wealth inequality will spiral when these essentially-unobtainable resources are used by a select few to enact the above scenarios?
I can only assume you have easy answers for all of these questions given your casual dismissal of such concerns, likening them to being scared of a light switch.
I don't think the disconnect is very surprising to the "insiders".
Your Dario's and Sam's know exactly what they are doing. They know it's going to cause a lot of job displacement, even if the technology isn't perfect. They are trying to get the C-suite elite hyped up about it, and the hyperscalers are along for the ride as well. There's so much money to be made.
They could not care less about what joe schmoe on the street thinks about it.
Well we can easily see that the "abundance" people are wrong(for example everyone can't have a penthouse apartment overlooking Central Park, no matter how capable the robots become).
An alternative possibility that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.
Ah, but you can have a penthouse apartment overlooking Central Park in a gen AI paradise, and that’s just as good.
AI is a religious icon for capitalist ideologues.
A silicon savior to finally free capital from the dependence on labor with all its pesky demands like sick leave or a living wage.
You can see this in the literal deification going on in VC circles. AGI is the capitalist version of the Second Coming, God coming down to earth to redeem them by finally solving the contradictions in their world view.
Unfortunately for them and fortunately for the rest of us, it's not all they hope it to be.
Everyone is trying to keep their jobs that's all.
My own anecdotal experience is yes, there is a real visceral hatred of AI among Gen-Z. You have to look at it through a lens where they already feel like there's been a massive amount of intergenerational theft against them - particularly with the housing market putting owning a home out of reach, along with the evaporation of the concept of a stable career. Now they are going through education learning skills that they are incessantly hearing will have no purpose and there will not be jobs for them.
It's hard not to see that they have a point. If AI is so great and going to save so much money - how about starting by paying some of that forward? Suddenly when you ask the billionaires or AI tech elite to share any of the wealth they are so confident they will generate, everyone backs away fast and starts to behave like it is all a speculative venture. So which one is it?
Exactly. They grew up fucked by social media, the financial crisis and covid, can clearly see housing is unaffordable and now they won't have jobs.
No wonder they're all trying to get on benefits. Fuck Maggie Thatcher for selling off the council houses.
can someone explain what kind of ai-related regulations are there in Singapore and Indonesia to get such a high trust score?
Singapore has a modular approach for regulation where they regulate sector by sector, e. g. financial, medical, educational etc. In the financial sector, for example, the board of the institution is responsible for risk assessment as well as implementation. I couldn't find out whether they are liable in case of damage, but I would assume so. They update their regulations frequently, sector by sector.
There aren't. AI despair is mostly a Western mindset, and Asian countries have more positive views.
What the tech elite fail to understand is that we are at historic levels of wealth and income inequality. Access to healthcare is determined by one’s employment which makes what I’m about to explain a matter of life and death.
It doesn’t matter if you think it’s all going to work out and AI will bring an unprecedented era of abundance. That is not the current state.
The current state is: Nearly all productivity growth since 1980 has gone to shareholders, not workers: https://www.epi.org/productivity-pay-gap/
Now what do you think happens when we dramatically expand productivity with AI? Well, we’re already seeing unprecedented layoffs in tech. And it’s easy to draw the conclusion that unless something structural changes all of the productivity gains from AI will go to investors not workers. Leaving said workers without access to healthcare or housing.
And of course let’s not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low income / unemployed population. This isn’t fucking rocket science guys.
They understand, they’re the ones on the top of the ladder pushing everyone else off.
This article seems to be using the word “expert” quite imprecisely.
Regardless, I think we are going to see an acceleration of AI research.
I just wish my wife is more serious about camping and learning survival skills. I think Shit is going to hit the fan in the next 5-10 years but she thinks that’s crazy. Oh well maybe I am crazy.
That is crazy. Why would the next 5-10 years be so insane you'd be forced to survive in the wilderness?
Thinking about AI-induced(or perceived)-layoffs that triggers another depression which then triggers riots in the city, or something like a future war triggering oil going up crazily which in turn triggering the shortage of fertilizer and every other oil products, which further triggers China to put a stop on exporting some key chemical products, which then triggers more shortages and then what not, I think it’s a perfect sane possibility to live in the wilderness for at least a couple of weeks.
Oh the second one is happening right now.
You might have better luck in suburbia, growing vegetables in your yard, trading with neighbors, and taking turns patrolling at night than trying to rough it in the wild.
Haven't we learned anything from The Walking Dead?
This is in my thought too and one of the reason we moved to the suburbs. Yeah I might be a bit too crazy about that.
One of the most hilarious AI-vangelical posts I've seen recently is from Steve Yegg through Simon Willison [0]....
> The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too... [0]
Ummmm... Steve. You think Google might be able to figure out a super huge awesome new thing from 1 out of 5 of their employees. Or, given this is a consistent curve across the industry (even at Google)... Maybe AI is only about a fifth as cool and helpful as you and the enthusiasts think it is?
[0] https://simonwillison.net/2026/Apr/13/steve-yegge/#atom-ever...
The tone deafness of the tech community is so unbearable. Either too on the spectrum, too ambitious (the world is fine cause I’m getting mine), or too isolated from non-tech people, to realise most people despise what they’re creating.
There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.
This is 10000% OpenAI's fault.
In 2022 the world was open arms, welcoming AI advancements.
However, since 2022, OpenAI and all of its original founding researchers, had their dramatic fallout, and began screaming in public saying crazy people things like "the end is coming."
Why did they insist on force launching ChatGPT? Google at the time refused to launch their own version (it was their own research that gave birth to LLMs) based chat because they knew all of the negative outcomes and unreliability of it all was just a poor product experience.
Instead of launch quietly like DALL-E and keep it fun and experiemental, nope, they threw it up online and moved full-steam ahead.
"THE END IS COMING" Sam Altman said. "AI WILL TAKE YOUR JOBS WITHIN 5 YEARS" Dario said. "AGI IS ALMOST HERE" Elon Musk said.
The disconnect is because these specific men, making those specific bold crazy person claims, with zealous cult following employees (including many of us here in this forum), kept marching ahead. Not only that, no one asked the rest of the world if they even wanted this technology EVERYWHERE.
This technology could have been so cool if it were given the breathing room to find usecases for it. Natural Language programming has been tried for a half a century, and it finally arrives.
Yet, it's so tainted by all the crazy person speak, and doomsday messaging, it's also thrown out there in such a haphazard way that have burned so many bridges, this technology is truely toxic. The fact that Gen-A and Gen-Z now have to waste brain power speculating if something is AI generated, is such a waste, but here we are. Welcome to the shit storm that was entirely made by those men.
I have seen this shift myself. A year ago everyone was super excited by AI. Now, if you exit the tech ecosystem, most people have become decidedly “meh” about the tech.
“Is that some nonsense ChatGPT told you?” Has turned into an almost cynical mocking in response to someone commenting about an issue.
The hype seems to have run its course. I’m a fan and use it constantly, but it’s also clear there’s serious storm clouds and headwinds on the horizon.
Funniest shit I heard all day. lol at all the genius “insiders” who the rest of us justifiably hate
[dead]
[dead]
Makes sense.
Paraphrasing the classic, it's not AI that people are unhappy with, it's their life around AI. The world generally appears to have become a harsher and more dangerous place - even though it hasn't. But people and especially tabloid press like finding scapegoats and participating in mass hysteria. The anti-AI hysteria is going to go away soon while AI isn't. It's just another tool, like cars or factories. Granted, it brings some danger, but at the same time it brings overwhelmingly more good.
This reads like such a cope. The only people who are hysterical about AI are the people pushing it, pushing the investments, pushing the AGI risk, pushing the marketing and promising to push workers out of their jobs. Listen to Sam or Dario for 10 minutes and tell me they’re not hysterical themselves. Sam compares himself with Oppenheimer, making direct nuclear weapon analogies, and warns of the dangers of what he is producing, yet the people who are concerned about this are hysterical?
You are in a massive bubble my colleague, and I hope you have held some small doubts in your mind so when it pops you will have something to hold onto.
The minor benefits of vibecoding unusable prototypes or lazy cretins "writing" blogs with AI can't quite compare to the benefits of cars and factories, don't you think?
If "AI" was just free local and open models running on consumer hardware, fewer people would have an issue with it. Which highlights that the issue is with the hyper scalers, the rhetoric, the corporations, the marketing, etc etc.
We are ever so close to nearing the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.
Fewer, sure, but maybe less than you suggest. Plenty of harms are just as easy under a regime of open models only. Job losses, spamming, scraping the internet, data centers, scams, hacking etc are all possible with open weight models now.
Nah, the issue is more who controls access to these tools. People (rightfully) don't like billionaires or the elite ruling class very much. Without all the hype and investments it wouldn't be seen as such a big deal - just a neat technology.
Free open models are still capable of flooding art communities with slop images, which is worth sympathy, and is not included in your "Which highlights that the issue is with the hyper scalers, the rhetoric, the corporations, the marketing, etc etc".
Without the hoards of grifters who latched onto the AI bubble there would be less slop, and the community would find a way to deal with the bad slop, and would be far more accepting of the good slop.