> They are being told, on the one hand, that these tools are going to eliminate millions of jobs, and on the other that they have to use them if they don’t want to fall behind.
I'm currently reading a fascinating book called Blood In The Machine° about the Luddites who opposed certain technologies in 19th century England and the parallels with the current state of affairs. It's important to remember that while history doesn't repeat itself, it often rhymes.
Because history is written by the victors, the Luddites were painted as idiots who just hated machines for no reason or dumb reasons. This couldn't be further from the truth.
The sad thing that I haven't been able to resolve in my mind is that this is a cultural multi-party prisoners' dilemma among sovereign entities.
From a power-centric point of view, if my neighbors intentionally cast off modern technology, they are ripe for domination, economic exploitation, etc. The history of human civilization from the age of city-states onward is about navigating the need for protection from hostile, arrogating outside forces (and/or being one of those hostile forces).
> Freystaetter and Gottlieb both say that instead of their own generation, they are more worried about Gen Alpha and other young people that come after them, who lose their chance to develop healthy relationships with technologies when they become mandatory and ubiquitous.
I remember similar concerns from Millennials about Gen-Z with the Internet and social media. In the end, the Internet and social media that Gen-Z grew up with were quite different from what Gen-Y had worried about, and the new generation's reaction to them was of course not uniform. Similar developments might happen with Gen Alpha and AI, which seems even more polarizing to me.
I have three (gen A) kids, of which two are of the age to have opinions on this.
They tell me I don't have a real job because I just tell the computer what to do, and I don't do the thing myself (to which I can't help but respond that they're absolutely right). If I try to spin them a bullshit story, they ask how that can be true and whether maybe I got brainwashed by AI. Also, they hate ads with a passion.
If anything, I'm incredibly hopeful for newer generations. They'll probably mostly be fine, like most of us were.
---
Edit: many responses, and I'll add that in isolation "x is not real work" coming from a kid is maybe as endearing as it is divisive.
I'll add that of course my kids are a product of their upbringing, and I make no secret of my existential confusion about what it is to program a computer when most of the time I'm just steering the clanker away from obviously dumb mistakes.
My wife is a psychologist working with underprivileged kids, so we always joke that she has the real job and I'm just doing a hobby that pays well. Much of this is them simply parroting that, maybe. We do try to teach them to think beyond dogma and the cultural bias they grow up in, but who can tell. Everyone is in the end to a great extent a product of their environments, and parents.
Finally: they (their generation) will probably be fine, but they might equally well not be. Vapes, TikTok, souped-up e-bikes, sexting, designer drugs, climate change, refugees, extremism. So many challenges, but you could argue every generation had that. So I choose not to have too much of an opinion and try to stick with gleeful, desperate optimism.
> They tell me I don't have a real job because I just tell the computer what to do, and I don't do the thing myself (to which I can't help but respond that they're absolutely right).
For most of computing history this has been the case, too!
Can't be doing rEaL woRk unless you're flipping front panel switches to input machine code instructions.
I mean, what is real work anyways?
Take finance, where people just email broken spreadsheets around all day. If they stop doing that, farmers can't get loans to plant crops, which means crops don't get planted, and so on and so forth.
Certainly emailing spreadsheets doesn't seem very "real," but there's actual value in providing liquidity; it's just not physically demanding.
On the flip side, professional sports is very physically demanding but can you really call what kids do for fun "real work"?
From the perspective of these kids real work probably involves working with your hands. I don't think we need to get too upset over what people who have yet to enter the workforce have to say about "real work". They need to be employed for a few years before they learn the lesson that almost ALL work is fake work.
My definition of real work is - can I point at something and be proud of it? It might not even be something physical (but often is) and my involvement may not be obvious (say, managing the spreadsheets for a building project), but there it is, the thing I worked on.
> They tell me I don't have a real job because I just tell the computer what to do, and I don't do the thing myself (to which I can't help but respond that they're absolutely right).
Hm interesting
So they are making the distinction between regular "human brain" coding and AI-assisted coding?
Regular coding could be described as "not doing the thing yourself, but telling the computer what to do"
(FWIW I do think there is a huge difference; however I am not sure the general public has a very good idea of what "programming" is. I remember having some code up on my screen and my educated family was confused, even at the concept)
"Children are never shy to tell the truth." Your comment makes me hopeful as well.
In general, those "Generation XYZ is threatened by this, thinks that" tropes often annoy me. I was born somewhere between Gen-Y and Gen-Z, and those boundaries feel totally arbitrary.
"You're not a real ham if you don't use Morse code."
"You're not a real machinist if you use CNC."
"Your mechanical drawing skills are going to atrophy if you use CAD/CAM."
"You should manually tape PCB layouts, so you have more control."
And another grandfather's favorite: "Why do you want to use the forklift? You won't always have one, and a pry bar and rollers are good enough, and you learn the value of real work."
I think there's a big difference between "your drawing skills will atrophy if you use CAD to draw for you" and "your brain will atrophy if you ask an LLM to think for you." Personally I don't judge people for being unable to draw, but I do judge them for being unable to think for themselves.
> If anything, I'm incredibly hopeful for newer generations. They'll probably mostly be fine, like most of us were.
The current state of the world begs to differ with "most of us being mostly fine". Critical thinking skills and the ability to make wise decisions among the various electorates seem to be in an incredibly shitty state.
Anecdotally, Gen Z-ers as a whole are definitely not better at this; they're easily swayed by flashy memes, TikToks and other forms of disinformation. Where younger people used to have a more society-minded, leftist lean (before ultimately becoming jaded), they now more than ever side with right-wing populists from a young age. Not all of them, but a much larger chunk than before.
Yes, well, my youngest (15) tells me all the same things, while simultaneously constantly asking Gemini to help them write code for their game they're building in RPG Maker.
The irony is that AI is best at replacing the work of the upper classes. Synthesizing different opinions, summarizing them, and producing outputs based on statistics are things AI does well.
But AI is actually not very good at replacing an entire lower-level worker’s job as a whole. It works well only when that work is broken down into smaller and smaller tasks.
The core problem is this: the coercive force of AI use is felt by the lower classes, while the upper classes still have the freedom not to use it. AI may be able to make decisions based on more data than executives do, and perhaps even make better decisions than management. Yet the people being replaced are the lower-level workers.
This is the problem. The upper classes, who claim that AI is an essential tool, still have the freedom not to use it. But the lower classes cannot survive unless they use it. It becomes a tool required for survival, while at the same time being treated as something wrong, inferior, or low-status if you use it.
To get a job, AI becomes an essential survival tool. But culturally, it is also treated as a tool that damages creativity. I see this in open-source communities as well, in the class discourse around open source.
The same culture appears on Hacker News. Among the upper layer of open-source communities, there is often hostility toward AI-generated code, based on ideas of human purity: AI code is said to have no meaning, no responsibility, no real authorship. So even within open source, this takes on a class character.
But as a freelance developer, I have to trade against my own code-writing ability in order to survive and deliver. Because of AI, the floor price of software delivery has collapsed. If I do not use AI, I cannot meet the new requirements.
In the past, a job that would have given me two months and paid $5,000 is now expected to be completed in two weeks for the same $5,000. Without AI, that volume of work is impossible to handle.
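A back-of-the-envelope way to see the squeeze, using just the numbers from my example (with 8 weeks standing in for "two months"):

```python
# The squeeze in numbers: same fee, same scope, a quarter of the calendar time.
fee_usd = 5_000
weeks_then, weeks_now = 8, 2  # roughly "two months" vs "two weeks"

# How much faster the market now expects the same scope to ship:
speedup_required = weeks_then / weeks_now
print(f"Expected speedup: {speedup_required:.0f}x for the same ${fee_usd:,}")

# Without tooling that actually delivers that speedup, the alternative is to
# spend the full 8 weeks anyway, collapsing the effective weekly rate:
rate_if_on_time = fee_usd / weeks_now   # if AI really gets you to 2 weeks
rate_if_late = fee_usd / weeks_then     # if you work at the old pace
print(f"${rate_if_on_time:,.0f}/week vs ${rate_if_late:,.0f}/week")
```

So refusing the tool doesn't just mean slower delivery; it means working at a quarter of the rate the market now treats as the floor.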
This kind of discourse always makes me uncomfortable. I dislike it, but I have to use it.
AI lowers the barrier to creation and learning, but the way it lowers that barrier can also bypass the training of thought itself. It turns young people into both beneficiaries and damaged subjects at the same time.
And we live under this loop of coercion. Sometimes I think I do not want to use AI.
But if I want to survive, I have to use it. I feel the abilities I once took pride in beginning to decay, and I feel myself becoming increasingly bound to AI companies. At the same time, I also feel another kind of ability beginning to emerge.
Perhaps growing older means learning how to live inside irony.
>The irony is that AI is best at replacing the work of the upper classes. Synthesizing different opinions, summarizing them, and producing outputs based on statistics are things AI does well.
AI just repeats whatever the prevailing opinion is at that time. I am a very heavy AI user (Claude, Gemini, ChatGPT) and have queried these models on a variety of topics. AI is not thinking; it is repeating.
> AI just repeats whatever the prevailing opinion is at that time.
That would be an improvement. They are generally far too sycophantic to just repeat the prevailing opinion, and instead synthesise the opinion they think the user wants to hear.
I agree with the view that AI does not truly think and cannot produce genuinely original opinions. It is difficult for AI to give answers that lie far outside the average distribution of its training data. In that sense, it is not very good at producing truly novel business insights.
But that is not what most “work” usually means. Work is mostly repetitive. The actual moment of decision is brief.
So what do I mean by work here? I mean the collection, organization, and synthesis of the materials needed before reaching that decision.
For that part of the process, AI is extremely effective.
Because of the nature of freelance work, jobs are not always steady. Most freelancers constantly worry that the work may disappear tomorrow.
In my case, unlike contract freelancers who are hired for a fixed period, I usually work on a project-delivery basis. Of course, well-known programmers may be able to negotiate salary-like contracts, but that is not my situation.
I think my earlier example may have been unclear. What I meant was not that the price increased. I meant that a project that used to take two months for $5,000 is now expected to be delivered in two weeks for the same $5,000.
That point probably needed more explanation. In the current freelance market, prices have collapsed more than many people realize.
My English is not perfect, since I am not from the English-speaking world, so I may have caused some confusion. Please understand my point as: work that used to reasonably take two months is now expected to be completed within two weeks.
> The irony is that AI is best at replacing the work of the upper classes. Synthesizing different opinions, summarizing them, and producing outputs based on statistics are things AI does well.
It’s really not, though. It’s fine at the sort of incidental communication people don’t really read carefully. But at the actual hard stuff it’s just not that great. It’s basically average by design, so it’s almost definitionally incapable of being great.
I’ve been trying to use AI for putting together job applications, tailoring my resume and cover letter and stuff. But it’s just not good. It’s decent if I want a sanity-check analysis, as in "how does this come across, generically?" But if I ask it to write anything, it sounds like LinkedIn slop. I expect that if everyone starts using this to write their basic communication, you’ll end up in a world where nobody is reading anything anyone says because it’s so BORING. Everyone sounds the same, everything is phrased in this sort of mushy and generic way. At that point the intended communication isn’t happening! We will have sanded away all the rough edges to the point where you can’t actually get a grip on anything anymore.
At another point I tried to test it out and asked it to wireframe a very basic application for me. Something that should have been a very lightweight thing that runs on device to do a basic background process it architected as some crazy overcomplicated enterprise-scaled thing as if I’m trying to build a unicorn startup out of this rather than just a toy app to organize my shopping list. If I wasn’t technically savvy enough to recognize it was way overcomplicating things I’d have just run with this. And what’s worse is, it burns through all your token budget to figure out a bunch of problems you don’t need to have!
Obviously I’m using it and find it useful, but I’ve started to develop serious doubts about how useful it will ever be without an informed and accountable attendant overseeing it.
> The irony is that AI is best at replacing the work of the upper classes
This is why the harshest critics of AI tend to be white collar workers of this social class. The same kinds that told coal miners and autoworkers to "learn to code" and called them deplorables for voting nativist in 2016.
Any chance to build mutual trust was squandered. The jobs worst impacted by AI are jobs where most of the workers are Democrats and live in blue states that don't swing.
Meanwhile, those manufacturing, construction, and healthcare jobs that are becoming a bigger part of the economy tend to be in the purple part of the country so their needs are heard.
and why the people most prone to praising it are ones who mostly write emails all day.
“Wow, this is very, very good at my job, which must be a difficult job because it pays well and I'm a smart guy. Imagine how well it will work for the dum-dums.”
> told coal miners and autoworkers to "learn to code" and called them deplorables for voting nativist in 2016.
The actual pitch was to bring educational and alternative-energy opportunities to an area that is impoverished and facing harsh economic realities. It's worth pointing out that the people WV did end up electing did not improve the region and did nothing for coal miners' economic wellbeing, as many of those coal plants shut down anyway and none of their elected officials did anything to stop it, nor did they provide any economic alternatives to the region:
"coal production has declined 31% since Trump took office [first term], and by some estimates, more than five dozen coal-fired power plants have closed."
> called them deplorables for voting nativist in 2016.
She called a spade a spade. As mad as they were in 2016 for being called that, they proved her 100% right when they sacked the Capitol in a violent insurrection in 2021 waving KKK and Nazi flags. That's deplorable behavior.
This article is filled with emotional triggers designed to drive engagement. Even the title. It can be hard to separate those things from objective facts.
Putting an LLM in front of it helps me focus on the facts.
There are also too many things to read. My default before llms would have been to ignore this article.
At least now I learned some things (mostly about the Gallup poll which had source data)
I do think some people will outsource critical thinking to LLMs, but they also help amplify critical thinking by doing a lot of the filtering and organizing, letting me focus on the things I think are important.
And then how do you uncover bias in your chatbot? Do you ask it to analyze its own analysis? For that matter, what about the bias in your prompt, which LLMs tend to accept uncritically? Do your own preconceived opinions bias you against the argument made in the article? Are you using a chatbot to think critically about the article, or to avoid thinking critically about your own beliefs?
> At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.
Perhaps you should take a cue from these surveyees and do your own thinking.
I actually did this - I plugged The Verge article into Claude and got the following critique of what biases are there:
> The article accurately cites real Gallup data but selectively omits findings that complicate its "backlash" narrative — most notably that curiosity is Gen Z's single most common emotion toward AI, and that daily users remain substantially more hopeful and excited than the aggregate figures suggest. The 79% "laziness" concern and declining hope figures are presented as evidence of generational rejection, when the researchers themselves describe what they found as "deep ambivalence." *In short, the article uses real numbers to tell a cleaner, more oppositional story than the underlying polling actually supports.*
Then I then put that Claude critique back into Claude and asked it to analyze the critique for bias and agendas and got this:
> The critique accurately catches real flaws in The Verge article — particularly the omission of "curiosity" as Gen Z's top emotion and the failure to distinguish between heavy users (who are more positive) and non-users (who drive most of the negativity). However, *the critique has its own directional bias, consistently framing every correction in ways that soften the negative trend, while ignoring data that cuts the other way — like the sharp positivity decline even among daily users, and the near-majority of Gen Z workers who see AI as a net negative in the workplace.* Both pieces are selectively using the same real data to tell opposite stories; the Gallup findings themselves are more nuanced and more negative than the critique allows.
So according to Claude, Claude is biased in how it describes The Verge as biased.
LLMs are breakthrough technologies. The AI products we have today are SaaS products built by companies doing everything they can to find people who will pay for them. Very, very different things.
> LLMs are breakthrough technologies. The AI products we have today are SaaS products built by companies doing everything they can to find people who will pay for them. Very, very different things.
> The cool thing about the current generation of AI tools is how easy it is to uncover bias or an agenda in an article like this.
This is only true if you assume that an AI tool is itself unbiased. I'm not sure how anyone can earnestly believe AI tools are unbiased after Grok's MechaHitler episode [0], unless they just aren't giving it much critical thought.
Gen-Z is pretty cool. The problem is a small subset of Gen-X and Millennials who have too much money and power and treat AI as if it is the Dianetics bible from sci-fi author L. Ron Hubbard (who, according to James Randi, knew exactly what he was doing).
There are truly mentally unwell people in charge who would like to get out the E-meter and audit everyone who does not follow their new Scientology knockoff. Yes, the advertising methods and the suppression of opposition are the same.
My daughter's a senior in college. She recently was part of a group presentation; she did not use AI to prepare for it, but all of the other group members did. She was the only one who could answer follow-on questions.
If you use AI to understand things for you, you're short-changing yourself.
I mean…that’s essentially identical to group presentations in general. The other students didn’t do the work; what they don’t do the work with is irrelevant.
Good for your daughter but doesn’t that example tell us the opposite of what this article is trying to argue? If the majority of a group of young people choose to use AI for their project, that doesn’t indicate that the majority hate it. That would indicate that they like and trust it.
I don't understand all the negativity in the education space. Ever since I read The Diamond Age, I've thought that the ability to interact with and interrogate works of literature, or science, or the world at large is EXTREMELY powerful for understanding.
AI can be used for good especially when you're digging into the details / nitty gritty and asking good questions.
Anything can be used in a saccharine way to take the easy way out. Why not ban Cliff Notes as well? Sure, it won't write the essay, but also you didn't read the book.
You don't have to use these tools in a lazy way. You don't have to use these tools in a way that cheats / compromises your intelligence. Building up the awareness to use them in a way that multiplies instead of subtracts is going to be the key issue for my kids, and for anybody in the workforce today.
However, is this exclusive to young people? I'm a millennial (early 90s) and I share their sentiment. I might not share it for the same reason, though. Personally, I'm concerned about what AI usage would do to my cognitive ability, and as such I try to limit my use. I can't avoid using it at work (we're being tracked on "AI Adoption"), and it does genuinely speed up some of my tasks. And I do play around with AI coding tools, mostly because I think I _should_ know them in this day and age.
But apart from that, I'm not using it. I'm using DDG searches rather than asking ChatGPT for solutions, I still go around reading websites and papers instead of AI summaries, and I don't outsource my writing to it (i.e., I write my own emails, my own blogs, my own poorly worded HN comments, etc.).
Fellow millennial here. I rarely use AI, for similar reasons. Not only am I worried about cognitive decline, I also have plenty of ethical concerns, and I don't want to become even more dependent on US megacorps. Fortunately, I'm writing my own software and nobody can tell me which tools to use :)
I don't think it's exclusive to young people, no. I'm a couple years older than you. All of my friends also hate it and make fun of it. Like some of the people in the article, I'm also looking to get out of the tech industry and find something else to do other than be forced to talk to shitty robots. If they want to fire me for not using their crappy tech enough, fine. I don't care anymore.
> At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.
I don't see how these and other sentiments are unique to Gen Z at all.
The difference I've seen is that many zoomers have given up on learning in the first place. "What's the point?"
We know that AI will ultimately just end up enriching a very small group of people with no change in prosperity for working and middle classes. CEOs are openly saying as much. For the past number of decades the rise in productivity has been completely detached from wages, it'll be no different this time.
We're also no strangers to enshittification, we have first hand experience of technology causing negative societal effects when in the hands of for-profit entities.
It's the latest form of elitism - like in less sunny countries it's fashionable to be tanned because it means you are rich enough to have time to hang around on the beach while in sunny countries it's fashionable to be light skinned because you are rich enough that you don't have to work in the sun. Disdain for AI is a luxury belief of those who are either talented enough to draw / write / code without or are wealthy enough to not have to.
Is it though? I thought elite engineers needed to consume billions of tokens monthly and/or pay >100 USD subscriptions to AI tools to realize their potential.
Or a belief of those scared that an imploding "AI" bubble will ruin their financial futures. Or just that most of the humans in their own white collar professions will be replaced by AI's.
I don’t understand why people act like we just have to submit to the AI revolution. We can make this technology illegal, and shut it down completely. Why don’t we?
You can make AI/LLMs illegal, but other countries won't. There's a real risk as a country you fall behind economically if you ban what turns out to be the next big tech revolution.
Okay. Let China become more technologically advanced than us. I don’t care. Chinese tech companies being able to write software 20% faster than us is a less bitter pill to swallow than having to live in our current LLM world.
> I don’t understand why people act like we just have to submit to the AI revolution.
Some people are genuinely interested and excited about this new technology. Other people have an interest that the AI will succeed. At least on the surface it seems that these two groups are louder (or more successful) than the ones that oppose AI.
> We can make this technology illegal, and shut it down completely. Why don’t we?
Because there are not many (if any) lobby groups that pour money into making it illegal and also because of fear of not being left behind. There are also plenty of lobby groups that invest a lot of money into putting AI into everything.
no government on earth will make ai outright illegal. it is the perfect thing to shrug accountability onto, let alone all of the actual semi-useful reasons for keeping ai legal.
how would you even make it illegal? people have local models everywhere. if your country makes it illegal but mine doesn't, people from your country will just vpn in and access them in my country. it would have to be a worldwide effort (lol).
We make child porn illegal which is even easier to copy than local LLMs. It doesn’t work perfectly, of course there are people who still have it, but we could dramatically reduce the number of people with local LLMs if we punished possession with long prison sentences.
i am sorry, i was trying to take you seriously, but you completely lose me at comparing an llm to child sexual abuse material (CSAM) and suggesting possession of an llm should be met with similarly long prison sentences. that is absurd.
a central tenet of justice is that the punishment fits the crime. for CSAM, it is obvious why extremely long prison sentences are fitting. the damage CSAM causes is immense and hard to even capture with words.
Punishment is not only tied to the harm done by the crime, it’s also tied to difficulty of enforcement. For laws that are relatively easy to break without getting caught, you need severe penalties or everyone will just ignore them.
The irony here is this: AI actually has the potential to absolutely level the playing field.
Right now corporations are building the infrastructure out wildly and incorporating it into everything. They’re concerned about a race to the top while creating absolute inefficiency and ignoring responsible, sustainable growth.
The task of GenZ should not be to avoid AI, in my opinion.
Rather, embrace it. Own it.
WEAPONIZE IT.
When Google mainstreamed the search engine and added tool after tool, it digitized things that were previously legacy. Word processing? You paid a big licensing fee to Microsoft and could only save to your local machine or physical media. Then along came Google Drive and Docs, and now you can edit your document anywhere, and a computer crash doesn't take it out.
AI is that integration at warp speed.
We now have the tools to work harder and faster. We have near-instant access to research. If we are discerning, AI is actually not a weapon against us. It is a tool we can use to change the narrative.
Big companies are actually banking on fear of the masses. They want you to believe that AI is too big. That it is all-knowing. They don’t want you to recognize you can download ollama and a localized agent and tune it to your needs. Or to get into Gemini and ask it how you can disconnect from Google’s cloud if that’s really what you want it to do.
AI is the future. But it needs human hands. The question you need to ask is: your hands, or Microsoft's?
The moment you start doing this at any scale, the companies will notice, and after a few winks, nods, and campaign donations, you will not be able to use it anymore.
As if nobody's ever evaded a ban before. We have North Korean spies infiltrating American government agencies, but you can't even create a second Claude account?
This is why local weights are the way to go if you want to embrace the technology and successfully weaponize it against incumbents. I'm not saying you should, mind; I've only dabbled enough to get a sense of how these things will eventually be weaponized against the public. But the unethical genesis keeps me from fully embracing it, likely much to my detriment.
Yes, it is clear - for every compute unit you have that's more efficient and effective I have 10,000,000,000 compute units. I can use the same models that you do. How would you win?
They don't want to embrace the Slop Machine. They hate what it is doing to the internet they used to love. Why would they intentionally become fully dependent on it, leaving them completely at the mercy of AI companies willing to turn off their access at a moment's notice? [0][1]
° https://www.goodreads.com/book/show/59801798-blood-in-the-ma...
Because history is written by the victors, the Luddites were painted as idiots who just hated machines for no reason or dumb reasons. This couldn't be further from the truth.
The sad thing that I haven't been able to resolve in my mind is that this is a cultural multi-party prisoners' dilemma among sovereign entities.
From a power-centric point of view, if my neighbors intentionally cast off modern technology, they are ripe for domination, economic exploitation, etc. The history of human civilization from the age of city-states onward is about navigating the need for protection from hostile, arrogating outside forces (and/or being one of those hostile forces).
> Freystaetter and Gottlieb both say that instead of their own generation, they are more worried about Gen Alpha and other young people that come after them, who lose their chance to develop healthy relationships with technologies when they become mandatory and ubiquitous.
I remember similar concerns from Millennials about Gen-Z with the Internet and social media. In the end the Internet and Social Media Gen-Z grew up with was quite different from the one Gen-Y was worried about and the reaction of the new generation to it of course not uniform. Similar developments might happen with Gen Alpha and AI, which seems even more polarizing to me.
I have three (gen A) kids, of which two are of the age to have opinions on this.
They tell me I don't have a real job because I just tell the computer what to do, and I don't do the thing myself (to which I can't help but respond that they're absolutely right). If I try to spin them a bullshit story, they tell me how can that be true and maybe I got brainwashed by AI. Also they hate ads with a passion.
If anything, I'm incredibly hopeful for newer generations. They'll probably mostly be fine, like most of us were.
---
Edit: many responses, and I'll add that in isolation, "x is not real work" coming from a kid is perhaps as endearing as it is divisive.
I'll add that of course my kids are a product of their upbringing, and I make no secret of my existential confusion about what it is to program a computer when most of the time I'm just steering the clanker away from obviously dumb mistakes.
My wife is a psychologist working with underprivileged kids, so we always joke that she has the real job and I'm just doing a hobby that pays well. Much of this is them simply parroting that, maybe. We do try to teach them to think beyond dogma and the cultural bias they grow up in, but who can tell. Everyone is, in the end, to a great extent a product of their environment and their parents.
Finally: they (their generation) will probably be fine, but they might equally well not be. Vapes, TikTok, souped-up ebikes, sexting, designer drugs, climate change, refugees, extremism. So many challenges, but you could argue every generation had that. So I choose not to have too much of an opinion and try to stick with gleeful, desperate optimism.
> They tell me I don't have a real job because I just tell the computer what to do, and I don't do the thing myself (to which I can't help but respond that they're absolutely right).
For most of computing history this has been the case, too!
Can't be doing rEaL woRk unless you're flipping front panel switches to input machine code instructions.
I mean what is real work anyways?
Like, take finance, where people just email broken spreadsheets around all day. If they stop doing that, then farmers can't get loans to buy crops, which means crops don't get planted, and so on and so forth.
Certainly emailing spreadsheets doesn't seem very "real", but there's actual value in providing liquidity; it's just not physically demanding.
On the flip side, professional sports is very physically demanding but can you really call what kids do for fun "real work"?
From the perspective of these kids real work probably involves working with your hands. I don't think we need to get too upset over what people who have yet to enter the workforce have to say about "real work". They need to be employed for a few years before they learn the lesson that almost ALL work is fake work.
My definition of real work is: can I point at something and be proud of it? It might not even be something physical (but it often is) and my involvement may not be obvious (say, managing the spreadsheets for a building project), but there it is, the thing I worked on.
> They tell me I don't have a real job because I just tell the computer what to do, and I don't do the thing myself (to which I can't help but respond that they're absolutely right)
Hm interesting
So they are making the distinction between regular "human brain" coding and AI-assisted coding?
Regular coding could be described as "not doing the thing yourself, but telling the computer what to do"
(FWIW I do think there is a huge difference; however I am not sure the general public has a very good idea of what "programming" is. I remember having some code up on my screen and my educated family was confused, even at the concept)
Most actions can be viewed in a highly reductionist manner.
Even J.S. Bach was aware of the same concept:
> "There's nothing remarkable about it. All one has to do is hit the right keys at the right time and the instrument plays itself."
~ Johann Sebastian Bach
IIRC, the piano was considered uncouth and unskilled because it was “too easy”.
"Children are never shy to tell the truth." Your comment makes me hopeful as well.
In general those "Generation XYZ is threatened by this, thinks that" tropes often annoy me. I'm born somewhere between Gen-Y and Gen-Z and those boundaries feel totally arbitrary.
This smells to me an awful lot like:
"You're not a real ham if you don't use Morse code", "You're not a real machinist if you use CNC", "Your mechanical drawing skills are going to atrophy if you use CAD/CAM", "You should manually tape PCB layouts, so you have more control."
And another grandfather's favorite, "Why do you want to use the forklift? You won't always have one, and a pry bar and rollers are good enough, and you learn the value of real work."
I think there's a big difference between "your drawing skills will atrophy if you use CAD to draw for you" and "your brain will atrophy if you ask an LLM to think for you." Personally I don't judge people for being unable to draw, but I do judge them for being unable to think for themselves.
I judge the shit out of people who can't draw AND bill themselves as visual artists so there is that.
I'd say it's more like you're not a real driver if you use Waymo to get around everywhere.
Sure, but I'm a software developer, not a typist.
A delivery person is still delivering stuff, even if it turns out that using a waymo is cheaper/faster.
You have smart kids.
Once you use AI for all your work you won't be growing anymore, just fading away
AI is turning all of us into managers and we’ve KNOWN forever that managers don’t know anything ;)
Well, the point is that you tell it what to do, isn't it? Unless your job is so replaceable and generic that there's little actual direction needed?
I still can barely have a convo with it where it doesn't just make up total unworkable bollocks.
It can manage some coding though, tbf, but again, I'm not sure how far a completely non-tech user would get with it.
> If anything, I'm incredibly hopeful for newer generations. They'll probably mostly be fine, like most of us were.
The current state of the world begs to differ with "most of us being mostly fine". Critical thinking skills and the ability to make wise decisions among the various electorates seem to be in an incredibly shitty state.
Anecdotally, Gen Z-ers as a whole are definitely not better at this; they're easily swayed by flashy memes, TikToks and other forms of disinformation. Where younger people used to have a more society-minded, leftist lean (before ultimately becoming jaded), they more than ever side with right-wing populists from a young age. Not all of them, but a much larger chunk than before.
Maybe requiring large swaths of people to “make the right decisions” as the electorate was a problem from the start.
Yes, well, my youngest (15) tells me all the same things, while simultaneously constantly asking Gemini to help them write code for their game they're building in RPG Maker.
They'll see.
The irony is that AI is best at replacing the work of the upper classes. Synthesizing different opinions, summarizing them, and producing outputs based on statistics are things AI does well.
But AI is actually not very good at replacing an entire lower-level worker’s job as a whole. It works well only when that work is broken down into smaller and smaller tasks.
The core problem is this: the coercive force of AI use is felt by the lower classes, while the upper classes still have the freedom not to use it. AI may be able to make decisions based on more data than executives do, and perhaps even make better decisions than management. Yet the people being replaced are the lower-level workers.
This is the problem. The upper classes, who claim that AI is an essential tool, still have the freedom not to use it. But the lower classes cannot survive unless they use it. It becomes a tool required for survival, while at the same time being treated as something wrong, inferior, or low-status if you use it.
To get a job, AI becomes an essential survival tool. But culturally, it is also treated as a tool that damages creativity. I see this in open-source communities as well, in the class discourse around open source.
The same culture appears on Hacker News. Among the upper layer of open-source communities, there is often hostility toward AI-generated code, based on ideas of human purity: AI code is said to have no meaning, no responsibility, no real authorship. So even within open source, this takes on a class character.
But as a freelance developer, I have to trade against my own code-writing ability in order to survive and deliver. Because of AI, the floor price of software delivery has collapsed. If I do not use AI, I cannot meet the new requirements.
In the past, a job that would have given me two months and paid $5,000 is now expected to be completed in two weeks for the same $5,000. Without AI, that volume of work is impossible to handle.
This kind of discourse always makes me uncomfortable. I dislike it, but I have to use it.
AI lowers the barrier to creation and learning, but the way it lowers that barrier can also bypass the training of thought itself. It turns young people into both beneficiaries and damaged subjects at the same time.
And we live under this loop of coercion. Sometimes I think I do not want to use AI.
But if I want to survive, I have to use it. I feel the abilities I once took pride in beginning to decay, and I feel myself becoming increasingly bound to AI companies. At the same time, I also feel another kind of ability beginning to emerge.
Perhaps growing older means learning how to live inside irony.
>The irony is that AI is best at replacing the work of the upper classes. Synthesizing different opinions, summarizing them, and producing outputs based on statistics are things AI does well.
AI just repeats whatever the prevailing opinion is at that time. I am a very heavy AI user (Claude, Gemini, ChatGPT) and have queried it on a variety of topics. AI is not thinking, it is repeating.
> AI just repeats whatever the prevailing opinion is at that time.
That would be an improvement. They are generally far too sycophantic to just repeat the prevailing opinion, and instead synthesise the opinion they think the user wants to hear.
I agree with the view that AI does not truly think and cannot produce genuinely original opinions. It is difficult for AI to give answers that lie far outside the average distribution of its training data. In that sense, it is not very good at producing truly novel business insights.
But that is not what most “work” usually means. Work is mostly repetitive. The actual moment of decision is brief.
So what do I mean by work here? I mean the collection, organization, and synthesis of the materials needed before reaching that decision.
For that part of the process, AI is extremely effective.
Thus replacing the work of the upper classes.
> In the past, a job that would have given me two months and paid $5,000 is now expected to be completed in two weeks for the same $5,000.
So you have quadrupled your income? That seems like the opposite of a collapse.
Because of the nature of freelance work, jobs are not always steady. Most freelancers constantly worry that the work may disappear tomorrow.
In my case, unlike contract freelancers who are hired for a fixed period, I usually work on a project-delivery basis. Of course, well-known programmers may be able to negotiate salary-like contracts, but that is not my situation.
I think my earlier example may have been unclear. What I meant was not that the price increased. I meant that a project that used to take two months for $5,000 is now expected to be delivered in two weeks for the same $5,000.
That point probably needed more explanation. In the current freelance market, prices have collapsed more than many people realize.
My English is not perfect, since I am not from the English-speaking world, so I may have caused some confusion. Please understand my point as: work that used to reasonably take two months is now expected to be completed within two weeks.
I think they’re saying there’s another stepdown coming, the job in two weeks for $1,000 is right around the corner.
But because of AI, demand also fell. You can't quadruple your income without customers.
> The irony is that AI is best at replacing the work of the upper classes. Synthesizing different opinions, summarizing them, and producing outputs based on statistics are things AI does well.
It’s really not, though. It’s fine at the sort of incidental communication that people don’t really read carefully. But at the actual hard stuff it’s just not that great. It’s basically average by design, so it’s almost definitionally incapable of being great.
I’ve been trying to use AI for putting together job applications, tailoring my resume and cover letter and such. But it’s just not good. It’s decent if I want a sanity-check analysis, as in “how does this come across generically.” But if I ask it to write anything, it sounds like LinkedIn slop. I expect if everyone starts using this to write their basic communication, you’ll be in a world where nobody is reading anything anyone says because it’s so BORING. Everyone sounds the same, everything is phrased in this sort of mushy and generic way. At that point the intended communication isn’t happening! We will have sanded away all the rough edges to the point where you can’t actually get a grip on anything anymore.
At another point I tried to test it out and asked it to wireframe a very basic application for me. Something that should have been a very lightweight thing that runs on-device to do a basic background process, it architected as some crazy overcomplicated enterprise-scale thing, as if I’m trying to build a unicorn startup out of it rather than just a toy app to organize my shopping list. If I weren’t technically savvy enough to recognize it was way overcomplicating things, I’d have just run with it. And what’s worse, it burns through all your token budget figuring out a bunch of problems you don’t need to have!
Obviously I’m using it and find it useful, but I’ve started to develop serious doubts about how useful it will ever be without an informed and accountable attendant overseeing it.
> The irony is that AI is best at replacing the work of the upper classes
This is why the harshest critics of AI tend to be white collar workers of this social class. The same kinds that told coal miners and autoworkers to "learn to code" and called them deplorables for voting nativist in 2016.
Any chance to build mutual trust was squandered. The jobs worst impacted by AI are jobs where most of the workers are Democrats and live in blue states that don't swing.
Meanwhile, those manufacturing, construction, and healthcare jobs that are becoming a bigger part of the economy tend to be in the purple part of the country so their needs are heard.
and why the people most prone to praising it are ones who mostly write emails all day.
“Wow, this is very, very good at my job, which must be a difficult job because it pays well and I'm a smart guy. Imagine how well it will work for the dum-dums.”
gonna push back on this
i don't see a relationship between criticism and the chance of automation/replacement
the harshest critics that i see tend to be, almost ubiquitously, creatives
perhaps just my walk of life
There's a reason the "creatives" are called the "chattering class"
> told coal miners and autoworkers to "learn to code" and called them deplorables for voting nativist in 2016.
The actual pitch was to bring educational and alternative-energy opportunities to an area that is impoverished and facing harsh economic realities. It's worth pointing out that the people WV did end up electing did not improve the region and did nothing for coal miners' economic wellbeing, as many of those coal plants shut down anyway; none of their elected officials did anything to stop it, nor did they provide any economic alternatives to the region:
"coal production has declined 31% since Trump took office [first term], and by some estimates, more than five dozen coal-fired power plants have closed."
https://www.politifact.com/factchecks/2020/oct/14/donald-tru...
> called them deplorables for voting nativist in 2016.
She called a spade a spade. As mad as they were in 2016 for being called that, they proved her 100% right when they sacked the capitol in a violent insurrection in 2021 waving KKK and Nazi flags. That's deplorable behavior.
The cool thing about the current generation of AI tools is how easy it is to uncover bias or an agenda in an article like this.
paste the verge article text into your favorite AI tool and ask for an analysis.
Make sure to ask it to read the source Gallup data that this article leans on and compare the conclusions drawn.
The cool thing about critical reasoning is how easy it is to uncover bias or agenda in an article like this.
I suspect that as you rely more on a robot for this your own skills will atrophy.
This article is filled with emotional triggers designed to drive engagement. Even the title. It can be hard to separate those things from objective facts.
Putting an llm in front of it helps me focus on the facts.
There are also too many things to read. My default before llms would have been to ignore this article.
At least now I learned some things (mostly about the Gallup poll which had source data)
I do think some people will outsource critical thinking to llms - but it also helps amplify critical thinking by doing a lot of the filtering and organizing, letting me focus on the things I think are important.
And then how do you uncover bias in your chatbot? Do you ask it to analyze its own analysis? For that matter, what about the bias in your prompt, which LLMs tend to accept uncritically? Do your own preconceived opinions bias you against the argument made in the article? Are you using a chatbot to think critically about the article, or to avoid thinking critically about your own beliefs?
> At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.
Perhaps you should take a cue from these surveyees and do your own thinking.
I actually did this - I plugged The Verge article into Claude and got the following critique of what biases are there:
> The article accurately cites real Gallup data but selectively omits findings that complicate its "backlash" narrative — most notably that curiosity is Gen Z's single most common emotion toward AI, and that daily users remain substantially more hopeful and excited than the aggregate figures suggest. The 79% "laziness" concern and declining hope figures are presented as evidence of generational rejection, when the researchers themselves describe what they found as "deep ambivalence." *In short, the article uses real numbers to tell a cleaner, more oppositional story than the underlying polling actually supports.*
Then I then put that Claude critique back into Claude and asked it to analyze the critique for bias and agendas and got this:
> The critique accurately catches real flaws in The Verge article — particularly the omission of "curiosity" as Gen Z's top emotion and the failure to distinguish between heavy users (who are more positive) and non-users (who drive most of the negativity). However, *the critique has its own directional bias, consistently framing every correction in ways that soften the negative trend, while ignoring data that cuts the other way — like the sharp positivity decline even among daily users, and the near-majority of Gen Z workers who see AI as a net negative in the workplace.* Both pieces are selectively using the same real data to tell opposite stories; the Gallup findings themselves are more nuanced and more negative than the critique allows.
So according to Claude, Claude is biased in how it describes The Verge as biased.
LLMs are breakthrough technologies. The AI products we have today are SaaS products built by companies doing everything they can to find people who will pay for them. Very, very different things.
So basically sycophantic LLM behavior. Nothing new then
> LLMs are breakthrough technologies. The AI products we have today are SaaS products built by companies doing everything they can to find people who will pay for them. Very, very different things.
THIS. ALL. DAY.
So are you outsourcing your thinking? You just prove the article's point
Would this show bias of Gallup, of The Verge, or of the AI training data? How would you determine which?
> The cool thing about the current generation of AI tools is how easy it is to uncover bias or an agenda in an article like this.
This is only true if you assume that an AI tool is itself unbiased. I'm not sure how anyone can earnestly believe AI tools are unbiased after Grok's MechaHitler episode [0], unless they just aren't giving it much critical thought.
0 - https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...
Gen-Z is pretty cool. The problem is a small subset of Gen-X and Millennials who have too much money and power and treat AI as if it is the Dianetics bible from sci-fi author L. Ron Hubbard (who, according to James Randi, knew exactly what he was doing).
There are truly mentally unwell people in charge who would like to get out the E-meter and audit everyone who does not follow their new Scientology knockoff. Yes, the advertising methods and suppression of opposition are the same.
My daughter's a senior in college. She recently was part of a group presentation; she did not use AI to prepare for it, but all of the other group members did. She was the only one who could answer follow-on questions.
If you use AI to understand things for you, you're short-changing yourself.
I mean…that’s essentially identical to group presentations in general. The other students didn’t do the work; what they didn’t do the work with is irrelevant.
Good for your daughter but doesn’t that example tell us the opposite of what this article is trying to argue? If the majority of a group of young people choose to use AI for their project, that doesn’t indicate that the majority hate it. That would indicate that they like and trust it.
Replace AI with Microsoft Word and it makes sense. Lots of people use it and lots of people hate it.
The article is saying what happens after people do use it, not that they can or do avoid it altogether.
I don't understand all the negativity in the education space. Ever since I read The Diamond Age, I've thought that the ability to interact with and interrogate works of literature, or science, or the world at large is EXTREMELY powerful for understanding.
AI can be used for good especially when you're digging into the details / nitty gritty and asking good questions.
Anything can be used in a saccharine way to take the easy way out. Why not ban Cliff Notes as well? Sure, they won't write the essay, but you also didn't read the book.
You don't have to use these tools in a lazy way. You don't have to use these tools in a way that cheats / compromises your intelligence. Building up the awareness to use them in a way that multiplies instead of subtracts is going to be the key issue for my kids, and for anybody in the workforce today.
Paywalled so I can't read the article.
However, is this exclusive to young people? I'm a millennial (early 90s) and I share their sentiment. I might not share it for the same reason, though. Personally, I'm concerned about what AI usage would do to my cognitive ability, and as such I try to limit my use. I can't avoid using it at work (we're being tracked on "AI Adoption"), and it does genuinely speed up some of my tasks. And I do play around with AI coding tools, mostly because I think I _should_ know them in this day and age.
But apart from that, I'm not using it. I'm using DDG searches rather than asking ChatGPT for solutions, I still go around reading websites and papers instead of AI summaries, and I don't outsource my writing to it (i.e., I write my own emails, my own blogs, my own poorly worded HN comments, etc.).
https://archive.is/F3T8k
Fellow millennial here. I rarely use AI, for similar reasons. Not only am I worried about cognitive decline, I also have plenty of ethical concerns, and I don't want to become even more dependent on US megacorps. Fortunately, I'm writing my own software and nobody can tell me which tools to use :)
I don't think it's exclusive to young people, no. I'm a couple years older than you. All of my friends also hate it and make fun of it. Like some of the people in the article, I'm also looking to get out of the tech industry and find something else to do other than be forced to talk to shitty robots. If they want to fire me for not using their crappy tech enough, fine. I don't care anymore.
https://archive.ph/F3T8k
> At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.
I don't see how these and other sentiments are unique to Gen Z at all.
The difference I've seen is that many zoomers have given up on learning in the first place. "What's the point?"
We know that AI will ultimately just end up enriching a very small group of people with no change in prosperity for working and middle classes. CEOs are openly saying as much. For the past number of decades the rise in productivity has been completely detached from wages, it'll be no different this time.
We're also no strangers to enshittification, we have first hand experience of technology causing negative societal effects when in the hands of for-profit entities.
In tech, young people can complain.
We, the older folks, are not allowed to complain, lest we get branded as old-fashioned, unable to adapt, etc.
Previously on the Gallup study:
https://news.ycombinator.com/item?id=47704443
>> AI is the future. But it needs human hands. The question you need to ask is: your hands? Or Microsoft’s?
2 comments that smack of AI authorship, or if the above is human-created, god I wish they'd used AI.
It's the latest form of elitism - like in less sunny countries it's fashionable to be tanned because it means you are rich enough to have time to hang around on the beach while in sunny countries it's fashionable to be light skinned because you are rich enough that you don't have to work in the sun. Disdain for AI is a luxury belief of those who are either talented enough to draw / write / code without or are wealthy enough to not have to.
Is it though? I thought elite engineers needed to consume billions of tokens monthly and/or pay >100 USD subscriptions to AI tools to realize their potential.
This place sure lost interest in meritocracy fast once mediocrity became available to everyone.
> Disdain for AI is a luxury belief ...
Or a belief of those scared that an imploding "AI" bubble will ruin their financial futures. Or just that most of the humans in their own white collar professions will be replaced by AI's.
I don’t understand why people act like we just have to submit to the AI revolution. We can make this technology illegal, and shut it down completely. Why don’t we?
You can make AI/LLMs illegal, but other countries won't. There's a real risk as a country you fall behind economically if you ban what turns out to be the next big tech revolution.
Okay. Let China become more technologically advanced than us. I don’t care. Chinese tech companies being able to write software 20% faster than us is a less bitter pill to swallow than having to live in our current LLM world.
> I don’t understand why people act like we just have to submit to the AI revolution.
Some people are genuinely interested in and excited about this new technology. Other people have a vested interest in AI succeeding. At least on the surface, it seems that these two groups are louder (or more successful) than the ones that oppose AI.
> We can make this technology illegal, and shut it down completely. Why don’t we?
Because there are not many (if any) lobby groups that pour money into making it illegal, and also because of the fear of being left behind. There are also plenty of lobby groups that invest a lot of money into putting AI into everything.
the genie is out of the bottle.
no government on earth will make ai outright illegal. it's the perfect thing to shrug accountability onto, let alone all of the actual semi-useful reasons for keeping ai legal.
how would you even make it illegal? people have local models everywhere. if your country makes it illegal but mine doesn't, people from your country will just vpn and access them in my country. it would have to be a worldwide effort (lol).
We make child porn illegal which is even easier to copy than local LLMs. It doesn’t work perfectly, of course there are people who still have it, but we could dramatically reduce the number of people with local LLMs if we punished possession with long prison sentences.
i am sorry, i was trying to take you seriously, but you completely lose me at comparing an llm to child sexual abuse material (CSAM) and suggesting possession of an llm should be met with similarly long prison sentences. that is absurd.
a central tenet of justice is that the punishment fits the crime. for CSAM, it is obvious why extremely long prison sentences are fitting. the damage CSAM causes is immense and hard to even capture with words.
the damage llms cause is... not even close.
Punishment is not only tied to the harm done by the crime, it’s also tied to difficulty of enforcement. For laws that are relatively easy to break without getting caught, you need severe penalties or everyone will just ignore them.
No, only lawmakers can make things illegal and only if their personal incentives align so they are better off from making those things illegal.
Because lots of people, businesses and governments don't want to.
My impression is that the majority of people do want to.
When has that ever mattered?
My impression, seeing the rising AI use rates everywhere, is that you are in a bubble.
Could be me too, but seeing China's general societal infatuation with AI outpace the US by orders of magnitude, I think that's a bit less likely.
then you're just thinking in a very naive way.
you'd need everybody to be onboard, be it your neighbor, the guy 8000 miles away from you on the other side of the planet, all the nations
if even one goes "well, i'll just keep going", it won't work.
it's like with nuclear weapons, nobody wants to be the one without them unless nobody else has them, so in the end they're still prevalent.
The irony here is this: AI actually has the potential to absolutely level the playing field.
Right now corporations are building the infrastructure out wildly and incorporating it into everything. They’re concerned about a race to the top while creating absolute inefficiency and ignoring responsible, sustainable growth.
The task of GenZ should not be to avoid AI, in my opinion.
Rather, embrace it. Own it.
WEAPONIZE IT.
When Google mainstreamed the search engine and added tool after tool, it digitized things that were previously legacy. (Word processing? Pay a big licensing fee to Microsoft, and only save to your local machine or hard media! Along comes Google Drive and Docs, and now you can edit your document everywhere and a computer crash doesn’t take it out!)
AI is that integration at warp speed.
We now have the tools to work harder and faster. We have near-instant access to research. If we are discerning, AI is actually not a weapon against us. It is a tool we can use to change the narrative.
Big companies are actually banking on fear of the masses. They want you to believe that AI is too big. That it is all-knowing. They don’t want you to recognize that you can download Ollama and a local model and tune it to your needs. Or get into Gemini and ask it how you can disconnect from Google’s cloud, if that’s really what you want to do.
AI is the future. But it needs human hands. The question you need to ask is: your hands? Or Microsoft’s?
> WEAPONIZE IT.
The moment you start doing this at any scale, the companies will notice, and after a few winks, nods, and campaign donations, you will not be able to use it anymore.
As if nobody's ever evaded a ban before. We have North Korean spies infiltrating American government agencies, but you can't even create a second Claude account?
This is why local weights are the way to go if you want to embrace the technology and successfully weaponize it against incumbents. I'm not saying you should, mind; I've only dabbled enough to get a sense of how these things will eventually be weaponized against the public. But the unethical genesis keeps me from fully embracing it, likely much to my detriment.
In the end, unfortunately, whoever sits on the most compute will have the power. It’s not going to be you and me.
Honestly it is not even that clear.
Local models are quite efficient as well.
Yes, it is clear - for every compute unit you have that's more efficient and effective I have 10,000,000,000 compute units. I can use the same models that you do. How would you win?
Use them for what? This is a case where what you do with your stuff matters more than its size.
You can also rent cloud compute and run your own models there.
[0]: https://news.ycombinator.com/item?id=47963204
[1]: https://news.ycombinator.com/item?id=47952722
Gen Alpha have never seen a lovable internet. It died before they were born.