People focus a lot on how Zuckerberg is a deranged sociopath, but I think Bosworth should get the same criticism if not worse. The good face he put on while fucking over the world is utterly disgusting.
I got to a point where I just wish ill fate to these people, because there is really no other process by which they can be slowed down or stopped.
Not going to lie, I have no pity for the tech employees of a company that has spent most of its existence making the world a worse place. They are finally getting a taste of the medicine Facebook has been giving everyone else for the last two decades.
Every big tech company's embrace of AI is making all of their employees miserable.
Whereas if you're half-competent and at a startup, AI is an incredible opportunity to try to leap ahead while the prices are subsidized (by the big tech behemoths fighting with each other).
The reason is a complete inversion of Ownership and Agency.
For a decade of ZIRP, big tech convinced its employees that they were "changing the world" and that what we did mattered. Sure, the exorbitant salaries and constantly rising stock value didn't hurt, but honestly, other than the FIRE cultists, for most of us the difference between 200k/year and 800k/year didn't feel much day to day (other than the ability to buy a house or something, and feel safe with a retirement nest egg). No, most people were missionaries, not mercenaries.

2021 was the first crack. Comps went crazy, half the industry turned over, and those who stayed felt a bitter sting when it became blatantly clear that the new arrivals were just in it for the $$$, and that the companies were willing to pay for the backfills but not to reward the loyalty of the missionaries.

Then came the yearly layoffs, chipping away further and reminding every employee that they were at the mercy of a spreadsheet and the whims of people three levels above them in the org chart, regardless of the economic reality of their product or their personal productivity.

And now we're here, and it's clear that all of the above still holds. The old-timers who hung around see that their personal output doesn't matter and their product's P&L doesn't matter. All that matters is 1) the company's AI strategy (and if they're not part of it, they're secondary), and 2) tokenmaxing.
How can anyone find joy in this environment unless they're purely in it for the comp?
I couldn't. I left my big tech job in December after 15 years, and have not been this happy at work since pre-COVID.
> the difference between 200k/year and 800k/year didn't feel much day to day (other than the ability to buy a house or something, and feel safe with a retirement nest egg)
I can’t believe I read this sentence, lol.
800k is the ability to buy a house and support a family on a single income. Do you see how many people lament the days when this was possible? So many memes about the lifestyle Homer Simpson could provide, which many modern families can't? 800k makes it possible.
It’s a huge lifestyle upgrade, especially if your partner wants to do something artistic, academic, or otherwise less profitable.
> 2021 was the first crack. Comps went crazy, half the industry turned over, and those who stayed felt a bitter sting when it became blatantly clear that the new arrivals were just in it for the $$$, and that the companies were willing to pay for the backfills but not to reward the loyalty of the missionaries.
Also, SVB collapsed in early 2023; notice that the AI hype ramped up right around then.
As someone who has spent a vast portion of life believing technology would make life better, I've come to the realisation that this idea is a fallacy. Technology amplifies power and until we collectively redefine and enforce a value system that benefits us all, the advancements in technology simply serve as a means of subjugation.
Let's go there: this is what the Unabomber was on about, and there has long been an effort to stop people from noticing it.
Ultimately you either end up with totalitarianism (whether to arrest development at the status quo, maintain a state of anarcho-primitivism, or enforce technocratic tedium), or we resist that and break out by forging forward into some unknown, uncharted territory.
In practice we have no choice but to aim for the unknown and hope. I can't lie and say I can see the way through all this, though.
Not so long ago, I came to a rather unpleasant realization: whether much of that happens will depend heavily on whether the people currently trying to make technology control every facet of our lives decide to let society get dumber first (think Idiocracy, which AI could very much enable). If not, it is anyone's guess, because people will still have some basic skills and memories of what could be.
I am hoping for the best, but life has taught me the hard way not to bet against humanity's worst instincts.
>In any technologically advanced society the individual’s fate must depend on decisions that he personally cannot influence to any great extent. A technological society cannot be broken down into small, autonomous communities, because production depends on the cooperation of very large numbers of people and machines. Such a society must be highly organized and decisions have to be made that affect very large numbers of people. When a decision affects, say, a million people, then each of the affected individuals has, on the average, only a one-millionth share in making the decision.
I don't know what you're quoting, but I wish it were the case that something affecting a million people granted each affected individual about a one-millionth share in the decision. I don't think that would always yield good outcomes, but at least it would be democratic. Structures that enable that are what we should be building.
In some circles, he goes by Uncle Ted.
To quote a movie:
In the 1960's there was a young man graduated from the University of Michigan. Did some brilliant work in mathematics. Specifically bounded harmonic functions. Then he went on to Berkeley, was assistant professor, showed amazing potential, then he moved to Montana and he blew the competition away.
New around here, but… For those interested in a deep dive, I highly recommend reading The Technological Society by the French philosopher and sociologist Jacques Ellul.
One attempt was open source. Or perhaps libre software? I guess it is not a success since only one of these looks mainstream.
It is curious how successful AI developers have been in trying to redefine "open source" as "the binary is free to download"
Free/Open Source software is very mainstream now I'm not sure what you mean, but maybe we are taking too much of it for granted?
Open source is still around? It is vastly improving your life even if you can't see it.
It really depends on the technology. Different technologies redistribute power differently. LLMs are very "centralizing" indeed. It is hardly feasible to train your own LLM as a private person or even a small company - at best you can download a pre-trained one, which at least nobody can silently change or take away from you.
Very well said. Free software was a revolt against technology that you have no control over and I feel like the people that are whole heartedly embracing "AI" have completely forgotten this. They now use an incredibly expensive proprietary piece of technology that they have no control over to write a bunch of code that they cannot (even if they tried) understand and they talk like it's the most amazing thing ever. This is pure short-sighted foolishness.
Yeah - I really like F/OSS for the freedom aspect and I intensely dislike SaaS LLMs for the same reason. I tolerate them more easily for ancillary tasks like vulnerability search or super-powered LSP-workalikes to learn about a code base. There will eventually be a lot of nuance, I hope and believe - reasonable compromises between going all in and abstaining completely. So far, I'm doing okay just occasionally dabbling in local models. I at least need to know what people are talking about.
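The point a few comments up, that a downloaded pre-trained model is something "nobody can silently change or take away from you", can be made operational by pinning the weight file to a checksum. A minimal sketch; the file path and pinned digest below are hypothetical placeholders, not a real model hash:

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest, recorded once after the initial download.
PINNED_SHA256 = "replace-with-the-digest-you-recorded"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-GB weight files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(path: Path, pinned: str = PINNED_SHA256) -> bool:
    """True only if the local copy still matches the digest we pinned."""
    return sha256_of(path) == pinned
```

Record the digest somewhere you control right after downloading, then re-check before loading; any silent swap of the file then fails loudly instead of going unnoticed.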
That's why we have the state. There are many technologies that we, as a society, decided to control in various ways. You can't just build a nuclear weapon, for example. There is no particular reason why we let tech bros control many aspects of our lives, apart from legal inertia.
LLMs can be "trivially" decentralized by expanding the concept of intellectual property to also cover algorithmic processing. It's just about how we set up our laws and rules.
Nobody had to legislate Free software into existence in order to protect us. Wise people saw the need and did something of their own accord. We are still free to do this!
The state is the people, and the people want tech billionaires because they want the same chance at becoming a (tech) m/billionaire themselves.
Temporarily embarrassed millionaires; I cannot get around that issue toward collective action, toward myself contributing to an answer. I'm stuck. I can't unsee its truth =/. The individual will choose enrichment. We all will.
Well, I love to take showers, which involve a lot of tech like running water and water heaters and soap which I can buy from the supermarket.
I lived in places without any of those and I wouldn't want to do it again.
> Technology amplifies power and until we collectively redefine and enforce a value system that benefits us all, the advancements in technology simply serve as a means of subjugation.
True during the mainframe era. Not true during the PC age. Perhaps true again in the frontier-model / data-center age. Maybe not true again once hostable open-weights models become efficient and good enough.
> As someone who has spent a vast portion of life believing technology would make life better, I've come to the realisation that this idea is a fallacy.
I have to very regularly remind myself many people genuinely believe this shit and are not straight up evil/maniacs, it's getting harder
Personally, I think technology is not bad in a vacuum, and not necessarily bad in society; it just reveals that our system is ill-equipped to guarantee good usage of it.
We could have fun defining what's good usage but we're so far from it, it would just make me sad.
> until we collectively redefine and enforce a value system that benefits us all
Tons of us called for common sense guard rails and a little bit of actual intention as we rolled out LLM’s, but we were all shouted down as “luddites” who were “obstructing progress.”
We all knew this was coming. It’s been incredibly frustrating knowing how preventable so much of it has been and will continue to be.
Except that it's not preventable. Technology is always an arms race. If you don't create it, someone else will, and then they'll have the advantage and subjugate you, so you might as well be the one to do it first. Whatever it is that you're trying to prevent, someone is going to do it if it gives them power.
It wasn't "preventable" though. How would you prevent what's been happening? Pass a law making GPUs illegal? Just... "convince" everyone that the machine that can write working software and business letters and render good-enough banner and print advertising for nearly free is evil, and just not use it (ask Emily Bender how that's going)? There is no realistic way of stopping any of this from happening. We need a different approach.
> As someone who has spent a vast portion of life believing technology would make life better, I've come to the realisation that this idea is a fallacy.
Technology is not a good-only or evil-only thing. There are use cases that are beneficial and use cases that are not. Technology by itself isn't what makes things worse. Even many thousands of years ago, humans used weapons to bash in other humans. Remember Ötzi (https://en.wikipedia.org/wiki/%C3%96tzi#Body): he was killed by an arrow, most likely shot by someone else, around 3230 BC. Nuclear energy is used as a weapon or as a source for the generation (or rather, transformation) of energy. And so on and so forth.
IMO the biggest question has less to do with technology and more to do with the distribution of wealth and possibilities. I think oligarchs need to be impossible; right now they are causing a ton of problems. Technology also creates problems, I agree, but I would not subscribe to "technology makes everything worse". That does not seem to be a realistic assessment.
Thing is, technology (particularly automation) could make life better; that it isn't doing so is a choice. Think about it: we could live in a world where people only had to work 20 hours a week, or at some point not at all. We don't do that because we have a system that simply makes a handful of people even wealthier. We will likely see the first trillionaire minted in our lifetimes. That is an unimaginable and unjustifiable amount of money for one person to have.
So you're not really complaining about technology making things worse. You're complaining about wealth inequality, which is a direct result of the mode of production and the organization of the economy.
Internet access should, at this point, be basically free. The best Internet in the country is municipal broadband. It's better and it's cheaper. It's owned by the town, city or county that it's in, which means it's owned by the citizens of that municipality.
Instead what we have in most of the country are national ISPs like Verizon, Comcast, Spectrum and AT&T and the prices are sky high. They are only sky high so somebody far away can continue to extract profit from something that's already built and not that expensive to build.
You will get lied to by people saying national ISPs have economies of scale. Well, if that were true, why is municipal broadband (relatively) better and cheaper? Why would there be state laws that make municipal broadband illegal? Why would national ISPs lobby for such laws?
Technology is simply a technique to leverage and extend human desire. It's a tool. It's in the hands of those who control and use it.
You shouldn't blame technology. You should blame the maniacs that have latched on to it as a way of extending their power. You should blame the government for their failures of regulation. You should blame the media for failing to cover this obvious problem.
The people who want to subjugate you are the problem.
"The problem is them"
no no, we're not doing that.
> it is cutting jobs to offset its A.I. spending, saying last month that it would slash 10 percent of its work force.
> Meta also introduced internal dashboards to track employees’ consumption of “tokens,” a unit of A.I. use that is roughly equivalent to four characters of text, four people said. Some said the dashboards were a pressure tactic to encourage competition with colleagues. That led some employees to make so many A.I. agents that others had to introduce agents to find agents, and agents to rate agents, two people said.
Maybe the first to be laid off should be the ones that thought it made sense to track token consumption. Goodhart's Law doesn't even apply in this scenario because that's a dumb metric whether or not you're using it to evaluate employees.
Not that I disagree with you, but I’ve heard of such tactics being used in some orgs at both Google and Microsoft as well.
It seems like a common conclusion from a management that wants to push for AI adoption. I doubt it’s super effective, but we’ll see how it turns out.
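The quote above defines a token as "roughly equivalent to four characters of text". That figure is a rule of thumb, not how any particular tokenizer actually works, but it is enough for back-of-the-envelope estimates of what a dashboard like Meta's is counting. A hedged sketch:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token heuristic.

    Real tokenizers vary by model and by language; treat this as an
    order-of-magnitude estimate only, not a billing-accurate count.
    """
    if not text:
        return 0
    # Never report zero tokens for non-empty text.
    return max(1, round(len(text) / chars_per_token))
```

The ratio also varies a lot by content: dense code or non-English text tends to use more tokens per character than plain English prose.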
> that's a dumb metric whether or not you're using it to evaluate employees
Only if you assume in good faith that the point is to evaluate employees for productivity on some stated goal for the company or role. If you try to view the metric from other possible positions, the one I think fits best is the promotion of token consumption by all means. This is useful for signaling to the broader market that AI is profitable and merits more investment, and may be part of a deal between them and whoever they're buying tokens from. It makes more sense to me that Meta would be more interested in leveraging its control over people to manipulate the state of the world, market, and general sentiment than having them work on stable, well-established and market-dominant software services that really only need to be kept chugging along. Isn't mass-manipulation their whole business? Why wouldn't they use their employees and internal structure to contribute?
Having worked in big tech, I can almost guarantee you you’re overthinking this.
It occurred to me recently that AI's degradation of the human factor via way of increased pressure on the remaining ranks of humans might actually be far more damaging than the AI's output itself.
https://archive.is/JUPmz
I believe that any kind of partial automation is going to make the job more soul-crushing.
Ford style assembly lines made the work of the factory workers more miserable. Partially automated cashier did the same thing.
I don't think there is any point in trying to resist automation, as the efficiency benefits are too important.
Efficiency gains are more important than people not having to spend their working life with soul-crushing tasks? I don’t quite follow.
The assumption is that orders of magnitude more people will benefit from the efficiency gains, as was the case with agricultural automation or factory automation.
In those cases there was a transition period; nowadays only a small fraction of the human population works to produce food, and their job is more about planning, finance and the orchestration of machine work. But many specialised jobs were lost or made miserable in the process.
IMHO any job that can be done by a machine should not be done by a human; the tricky part is getting there with as few undesirable effects as possible.
> Ford style assembly lines
The ones with 10 hour shifts and mandatory overtime? Yea, I don't think it's the _line_ that's making them miserable.
> Partially automated cashier did the same thing.
I've not once heard anyone in the service industry make this complaint.
> as the efficiency benefits are too important.
You can squeeze every last drop of productivity from your employees. In the short term this may even show up as profits. In the long term it only works if you hold a monopoly position.
I think there's also a wider piece missing around social norms for AI use in a knowledge-work context.
Someone forwarded me an enormous amount of text over Teams the other day at work. From someone (bless her) who always means well but usually averages about one spelling mistake per word and rarely goes over 20 words per message. Clearly copy-pasted ChatGPT.
For, say, the HN crowd that thinks in terms of context switches, information load and things on THAT wavelength, the problem with that situation is obvious, but I realised then that it's not at all obvious to the average person. She genuinely seemed to think she was helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.
There is zero understanding or consensus of acceptable practices around that sort of thing baked into societal norms right now.
Meta has been banning its core users for months now; over 20 million users have been banned. They've been in a death spiral since that Metaverse fiasco.
https://www.nbcdfw.com/news/nbc-5-responds/meta-users-contin...
I believe that's the point.
Well, that's the goal of AI Skynet - it has no need for humans. Did nobody learn from that movie?
That's what's making its employees miserable ????!
I love the quote in there from Boz that basically says "no you can't opt out fuck off"
People focus a lot on how Zuckerberg is a deranged sociopath, but I think Bosworth should get the same criticism if not worse. The good face he puts on while screwing over the world is utterly disgusting. I've gotten to the point where I just wish ill on these people, because there is really no other process by which they can be slowed down or stopped.
Meta made billions on AI in 2025, about 10% of its revenue, by allowing scammers to use AI to attack users and steal their money.
Not going to lie, I have no pity for the tech employees of a company that has spent most of its existence making the world a worse place. They are finally getting a taste of the medicine Facebook has been giving everyone for the last two decades.
Every big tech company's embrace of AI is making all of their employees miserable.
Whereas if you're half-competent and at a startup, AI is an incredible opportunity to try to leap ahead while the prices are subsidized (by the big tech behemoths fighting with each other).
The reason is a complete inversion of Ownership and Agency.
For a decade of ZIRP, big tech convinced its employees that they were "changing the world" and that what we did mattered. Sure, the exorbitant salaries and constantly rising stock value didn't hurt, but honestly, other than the FIRE cultists, for most of us the difference between 200k/year and 800k/year didn't feel like much day to day (other than the ability to buy a house or something, and feel safe with a retirement nest egg). No, most people were missionaries, not mercenaries.
2021 was the first crack. The comps went crazy, half the industry turned over, and the ones who didn't felt a bitter sting where it became blatantly clear that all the new arrivals were just in it for the $$$, and the companies were willing to pay for the backfills but not to reward the loyalty of the missionaries.
Then came the yearly layoffs, chipping away further and reminding every employee that they're at the mercy of a spreadsheet and the whims of people three levels above them in the org chart, regardless of the economic reality of their product or their personal productivity.
And now we're here, and it's clear that all of the above still holds. The old-timers who hung around see that their personal output doesn't matter and their product's P&L doesn't matter. All that matters is 1) the company's AI strategy (and if they're not part of it, they're secondary), and 2) token-maxing.
How can anyone find joy in this environment unless they're purely in it for the comp?
I couldn't. I left my big tech job in December after 15 years, and have not been this happy at work since pre-COVID.
> the difference between 200k/year and 800k/year didn't feel much day to day (other than the ability to buy a house or something, and feel safe with a retirement nest egg)
I can’t believe I read this sentence, lol.
800k is the ability to buy a house and support a family on a single income. Do you see how many people lament the days when this was possible? So many memes about the lifestyle Homer Simpson could provide that many modern families can't. 800k makes it possible.
It’s a huge lifestyle upgrade, especially if your partner wants to do something artistic, academic, or otherwise less profitable.
If someone has a 10m portfolio, it really is irrational to chase a higher w-2.
Good post
>2021 was the first crack. The comps went crazy, half the industry turned over, and the ones who didn't felt a bitter sting where it became blatantly clear that all the new arrivals were just in it for the $$$, and the companies were willing to pay for the backfills but not to reward the loyalty of the missionaries.
Also, SVB collapsed in early 2023; notice that the AI hype started right around then.