> It’s Still Your Code... AI maximalists will read this section and scoff. They’re already vibe coding everything and have little to no idea what the generated code looks like.
This frames the argument as a dichotomy. And to be honest, using the social-media caricature of "vibe-coding" as a strawman risks anchoring the debate against a mirage.
There are plenty of good engineers getting good results whilst accepting code-ownership as a continuum.
> If Claude goes down tomorrow, can you still do your job?
This is a valid counterpoint, but building software already rests on a tricky set of dependencies. The answer here isn't automatically "you need to be able to do everything yourself". It could simply be "also use Codex".
I think the overall point is well made, I just don't agree with the absolute framing. There are things you can hand over to AI safely. Even if you start small and increment, it'll have a decent impact.
Whose name do you put down on a code commit? Yours, or the LLM's? If it's yours, then it is your code and you are responsible for knowing all about it. If it's the LLM's, then what do we even need you for?
> using the Social Media "vibe-coding" as a strawman risks anchoring against something that's a mirage.
It's not a strawman. A lot of people are doing it.
> You must be able to do your job if your AI tooling disappears
While I agree with the sentiment, I fear this won't last long. I already find myself, when Claude goes down for 15 minutes for whatever reason, kind of throwing my hands up and taking a walk, assuming it'll be up by the time I get back. Usually it is.
If it went away for good I'd be able to code, but would I want to? I'd be kind of bummed in a way. Which is odd, because I used to tout myself as someone who likes programming, but I think what I've discovered is that I like building.
Is anyone actually at a company that is purposely trying to use a ton of tokens? It gets expensive really fast.
Yes, but also this particular company has the means to justify the expense. I think there's enough opportunity at scale in the industry to really change the business landscape.
Aside from that (and assuming a large enough sample size) I think it's a safe experiment to at least bet on finding profitable use cases. In 1-2 years, after this experiment runs its course, not everyone will have "unlimited" usage.
I've personally had people talk about token leaderboards at their work. Amazon and Meta reportedly had them, but I'd take that with a decent grain of salt.
We all know it's such an insanely gameable metric you'd be insane to actually use it...
I came across a comedy clip where employees were fighting over how many billion tokens they were using, and I assumed it was a joke.
It seems quite common for the infrastructure teams to put up a dashboard just to keep a sense of what is going on, but it is then misinterpreted as a “leaderboard” and encourages the most prolific users to find creative ways to squander more to stay the “winner”. Management is slightly disappointed by the waste but also happy that staff are engaging with their future replacements.
For about 2 months, then I assume our fearless leaders saw the bill and wet themselves. Since then, Opus is off limits lol.
Academia is the place with the least coherent policy. In the few institutions whose AI rules I'm aware of, the guide is usually three lines long and boils down to "we don't promote its usage", which is a meaningless phrase. So you end up with students who aren't supposed to use it, unless they're international master's students who need it because of language barriers, in which case they're effectively allowed to use it however they like, even if it makes a mockery of the rigour of a degree. Lecturers use it as and when they wish; researchers either use it endlessly or not at all; and upper management uses it instead of their own brains.
> You must understand what your AI generated code does
Absolutely yes.
> You must be able to do your job if your AI tooling disappears
Absolutely not.
Look, I'm an alright programmer. Not good, far from great. Interpreted languages work for me; add all that strong typing and compilation and it starts to go beyond what I'm interested in. Nonetheless, pre-AI, I shipped many very functional, production-grade applications for many companies.
Now, I can write stuff in Go, and Rust, and it's fantastic. So much faster. The AI likes the strong typing, the testability, the predictability; it all makes total sense. I'm using this stuff all the time, but I have not learned any Go; I'm too busy focusing on the parts the AI cannot do for me, like real requirements gathering, architecture, fit and finish, and engaging stakeholders, which still require the human touch. Maybe I could have learned some Go with that time, but at the end of the day my employer is paying me for results, not for my edification!
There are now huge parts of my job I cannot do without AI. Sure, it's like 800-1200 bucks a month of extra cost, but for that low five figures a year I am a much better employee in terms of my capabilities. It's easily delivering ROI for me, and therefore for my employer. Instead of sitting around wishing I had a Go developer to ask for help implementing a simple feature in a Terraform provider, I can just fork it, add what I need (something like the sketch below), and try to submit it upstream for inclusion; the lack of language-specific skills is no longer holding me back.
Take away the tool and I can't do that part of the job anymore, sorry. I can still do a lot, but slower, and honestly it would feel like going from a car back to walking, now; walking's fun, I do it recreationally for the sheer joy, but when there's hundreds of kilometres to cover in a short amount of time, the car is clearly the correct choice. So too is it with AI: we've invented the car for computers and only a fool would pretend he can do everything the same without it now.
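For anyone wondering what "fork it and add what I need" actually amounts to, here is a minimal sketch, assuming the terraform-plugin-sdk/v2 helper/schema API. The provider, the "example_widget" resource, and the "tags" field are all hypothetical stand-ins, not the provider the commenter forked; the point is how little Go it takes to thread one new optional attribute through a resource:

    package main

    import (
    	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    	"github.com/hashicorp/terraform-plugin-sdk/v2/plugin"
    )

    // Hypothetical resource; real provider and field names would differ.
    func resourceWidget() *schema.Resource {
    	return &schema.Resource{
    		Create: resourceWidgetCreate,
    		Read:   resourceWidgetRead,
    		Delete: resourceWidgetDelete,
    		Schema: map[string]*schema.Schema{
    			"name": {
    				Type:     schema.TypeString,
    				Required: true,
    				ForceNew: true,
    			},
    			// The "simple feature": one new optional attribute,
    			// declared here and read back in the create function.
    			"tags": {
    				Type:     schema.TypeMap,
    				Optional: true,
    				Elem:     &schema.Schema{Type: schema.TypeString},
    			},
    		},
    	}
    }

    func resourceWidgetCreate(d *schema.ResourceData, m interface{}) error {
    	tags := d.Get("tags").(map[string]interface{})
    	_ = tags // pass to the (hypothetical) upstream API client here
    	d.SetId(d.Get("name").(string))
    	return nil
    }

    func resourceWidgetRead(d *schema.ResourceData, m interface{}) error { return nil }

    func resourceWidgetDelete(d *schema.ResourceData, m interface{}) error {
    	d.SetId("")
    	return nil
    }

    func main() {
    	plugin.Serve(&plugin.ServeOpts{
    		ProviderFunc: func() *schema.Provider {
    			return &schema.Provider{
    				ResourcesMap: map[string]*schema.Resource{
    					"example_widget": resourceWidget(),
    				},
    			}
    		},
    	})
    }

Whether upstream accepts the patch is a separate question, but the fork unblocks the internal use case immediately.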
If you can't do the job without AI, you can't do the job.
Spoiler alert: if you can't do the job, you're not going to be doing the job much longer.
'If you can't build a TODO list app using only punchcards, then you can't do your job...'
Obviously our ambitions expand due to better tools. I now commit to and deliver much more work than before LLMs, and — before then — ditto for frontend frameworks, fourth-generation languages, etc.
There are projects I now start without thinking twice that I never would have considered a few years ago.
That's what productivity looks like, and it makes you more valuable, and your job more secure (up until the ASI kills us all...).
It makes you less valuable and your job less secure, because as LLMs improve, the level of knowledge/skill required goes down, putting more people at the level of "good enough", which is generally what companies optimize for in hiring over time (the least cost for good enough).
> There are projects I now start without thinking twice that I never would have considered a few years ago.
I'm sick of seeing this argument because it's not as persuasive as you think. If you were incapable of doing it before, why would I ever trust that you could properly evaluate the result? Even if I did, it's still like saying, "I never would've been able to do this project without a subordinate that knew how to do it, now look at me!" Okay? So why would I choose you when it sounds like I could pick anyone with basic programming knowledge to manage the subordinate since I clearly don't need someone with the know-how to do the thing, just someone capable of wrangling a coding agent? Might as well get the cheapest college CS graduate I can find.
Most people making this argument aren't saying they were incapable of doing the project without AI; they're saying the cost-benefit equation was unfavorable because it would take too long.
False dichotomy.
How is this different from saying “if you can’t do the job without the compiler, you can’t do the job”?
I'm fairly sure that if a compiler stops working or has a bug, the expectation is that an engineer is capable of handling it in some way. I don't think any stakeholder will have much sympathy for the "compiler stopped working, we can't do our jobs now" argument.
Is that wrong? My assembly programming sucks, but I can do it (slowly) if I need to. I'd expect most serious developers to have that level of knowledge.
I would not. I know a handful of folks who can kinda sorta make their way through hello world in assembly with the docs open, and a handful more who could maybe implement some of the simplest coreutils, like cat. But most devs I know have never seriously written a line of amd64 or aarch64 assembly. It's just not commonly practical knowledge, even if it is very cool knowledge and helps one understand why things work (or don't work) under the hood.
Even knowledge of how to drop to C is fairly rare in much of development, and you know what? That’s okay. We all specialize in our own areas of this beast of a field.
For one thing, compilers actually work and enable you to do useful things.
rolleyes
Why? This doesn't follow at all.
Why not? If he's hired to do the task he's supposedly an expert in, and that task is now done by AI, what's stopping the company from replacing him with any other person, at a lower cost, who operates the same AI?
AI allows you to do things you could not do before, so it is fair to say they can't do the new job without AI.
What if I can do everything the AI can, like read, interpret, and implement code (and not in a likely copyright-breaking way), but also reason about it better?
What if you get hit by a bus?
in before the mods accuse you of being "too mean"
I agree with other posters. You can't actually do the job if you can't do it without a half-baked AI doing it for you.
A better analogy would be "the trebuchet for computers".
"but when there's hundreds of kilometres to cover in a short amount of time, the trebuchet is clearly the correct choice."
You point it in the rough direction and distance you want to go, pull the lever, see if you hit your mark, adjust, pull the lever again, etc.
And once you have dialed in the variables for that particular piece of rock that one time, you write it down in a "skill.md" file and announce to everyone on the team "this trebuchet has been carefully calibrated. Trust it with your other rocks too."
> only a fool would pretend he can do everything the same without it now
Unless you're working in a coding sweatshop, I don't see why you need AI to do what people have been doing for decades just fine without breaking a sweat.
What are you working on?
Your competition's behavior necessarily affects you unless your company has an unassailable moat.
If other companies are able to tolerate larger amounts of tech debt while shipping new features faster, then you'll be out of a job at some point when your company loses market share.
It's fine if you disagree with the idea that AI lets established companies ship faster. I'm not here to argue that. But I think it's pretty easy to empathize with "why might one need to change their behavior due to this new technology?"
> unless your company has an unassailable moat
Is not working in SV enough of a moat?
> If other companies are able to tolerate larger amounts of tech debt while shipping new features faster, then you'll be out of a job at some point when your company loses market share.
I'm saying that B2B services are very common outside of SV and more focused on stability, compliance, long-term maintenance, and the operational know-how that comes with all that, rather than just shipping new features. It's not that there isn't some competition, but that the business is built on much more comprehensive partnerships than just being a software vendor. I can't believe I'm saying this, but "synergy" sometimes isn't just a meaningless buzzword.
When you try to jam "AI" into the mix, the disruption harms the business value. Many, including myself, would like to be enlightened if you think otherwise.
Well, I'm commenting from a place of bias, as I'm Head of AI at our company and am in charge of rolling out agentic coding throughout the engineering org. So, bear with me a bit.
We're B2B SaaS in the Ed Tech space. It's very sales-driven. There are only so many players in the space, and customers come with a laundry list of things they've seen others do and expect you to have those features, too. There are basic expectations that need to be met, some of them compliance-related, but, sadly, a lot of what actually drives sales is just... flashy shit that looks good to those signing the checks, not those using the underlying software. We lost a sale recently because someone was upset we didn't have the ability to give digital stickers to children. Seriously.
You're more than welcome to tell the customer they're wrong and not give them their stickers. Or you can ask Claude to build stickers for you in two days and keep up with the Joneses.
Don't get me wrong. Customers aren't retained long-term with flashy shit. People churn because of poor UX, security fears, pricing hikes, etc. Those frustrations tend to build over years, and the pain has to get pretty high because it's effortful to switch software providers. But for winning new customers, sales is driven by flashy features, and, at least in our experience, we need to be able to build those as quickly as our competitors or we lose out.
Upton Sinclair: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it."
Economics are economics. If your company is inefficient by not using AI, then it must make up for it and sacrifice some other economic advantage it has over its competitors to break even. If the loss of efficiency is low enough for your particular business, then perhaps you do not care and are content with sacrificing N% of your economic advantage over your competitors.
> Unless you're working in a coding sweatshop
You are obviously unaware of what the Silicon Valley companies are asking for and committing to.
The same shit they've always been asking for, judging by what OpenAI and Anthropic are pumping out around their models: bloated, buggy Electron apps that consume gigabytes of memory to display fucking <1 KB of text. We are not witnessing better software, even from the people who have unlimited capital and unlimited access to frontier models and are true believers in its potential to replace engineers.
Better software means nearly nothing at the end of the day.
The software that gets used is the metric that really matters.
You can write 'perfect' software that runs at 100% efficiency and never makes a mistake, but if no one ever downloads and uses it, you've just engaged in a bout of intellectual masturbation.
And honestly, I've seen the 'write better software' people complain for years as Microsoft just absolutely financially decimates them. And yeah, Microsoft loves writing bloated Electron crap. And "one of these days, Alice, people are going to rise up and use less bloated software and Microsoft is going to die", lol, just fucking kidding, people will never do that.
Bun and uv were better software than their alternatives and gained massive traction quickly, leading to them selling out for a big payday. Better software has to overcome a massive marketing advantage, ecosystem capture, and inertia, but people are absolutely interested in using things that aren't buggy bloated bullshit where they have a choice.
I don't really know why you brought Microsoft up. I don't know anyone who thinks writing better software can displace Windows. Windows has absolute ecosystem capture, notably on the hardware front -- you can't write a better OS even if you want to because hardware vendors simply won't work with you, and even if you did write a better OS you have to contend with not having 30 years of software developed for it. Computers are increasingly falling into the domain of professionals (with smartphones displacing casual consumer usage), who need Photoshop, Excel, and all of the rest of professional tools and would put up with an inferior OS out of necessity, because the tools are more important than the OS.
Even then, Windows is an excellent piece of software, technically. It is handicapped by explicit anti-consumer decisions shoving things users don't want into it, but the kernel is hands down better than Linux's kernel, and the userspace is superior on technical merits if not user-friendliness merits. Windows has been going downhill in more recent years, but the gap between it and Linux is still massive.
Really, I don't know how Microsoft is your go-to example, when they are actually a producer of excellent software. Excel, the modern .NET ecosystem, C#, and TypeScript are top-tier, and VSCode is perhaps the only software I've used that justifies being an Electron application, because it actually exposes the capability to completely customise every aspect of it and extend it with sandboxed extensions. To the extent people have grievances with Microsoft, it is largely because of deliberate monopolistic practices rather than the technical quality of their software.
I guess you are conceding that LLMs can't write good software, but are suggesting that good software doesn't matter. I think it does matter very much. In cases of monopoly control, people will begrudgingly use bad software, but they won't be loyal and you will bleed users over time as the frustrations build. But I think most critically, in cases where you don't already have monopoly control, nobody will use your bad software. This is why we haven't seen any vibe coded applications really taking the world by storm despite all the LLM hype. OpenAI and Anthropic can make you use their bad software because it's the gate to a useful proprietary tool, which is their real attraction, but Random Startup #482942 cannot make you use their bad software. Creating good software doesn't guarantee that Random Startup #482942 will succeed, given other market factors, but creating bad software guarantees they won't succeed.
I can do everything the same without it, because I'm still not using it. Why would I want to be a guinea pig for the world's richest companies and also atrophy my brain.
Uh oh, you guys didn't realize you were guinea pigs for products that can permanently alter your mental health?
Social media? TV? Radio? Newspapers? Books? Which one in particular this time?
As a fun exercise, replace "AI" with "junior" and "junior" with "mid-level." It holds up pretty well: as a manager you have responsibility for the work your team does, and "make everyone put in more hours for no reason" is dumb. Maybe it comes across as a bit neglectful of the "juniors" (in particular, it doesn't show any desire to figure out ways for AI/"the juniors" to grow their responsibilities in a sustainable way).
Imagine reading that version as someone who doesn't know how big companies work. "But then they'll just fire all the mid-level managers, since they don't do any of the actual work!" Haha, boy would you be wrong.
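Incidentally, if you actually script that exercise, the swap has to happen in a single pass: doing "AI" -> "junior" and then "junior" -> "mid-level" as two sequential replacements clobbers the "junior"s the first replacement just produced. A minimal Go sketch (the sample sentence is made up):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// strings.NewReplacer performs all substitutions in one pass over
    	// the input, so text it has already emitted is never re-matched.
    	// Two sequential strings.ReplaceAll calls would instead rewrite
    	// the freshly inserted "junior"s into "mid-level"s.
    	r := strings.NewReplacer("AI", "junior", "junior", "mid-level")

    	fmt.Println(r.Replace("Pair each junior with AI and review what the AI writes."))
    	// Output: Pair each mid-level with junior and review what the junior writes.
    }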
For another type of incoherent policy: don't restrict your employees to 2025 models and then accuse them of being sticks in the mud when they say the models are inadequate.
DORA.dev (DevOps Research and Assessment) also points to having a clearly communicated stance on AI as a foundational capability.
https://dora.dev/capabilities/clear-and-communicated-ai-stan...
When I see "in the year of our Lord" I immediately tune out the writer. Almost as bad as "Unreasonable Effectiveness".