This take confuses the value of a project at inception with its value at maturity. Vibe-coded projects are at the beginning of their life. When Slack was at a comparable stage, it didn't have hundreds of engineers running it either. So the question facing vibe coding is not whether it can substitute for a mature tech product. The question is whether vibe coding can substitute for genuine engineering expertise at the very beginning of a budding, immature project.
As opposed to Very Serious Lifelong Programmers, who, of course, see nothing but success in every project.
How many projects get to the point of 50 or 100 people online at the same time but then fail due to technical issues before they reach 50k? I would say very few. 99% of the time the problem is that they never reach those 100 simultaneous users in the first place, for other, non-technical reasons, like not being a product that people really want. If you've got 50k people wanting to use your product, it's a success even if you've got technical problems and it's crashing all the time.
These "vibe coders" would never even have thought before, "Oh, maybe I could build a piece of software too."
The barrier to entry has unquestionably dropped dramatically. As long as you have some programming foundation, a several-fold increase in productivity makes it entirely reasonable to choose technology stacks that you would never have seriously considered before. I have a programming background, had never studied WinUI 3, and yet it was still enough for me to build several native Windows applications that made it onto the Microsoft Store.
Of course, the more knowledge you have, the higher your chances of success. Using WinUI 3 again as an example, I definitely cannot fix bugs that Claude itself cannot fix, nor can I see the deeper potential problems beneath the surface. But it works, and that is already quite good. Just look at how many components in Microsoft's own Windows 11 do not work particularly well. That is what it comes down to: the barrier to entry has fallen dramatically, while the marginal returns of deeper learning are diminishing.
I wonder if vibe-coding dev-ops will follow the path blazed by virtual machines vs. bare-metal servers. If a bare-metal server crashed, you had to go out and, like a rancher's calf, nurse it back to health. If a VM crashes, you take it out into the pasture and shoot it (and spin up a replacement VM).
In the vibe-coded world, if a bug is found (or a relied-upon API is deprecated, or a dependency is found to suffer a security vulnerability, or a vendor changes, etc.), do we simply kill the codebase and vibe-code up a fresh one de novo from the same prompts as the original, adding only knowledge of the recent failure mode?
> In the vibe-coded world, if a bug is found [...] do we simply kill the codebase and vibe-code up a fresh one de novo from the same prompts as the original, adding only knowledge of the recent failure mode?
That sounds like a horrible plan: LLMs are non-deterministic (practically speaking; I know they can be run with temperature=0 locally, but that's not really relevant to the way anyone is writing code with them now).
Feeding the same spec in with some changes to deal with the one bug you discovered and regenerating all the code is likely to create a system that has new bugs (unrelated to the one you fixed by amending the spec) that may not have existed the last go-around.
You'll be playing whack-a-mole forever.
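The temperature point can be made concrete with a toy sampler, a minimal sketch in plain Python with made-up logits and no LLM library involved: at temperature=0 sampling collapses to a deterministic argmax, while any positive temperature draws from a softmax distribution and can pick a different token each run.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Pick a token index from raw logits.

    temperature == 0 means greedy (deterministic) decoding;
    higher temperatures flatten the distribution and add variance.
    """
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token, same output every run.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling (subtract max for numeric stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: this step is where non-determinism enters.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Deterministic at temperature 0: index 1 has the largest logit.
print(sample_with_temperature([1.0, 3.0, 2.0], 0))   # always 1
# At temperature 1.0 the result varies run to run.
print(sample_with_temperature([1.0, 3.0, 2.0], 1.0))
```

Regenerating a whole codebase is this sampling step repeated millions of times, which is why "same spec in, same system out" doesn't hold.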
Are you wondering if in the future AI will take a spec in natural language and convert it into thousands or millions of lines of code every time a bug is surfaced?
The only thing I wonder is what practices deployment engineers will lobby management for under well-aligned incentive structures.
This reminded me of the shift from gambling with cash and a bookie connected to the mafia, to DraftKings/FanDuel, to prediction markets. In the end the house always wins.
This is true, but people also seem to think that means we're going to get more worthwhile software, and that is never really the case. Look at how commercially available game engines made publishing A and AA games more accessible: the expectation was a flood of amazing indie games, but what we actually got was a flood of slop, cash grabs, and asset flips. Now the same thing is happening again, in the game industry just as in the software industry.
But there was (and still is) a flood of amazing indie games. Due to the lower barrier of entry it was just naturally accompanied by a deluge of crap.
Debatable whether those would have existed regardless, since their creators were already experts and über-talented. They often wrote their own engines anyway, or used lower-level libraries like Monogame.
Links to a screenshot of some rant, bro.
Are these chat apps built as one giant monolithic architecture? It seems like you could spin up isolated copies per organization, and your scaling needs would be a lot lower and simpler. Then run everything in k8s with oversubscription to deal with the wasted compute overhead.
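A minimal sketch of that per-org isolation idea, assuming a hypothetical chat-app image and made-up resource numbers; it just builds a Kubernetes Deployment manifest as a Python dict (no real cluster involved). The oversubscription comes from setting resource `requests` well below `limits`: the scheduler packs pods by request, so many mostly-idle org copies can share a node while each can still burst.

```python
import json

def org_deployment(org_id, image="chat-app:latest"):
    """Build a per-organization Deployment manifest.

    The app name and image are hypothetical; requests < limits
    oversubscribes the node so idle org copies cost little.
    """
    name = f"chat-{org_id}"
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"org": org_id}},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": "app",
                        "image": image,
                        "resources": {
                            # Scheduler reserves only this much per copy...
                            "requests": {"cpu": "100m", "memory": "128Mi"},
                            # ...but any single org can burst up to this.
                            "limits": {"cpu": "1", "memory": "512Mi"},
                        },
                    }],
                },
            },
        },
    }

# One isolated copy per organization, e.g. `kubectl apply` the JSON output.
print(json.dumps(org_deployment("acme"), indent=2))
```

The trade-off is operational: you swap one hard scaling problem for N easy ones, plus fleet-management overhead (rollouts and migrations now fan out across every org's copy).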