Well that is an interesting idea and proof of concept. I agree that the post is not the best I have seen from Cloudflare, and it shouldn't suggest that the code is production ready, but it is an interesting use-case.
I think it's a pretty big deal for a major company to put out a blog post calling something "production grade" and pushing customers to use it, without actually making it production grade.
I am saying that this is a human being who wrote about something they found cool, under their real identity, and they are getting insulted by people hiding behind their screens. It is one thing to insult a company or a generic title ("bad managers suck"); it is a completely different thing to harass a specific person... and for what? Overselling their proof of concept?
Would you insult them if you met in person? Just for that? If yes, then maybe go take a look at yourself in the mirror.
I’m plenty calm. There’s just nothing to debate here: the blog post and repo are a conscious, deliberate, and egregious misrepresentation of fact.
I would absolutely say exactly the same things to the author’s face as I’m saying right now. I would never work for a company that condones this in a million years, as a matter of principle.
The person who wrote the article probably does not benefit from lying; I don't think that was the intent. It is a bad post, don't get me wrong, but maybe there is no need to insult the author just for that.
When called out, they deleted the TODOs. They didn't implement them, they didn't fix the security problems, they just tried to cover it up. So no, at this point the dishonesty is deliberate.
Um, what's up with companies trying to recreate really big projects using vibe coding?
Like, okay: if I, as an indie dev, create a vibe-coded project, I create it for fun (I burn other people's VC money doing so, though I'd consider that a net positive).
But what's up with large companies, who can actually afford to sponsor a human to do the work, having AI agents vibe-code it instead?
First it was Cursor, who reportedly spent almost $3-5 million (I just came here after watching a good YouTube video about it), and now Cloudflare.
Like, large corpos, if you are so interested in burning money, at least burn it on something new (perhaps that's a fair critique of the Cursor browser thing too, but yeah).
I am recently in touch with a guy from the UK (who sadly became disabled due to an accident when he was young) who runs a VPS provider and got really hit by WHMCS's bill increase; he migrated to HostBill at 1200 euros. Show him some HN love (https://xhosts.uk/)
I had vibe-coded a Golang alternative. I'm currently running it in the background to make it better for his use cases, and will probably open-source it.
The thing with WHMCS alternatives is that I made one using gvisor+tmate, but most should (or have to) build on top of KVM/QEMU directly. I do feel that WHMCS is definitely one of the most rent-seeking projects, and actually writing a Golang alternative to it makes sense (at least to me).
Can there not be an AI agent that detects what people are being charged for (unfairly) online, so that these large companies who want to build things can create open-source alternatives to those products?
I mean, I am not saying it stops being slop, but it just feels like a good use of this tech, as opposed to creating complete spaghetti slop nobody wants. Maybe it was an experiment, but it failed (Cursor and this).
A bit ironic, because I contacted the xhosts.uk provider after wanting to create a Cloudflare Tunnels alternative: seeing 12% of the internet casually going through CF, and seeing myself very heavily reliant on it for my projects, I wasn't really happy about my dependence on CF Tunnels, I guess.
The developer just "cleaned up the code comments", i.e. they removed all TODOs from the code: https://github.com/nkuntz1934/matrix-workers/commit/2d3969dd...
Professionalism at its finest!
Oh wow I'm at a loss for words.
To the author: see my comment at https://news.ycombinator.com/item?id=46782174, please also clean up that misaligned ASCII diagram at the top of the README, it's a dead tell.
Yeah deleting the TODOs like that is honestly a worse look.
Incoming force push to rewrite the history. Git doesn't lie!
I wouldn't put it past them...
I wouldn't put it in past tense...
Here's the post on LinkedIn
https://www.linkedin.com/posts/nick-kuntz-61551869_building-...
https://www.linkedin.com/in/nick-kuntz-61551869/
DevSecOps Engineer
United States Army Special Operations Command · Full-time
Jun 2022 - Jul 2025 · 3 yrs 2 mos
Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work.
Reminds me of Cloudflare's OAuth library for Workers.
>Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security
>To emphasize, this is not "vibe coded".
>Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.
...Some time later...
https://github.com/advisories/GHSA-4pc9-x2fx-p7vj
What is the learning here? There were humans involved in every step.
Things built with security in mind are not invulnerable, human written or otherwise.
the problem with "AI" is that by the very way it was trained: it produces plausible looking code
so the "reviewing" process will be looking for the needles in the haystack
when you have no understanding, or mental model of how it works, because there isn't one
it's a recipe for disaster for anything other than trivial projects
Taking a best-faith approach here, I think it's indicative of a broader issue: code reviewers can easily get "tunnel vision", where the focus shifts to reviewing each line of code rather than cross-referencing against both the small details and the highly salient "gotchas" of the specification/story/RFC, and ensuring that those details are not missing from the code.
This applies whether the code is written by a human or an AI, and also whether it is reviewed by a human or an AI.
Is a GitHub Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR being reviewed? Or read relevant RFCs? (And does it even have permission to do all this?)
And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?
This all leads to a conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.
This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.
If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim can it start to mean something.
Wow this is definitely not a software engineer. Hmm I wonder if Git stores history...
No more vulnerabilities then I guess!
I also use this as a simple heuristic:
https://github.com/nkuntz1934/matrix-workers/commits/main/
There are only two commits. I've never seen a "real" project that looks like this.
I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean.
To be honest, sometimes on my hobby projects I don’t commit anything in the beginning (I know, not a great strategy) and then just dump everything in one large commit.
I’ve also been guilty of plugging away at something and squashing it all before publishing for the first time, because I look at the log and go “no way I can release this, or untangle it into any sort of usefulness”.
The repository is less than one week old though; having only the initial commit wouldn't shock me right away.
That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't actually spent time (more than a week!) in actual production.
But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden.
It was/is quite common for corporate projects that become open source to be born as part of an internal repository/monorepo. When the decision is made to open-source them, the initial public commit is just a dump of the files in a snapshotted, public-ready state rather than the internal-repo history (which, even with tooling to rebase partial history, would make it immensely harder to audit that internal information wasn't improperly released).
So I wouldn't use the single-commit as a signal indicating AI-generated code. In this case, there are plenty of other signals that this was AI-generated code :)
they should have at least rebased it and removed it from the git history
Hilarious. Judging by the username, it's the same person who wrote the slop blog post, too.
Technical blogs from infrastructure companies used to serve two purposes: demonstrate expertise and build trust. When the posts start overpromising, you lose both.
I don't know enough about this specific implementation to say whether "implemented Matrix" is accurate or marketing stretch. But the pattern of "we did X" blog posts that turn out to be "we did a demo of part of X" is getting tiresome across the industry.
The fix is boring: just be precise about what you built. "We prototyped a Matrix homeserver on Workers with these limitations" is less exciting but doesn't erode trust.
To be fair, the technical posts from Cloudflare are usually very insightful.
That's demonstrating expertise
Days after the fake story about Cursor building a web browser from scratch with GPT-5.2 was debunked. Disbelief should be the default reaction to stories like this.
Btw, after I wrote that initial article ("Cursor's latest "browser experiment" implied success without evidence"), I gave it my own try to write a browser from scratch with just one agent, using no 3rd party crates, only commonly available system libraries, and just made a Show HN about it: https://news.ycombinator.com/item?id=46779522
The end result: Me and one agent (codex) managed to build something more or less the same as Cursor's "hundreds of agents" running for weeks and producing millions of lines of code, in just 20K LOC (this includes X11, macOS and Windows support). Has --headless, --screenshot, handles scaling, link clicking and scrolling, and can render basic websites mostly fine (like HN) and most others not so fine. Also included CI builds and automatic releases because why not.
The repository itself is here and should run out of the box on most modern OSes, downloads can be found at the Releases page: https://github.com/embedding-shapes/one-agent-one-browser
This project is awesome - it really does render HTML+CSS effectively using 20,000 lines of dependency-free Rust (albeit using system libraries for image rendering and fonts).
Here's a screenshot I took with it: https://bsky.app/profile/simonwillison.net/post/3mdg2oo6bms2...
Yes, this is what AI-assisted coding is good at.
A PoC that would usually take a team of engineers weeks to make, because of the lack of cross-disciplinary skills in any one person, can now be done by one engineer, at the cost of long-term tech debt from that same lack of cross-disciplinary knowledge.
Would be interested to know what people think of the locking implementation for the net worker pool.
I’m no expert but it seems like a strange choice to me - using a mutex around an MPSC receiver, so whoever locks first gets to block until they get a message.
Is that not introducing unnecessary contention? It wouldn’t be that hard to just retain a sender for each worker and just round robin them
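The round-robin alternative suggested above can be sketched in Rust. This is a hypothetical illustration, not code from the actual repo (the names `Job` and `spawn_workers` are made up): each worker owns its own mpsc receiver outright, so no worker ever blocks on a shared mutex waiting for its turn at a message; the dispatcher simply rotates through the senders.

```rust
use std::sync::mpsc;
use std::thread;

type Job = u32;

// Each worker gets a private channel instead of sharing one
// Mutex<Receiver> — no lock contention on the receive path.
fn spawn_workers(n: usize) -> (Vec<mpsc::Sender<Job>>, Vec<thread::JoinHandle<u32>>) {
    let mut senders = Vec::with_capacity(n);
    let mut handles = Vec::with_capacity(n);
    for _ in 0..n {
        let (tx, rx) = mpsc::channel::<Job>();
        senders.push(tx);
        // The worker owns its receiver exclusively; rx.iter() blocks
        // only on its own channel and ends when all senders are dropped.
        handles.push(thread::spawn(move || rx.iter().sum::<u32>()));
    }
    (senders, handles)
}

fn main() {
    let (senders, handles) = spawn_workers(4);
    // Dispatcher round-robins jobs across workers instead of letting
    // whichever worker grabs a lock first steal the next message.
    for job in 0..8u32 {
        senders[job as usize % senders.len()].send(job).unwrap();
    }
    drop(senders); // close all channels so the workers finish
    let total: u32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("{}", total); // 0+1+...+7 = 28
}
```

The trade-off is that round-robin ignores per-worker load, whereas the shared-receiver approach is effectively work-stealing; but for uniform jobs the per-worker channels avoid the contention the comment describes.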
That's fairly impressive.
The outrageous part of this is that nowhere in the blog post or the repository does it indicate that this is vibe-coded garbage (hopefully I didn't miss it?). You expect some level of bullshit in an AI company's latest AI vibe-coding announcement. This one could be mistaken for a classical blog post.
Although the tell is obvious if you spend one second looking at https://github.com/nkuntz1934/matrix-workers. That misaligned ASCII diagram, damn.
Why is Cloudflare paying this guy again, just to vibe a bunch of garbage without even checking the above-the-fold content in the README?
> Why is Cloudflare paying this guy again
Perhaps usage of AI is a performance target he's being judged against, like at many tech companies today.
> A production-grade Matrix homeserver implementation
It's getting outright frustrating to deal with this.
Fine, random hype-men get hyped about stuff and tweet about it; that doesn't bother me too much.
Huge companies who used to have a lot of goodwill putting out stuff like this, seemingly with absolutely zero review before hitting publish? What are they doing? Has everyone decided to just give up and give in to the slop? We need "engineering" to make a comeback.
We found that reviewing AI code is a bottleneck for performance, so we stopped reviewing it
https://github.com/matrix-org/matrix-rust-sdk/blob/main/CONT... is an example of engineering trying to make a comeback, on the Matrix side at least :)
As long as you take ownership, test your stuff and ensure it actually does what you claim it does, I don't mind if you use LLMs, a book or your dog.
I'm mostly concerned that something we used to see as part of basic "software engineering" (verifying that what you build actually does what you think it does) has suddenly made a very quick exit from the scene, in the chase to output more LOC, which is completely backwards.
I review every line of code I generate, and make sure I know enough that I can manually reproduce everything I commit if you take away the LLM assistant tomorrow.
This is also what I ask our engineers to do, but it's getting hard to enforce.
That's the only way, but even doing that I fear I lose some competency.
If you take ownership of the code you submit, then it does not matter if it was inspired by AI: you are responsible from now on, you will be criticized, and possibly you will be expected to maintain it as well.
Vibing is incompatible with engineering and this practice is disgusting and NOT acceptable.
I get vibe coding a feature or news story or whatnot but how do you go about not even checking if the thing actually works, or fact checking the blog post?
Aren't we all guilty of that? For example, I see the domain of this blog post and I skip it without even opening it.
Why?
Vice signaling
Because I am normal.
Are you sure that's "normal"?
Optics is the only thing that matters; there are people genuinely pushing for vibe coding on production systems. Actually, all of the big companies are doing this and claiming it is MORE safe because it reduces human error.
I'm starting to believe they are all right, actually. Maybe frontier models surpassed most humans, but the bar we should have for humans is really, really low. I genuinely believe most people cannot distinguish LLMs' capabilities from their own, and they are not wrong from the perspective they have.
How could you perceive, out in the wild, an essence that escapes you?
it seems as if literally everyone associated with "AI" is a grifter, shill (sorry, "Independent Researcher"), temporarily embarrassed billionaire, or just a flat out scammer
I have yet to see a counter-example
I would not rule out that sometimes they are just incompetent and believe their own story, because they don't know any better. Seems this is called a "bad apple"?
Everyone (not really, but basically yes) associated with $current_thing is a rent seeking scammer.
Even if Blockchain has tremendous impact, even if transformers are incredible (really) technology, even if NFTs could solve real world problems...you could basically say the same thing and be right, rounding up, 100% of the time, about anything technology related (and everything else as well). This truly is a clown world, but it is illegal to challenge it (or considered bad faith around here)
They did build a browser; it may not be a very compliant or complete browser, or even a useful one, but neither was IE6!
It didn't even compile, which makes me wonder whether your comment is just ignorant or outright maliciously misleading
The version that was live on GitHub the day they published their blog post was missing compilation instructions, didn't cleanly compile and didn't pass GitHub Actions CI.
The project itself did compile most of the time it was being developed - the coding agents had been compiling it the whole time they were running on it.
Shortly after the blog post they updated the GitHub repo with compilation instructions and it worked. I took this screenshot with it: https://static.simonwillison.net/static/2026/cursor-simonwil...
The "it didn't even compile" criticism is valid in pointing out that they messed up the initial release, but if you think "it never compiled" you have an incorrect mental model.
Also, didn't it use Servo crates? I don't think you can say 'from scratch' if 60% of the actual work is from an external lib.
If I install Arch Linux, I don't say I 'installed Linux from scratch'.
It used cssparser and html5ever from the Servo project, and it used the Taffy library for flexbox and CSS grid layout algorithms which isn't officially part of Servo but is used by Servo.
I'd estimate that's a lot less than 60% of the "actual work" though.
My bad, I was misinformed, thanks for correcting me; I thought it used the renderer, not just the parser. That's honestly way better than what I thought.
My understanding is that it doesn't even compile if you clone the repo.
It didn't and it had some pretty weird commit history and emails. Overall not a super great sign...
It does now. It didn't on initial announcement day.
I believe it was basically a broken, non-functioning wrapper around Servo internals. That’s what I’d expect from a high schooler who says “i wrote a web browser”, but not what I’d expect from a multi-billion dollar corporation.
They aren't really a multi-billion dollar corporation. A lot of it is them just pumping up their valuation. Stuff like this proves that in a lot of ways.
They are running over 300 DCs...
They didn't build a browser from scratch.
I found the source code Jade was referring to, and it looks like the author just noticed this thread: https://github.com/nkuntz1934/matrix-workers/commit/0823b47c...
Your commit is orphaned now; it seems he amended the log to a vague "Clean up code comments" to try to make the purpose less obvious: https://github.com/nkuntz1934/matrix-workers/commit/2d3969dd...
That honestly makes everything so much worse.
That the original post to HN linked in the blog was done on a throwaway kind of implies a level of awareness (on the part of the dev) that the code/claims were rubbish :)
https://news.ycombinator.com/item?id=46780837
Not to mention they commented on their own post, pretending to ask a question..
Embarrassing, coming from a company like Cloudflare
It is worrying to see a major vendor release code that does not actually work just to sell a new product. When companies pretend that complex engineering is easy it makes it very hard for the rest of us to explain why building safe software takes time. This kind of behavior erodes the trust that we place in their platform.
The real concern is that we've been racing to the bottom for so long that it's becoming almost trivial to show why they are wrong. This oversimplification existed before AI coding; it's the dream that AI coding took advantage of. But this market for lemons got too greedy.
Honestly, I like Cloudflare's CDN and DNS, but beyond that I don't really trust much else from them. In the past, though, their blog has been one of the best in the space and the information has been pretty useful, almost a gold standard for postmortems. This seems especially bad, and definitely out of line compared to the rest of their posts. With the recent Cursor debacle, this doesn't help. I also don't really get their current obsession with porting every piece of software on Earth to Workers...
>I also don't really get their current obsession with porting every piece of software on Earth to Workers recently...
Because their CDN/DNS is excellent software, but it's not a massive moat. Workers, on the other hand, is.
It's like the difference between running something on Kubernetes vs. Lambdas. One you can somewhat pivot between vendors with; the other requires massive rewrites of your software, which means most executives won't transition away from it due to the high potential for failure.
Yeah, I like that I can just upload static HTML and host it there for free, but anything more, I dunno. It's all about vendor lock-in with their products.
I essentially just use them for this and as a domain DNS/registrar, as their pricing is pretty good for that.
I guess it depends on the author. Seems like it is the first post for this author, and given the reception, maybe the last one...
I've never thought someone should be fired based on a blog post but man, this comes real close.
It’s not a working or complete implementation, but…
But according to the README, it is production-grade! Presumably "production" in this case means an isolated proof of concept?
Well, it is an interesting idea and proof of concept. I agree that the post is not the best I have seen from Cloudflare, and it shouldn't suggest that the code is production-ready, but it is an interesting use case.
everybody is vibing everything now, code, messages, reviews, everything
Kind reminder that the author of that post is a human who will be affected by all the hate. Is it really worth it?
I agree that the post is wanting, but the idea itself is interesting: running a Matrix homeserver on workerd.
I think it's a pretty big deal for a major company to put out a blog post about something that is "production grade" and pushing customers to use it without actually making it production grade.
> They start by saying they "wanted to see if it was possible"
That's a generous read. From the actual article:
> We wanted to see if we could eliminate that tax entirely. Spoiler: We could.
Sure it's a bad post. But the guy did not kill someone for no reason.
> Is it really worth it?
Unequivocally yes.
Fraud is fraud, and if your first instinct is to defend it in this manner, check yourself in the mirror.
May I kindly ask you to calm the fuck down?
I am saying that this is a human being who wrote about something they found cool, under their real identity, and they are getting insulted by people hiding behind their screens. It is one thing to insult a company or a generic title ("bad managers suck"); it is a completely different one to harass a specific person... and for what? Overselling their proof of concept?
Would you insult them if you met in person? Just for that? If yes, then maybe you go check yourself in the mirror.
I’m plenty calm. There’s just nothing to debate here: the blog post and repo are a conscious, deliberate, and egregious misrepresentation of fact.
I would absolutely say exactly the same things to the author’s face as I’m saying right now. I would never work for a company that condones this in a million years, as a matter of principle.
I also can't help but feel bad for the author. However, when the first line of the README is
> A production-grade Matrix homeserver
this is engineering malpractice. It is also unethical to present the work of an LLM as your own.
We are getting tired of being lied to.
The person who wrote the article probably does not benefit from lying, I don't think it was the intent. It is a bad post, don't get me wrong, but maybe there is no need to insult the author just for that.
When called out, they deleted the TODOs. They didn't implement them, they didn't fix the security problems, they just tried to cover it up. So no, at this point the dishonesty is deliberate.
Um what's up with companies trying to recreate really big projects using vibe coding.
Like, okay, I'm an indie dev; if I create a vibe-coded project, I create it for fun (I burn other people's VC money doing so, though, but I would consider that actually positive).
But what's up with large companies, who could actually freaking sponsor a human to do the work, making AI agents vibe code instead?
First it was Cursor, which spent almost $3-5 million (just came here after watching a good YouTube video about it), and now Cloudflare.
Like, large corpos, if you are so interested in burning money, at least burn it on something new (perhaps that's a fair critique of the browser thing by Cursor, but yeah).
I'm recently in touch with a guy from the UK (who sadly became disabled due to an accident when he was young) who runs a VPS business and got really impacted by WHMCS's billing increase; he migrated to HostBill, which cost 1200 euros. Show him some HN love (https://xhosts.uk/).
I had vibe coded a Golang alternative. I'm currently running it in the background to make it better for his use cases, and I'll probably open source it.
The thing with WHMCS alternatives is that I made one using gVisor + tmate, but most should/have to build on top of KVM/QEMU directly. I do feel that WHMCS is definitely one of the most rent-seeking projects, and actually writing a Golang alternative to it makes sense (at least to me).
Can't there be an AI agent that can freaking detect what people are being (unfairly) charged for online, so that these large companies that want to build things can create open-source alternatives?
I mean, I'm not saying it stops being slop, but it just feels like a better use of this tech than creating complete spaghetti slop nobody wants. Maybe it was an experiment, but it has now failed (Cursor and this).
A bit ironic, because I contacted the xhosts.uk provider since I wanted to create a Cloudflare Tunnels alternative after seeing 12% of the internet casually going through CF. I saw myself being very heavily reliant on it for my projects, and I wasn't really happy about my reliance on CF Tunnels, I guess.
nkuntz1934 Senior Engineering TPM @ Cloudflare
Of course, this is done by a manager. Classic corporate mindset: "I can do what these smelly nerds do every day, hold my beer."
He doesn't even know how git works, huh?
What a clown.
A TPM isn't a manager. It's basically a PM, but they're (supposed) to be technical.