I gave up on r/programming after an article I wrote (thoughtfully, without AI, even though the content might not have been super interesting) got mod-slapped with a stickied comment "This content is low quality, stolen, blogspam, or clearly AI generated".
Ironically, that comment was added three months after I posted the article, when it was nowhere near the front page anymore, in a clearly automated and AI-driven review.
Still salty about it.
I know this is snarky, and I'm sorry ahead of time. But I don't know how else to make this point...
The fact that the people running r/programming didn't know to wait until April 2 to publish this tells me that they don't have real-world experience shipping software in a business environment.
We are SO past the point of software being developed without LLMs at _all_, the trend line is never going to reverse. I don't understand the people digging in as zero LLM absolutists.
I use LLMs yet I don't care to read about them or their usage at all. I can certainly see the reason why a place called "/r/programming" wouldn't want to have discussion about agent usage either, since it's not programming, it's a different activity.
Yeah I totally get the rule. I use LLMs when developing. In fact, I've been out of Claude tokens for the week since Wednesday, but I use Claude specifically for the boring, simple stuff I don't really want to do, but that Claude can. I'm simply not interested in discussing anything LLMs are able to do, it's not interesting.
It makes sense that a programming subreddit first and foremost discusses programming (the skill). We can go complain about Claude somewhere else if we want to.
Following up, anecdotally, the people I talk to who are excited about LLM development usually either care more about product development, or don't have enough programming skill to see how bad the software is. Nothing wrong with either, but it can get tiresome.
> people I talk to who are excited about LLM development usually either care more about product development
This is an interesting thing I've also noticed in public hobbyist forums/discussion spaces where someone who is more interested in making a "product" clashes with people who are just there to talk about the activity itself. It's unfortunate that it happens but it will self-correct over time (like /r/programming here) and the LLM enthusiasts of Reddit will find another place to discuss ways of using them.
I think they just don't want every post to be about LLMs, vibe coding, harnesses, and whether Claude is down.
Some subreddits forbid memes, because otherwise they get flooded and the good content drowns in them.
Some subreddits only allow certain content on certain days to counter this.
What do you want the mods to do?
It may not be an in-denial, heads-in-the-sand situation.
Sometimes a topic gets so popular that it drowns out all the other topics. At that point, aren't they just a glorified version of r/llm?
I'll give you one personal example:
The year Caitlin Clark was drafted to the WNBA, r/wnba went from a subreddit of 9,000 subscribers to eventually 200k.
We were bombarded with CC posts every hour.
- Some of it was trolls staging a race war (this was during US elections).
- Some of it was genuine CC fans, who wanted to talk about CC.
- Some of it was bball nerds, who you know... wanted to talk about a bball player in a bball forum (regardless of who that bball player happens to be).
So what happened was, at any given day, 80% of the front page was CC content.
At that point, we might as well have been r/caitlinclark.
So the mods did something drastic and controversial. They banned all "low effort" CC content.
WTF does "low effort" mean? It pretty much meant 99% of CC posts got removed.
The forum went back to something that resembled a bball forum. That talked about other players. And other teams. Not just Caitlin Clark.
I have yet to run into any serious project in the wild that is using LLMs for development. I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
I'm sure your experience is different, but you can't _seriously_ claim we're "past the point" of not using LLMs for programming.
Vibecoding is a fundamentally different kind of activity from actual programming. It's a pure, delusional dopamine rush compared to the deliberate engineering required to build quality software.
For CRUD apps though, the intern closing the ticket literally 30 minutes after it's created is really hard to battle against. Especially when those tickets were created by suits.
I generally agree that while I think vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should uhh be careful about all this code we're pushing. I fear all they see is the guy aging out.
It’s juvenile to consider all LLM assisted coding as vibecoding. I’m not going to expand here because this topic is about as much fun to discuss as politics, but coding assistant tools are just tools.
If you give a regular person a race car, they will crash it about as fast as their vibecoded app crashes. Give the same race car to a pro and it's a different story.
I still think this was the right decision by the programming mods there. Talking about tools is pretty boring, and you need to train to use something like an LLM assistant. No one who can’t program a language should be using an LLM to learn it unless they know about 2-3 other languages already, IMO.
Nah I think it really is more nuanced than that. It is true that a non-technical person's vibe-coded side-hustle is completely different than how a professional developer may ship genAI code, but we're willfully glossing over the real problem that professionals are pushing out TONS of genAI code that's closer to vibes than it is to the pre-AI expectations on pushing to prod.
Ok well I have plenty of serious, production-level professional experience that says otherwise. Not “vibe coding” - we certainly review the code. It’s a tool that has downsides and failure modes, of course, but it’s at the point where it’s definitely speeding us up and we are using it a lot. Trust me, I’d prefer a world, on balance, where this wasn’t true – I don’t like many of the aspects and uses of the technology – but its utility in programming is undeniable now and the capitalists aren’t taking “no” for an answer.
> I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
They weren't useless; they proved whether the direction the prototype was exploring was worthwhile. I've personally made many completely shit code prototypes in the years before we had LLMs; of course they weren't magically production-ready, but that's not the point of a prototype.
> I have yet to run into any serious project in the wild that is using LLMs for development.
How about Claude Code? 100% of it was vibe-coded according to its creator.[1] Google and Microsoft also claim a lot of their internal code is AI-generated now. [2] [3]
Naturally, none of the big tech companies will just release a purely vibe-coded project, for structural reasons, but you also _seriously_ can't claim that serious projects don't use LLMs these days. Maybe it isn't true in your limited experience, but that doesn't generalize to what's actually happening.
1. https://www.reddit.com/r/Anthropic/comments/1pzi9hm/claude_c...
2. https://fortune.com/2024/10/30/googles-code-ai-sundar-pichai...
3. https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...
I hate AI video, I hate AI art, but if you are pretending that AI isn't going to be writing code for 99% of projects going forward, you are absolutely kidding yourself.
AI video and art are going to be increasingly used in advertising, news/reporting, games, etc. Therefore, you aren't allowed to hate them or even complain about them. Right?
AI will be writing the code for shit-slop apps and libraries. The good ones will be written by humans.
> I don't understand the people digging in as zero LLM absolutists.
Relevant read: https://en.wikipedia.org/wiki/Luddite
I feel like it's easy to understand what's motivating these individuals to take that stance.
Definitely not the same. Luddites were fighting for humane working conditions; breaking machines was just a means to an end. They weren’t doing it because machines were the problem.
The anti-AI crowd, on the other hand, just doesn't like AI. A modern equivalent of a Luddite would be someone going on strike to protest firings.
You are being overly dismissive of a mindset you obviously don't understand. Of course being anti-AI is about decent living conditions for humans. Most of us don't believe in singularity or Matrix-style threats.
But current AI is actively destroying our breathable/livable planet by drawing unmatched quantities of resources (see also DRAM shortage, etc), all the while exploiting millions of non-union workers across the world (for classification/transcription/review), and all this for two goals:
1) try to replace human labor: problem is we know any extracted value (if at all) will benefit the bourgeoisie and will never be redistributed to the masses, because that's exactly what happened with the previous industrial revolutions (Asimov-style socialism is not exactly around the corner)
2) try to surveil everyone with cameras and microphones everywhere, and build armed (semi-)autonomous robots to guard our bourgeois masters and their data centers
There is nothing in this entire project that can be interpreted to benefit the workers. People opposing AI are just lucid about who that's benefiting, and in that sense the luddite comparison is very appropriate.
Good decision.
AI programming is fundamentally different from programming, and as such the discussions merit separate forums.
If r/programming wants to be the one solely focusing on programming then power to them. Discussing both in combination also makes sense, but the value of reddit is having a subreddit for anything and “just programming” should be on the list.
> AI programming is fundamentally different from programming
It's really not. Maybe vibecoding, in its original definition (not looking at the generated code), is fundamentally different. But most people are not vibe coding outside of pet projects, at least not yet.
There can't be any interesting discussion about AI programming. Every conversation boils down to what skill files you use, or how Opus 4.6 compares to Codex, or how well you can manage 16 parallel agents.
Genuine question: how to distinguish yourself from the stream of slop?
I am also annoyed by the endless stream of articles and projects related to LLM-assisted coding. Not because I dislike LLM-assisted coding as an idea, but because it's all more of the same (as you said). I think there is still a lot of low-hanging fruit in improving LLM harnesses that no one is working on, because everyone seems to be chasing the latest trends ("agentic", "multiagentic", "skills") without thinking bigger.
But I'm afraid that if I finally invest time and implement some of my ideas on making LLM-assisted coding better (reliable, safer, easier for humans to interpret and understand generated code), I won't be able to gather any feedback. People will simply dismiss it as "yet another slop for creating more slop" and that's it.
There genuinely is a lot of interesting discussion to be had about LLMs, and I know this is true because I discuss things with my coworkers daily and learn a lot. I do admit that conversation online about LLMs is frequently lacking. I think it's a bit like politics - everyone has an opinion about it, so unfortunately online discourse devolves to the lowest common denominator. Hey guys, have you noticed that if you use LLMs frequently it's possible you'll forget to think critically?
But "there can't be any interesting discussion about AI programming" is completely false.
My pet peeve with all LLM discourse is whenever someone mentions any problem they experience with LLMs or any mistake they make, someone comments that humans make the same mistake.
I disagree, and you could reduce basically anything to this: "There can't be any interesting discussion about React. Every conversation boils down to which framework you use, or how you manage state, or whether you use TypeScript or JavaScript."
All of those are opinions about programming. Which framework, which language, etc.
Conversations about which model to use aren’t conversations about programming.
A better analogy would be some topic that you can’t discuss without it boiling down to which text editor you should use. It’s related to programming, a little. But it’s not programming.
That is exactly why I left reddit. r/javascript had almost completely abandoned JavaScript discussions for React and Angular while r/programming was half filled with irrational JavaScript fear nonsense.
You have not seen my recent WhatsApp chats. Me and a pal are talking about what we're doing with Claude code, and it's quite interesting!
Just like discussions about traditional programming never were only about syntax and type systems, AI discussions aren't only about prompts and harnesses. I find there's quite a bit of overlap actually! "How do you approach this problem?" Is a question that is valid in both discussions, for example.
That's like saying there are no interesting discussions about programming: just whether OOP is overhyped, whether Python is slow, or how well you can convert a C codebase to Rust.
That isn't why /r/programming banned it. They banned it because every discussion about LLMs inevitably devolves into discussions about AI slop in varying levels of civility, and the rare good LLM submissions/discussions do not offset it.
Other tech-adjacent subreddits such as /r/rust have banned LLM discussion for similar, more pragmatic reasons.
Seems a lot of commenters here dislike their decision, I like it though.
LLM-generated projects, articles, blogs are low-effort products lacking authenticity.
And the discussion on LLM itself can in the long run be fairly tiring, follow r/LocalLLaMA for a while and you'll see what I mean. But if you are really into LLMs though, that sub is great.
It is simply not fun to go to a subreddit and see 90% of it being projects and blogs that are obviously created using AI, with authentic content pushed to the side by the sheer volume of artificial work. r/Python was horrible at one point, but the mods have been stepping up their game.
They switched their "Best" sorting algorithm to be engagement-based rather than upvote-based [1]. Upvotes are just one of many metrics; heavy comment interaction is another. This incentivizes rage bait and performing for the crowd with every comment and post. They also switched to an almost purely moderator-curated front page [2] rather than allowing users to vote.
I've wondered the same thing, but you growing up definitely has to be a factor.
> Just angry people scolding each other all the time.
This really does describe it perfectly. I don't know about others, but focusing on my career pulled me out of a relatively low-income and dysfunctional environment. Reddit too often reminds me of people I used to know in real life.
It's been so many years since then, and finding and living a better life was so intertwined with my young adulthood that I almost convinced myself people like that don't exist in real life anymore. I thought the whole world had moved on, but search results nowadays prioritize Reddit enough that I'm routinely proven wrong.
Contrary to popular belief, I don't think most of the stuff on there is fake. Those people probably really are like that. Certain ways of thinking can become so normalized that they don't even see what there is to be ashamed about. What I sense the most on there is a lot of stress and the resulting irrational fears that pour out of people when they feel too much pressure. People under a seemingly endless and vague threat will go a little nuts and start to swat at anything that disturbs their worldview.
A good test for any community is: try posting something that is factually incorrect but that supports the agenda of the community. Does the community call it out? On Reddit, it does happen.
In my experience, that kind of thing might only get called out by moderators or the outliers who reply the most. They're the ones with the strongest interest in proving anything. Only then will the rest of the community dogpile. Otherwise, it goes ignored.
Reddit turned way more into an echo chamber over time. The moderators and the downvote system destroyed the site. The shift from free speech, libertarian and anarchist ideology into heavily left leaning definitely didn't help.
Favourite genres of posts on HN in the past 2 years:
* “I am bullish about AI”
* “I am an AI skeptic, [long rambling], but overall, I am bullish about AI”
It’s amazing how even criticism of the technology somehow ends up being a hype post. At least there are still places on the Internet where we can have a serious discussion about the downsides.
As someone who recently wrote the latter kind of post (https://news.ycombinator.com/item?id=47183527), the more nuanced view that "AI has good and bad things" reflects the real world better than an absolute "AI is good" or "AI is bad", and at the least it's more conducive to civil discussion.
See dang’s comments on https://news.ycombinator.com/item?id=47340079 . (That link itself is a submission about HN’s recent guidelines changes to include “Don't post generated comments or AI-edited comments. HN is for conversation between humans.”)
Maybe this was a genius move made precisely to be ambiguous on whether it was April Fools or not... so that the author can later read the room and clarify whether it was or was not April Fools, without much repercussion either way.
If you enjoy comedy, you should check the status of subreddits like /r/selfhosted or /r/homelab, etc. I find them interesting because they sit on the edge between computer power users and software developers. They used to be nice communities.
Now it's people sharing AI apps that look exactly like other AI apps they have never heard of. [1]
Projects rise, then implode hilariously in a month. [2]
An ebook management project grew over a year with a pretty conservative feature set, then in 3 months implemented every ebook feature under the sun, broke everything, then imploded. The funniest thing is when the "AI slop" callout is itself AI-written and nobody notices. [3]
Like… amazing comedy. Then after the owner deletes the repo, 10 people role-play the hero who "has the code", because clicking Fork on GitHub is the sign of a true hacker.
This is to be expected. There's a definite split in the engineering community between those who are embracing AI, and those who are rejecting it. It's now become political, like systemd and wayland.
> We also believe that, generally, the community have been indicating that, by and large, they aren't interested in this content.
How can that be true? Reddit is vote-based. So if people weren't interested, they wouldn't vote it up and it wouldn't appear on the front page. Hacker News has no rule banning posts about Barbie and yet, amazingly, Barbie rarely makes it to the front page, because that's how upvotes work.
People clearly are interested enough to vote LLM related posts up, but a bunch of mods who don't like AI are upset enough to want to dictate what others can find interesting. Which is not unusual for Reddit.
Unlike Hacker News, Reddit's new Best algorithm often surfaces newly posted posts (a good idea that helps mitigate the cold-start problem), but that means people who are subscribed to /r/programming will see posts about LLMs and typically downvote them.
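To make the cold-start point concrete, here is a toy sketch of the idea (purely illustrative; Reddit's actual "Best" ranking is not public, and the function, weights, and decay window here are all made up): give brand-new posts a freshness bonus so they surface long enough to collect votes at all.

```python
import math

def rank_score(upvotes: int, age_hours: float, boost_hours: float = 2.0) -> float:
    """Toy ranking: vote-driven term plus a freshness bonus that decays to zero.

    The bonus lets a zero-vote post appear on feeds briefly, mitigating the
    cold-start problem where unseen posts can never earn their first votes.
    """
    vote_term = math.log10(max(upvotes, 1))               # diminishing returns on votes
    freshness = max(0.0, 1.0 - age_hours / boost_hours)   # 1.0 when new, 0.0 after boost_hours
    return vote_term + 2.0 * freshness

# A brand-new post with one upvote briefly outranks a day-old post with 50.
print(rank_score(upvotes=1, age_hours=0.1) > rank_score(upvotes=50, age_hours=24.0))  # True
```

This is also why subscribers end up seeing (and downvoting) fresh LLM posts: the freshness bonus shows them to the subreddit's core audience before the wider site has voted.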
From the user responses to the linked ban, said ban was a positive decision for that community.
I created an account and started reading this site primarily for programming news when r/programming took a precipitous dive in quality around 2020 or so. Before it was an example of one of the few good communities there, but it quickly became show and tell (ironically this was against its unenforced rules). And any real interesting posts had no discussion. But then I noticed the "Other Communities" tab would show posts from a HN posts sub that tracked posts here, and suddenly I was able to get great information. A post about CockroachDB that had 20 boorish comments complaining about its name over there would have the designer of it over here answering technical questions about its capabilities.
THAT SAID, I think this might be what gets me to go back to that place. I used to come here to read about new Python tooling, the latest database development news, interesting thinkpieces on development practices, etc. Now it's dominated by AI evangelism, "I'm Showing HN™ What I Used My Claude Tokens On :)", AI complaining, AI agent strategies, news about AI's impact on the industry, etc. There are some non-AI posts, but not as many good ones as there used to be, and a lot of the non-AI posts quickly turn out to be AI-written. Because they respect their time as a writer greatly and my time as a reader not at all. It's ClankerNews; the Hackers are in short supply.
The takes on LLM programming on reddit are hilarious and borderline sad. It's way past the point of denial, now into delusions.
They truly believe LLMs are close to useless and won't improve. They believe it's all just a bubble that will pop and people will go back to coding character by character.
Makes sense. If I'm looking to read discussions about stable selection, feed prices, etc., why would discussions of spark plugs be relevant?
> /r/assembly bans all discussion of 4GL
Also makes sense; people wanting to discuss register allocation, bit twiddling, etc probably aren't interested in insurance claims taxonomies or similar.
> LLM programming isn't going away by not talking about it.
Right, but is the context still /r/programming? After all, there are tons of subreddits you can go to to discuss LLM programming. Why do you need to shove it into a space created for human thoughts on programming?
> It's time to move on, and eventually considering farming.
Okay, understood, but my question still stands: why conflate programming with vibe-coding?
But hasn't it gone down in quality with broader mainstream appeal, more AI slop, and just general self-promotion? I feel like a lot of niche communities have also lost their core or original user bases, who are not as active any more. Or it could just be me? For example, off the top of my head without digging too deep, r/juststart used to be very high-signal and strongly moderated, but now not so much. On the other hand, I did discover r/laundry recently, with some awesome content around "spa day", but again that's mainly one user responsible. I guess another big gripe is having to use the Reddit mobile app after they closed their APIs and shut down third-party apps, because now I can't browse; it's more feed-like. Sorry for the ramble; not sure what my point is, but hoping others can share their experiences and any advice too, I guess.
You think this place, which the people in my circles infamously refer to as the "orange site", is considered a bastion of good conversation among the people who don't frequent it?
Reddit is doomed anyway. People are using AI to start threads, and other people are using AI to comment on these threads. You can never know what you're interacting with.
Worse, I am repeatedly being accused nowadays of being an LLM. It probably doesn’t help that I riff-write with only a rough outline of what I want to say, not how to say it.
If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
> If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
This sort of tells me that you are pro-LLM, and most pro-LLM people mostly paste the contents of their ChatGPT output and try to pass it off as their own.
Given that you say you aren't, the most likely explanation might be that you are spending a lot of time reading LLM prose, and are starting to write like it now too.
I gave up on r/programming after an article I wrote (thoughtfully, without AI, even though the content might not have been super interesting) got mod-slapped with a stickied comment "This content is low quality, stolen, blogspam, or clearly AI generated".
Ironically, that comment was added three months after I posted the article, when it was nowhere near the front page anymore, in a clearly automated and AI-driven review.
Still salty about it.
I know this snarky, I'm sorry ahead of time. But I don't know how else to make this point...
The fact that the people running r/progamming don't know not to wait until April 2 to publish this tells me that they don't have real-world experience in shipping software in a business environment.
We are SO past the point of software being developed without LLMs at _all_, the trend line is never going to reverse. I don't understand the people digging in as zero LLM absolutists.
I use LLMs yet I don't care to read about them or their usage at all. I can certainly see the reason why a place called "/r/programming" wouldn't want to have discussion about agent usage either, since it's not programming, it's a different activity.
Yeah I totally get the rule. I use LLMs when developing. In fact, I've been out of Claude tokens for the week since Wednesday, but I use Claude specifically for the boring, simple stuff I don't really want to do, but that Claude can. I'm simply not interested in discussing anything LLMs are able to do, it's not interesting.
It makes sense that a programming subreddit first and foremost discusses programming (the skill). We can go complain about Claude somewhere else if we want to.
Following up, anecdotally, people I talk to who are excited about LLM development usually either care more about product development, or don't have programming skill enough to see how bad the software is. Nothing wrong with either, but it can get tiresome.
> people I talk to who are excited about LLM development usually either care more about product development
This is an interesting thing I've also noticed in public hobbyist forums/discussion spaces where someone who is more interested in making a "product" clashes with people who are just there to talk about the activity itself. It's unfortunate that it happens but it will self-correct over time (like /r/programming here) and the LLM enthusiasts of Reddit will find another place to discuss ways of using them.
I think they just don't want every post to be about llm, vibe coding, harness and if claude is down.
Some sub reddits forbid memes, because else they get flooded and the good content drowns in it.
Some sub reddits only allow certain content of certain days to counter this.
What do you want to mods to do?
It may not be a, in denial, hiding their heads in the sand situation.
Sometimes a topic gets too popular, it drowns out all the other topics. At that point, aren't they just a glorified version of r/llm?
I'll give you one personal example:
The year Caitlin Clark was drafted to the wnba.
r/wnba went from a subreddit of 9000, to eventually 200k subs.
We were bombarded with CC posts every hour.
- Some of it was trolls staging a race war (this was during US elections).
- Some of it was genuine CC fans, who wanted to talk about CC.
- Some of it was bball nerds, who you know... wanted to talk about a bball player in a bball forum (regardless of who that bball player happens to be).
So what happened was, at any given day, 80% of the front page was CC content.
At that point, we might as well have been r/caitlinclark.
So the mods did something drastic and controversial. They banned all "low effort" CC content.
WTF does "low effort" mean? It pretty much meant 99% of CC posts got removed.
The forum went back to something that resembled a bball forum. That talked about other players. And other teams. Not just Caitlin Clark.
I have yet to run into any serious project in the wild that is using LLMs for development. I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
I'm sure your experience is different, but you can't _seriously_ claim we're "past the point" of not using LLMs for programming.
Vinecoding is a fundamentally different kind of activity than actual programming. It's a pure delusional dopamine rush, compared to the deliberate engineering required to build quality software.
For CRUD apps though, the intern closing the ticket literally 30 minutes after it's created is really hard to battle against. Especially when those tickets were created by suits.
I generally agree that while I think vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should uhh be careful about all this code we're pushing. I fear all they see is the guy aging out.
It’s juvenile to consider all LLM assisted coding as vibecoding. I’m not going to expand here because this topic is about as much fun to discuss as politics, but coding assistant tools are just tools.
If you give a regular person a race car, they will crash it about as fast as their vibecoded app crashes. Give the same race car to a pro age it’s a different story.
I still think this was the right decision by the programming mods there. Talking about tools is pretty boring, and you need to train to use something like an LLM assistant. No one who can’t program a language should be using an LLM to learn it unless they know about 2-3 other languages already, IMO.
Nah I think it really is more nuanced than that. It is true that a non-technical person's vibe-coded side-hustle is completely different than how a professional developer may ship genAI code, but we're willfully glossing over the real problem that professionals are pushing out TONS of genAI code that's closer to vibes than it is to the pre-AI expectations on pushing to prod.
Ok well I have plenty of serious, production-level professional experience that says otherwise. Not “vibe coding” - we certainly review the code. It’s a tool that has downsides and failure modes, of course, but it’s at the point where it’s definitely speeding us up and we are using it a lot. Trust me, I’d prefer a world, on balance, where this wasn’t true – I don’t like many of the aspects and uses of the technology – but its utility in programming is undeniable now and the capitalists aren’t taking “no” for an answer.
> I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
They weren't useless, they proved if the direction that the prototype was exploring was worthwhile. I've personally made many completely shit code prototypes in the years before we had LLM's, of course they weren't magically production ready, that's not the point of a prototype.
> I have yet to run into any serious project in the wild that is using LLMs for development.
How about Claude Code? 100% of it was vibe-coded according to its creator.[1] Google and Microsoft also claim a lot of their internal code is AI-generated now. [2] [3]
Naturally, none of the big tech companies will just release a pure vibe-coded project due to structural reasons, but you also _seriously_ can't claim that serious projects don't use LLMs as well these days. Maybe in your limited experience, it isn't true, but that doesn't generalize to what's actually happening.
1. https://www.reddit.com/r/Anthropic/comments/1pzi9hm/claude_c...
2. https://fortune.com/2024/10/30/googles-code-ai-sundar-pichai...
3. https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...
I hate AI video, I hate AI art, but if you are pretending that AI isn’t going be writing code for 99% of projects going forward you are absolutely kidding yourself.
AI video and art is going to be increasingly used in advertising, news/reporting, games, etc. Therefore, you aren't allowed to hate it or even complain about it. Right?
AI will be writing the code for shit-slop apps and libraries. The good ones will be written by humans.
> I don't understand the people digging in as zero LLM absolutists.
Relevant read: https://en.wikipedia.org/wiki/Luddite
I feel like it’s easy to understand what’s motivating these individuals to take that stance.
Definitely not the same. Luddites were fighting for humane working conditions; breaking machines was just a means to an end. They weren’t doing it because machines were the problem.
The anti-AI crowd, on the other hand, just doesn't like AI. A modern equivalent of a Luddite would be someone going on strike to protest firings.
You are being overly dismissive of a mindset you obviously don't understand. Of course being anti-AI is about decent living conditions for humans. Most of us don't believe in singularity or Matrix-style threats.
But current AI is actively destroying our breathable/livable planet by drawing unmatched quantities of resources (see also DRAM shortage, etc), all the while exploiting millions of non-union workers across the world (for classification/transcription/review), and all this for two goals:
1) try to replace human labor: problem is we know any extracted value (if at all) will benefit the bourgeoisie and will never be redistributed to the masses, because that's exactly what happened with the previous industrial revolutions (Asimov-style socialism is not exactly around the corner)
2) try to surveil everyone with cameras and microphones everywhere, and build armed (semi-)autonomous robots to guard our bourgeois masters and their data centers
There is nothing in this entire project that can be interpreted to benefit the workers. People opposing AI are just lucid about who that's benefiting, and in that sense the luddite comparison is very appropriate.
Good decision.
AI programming is fundamentally different from programming and as such the discussions merit to have separate forums.
If r/programming wants to be the one solely focusing on programming then power to them. Discussing both in combination also makes sense, but the value of reddit is having a subreddit for anything and “just programming” should be on the list.
> AI programming is fundamentally different from programming
It's really not. Maybe vibe coding, in its original definition (not looking at the generated code), is fundamentally different. But most people are not vibe coding outside of pet projects, at least not yet.
It very much is. It’s more like telling an intern what to do and then reviewing their code. Anyone can do it, and it results in (mostly) slop.
There can't be any interesting discussion about AI programming. Every conversation boils down to what skill files you use, or how Opus 4.6 compares to Codex, or how well you can manage 16 parallel agents.
Genuine question: how to distinguish yourself from the stream of slop?
I am also annoyed by the endless stream of articles and projects related to LLM-assisted coding. Not because I dislike LLM-assisted coding as an idea, but because it's all more of the same (as you said). I think that there are still a lot of low-hanging fruit in improving LLM harnesses that no one is working on because everyone seems to be chasing the latest trends ("agentic", "multiagentic", "skills") without thinking bigger.
But I'm afraid that if I finally invest time and implement some of my ideas on making LLM-assisted coding better (reliable, safer, easier for humans to interpret and understand generated code), I won't be able to gather any feedback. People will simply dismiss it as "yet another slop for creating more slop" and that's it.
What is the way out of this conundrum?
There genuinely is a lot of interesting discussion to be had about LLMs, and I know this is true because I discuss things with my coworkers daily and learn a lot. I do admit that conversation online about LLMs is frequently lacking. I think it's a bit like politics - everyone has an opinion about it, so unfortunately online discourse devolves to the lowest common denominator. Hey guys, have you noticed that if you use LLMs frequently it's possible you'll forget to think critically?
But "there can't be any interesting discussion about AI programming" is completely false.
My pet peeve with all LLM discourse: whenever someone mentions a problem they experience with LLMs, or a mistake the model makes, someone comments that humans make the same mistake.
And the difference is that a human will learn not to make that mistake again.
I disagree; you could reduce basically anything to this: "There can't be any interesting discussion about React. Every conversation boils down to which framework you use, or how you manage state, or whether you use TypeScript or JavaScript."
All of those are opinions about programming. Which framework, which language, etc.
Conversations about which model to use aren’t conversations about programming.
A better analogy would be some topic that you can’t discuss without it boiling down to which text editor you should use. It’s related to programming, a little. But it’s not programming.
That is exactly why I left reddit. r/javascript had almost completely abandoned JavaScript discussions for React and Angular while r/programming was half filled with irrational JavaScript fear nonsense.
You have not seen my recent WhatsApp chats. Me and a pal are talking about what we're doing with Claude code, and it's quite interesting!
Just like discussions about traditional programming never were only about syntax and type systems, AI discussions aren't only about prompts and harnesses. I find there's quite a bit of overlap actually! "How do you approach this problem?" Is a question that is valid in both discussions, for example.
This is far too negative and reductionist.
It's like saying there are no interesting discussions about programming: just whether OOP is overhyped, whether Python is slow, or how well you can convert a C codebase to Rust.
> or how well you can manage 16 parallel agents.
Claude does that for me. :)
That isn't why /r/programming banned it. They banned it because every discussion about LLMs inevitably devolves into discussions about AI slop in varying levels of civility, and the rare good LLM submissions/discussions do not offset it.
Other tech-adjacent subreddits such as /r/rust have banned LLM discussion for similar, more pragmatic reasons.
Seems a lot of commenters here dislike their decision; I like it, though. LLM-generated projects, articles, and blogs are low-effort products lacking authenticity.
And the discussion of LLMs themselves can, in the long run, be fairly tiring; follow r/LocalLLaMA for a while and you'll see what I mean. But if you are really into LLMs, that sub is great.
It is simply no fun to go to a subreddit and see 90% of it being projects and blogs that were obviously created using AI, with authentic content pushed to the side by the sheer volume of artificial work. r/Python was horrible at one point, but the mods have been stepping up their game.
There’s something off about Reddit. Either I grew up or it became hollow from within. Just angry people scolding each other all the time.
There are some true gems however but usually in smaller focused subreddits.
It was inevitable, given that it's one of the top seven most popular sites.
The reality is that the masses, the real world, the average person, is an asshole.
That doesn't show in the real world, because people learn to hide their assholeness at a very early age (or they learn what it's like to get punched in the face).
On an anonymous forum, you don't have to hide your assholeness.
Frankly, it's amazing the site never devolved into 4chan. I attribute that to all the people doing free labor: the mods.
You should try Lemmy. It feels a lot like Reddit did in like 2012. Small, but a great community.
Yeah, the smaller subreddits are good. The problem is it’s basically killed off alternative forums.
I never thought I’d miss vBulletin so much.
I think any platform becomes terrible over time once it hits a certain level of mass appeal. I loved Reddit and Quora in 2010.
They switched their best sorting algorithm to be engagement based rather than upvote based [1]. Upvotes are just one of many metrics, but heavy comment interaction is another. It incentivizes rage bait and performing for the crowd with every comment and post. They also switched into an almost purely moderator curated frontpage [2] rather than allowing users to vote.
1: https://www.reddit.com/r/blog/comments/o5tjcn/evolving_the_b...
2: https://news.ycombinator.com/item?id=36040282
One of the most magical things about HN is that heavy comment activity is punished as a negative signal, not rewarded as a positive one.
I've wondered the same thing, but you growing up definitely has to be a factor.
> Just angry people scolding each other all the time.
This really does describe it perfectly. I don't know about others, but focusing on my career pulled me out of a relatively low-income and dysfunctional environment. Reddit too often reminds me of people I used to know in real life.
It's been so many years since then, and finding and living a better life was so intertwined with my young adulthood that I almost convinced myself people like that don't exist in real life anymore. I thought the whole world had moved on, but search results nowadays prioritize Reddit enough that I'm routinely proven wrong.
Contrary to popular belief, I don't think most of the stuff on there is fake. Those people probably really are like that. Certain ways of thinking can become so normalized that they don't even see what there is to be ashamed about. What I sense the most on there is a lot of stress and the resulting irrational fears that pour out of people when they feel too much pressure. People under a seemingly endless and vague threat will go a little nuts and start to swat at anything that disturbs their worldview.
Reddit is still a step above other alternatives.
A good test for any community: try posting something that is factually incorrect but supports the agenda of the community. Does the community call it out? On Reddit, it does happen.
In my experience, that kind of thing might only get called out by moderators or the outliers who reply the most. They're the ones with the strongest interest in proving anything. Only then will the rest of the community dogpile. Otherwise, it goes ignored.
Reddit turned way more into an echo chamber over time. The moderators and the downvote system destroyed the site. The shift from free speech, libertarian and anarchist ideology into heavily left leaning definitely didn't help.
HN should also limit all these self-promoting AI posts.
Favourite genres of posts on HN in the past 2 years:
* “I am bullish about AI”
* “I am an AI skeptic, [long rambling], but overall, I am bullish about AI”
It’s amazing how even criticism of the technology somehow ends up being a hype post. At least there are still places on the Internet where we can have a serious discussion about the downsides.
As someone who recently wrote the latter kind of post (https://news.ycombinator.com/item?id=47183527), the more nuanced view that "AI has good and bad aspects" reflects the real world better than an absolute "AI is good" or "AI is bad", and at the least it's more conducive to civil discussion.
See dang’s comments on https://news.ycombinator.com/item?id=47340079 . (That link itself is a submission about HN’s recent guidelines changes to include “Don't post generated comments or AI-edited comments. HN is for conversation between humans.”)
I interpreted the GP as "personal blog posts about AI/LLMs", not LLM-generated comments.
dang’s comments in the link above address “Show HN” submissions. (That was my interpretation of “self-promoting AI posts”… :)
I've been hiding them all. Makes the front page look a lot better.
Good for them. Keep your projects human made by adopting a good policy. I use this one:
https://sciactive.com/human-contribution-policy/
That sounds absolutely amazing. I will reconsider creating a new account and using Reddit again after walking away about a decade ago.
I deleted my account a few years ago; I might actually create one now. It'll be preferable to HN if they stick with this new rule.
Maybe this was a genius move made precisely to be ambiguous on whether it was April Fools or not... so that the author can later read the room and clarify whether it was or was not April Fools, without much repercussion either way.
Nope:
> Timing just worked out this way. New month, ideal timing for testing a new rule.
Or so one says. (Not necessarily saying that it was a bad decision.)
Not a surprise; Reddit users are clueless.
If you enjoy comedy, you should check out the state of subreddits like /r/selfhosted or /r/homelab. I find them interesting because they sit at the edge between computer power users and software developers. They used to be nice communities.
Now it’s people sharing AI apps that look exactly like other AI apps that they have never heard of [1]
Projects rise and then implode hilariously within a month. [2]
An ebook management project grew over a year with a pretty conservative feature set, then in three months implemented every ebook feature under the sun, broke everything, and imploded. The funniest part is when the "AI slop" callout is itself AI-written and nobody notices. [3]
Like… amazing comedy. Then, after the owner deletes the repo, ten people role-play the hero who "has the code", because clicking Fork on GitHub is the sign of a true hacker.
[1] https://old.reddit.com/r/selfhosted/comments/1r9s2rn/musicgr...
[2] https://old.reddit.com/r/selfhosted/comments/1rckopd/huntarr...
[3] https://old.reddit.com/r/selfhosted/comments/1rs275q/psa_thi...
This is to be expected. There's a definite split in the engineering community between those who are embracing AI, and those who are rejecting it. It's now become political, like systemd and wayland.
Clankers outta here! Wish there was an HN toggle to enable hiding all LLM programming submissions.
A question for people here: what's a smallish tech community with a slightly more serious level of discourse than this subreddit?
https://lobste.rs/
Can y’all give me an invite
...Hacker News?
> We also believe that, generally, the community have been indicating that, by and large, they aren't interested in this content.
How can that be true? Reddit is vote-based. So if people weren't interested, they wouldn't vote it up and it wouldn't appear on the front page. Hacker News has no rule banning posts about Barbie and yet, amazingly, Barbie rarely makes it to the front page, because that's how upvotes work.
People clearly are interested enough to vote LLM related posts up, but a bunch of mods who don't like AI are upset enough to want to dictate what others can find interesting. Which is not unusual for Reddit.
Unlike Hacker News, Reddit's new Best algorithm often surfaces newly posted content (a good idea that helps mitigate the cold-start problem), but that means people who are subscribed to /r/programming will see posts about LLMs and typically downvote them.
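The cold-start mechanic described above can be sketched with a toy scoring function. To be clear, Reddit has not published its Best algorithm; the function name, the weights, and the decay curve below are all invented purely for illustration:

```python
import math

def toy_best_score(upvotes: int, comments: int, age_hours: float) -> float:
    """A toy 'engagement' score: blends votes, comment activity,
    and a freshness boost that decays as the post ages."""
    vote_term = math.log10(max(upvotes, 1))
    engagement_term = 0.5 * math.log10(max(comments, 1))  # comments boost rank too
    freshness = 2.0 / (1.0 + age_hours / 6.0)  # new posts get a cold-start boost
    return vote_term + engagement_term + freshness
```

Because the freshness term decays with age, a brand-new post can briefly outrank a much higher-voted older one in this toy model, which is exactly how subscribers end up seeing (and downvoting) fresh LLM posts before votes accumulate.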
From the user responses to the linked ban, said ban was a positive decision for that community.
I created an account and started reading this site primarily for programming news when r/programming took a precipitous dive in quality around 2020 or so. Before it was an example of one of the few good communities there, but it quickly became show and tell (ironically this was against its unenforced rules). And any real interesting posts had no discussion. But then I noticed the "Other Communities" tab would show posts from a HN posts sub that tracked posts here, and suddenly I was able to get great information. A post about CockroachDB that had 20 boorish comments complaining about its name over there would have the designer of it over here answering technical questions about its capabilities.
THAT SAID, I think this might be what gets me to go back to that place. I used to come here to read about new Python tooling, the latest database development news, interesting thinkpieces on development practices, etc. Now it's dominated by AI evangelism, "I'm Showing HN™ What I Used My Claude Tokens On :)", AI complaining, AI agent strategies, news about AI's impact on the industry, etc. There are some non-AI posts, but not as many good ones as there used to be, and a lot of the non-AI posts quickly turn out to be AI-written. Because they respect their time as a writer greatly and my time as a reader not at all. It's ClankerNews; the Hackers are in short supply.
As others have noticed in the thread, the timing is suspicious; it could be an April Fools' joke.
The original post was edited with "this is not April Fool's"
The takes on LLM programming on reddit are hilarious and borderline sad. It's way past the point of denial, now into delusions.
They truly believe LLMs are close to useless and won't improve. They believe it's all just a bubble that will pop and people will go back to coding character by character.
/r/horsecarriage bans all discussion of cars
/r/assembly bans all discussion of 4GL
LLM programming isn't going away just because you don't talk about it. It's time to move on, and eventually consider farming.
> /r/horsecarriage bans all discussion of cars
Makes sense. If I'm looking to read discussions about stables selection, feed prices, etc, why would discussions of spark plugs be relevant?
> /r/assembly bans all discussion of 4GL
Also makes sense; people wanting to discuss register allocation, bit twiddling, etc probably aren't interested in insurance claims taxonomies or similar.
> LLM programming isn't going away by not talking about it.
Right, but is the context still /r/programming? After all, there are tons of subreddits you can go to to discuss LLM programming. Why do you need to shove it into a space created for human thoughts on programming?
> It's time to move on, and eventually considering farming.
Okay, understood, but my question still stands: why conflate programming with vibe-coding?
/r/horsecarriages banning discussion of cars makes sense though. It's not a horse carriage. If you want to discuss cars, go to /r/cars.
It's not about wishing it goes away, it's that people don't want to see JavaScript/Java/Swift blog articles when they visit r/assembly.
OK, I see your point: the problem is more about being off-topic than about LLM programming itself. And that's fair; we are strict people, after all.
More like /r/cars bans all discussion of electric cars.
> Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.
If only, just this once, it were true. Sigh.
Sweet, so the LLMs can interact on topics that aren't about LLMs.
People still use Reddit?
I wouldn't call them 'people'.
What do you recommend instead? Reddit is like reading YouTube comments nowadays, I miss when discussions were literate and informed.
Ignorance isn't bliss. They've never yet had a year-over-year downturn in their user base.
But hasn't it gone down in quality with broader mainstream appeal, more AI slop, and just general self-promotion? I feel like a lot of niche communities have also lost their core or original user bases, which are not as active any more. Or it could just be me? For example, off the top of my head without digging too deep, r/juststart used to be very high-signal and strongly moderated, but now not so much. On the other hand, I did discover r/laundry recently, with some awesome content around "spa day", but again that's mainly one user responsible. I guess another big gripe is having to use the Reddit mobile app after they closed their APIs and shut down third-party apps, because now I can't browse; it's more feed-like. Sorry for the ramble; not sure what my point is, but hoping others can share their experiences and any advice too, I guess.
You think this place, which people in my circles infamously refer to as the "orange site", is considered a bastion of good conversation among people who don't frequent it?
Not being able to discuss the biggest change to our job in living memory is such a reddit thing to do, just sticking their heads in the sand.
Reddit is doomed anyway. People are using AI to start threads, and other people are using AI to comment on these threads. You can never know what you're interacting with.
Do you think that this is not happening here?
Worse, I am repeatedly being accused nowadays of being an LLM. It probably doesn’t help that I riff-write with only a rough outline of what I want to say, not how to say it.
If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
> If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
This sort of tells me that you are pro-LLM, and most pro-LLM people just paste the contents of their ChatGPT output and try to pass it off as their own.
Given that you say you aren't, the most likely explanation might be that you are spending a lot of time reading LLM prose, and are starting to write like it now too.
Got any proof?
Check a larger thread. It is pretty clear since there are people doing nothing to hide the writing style.
You are absoawesomeamazingaffirmativeabundantauthenticabsolutely right!
> Check a larger thread. It is pretty clear
It tends to get downvoted and flagged.
If you email comment links to the mods that you believe are AI-assisted, they’ll review and act on that. Footer contact link. It’s not hopeless.