> This post, is written without any tools assistance I just wrote what my brain is instructing to type (might not reread it before posting).
How is the author complaining about the quality of their own writing while admitting to not even bothering reading what they wrote, let alone editing it?
(Also, why would using a LLM based grammar checker trigger an AI writing detector? Did it end up rewriting substantial parts of the original submission?)
Because they're self-aware perfectionists actively working against it: they reach for all kinds of tools like grammar checkers and AI, but they're aware that using those will make the post lose "their" voice, the human element of the post.
And that's, I think, a valid choice; you can use all the tools and make something grammatically and stylistically close to perfect, but who would want to read something that dry? That's for formal writing, and blog posts are not formal.
Reading what you write for editing does not make a text lose your voice. If anything, it amplifies it: you get to ensure that what you intended to say was said.
Not reading what you write smells more like laziness.
Same thing for spell checks, grammar checks, and even AI usage. If you use things lazily, the result will be lazy as well.
Instead of asking an AI tool to write your thoughts for you, you can write them yourself and ask it to critique your text: instruct it not to rewrite anything, only to give you an overall picture of clarity, sentiment, etc.
But that, of course, would require more work. Asking ChatGPT to produce a text based on a lazily written, bullet-point list of brainfarts is probably easier.
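A minimal sketch of that critique-only workflow, for the curious. The prompt is the whole trick; you would pass it to whatever chat API you use (not shown here, since that part is tool-specific). Everything below is an illustrative assumption, not a prescribed prompt:

```python
# Sketch of the "critique, don't rewrite" workflow described above.
# The wording of the prompt is a made-up example.
def build_critique_prompt(draft: str) -> str:
    return (
        "Critique the text below, but do not rewrite any of it. "
        "Give only an overall picture of clarity, structure, and sentiment, "
        "and point out passages a reader might find confusing.\n\n"
        "---\n" + draft
    )

prompt = build_critique_prompt("My draft blog post goes here.")
print(prompt.splitlines()[0][:8])  # prints "Critique"
```

The point is simply that the instruction "do not rewrite" goes into the prompt itself, so the model is steered toward feedback rather than replacement text.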
If you use a grammar checker as a grammar checker, it won't make you lose your voice. It will make you use correct grammar.
> you can choose to use all the tools and make something gramatically and stylistically as close to perfect, but who would want to read something as dry
If it is dry, then it is not stylistically perfect. By definition, dry writing is simply imperfect writing. Stylistically perfect writing does not have to be dry, and usually is not.
What happens here is that people say "stylistically perfect" when they mean "followed bad stylistic advice".
I see both sides here. Wanting to preserve your natural voice is valid, but editing and using tools don't necessarily take that away. In fact, they can help make your intended message clearer. It probably comes down to how much control you keep over the final result rather than whether you use tools at all.
What annoys me here is that people say "I use AI as a style checker to make my writing better", or claim that good writing is unfairly judged as AI-written ... and then proceed to describe the inferior writing results they achieved with AI. Nothing the author wrote there signals that the way he uses AI made his writing better. His use of AI made his output inferior. And not just in the "losing your own voice" way, but literally in that the final text is less effective writing.
I don't mean this comment as a kick against AI. It is very good for some things and less good for others. What annoys me is someone calling output superior while actually complaining about it being inferior.
Hey, maybe that LLM needs to be used differently to achieve actually good writing results.
There is no reliable way to detect AI writing. A detector is presumably trained on texts known to be AI-generated and texts known to be human-written, and then classifies new text according to that training.
The problem is that this has a pretty high false-positive rate. Maybe it thinks a text is AI because there are absolutely no spelling mistakes. Or maybe you're French and use Latin-root words in English that are considered "too smart" for the average writer.
And the other problem is that people run those tools, see "80% chance of being written by AI", and instead of treating the remaining 20% as enough reason to say "I don't know", assume it's definitely written by AI.
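That "80% is not proof" point can be made concrete with a toy base-rate calculation. All numbers below are made-up assumptions for illustration, not measurements of any real detector:

```python
# Toy base-rate calculation: why a detector "hit" is weaker evidence
# than it looks. Every rate below is an assumed, illustrative number.
p_ai = 0.20                # assumed share of submissions actually AI-written
p_flag_given_ai = 0.95     # assumed detector sensitivity
p_flag_given_human = 0.10  # assumed false-positive rate on human text

# Bayes' rule: probability a flagged text is really AI-written.
p_flag = p_flag_given_ai * p_ai + p_flag_given_human * (1 - p_ai)
p_ai_given_flag = p_flag_given_ai * p_ai / p_flag

print(round(p_ai_given_flag, 2))  # prints 0.7
```

Even with a detector this good, under these assumptions roughly 3 in 10 flagged texts are human-written, which is exactly the "you don't actually know" situation the comment describes.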
> Also, why would using a LLM based grammar checker trigger an AI writing detector? Did it end up rewriting substantial parts of the original submission?)
Grammarly has seriously started rewriting whole paragraphs recently. I've been having to reject more and more of these "prompts", where in the past I would accept suggestions almost by default because they actually were grammar checks.
What makes you think that? I presume that's just the author's (sarcastic) way of saying "beware: may contain typos and grammatical errors".
There are a bunch of typos in there which jar a bit ('deterioted'), but I guess that makes sense for this specific article.
Personally, I would recommend they simply use any old editor with spellchecking enabled. That suffices for most writing where you just want to keep your own voice. To me, the red squiggly line just means I should edit that word myself. In the rare case where I'm stumped on the spelling I'll look at the suggested edit, of course, but never as a matter of course.
The overarching issue here is that the complaint about AI slop is part of a bigger problem that has been plaguing America in particular for many years, of which the AI slop era is only the current peak. The quality of American writing has clearly been in precipitous decline for a very long time, predating AI slop and even spell checkers and computers.
Computers, digital text, and digital distribution have made writing, and thoughts, cheap. And as we are surely all aware, humans rarely value what is cheap, whether in money or in effort and consequent quality. What people seem reluctant, or maybe unable, to acknowledge is that predating the current AI slop was what could be called human slop: low-quality, low-effort, careless output that was cheap, regardless of whether AI slop now outperforms it.
That is why you are justified in pointing out that even in a post complaining about AI slop, the human has apparently abandoned what would have been common practice in the recent past: using basic spellcheckers, or simply reviewing what was written, and deliberately practicing the art and skill of writing, grammar, and sentence structure.
No one is perfect, and that inexplicable, random variation is part of what makes anything human. However, it takes a certain refinement before unique human character becomes a positive quality rather than just humans being sloppy ... human slop.
Are grammatical errors and typos fashionable now? Reading this post, it seems the antithesis in the LLM era is not to edit at all, but rather to write down a stream of consciousness to make it "personal".
I definitely think it is. It will be glorious. We will focus more on content than on mere aesthetics as people try to signal that they are not LLMs.
Oh no, I have had enough of people with quirky (i.e. cringey) writing on the internet. It started with those who refused to use their shift key and it's quickly devolving into something that makes you shiver when you read it. (Not to mention how easy it is to use a system prompt to make an AI write in whatever style you like.)
Maybe it is.
Just like hand made items are popular for their imperfections.
And because hands can still make things that machines cannot.
eg: https://ids.si.edu/ids/deliveryService?id=SAAM-2011.6_1
from: https://americanart.si.edu/artwork/mandara-79001 https://www.museumofglass.org/ltlg
I mean, yes? I am more likely to read and trust something that is not written or co-written by AI.
I want real humans giving real human opinions, not an AI giving its best guess at the most "rewarding" weighted opinion.
I want to emphasize a thought you expressed:
> "..but maybe it's a good thing that most of us don't allow this technology to reframe our thoughts."
No, you're not the only one experiencing this. I had the same concerns: with every new thought, every new creation, I had to ask the AI's opinion, as if I were no longer able to judge or decide without consulting the AI (...just to be safe, you never know...).
The only way to regain your creative ability is to write down your thoughts yourself, read, reread, rewrite, correct, express your opinion...
What AI can't do is convey emotions.
A friend described it as "there's no blank page any more".
Depending on how literally the "the brain is a muscle" saying applies, there is no way that using LLMs/chatbot systems/AI won't deteriorate your brain immensely.
This is exactly the same struggle for me. Writing technical content about PostgreSQL while keeping my own voice, without sounding LLM-written, is genuinely difficult.
As English is not my first language, I run into the problem that the line between "fix my clumsy sentence" and "rewrite my thought" is very thin. Same with writing "boring" technical explanations versus more approachable content. I'm getting pushback for both.
In some specific work contexts, such as writing pull request descriptions, I've given up on trying not to sound like AI. It's simply not worth the effort: I'm not a native speaker, writing detailed PR descriptions is arduous, and the agent already has full context anyway. Obviously any fluff or inaccuracies are aggressively weeded out, but I don't care about the AI voice anymore.
Don't want to sound like an llm? Don't read llm content. Remove yourself from places where you might be liable to read it.
If you strictly read printed books and were never exposed to online content, you'd think the em dash is a signal of human writing.
No, you wouldn't think that. The thought of something not being human-written didn't even occur to anyone before decent LLMs came around.
It's not that simple. LLMs were trained on lots of writing, and the "LLM voice" resembles in many ways good English prose, or at least an effective public-communications voice.
For years, even before LLMs, there have been trends of varying popularity to, for lack of a better word, regress: intentionally omitting capitalization, punctuation, or other important details that convey meaning. I rejected those, and likewise I reject the call to omit the em dash or otherwise alter my own manner of speaking, a manner cultivated through 30+ years of reading and writing English text.
If content is intellectually lacking, call that out, but I am absolutely sick of people calling out writing because they "think it's LLM-written". I'm sick of review tools giving false positives and calling students' work "AI written" because they used eloquent words instead of Up Goer Five[0] vocabulary.
I am just as afraid of a society where we all dumb ourselves down to not appear as machines as I am of one where machine-generated spam overtakes all human messaging.
[0] https://xkcd.com/1133/
Well, that isn't what I am suggesting. I'm suggesting people ditch X and Reddit, and probably ditch HN in the next couple of months. If someone can run a headless agent to post somewhere, just don't bother visiting that site; honestly, that's a great rule of thumb right there.
That should leave you with media sources like the NYT and your local library, which seems healthier to me. And maybe it might encourage a new type of forum to emerge with some decentralized vetting that you are a human, like verifying yourself by entering the random hash posted outside the local makerspace.
On HN or Reddit you can occasionally read genuine opinions from real people. In a newspaper, 100% of the text is trying to manipulate you.
> like nyt
I hope editorial departments everywhere are taking careful notes on the Ars Technica fiasco. I agree there's room for some kind of quick "verified human" checkmark. It would at least give readers the ability to filter quickly, and eliminate all the spurious "this sounds like vibeslop" accusations.
The bad part is that people may start writing a bit worse on purpose, just so they don't get read as AI.
> "LLM voice" resembles in many ways good English prose, or at least effective public communications voice.
It does not resemble that. It is usually grammatically correct, but it is also pretty ineffective: bad writing with good grammar.
Not joking: buy and read books. Old books were written only by people (with the help of an editor).
Fun fact: Editors are usually also people. Except for that one dog I met during a cold winter's day in 1987 in a run-down London pub.
On the internet, no one knows you're an editor
I think AI will accelerate an already existing trend that predates AI: the global regression to the mean we're seeing in every creative field, from design to video games, from cars to fashion.
If you outsource your thinking and skills, your ability to do either atrophies. You'll become dependent on outsourcing for both.
You're trading ability and competence for convenience.
I never use an LLM to paraphrase my own voice as a matter of principle, but I’ve still been repeatedly accused of doing so because I happen to always have written structured posts, used “smart quotes,” and done that negative comparison thing (it’s genuinely not just fluff, it’s a genuinely useful way to— ah god damn it). Sigh.
Right. The LLMs' quirks aren't bad in themselves; they're bad when they're in every damn paragraph. They're mostly things that, in moderation, actually improve writing, and that if you saw once (without the knowledge that they're things LLMs do) would rightly tend to make you think better of the author. And so, of course, in RLHF training they get rewarded, and unfortunately it's not so easy for an LLM to learn "it's good to do this thing a bit but not too much."
The structured thing you mention is the one that bugs me most. I genuinely think that most human writing would be improved by having more of the "signposts" that LLMs overuse. Headings, context-setting sentences, bullet points where appropriate, etc. I was doing "list of bullet points with boldfaced intro for each one" before the LLMs were. But because the LLMs are saturating their writing with it, we'll all learn to take it as a sign of glib superficiality and inauthenticity, and typical good human writing will start avoiding everything of that kind, and therefore get that little bit harder to read. Alas.
I feel ya. I've never been accused of using an LLM, fortunately, but depending on the context I do use “smart quotes” (even in „Dutch” or »German«) and the em dash, obviously… (And that ellipsis fella there. It's just so simple to type with a compose key set up.)
Same here, I've always used em dashes and have been called out on negative comparisons – I didn't even know they were an LLM thing. Should I read more LLM to know what phraseology to avoid, or will doing that nudge me towards sounding more LLM? :-(
Are there any good writing LLMs out there?
I get that the mainstream ones have been RLHF'd to death, but surely there must be others that are capable?
https://hemingwayapp.com/ gives you advice about your writing.
It is called Hemingway because he was famously good at communicating efficiently, which made him a popular author.
What happens if you take the output of a mainstream LLM and send it through this app? Would that solve the issue of the original article?
I have been writing for a long time; my first internet experience was posting on forums about a Game Boy Advance game. Then on other forums, for a philosophy degree, and professionally as a copywriter and technical writer. I’ve been meaning to write up a post of my thoughts on writing and AI, but the things I’ve been thinking recently are:
1. There was a lot of slop pre-AI. In fact I’d say the majority of published writing was bad, formulaic, and just written to manipulate your emotions. So in some sense, I don’t really think pre-AI slop had more value. It’s just cheaper to make now.
2. AI has prompted me to study more off-beat writers who followed the rules of language a little less strictly. This includes a lot of people from circa 1890-1970, when experimenting with form was really in vogue.
3. Which brings me to my third point, which is that no matter how much the AI actually knows about writing, the person prompting it is limited by their own education and knowledge of writers. You can’t say, “make me a post in the style of Burroughs” if you don’t know who Burroughs was, or what his writing style was. So in a sense there is an increased importance to being educated about writing itself. Without it you’re limited in your ability to use AIs to write stuff and in your awareness of how much your non-AI written work is influenced by AI writing.
It’s largely a problem of how these tools are packaged, but while it’s certainly nice to have an LLM check your spelling, or review your grammar or style or usage, you should never allow them to actually edit your document directly.
First of all, they will make substantive changes you didn’t intend. The meaning will get changed, errors will be introduced. Tone will be off, and as the author says, your voice will disappear. There is no single “correct” way to write something. And voice and tone are conveyed with grammatical and usage variation. Don’t give that up to a robotic average.
Secondly, you will never improve, or even maintain, your own writing skills if you don’t actively engage with the suggested changes. You also won’t fully realize half the purpose of writing, which is to understand the topic better yourself. Doing the work of editing your piece will help you understand the subject even better. If you just let the machine “fix” your errors, you’ll become a worse writer and less of an expert over time.
Yeah, now it's "Here's what nobody else talks about" and "Here's the kicker" all day long.
There is no grandiose "AI era". Or rather, it already started back in the 1950s.
What it is going to be is a "Slop Decade", a much better label if you insist on having one.
The slop decade will be a slop "rest of humanity." There's no going back from this.
I think some spaces will try to retain their value by actively combating LLMs, in the same way they combat hackers and trolls, and if they don't, they'll naturally die.
Several subreddits became AI slop submission repositories and their human engagement dwindled. Some subreddits that were inundated with AI slop implemented policies that ban it, and it seems to work well.
Strict no-slop policies work, and surprisingly, so do rules that require AI submissions to be tagged as AI. Forcing slop slingers to tag their slop does a good job of discouraging it; it turns out that admitting your slop is slop is embarrassing or something.
No technology ever became obsolete?
Oh well, when the most powerful people on the planet manage to enshittify it enough, we'll be freed from AI...
Or maybe there'll be the elite enjoying the world, while the rest of us have to work manual labor. But at least it'll be AI systems ensuring our compliance!