I tried writing a short novel using Claude Opus 4.6. I gave it an outline and a raw draft, and the style it produced is very similar to this writing.
I tried to steer it away from this kind of writing because it feels weird, but it always tries to output something similar to this. Or maybe I am just not used to reading novels.
So I was curious: what kind of training data was Claude trained on, that it's so hard to steer it away from this style?
So I opened my Kindle and looked through the recommended popular novels, just reading through their free samples.
And the similarities are striking. Now, I don't know whether the recommended novels are in the training data, or whether they were actually written by an LLM. Or maybe that's just how novelists write.
I even tried writing a full chapter from scratch and asked Claude to ghost-write the second chapter for me in my writing style. It still won't follow my style and keeps writing in the kind of style from the article.
I'm not accusing the article of using an LLM to ghost-write, and even so, it's fine to use an LLM that way. It's just one anecdote from my side on how an LLM fails to follow my writing style and keeps coming back to its training data.
Don't take this as a defense of LLMs, because it absolutely isn't, but:
>Or maybe I am just not used to reading novels.
If you're not even used to reading novels, how can you judge the results of writing one? That is one hell of a confession for someone who's trying to write fiction.
> That is one hell of a confession for someone who's trying to write fiction.
Indeed. A significant part of gaining skill in creative writing is learning to 'read as a writer': examining classic texts to understand how to develop scenes, characters, narrative styles, and so on.
An important part of writing is also to write as the reader, eschewing meaningless fluff and sentences that use bombastic emotional language without really communicating.
The latter is prevalent in LLM writing. Imitating "poetry" without the feelings is something that the default, "aligned" chat models with reinforcement all do in one way or another. It's hard to get even a technical essay without empty emotional language.
And I'm only speaking for myself: I like reading novels, but it's perfectly possible to have a slop-meter without doing so.
My own signal-to-noise ratio in writing is also often bad, but with today's "frontier" LLM output I feel there's a specific tendency towards this harmless, empty, flowery language, full of false dichotomies and rhetorical devices devoid of any purpose to communicate.
A model trained and fine-tuned to generate divisive Reddit threads sure has different tendencies.
But for the friendly assistants, there's often this solipsism and pseudo-poetic aspect.
I agree it's hard to get it to output things in different styles... I started doing a side project for writing with LLMs (ailivrum.com -- my main focus being doing some writing/reading for my younger daughter right now, although I'm structuring for others to use it too).
So far, what I've found is that prompt engineering does not yield great results. LLMs just go with their own style regardless, and I haven't had much luck changing it... they can produce some interesting stories, but it's far from "outline + prompt > generated story" producing something readable (on the bright side, there are many LLMs, so testing a different provider may give better results).
> And the similarities are striking. Now, I don't know whether the recommended novels are in the training data, or whether they were actually written by an LLM. Or maybe that's just how novelists write.
For traditionally published works, it's trivial to exclude LLM-written content: just look for anything published before Nov 30, 2022.
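The cutoff heuristic is easy to mechanize. A minimal Python sketch (the catalog entries and the `(title, publication_date)` tuple shape are invented for illustration; any real book database would look different):

```python
from datetime import date

# ChatGPT's public launch, used here as the cutoff.
CHATGPT_LAUNCH = date(2022, 11, 30)

def pre_llm_books(books):
    """Keep only (title, publication_date) pairs published before the launch."""
    return [(title, pub) for title, pub in books if pub < CHATGPT_LAUNCH]

# Invented example catalog.
catalog = [
    ("Klara and the Sun", date(2021, 3, 2)),
    ("Some 2024 Release", date(2024, 6, 1)),
]

print(pre_llm_books(catalog))  # only the 2021 title survives
```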
I think we are discussing the wrong problem here. I have no solution to offer, but I think the problem is not so much generated content, but the surroundings in which it can thrive and become the content you see everywhere.
If we hadn't removed the gatekeepers everywhere (and I know there are problems with them, too), then all that technology would not be able to do much harm.
It might also have to do with incentives. The incentives in our economy are not to help and advance society, the invisible hand notwithstanding.
Why stop at traditionally published works? Before dead-internet-day, very nearly all forms of writing were guaranteed to be hand-crafted, organic, and made with 100% Natural Intelligence.
The artificial stuff often has an odd taste, but boy it sure is quick and convenient.
You joke, but I bet every person in this forum, when presented the choice between a bot-filled forum and a guaranteed human-only* forum, they'd go with the latter.
* this is a hypothetical scenario. I don't know any guaranteed human-only digital forums.
I converse with LLMs enough for research at this point that I feel I have a good enough workflow for hopping between them and primary sources, so I don't get annoyed with them too easily.
Whereas I haven't seriously reflected on my social media consumption habits in over 15 years, and over the years I'm getting more and more annoyed at social media.
Not to be misanthropic, but there's something seriously wrong with my social media usage, especially when I know there's a real human on the other side, combined with ever-increasing annoyance towards commenters and just the feelings I get after reading social media.
It may be dopamine- or self-help-related, but no, actually, I think all of that is part of the issue (I discovered that in high school, when it was taking off). Something about the way I'm fundamentally interacting with the medium seems more horrible and icky the more I mature.
Niche hobbyist forums are still safe, for now. There's just not enough commercial interest in petroleum lantern restoration to make it worth anyone's time to poison this particular well.
Even some larger niche hobbies like the saltwater aquarium community seem pretty safe for now (though it also helps that many forums have members who visit each other to trade corals and admire each other's tanks).
On the contrary! The dead-day theorem established earlier states that an 11/22 date filter is a necessary condition for verifiable human-only content, when filtered by content-creation date.
A weaker theorem can be postulated: any such filter provides a second-order sufficient condition.
This means we can filter content by account creation date, for example, by hiding all posts and comments from accounts created after the digital death event. This won’t always guarantee human-only content but certainly more than otherwise.
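That rule is simple enough to sketch in Python (the cutoff date and the dict shape of a post are assumptions for illustration, not any real forum's API):

```python
from datetime import date

# Hypothetical "digital death event": ChatGPT's public launch.
CUTOFF = date(2022, 11, 30)

def visible_posts(posts):
    """Hide posts from accounts created after the cutoff date."""
    return [p for p in posts if p["account_created"] < CUTOFF]

# Invented example posts.
posts = [
    {"account_created": date(2019, 5, 1), "text": "pre-LLM account"},
    {"account_created": date(2024, 1, 15), "text": "post-cutoff account"},
]

print([p["text"] for p in visible_posts(posts)])  # ['pre-LLM account']
```

As noted, this filters only by account age, so it can't catch an older account posting generated text; it just improves the odds.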
But then we wouldn’t be having this most definitively human-to-human conversation, right?
I actually loved this, and felt moved. While reading, my mind fired rapidly through dozens of personal memes (i.e. tags for my regularly trod thought-paths) that I keep in my knowledge-base. This is the 30mb text corpus where I log all my work and peer conversations and thoughts, and (amongst other things) where I think through what I would consider my spiritual practices... my sensemaking around complex systems, including Daoist teachings. This text basically entangled itself with the work I am doing at the outer edges of my own knowing, where I am working on my rawest and most fragile but precious thoughts.
I don't think this is trite; I think there is something in this that is in contact with "living structure" (in the Christopher Alexander sense[1]), and much exists outside the edges of the text.
To those who dislike this, I am genuinely curious: Would you say you dislike metaphor? Do you tend to feel disconnected and lacking resonance with poetic writing?
[1]: https://dorian.substack.com/p/at-any-given-moment-in-a-proce...
EDIT: I experience this writing as giving me many quiet A's, or perhaps a smell of A's in a given direction of thought. I interpret others here as getting either B's or U's, in the sense of this A/B/U system: https://openresearchinstitute.org/onboarding/A_B_U.html
Agreed, I found this essay engaging and emotional. While I can see why the unusual style might not be someone's cup of tea, I don't agree at all with the criticism that this is bad writing. It had a Haruki Murakami ethereal feel that I am quite fond of.
Regarding other comments in this thread, the moral panic over AI writing has mostly passed me by. While I certainly have a philosophical preference for things written by an actual human, I don't care to invest the bandwidth in analyzing every single thing I read for hints of LLM patterns. If I like it, I'll keep reading. If not, I won't. Sometimes discontinuing a piece of writing also aligns with obvious AI use, but that is generally a secondary issue.
Do you mind me asking what type of system you use for keeping these notes, the 30mb text corpus with conversations and journaling? Are you using plain txt, or an app like Logseq? I flip-flop between apps for this sort of thing, and then, annoyingly, the building of a "system" sucks up my time rather than writing and logging and reflecting. It's a struggle for me; any advice would be much appreciated :)
My humanist degree and all those years reading B's and D's of French philosophy come in extremely useful in strange places. Having had to write long essays sieving through mounds of seemingly near-impenetrable (and actually surprisingly banal, once you learn how to read it) prose of post-structuralist philosophy, I learned to automatically look for the structure of a text first by skimming, starting from the end, creating a mental map of the text so that you can locate the main argument and the amino acids amongst the boilerplate and stock sentences.
Today it saves me time skimming a text, seeking out the main sentences by jumping around and quickly coming to the understanding of "Oh, hi ChatGPT". In the past it has saved me a lot of time not being tricked into reading SEO gurgle, ad copy, and just generally bad writing. If writing is really just editing, reading is mostly filtering, sieving the cereal from the chaff.
I try not to be too critical of things by strangers on the internet, but I feel compelled to say that, as a big fan of the type of fiction this seems to be going for - a kind of Franzen/DeLillo/Ishiguro mystical non-momentousness - I found this really dull.
While reading, which I did before reading the comments here, I wondered too if it was LLM-written or assisted. It has the hallmarks of an LLM: the "this but not this"; the proclivity to be profound. In this case, as if it had been told "I want to make something meaningful out of these events, but it shouldn't mean anything."
There is no about page. No links to other socials. The entire site could be LLM-generated for all we can tell.
If anything at all, this "essay" and site serve as a reminder that there is an uncanny valley to LLM writing, and that real, authentic human communication will likely become rarer and more valuable as this slop proliferates.
edit: from the OP's profile it looks like this is probably a well-meaning person interested in post-structuralism and meditation, but likely using an LLM to achieve that goal. Maybe they wrote in Japanese and are translating to English? Also, I kind of like coming across stuff like this on HN, but I feel it should still be adjacent, or at least peripheral, to the topics we normally discuss.
Pangram says this is 61% AI-generated. I'm thinking of paying for a Pangram subscription so I can spend the valuable time I have to read for pleasure on human-created pieces rather than on AI slop.
Interesting. I just ran this on some stuff I have personally handwritten, but the writing is about LLMs, and it says 25% AI-generated, apparently because I used terms that are "x% more likely" to appear in AI-generated text.
This is not a very useful test; it basically means a person has to "ban" terms like "user experience" and "tireless" etc. because these are "Nx more likely to appear in generated content".
Most amazing art isn't really a product of inspiration, but of severe editing (or severe practice, if it's live).
Good writing needs a lot of "post-production" to get the ideas hammered out. Most of it is removing content that isn't central to what the writer wants.
This LLM trend is part of a larger historical pattern that shifts the work of editing away from having to think things through in our own heads:
A. At one time, editing was mental load, since writing anything down was tedious.
B. The typewriter made writing easy, but modifying text required lots of handwritten scrawling; the mental load was still in reviewing and rewriting the content.
C. By the end of the 20th century, editing and rewriting were a total breeze, but the mental load was still in handwritten note-taking.
D. Once we made a bazillion forms of productivity and note-taking software, the mental load was only in thinking the thought and getting it into a computer. Everything after that was massaging the idea.
E. Now, the regurgitation machine can get you 3/4 of the way to the finish line of your draft without even trying.
But I'm convinced we lost something in each of these transitions. There is more power in one well-placed sentence assembled over tremendous meditation than in 85 paragraphs of slop.
Paul Graham's essay on good writing (https://paulgraham.com/goodwriting.html) defines "right" written ideas as "developing them well — drawing the conclusions that matter most, and exploring each one to the right level of detail".
My opinion is that the absurd complexities of the Over-Information Age make the "right" level of detail the following:
1. Executive summary that children and dumb people can understand.
2. Tightly-defined specifications for everyone who cares or needs to know.
3. Footnotes and background information that you can throw everything and the kitchen sink onto. This includes attempts to persuade, artful descriptions, feelings you had, associations to other things, and that general elegant "waxing on" that everyone gets the fancy for doing sometimes.
And, in this attitude, LLMs are only good for #3.
I'm somewhat amazed this got upvoted to the frontpage... I guess the title got upvoted, because it's really terrible writing, either LLM-generated or someone trying their hardest to sound "deep".
There are so many technical and stylistic issues with this, I would say it's either someone learning to write and trying too hard, or again, using an LLM.
1) Why is the whole thing written in the second person perspective? Is it disguised autobiography? It seems to be a cheap way of trying to claim intimacy with the reader, "here look, this was your experience". It ends up sounding like someone narrating your life at you while actually (secretly) talking about themselves.
2) While mostly written in the second person, there are several jarring switches back into first person. Unsure if this is a mistake or intentional; either way, it just sounds bad.
3) The tenses are bouncing all over the place in this writing. We have present tense: "You take the train", past tense: "You took it", future tense for a kind of prophetic vibe: "A decade from now, you will not know", and finally a sort of timeless present proverbial tense: "The text is the same. The reader is not. This is what the contemplative traditions mean when they talk about the spiral — the return to the same point, but at a different elevation."
The effect of all the tense switching and weirdness just makes it hard for the reader to feel grounded in any of the scenes.
4) Rhetorical negation. The writer loves this pattern of describing things by what they are not. Examples:
"It was not silly. It was not even reverent. It was just a thing."
"Not with a call. Not with a vision. Not with a voice in the night."
"Not a metaphor for one. Not like a pilgrimage. A pilgrimage."
It can be a nice effect if you use it once... but repeated throughout a short piece, it makes you think the writer is constantly arguing with an imagined critic. "It's not this, it's this!"
5) Performative plainness, e.g. "You walked. You ate. You slept." "You stood for a minute. You looked at the statue." There's a lot of these kinds of fragments, and they feel strange because they're written in an active voice by describing a "character" who never makes any choices, i.e. entirely passive. It's like the author is trying to ape Hemingway's style in these moments but missing the characters and the story which go along with this spartan active voice.
Taken altogether, it feels like the author is trying to sound profound, with enormous effort, using every trick in the writer's toolbox, which ends up sounding confusing and creating distance between the author and the reader; the "gap" is obvious, the engineering is visible. It's as if the writer says "You were not trying to be moved." while the reader feels the author has tried desperately hard, writing 3000 words, to move you.
Related, although just tangentially: https://www.astralcodexten.com/p/the-claude-bliss-attractor
And, regardless of the generation aspect:
An essay that starts with
> On bronze pirates, cloudy days, and the roads we do not know we are walking
just sounds pretentious to me and doesn't spark my interest.
Pre-2022 dates are also a good filter for web searches, to exclude a lot of garbage results (when the specific search doesn't need anything recent).
Except many search engines have a recency bias.
Recency was a sane default previously, as news and the status quo change, but it makes you even more likely to encounter slop now.
Not sure how that changes the fact that you can filter by date range in searches where you don't actually need anything recent?
Don't you remember the endless SEO spam that swamped the Net even before GPT, allegedly written by real humans?
Is the ChatGPT launch the "low background steel" date for writing?
What are the dates for images and video? Nano Banana Pro and Seedance 2.0?
And code? Opus 4.6?
It's not the launch of GPT, but probably about 4 or 4o that it really became solid. I also don't think video is there just yet, at least for video over 10 seconds.
Is it "solid" if people can read it and instantly know it's generated content?
No. But you can easily make and post content that is not easily detectable as generated.
You only notice plastic surgery when it's bad, but that doesn't mean all plastic surgery looks bad...
Who's "people"? The bottom X% (40%?) of the population is already falling for AI slop video scams, but before that, they were also falling for pig butchering and Nigerian prince scams, so the "average" person benchmark has already been passed for text, photos, videos, etc. For more astute consumers, video isn't there yet.
There's also the question of whether people are even trying to disguise AI content, and how effective that disguise is. Are you or I missing the AI-generated text that just has a veneer of disguise on it?
>Who's "people"?
If you follow this thread up you will see the context is 'people who want to read content written by humans.'
Why does it matter when it "became solid"? There was plenty of slop generated with ChatGPT; that really was the turning point (because of public access).
Four-to-five-word sentences, TED-talk style, yes. I hated it even when humans were doing it. It's like motivational speakers trying their hand at writing novels.
This is such an obnoxious reply, holy crap... Why is it upvoted to the top of the thread?
Is this meant to be ironic?
“I wish there was a way to know you're in the good old days before you've actually left them”. — Andy Bernard
Why is it hijacking the back button in my mobile Safari browser? And why is the title different from the page's?
AI;DR
Horrible soulless dross.
Pangram says this is 61% AI generated. Thinking of paying for a Pangram subscription so I can spend the valuable time I have to read for pleasure on human created pieces than AI slop.
Interesting, I just ran this on some stuff I personally handwrote, but the writing is about LLMs, and it says 25% AI generated, apparently because I used terms that are flagged as "x% more likely" to appear in AI-generated text.
This is not a very useful test; it basically means a person has to "ban" terms like "user experience" and "tireless" etc. because these are "Nx more likely to appear in generated content".
LLM or not, this is just terrible kitsch.
Most amazing art isn't really a product of inspiration, but from severe editing (or severe practice, if it's live).
Good writing needs a lot of "post-production" to get the ideas hammered out. Most of it is removing content that isn't central to what the writer wants.
This LLM trend is part of a larger historical pattern of shifting editing away from having to think things through in our own heads:
But I'm convinced we lost something in each of these transitions. There is more power in one well-placed sentence assembled over tremendous meditation than in 85 paragraphs of slop.

Paul Graham's essay on good writing (https://paulgraham.com/goodwriting.html) defines "right" written ideas as "developing them well — drawing the conclusions that matter most, and exploring each one to the right level of detail".
My opinion is that the absurd complexities of the Over-Information Age make the "right" level of detail the following:
And, with this attitude, LLMs are only good for #3.

I'm somewhat amazed this got upvoted to the front page... I guess the title got it upvoted, because the writing is really terrible: either LLM generated or someone trying their hardest to sound "deep".
There are so many technical and stylistic issues with this, I would say it's either someone learning to write and trying too hard, or again, using an LLM.
1) Why is the whole thing written in the second-person perspective? Is it disguised autobiography? It seems a cheap way of claiming intimacy with the reader: "here, look, this was your experience". It ends up sounding like someone narrating your life at you while actually (secretly) talking about themselves.
2) While mostly written in the second person, there are several jarring switches back into the first person. Unsure whether this is a mistake or intentional; either way, it just sounds bad.
3) The tenses are bouncing all over the place in this writing. We have present tense: "You take the train", past tense: "You took it", future tense for a kind of prophetic vibe: "A decade from now, you will not know", and finally a sort of timeless present proverbial tense: "The text is the same. The reader is not. This is what the contemplative traditions mean when they talk about the spiral — the return to the same point, but at a different elevation."
All the tense switching and weirdness just makes it hard for the reader to feel grounded in any of the scenes.
4) Rhetorical negation. The writer loves this pattern of describing things by what they are not. Examples: "It was not silly. It was not even reverent. It was just a thing." "Not with a call. Not with a vision. Not with a voice in the night." "Not a metaphor for one. Not like a pilgrimage. A pilgrimage."
It can be a nice effect used once... but repeated throughout a short piece, it makes you think the writer is constantly arguing with an imagined critic: "It's not this, it's this!"
5) Performative plainness, e.g. "You walked. You ate. You slept." "You stood for a minute. You looked at the statue." There are a lot of these kinds of fragments, and they feel strange because they're written in an active voice about a "character" who never makes any choices, i.e. is entirely passive. It's like the author is trying to ape Hemingway's style in these moments while missing the characters and the story that go along with that spartan active voice.
Taken altogether, it feels like the author is straining to sound profound, reaching for every trick in the writer's toolbox, which ends up sounding confusing and creating distance between author and reader. The "gap" is obvious; the engineering is visible. It's like the writer saying "You were not trying to be moved" while the reader feels the author has tried desperately hard, over 3,000 words, to move you.