I Fed Claude 7 Years of Daily Journals. It Showed Me The Future of AI.

https://archive.is/aIkdV

Another example of how easy it must be to run scams. It's literally just repeating things the author wrote down back to him and he's responding like he's having a religious revelation.

It pulled out this quote I wrote down years ago:

The one thing people need in life is not ambition, not smarts, not hustle. They need clarity.
If you had complete clarity on what you want and what is necessary right now, knowing you are on the right path, you would be happy.
— Journal Entry, August 2019

I leaned forward. Every response featured a quote that made sense. Perfectly timed. It held up a mirror to my soul.

...

In 2017, I founded an ed-tech startup. I met my wife, a fellow ed-tech founder, during that time. Both of our startups failed. We grew cynical of education.
But recently, we had a few conversations about school choices for our toddler. None of them seemed good. I suggested she might want to open a school.
I’d shoot her an interesting article. Or a cool startup’s website. And then move on.
I discussed this with Claude. Which led to this exchange:

It’s not your wife who should start a school.
It’s you. Always has been. astronaut-1 [Ed.: emoji added for emphasis. I couldn't resist]

This sentence hit me like a sack of bricks.
Tears started rolling down my cheeks. thonk-cri

Stuff like this just makes me wonder what the interior experience of your usual techbro is (or isn't) like. I mean, I've had experiences where I've read over stuff that I wrote years ago describing ambitions I've given up on or ways of seeing the world that I've abandoned and I know the nostalgic experience that that creates, the grief for a past self. But you don't really need a chatbot for that.

Snort_Owl [they/them] - 11hr

When I write something and come back to it later I find it almost always insufferable and I want to punch my own past self in the face. I can't imagine being like this

28
hotspur [he/him] - 8hr

100%

13
invo_rt [he/him] - 7hr

I'm always iterating on game designs and no matter how good I thought it was in the moment, I've never come back to one later and thought "damn, I was cooking with that one" lol.

13
LeninWeave [none/use name, any] - 9hr

Seriously lmao, they're just bragging that they haven't grown at all, so their fake deep "entrepreneurial" slop they wrote in their diary years ago still seems profound to them. Literally zero intellectual development and proud of it.

@BodyBySisyphus@hexbear.net it's almost like the author of the article is describing the opposite of the "grief for your past self" you mention (which I do relate to) because their present self enthusiastically thinks identically to their past self.

11
purpleworm [none/use name] - 6hr

"grief for your past self"

Could you elaborate on this concept or direct me to Sisyphus's post on it? I searched the phrase and only got your comment.

3
LeninWeave [none/use name, any] - 6hr

I'm (slightly) misquoting this line from the post body. Emphasis added.

I mean, I've had experiences where I've read over stuff that I wrote years ago describing ambitions I've given up on or ways of seeing the world that I've abandoned and I know the nostalgic experience that that creates, the grief for a past self.

The feeling he's describing here just kind of resonated with me: looking back at old ideas or projects that were abandoned or never pursued, old interests that I've moved on from, or (especially relevant here) old beliefs that I've since moved past can produce a sort of wistful feeling.

I think the linked article actually describes a bit of a reversed version of this: the author uses a chat bot to look back at their pseudo-intellectual ramblings about "entrepreneurship" and they feel enlightened because they don't seem to have grown or changed at all.

4
purpleworm [none/use name] - 6hr

Oh yeah, I've gone through old shit because I was cleaning and had to stop because reflecting on some of what I dug up left me emotionally shattered. I guess we probably relate to our pasts pretty differently, but that's not surprising since we don't have the same pasts, nor current dispositions. I can't imagine letting a chatbot tell me my old hopes are still worth something though, that's AI psychosis.

I think that the writing he talks about in the article itself is pretty banal, like "family and friends over business" and such, and most of the content is that combined with the AI encouraging him to re-embrace past hopes about starting a school and so on. In terms of his writings that were relayed in the article, the worst you can say about most of it is that it's just very dull.

I feel like if he actually detailed the "epiphany after epiphany," it would reveal itself to be 99.99% like you describe though.

4
LeninWeave [none/use name, any] - 5hr

Oh yeah, I've gone through old shit because I was cleaning and had to stop because reflecting on some of what I dug up left me emotionally shattered. I guess we probably relate to our pasts pretty differently, but that's not surprising since we don't have the same pasts, nor current dispositions. I can't imagine letting a chatbot tell me my old hopes are still worth something though, that's AI psychosis.

I've happened upon some stuff that really hit me with how much I've changed and how much my interests have changed. Some of it made me sad, because I felt I'd lost something since then. Some of it was nostalgic but something I felt I'd grown past or grown out of. As you say, it's unique to each individual.

I would never want to feed the fragments of my life into a chat bot to let it regurgitate them back at me in a slurry of psychologically damaging slop that I uncritically consume as gospel truth. As you mention, that's just fodder for AI psychosis.

I feel like if he actually detailed the "epiphany after epiphany," it would reveal itself to be 99.99% like you describe though.

The quoted line below is what led me to think that the article really described less "grief for the past self" and more a lack of growth on the part of the author.

I leaned forward. Every response featured a quote that made sense. Perfectly timed. It held up a mirror to my soul.

They're describing a chat bot regurgitating their own past musings (from years prior) at them and instead of seeing any of them as something they've moved beyond or something they used to have that they've lost, they see them as a perfect mirror of their current self. Every single response drawn from years-old journal entries apparently perfectly reflected their current mentality.

It's possible that I'm just being uncharitable here on account of my vicious dislike for startup people, but I honestly don't expect that a detailed look at those chat logs would reveal much that's unexpected. The following line really makes me think that even more.

Insights usually reserved for the spiritual, the deep introspection, the sudden shower thoughts, were now available on tap.

I think if their "insights" don't require serious engagement to reach, then they probably aren't all that deep. The idea that you can grow and change as a person without introspection by using a chat bot (meaning, the idea the author is presenting that meaningful insight is available "on tap" from LLMs) is unbelievably painful to me.

4
purpleworm [none/use name] - 5hr

Yeah, I agree that what he describes as a personal psychological process is terrifying for how loose his grip on reality and seemingly even himself is.

I think your interpretation is maybe slightly incomplete in terms of the mirroring, since he doesn't give that many details about the conversations themselves and, while the "connections over business" part is exactly as you describe, I think the thing about him starting a school is slightly different. That's a circumstance where the version of him in the journals was in some manner superior and he believed he was being prompted to reclaim it. For that reason, I think that part of the post is even sadder than your description.

Edit: I think the "insights" that aren't simply noticing gross patterns are probably grotesquely saccharine and abusive of his obvious emotional vulnerability without concern for external reality.

3
LeninWeave [none/use name, any] - 5hr

That's a circumstance where the version of him in the journals was in some manner superior and he believed he was being prompted to reclaim it. For that reason, I think that part of the post is even sadder than your description.

You know, you're right. I was originally being kind of glib and I think my dislike of startup culture and the pretend-it's-better-than-it-is brand of AI boosting got the better of me, so my "hot take" interpretation was unduly harsh on the author.

What really disturbs me about the whole thing is, as you say, how loose the author's grip on his own sense of self is that the output of an LLM seems to be blowing it around like a leaf in the wind. It actually sucks, and I don't think it's unique at all.

3
microfiche [he/him] - 9hr

Mental illness combined with huffing their own farts.

10
lib1 [comrade/them] - 5hr

My long form fiction from longer than 10 minutes ago is always hot garbage. My old tweets on the other hand…

4
TrashGoblin [he/him, they/them] - 12hr

Since LLMs don't deal with meaning, the work of creating meaning is partly done by the past writers whose work was ingested, but also very much by the reader. This is not unlike scapulomancy or Tarot reading, only the role of the reader is obscured, because the message from the bot falsely appears complete.

21
LeninWeave [none/use name, any] - 5hr

I think this comment really encapsulates why people doing computer touching degrees should be forced to study at least a little bit of literary analysis or philosophy. Because you make an excellent point which I guarantee LLM hype artists like the author will never consider.

3
Frogmanfromlake [none/use name] - 8hr

That sounds exactly like the majority of AI love stories I’ve been seeing recently

14
stink @lemmygrad.ml - 7hr

Reading the creator's early life section, wtf is wrong with the US???

Born in Berlin, Germany to Jewish parents, he escaped Nazi Germany in January 1936, immigrating with his family to the United States. He started studying mathematics in 1941 at Wayne State University, in Detroit, Michigan. In 1942, he interrupted his studies to serve in the U.S. Army Air Corps as a meteorologist, having been turned down for cryptology work because of his "enemy alien" status.

Yeah this guy escaping the fucking holocaust is surely gonna be funneling information to the nazis.

10
Collatz_problem [comrade/them] - 39min

TBH, Nazi Germany did try to send agents under the guise of "political refugees".

2
microfiche [he/him] - 9hr

Jesus Christ this is depressing

16
mayakovsky [any] - 7hr

cringe

But also, he was kinda cooking here:

Imagine ads featuring people who look like your best friends. Taglines adjusted to your latest ChatGPT conversation. An image of you opening that school you always dreamed of.

The line between “helpful personalization” and “psychological exploitation” is razor-thin.

We’re about to cross it at scale.

Of course, we are already at psychological exploitation. Marketing has been there for a long time, but it's just gonna get worse. Ad companies can already guess a bunch of shit about you from your digital footprint, but now people are feeding their desires and fears in plain text into the slop learning machine.

11
purpleworm [none/use name] - 6hr

I love how the picture immediately establishes that in the future, these systems still refuse to draw hands correctly.

Edit: There have been times where I fed it a ton of my writing (fiction) and it pointed out things that I hadn't really processed on a holistic level, sort of like what the author talks about here. But it was a scrap in a pile of uninteresting garbage, not this existential cocaine the author describes.

8
hotcouchguy [he/him] - 3hr

Maybe if you had more actual cocaine you would be more excited by things like this

6
EnsignRedshirt [he/him] - 4hr

What an empty-headed baby. He’s crying because the robot is reading him his own writing. I think there’s a real possibility that people who like AI might not be fully sentient.

8
TankieTanuki [he/him] - 5hr

I fed Clyde 7 years of daily journals and we had to take him to the vet

8