
“Something Bizarre Is Happening to People Who Use ChatGPT a Lot”

futurism.com/the-byte/chatgpt-…

> Researchers have found that ChatGPT "power users," or those who use it the most and at the longest durations, are becoming dependent upon — or even addicted to — the chatbot.

Possibly related: what I wrote a couple of years ago on how LLM chatbots function like a mentalist's con

softwarecrisis.dev/letters/llm…

in reply to Baldur Bjarnason

I tried giving ChatGPT one of my poems. Asked for feedback. The feedback was 'pretty OK.' It seemed like real feedback from a person. What I didn't anticipate was how it made me very happy, if only in a fleeting way.

Writing is hard and getting someone to read your work and understand it is a big milestone. GPT correctly described the symbolism and techniques in my poem and was generally positive about it.

Part of me responded to reading this as if a person had said those things.

myrmepropagandist reshared this.

in reply to myrmepropagandist

But there isn't a person who has read my poem and has seen all of those things in it, or noticed all of those things about it.

It's nice to know that I've written a poem where it ought to be possible for that to happen with a real person.

But, maybe the poem is too boring, or too strange, or not strange enough and no one will even get that far. Maybe, I have more work to do to make a poem that connects with people... which is what I want.

in reply to myrmepropagandist

But, reading the GPT feedback gave a little of the joy of reading a good response to my work from a reader. As in I was blushing a little like "aw shucks" because the analysis was so complete and ... had a person written such a response it would mean they really paid attention to and really engaged with my work.

I can easily see how that might become addictive.

And looking at the criticism again days later, it's flattering but not very helpful. It doesn't help me write better.

in reply to Baldur Bjarnason

I've had people read stories I've written and they miss what I thought were major, obvious plot points. This can be very frustrating and it's tempting to blame the reader, but I look at it as a sign that either something is off in my storytelling or maybe the story just isn't "for" that person.

GPT doesn't do that. It picks up on EVERYTHING... and yet there is something shallow in the response.

in reply to myrmepropagandist

@futurebird interesting! Was it helpful at all beyond the ego boost? (A better question is: would you recommend the process to others?) Also: if you submit your work for criticism/review does that mean you are giving ChatGPT/AI permission to use your work to train itself?
in reply to EpiscoGrrl

@Jay
I'd only ever give it material that has been public on the web for years. I don't really trust the people who run it.

It's hard to say if it was helpful or not.

It showed that theoretically it was possible to "get" many of the things from the poem that I wanted people to get. But, I don't know if that means much?

I think it might be able to show a writer if their work is so confused and opaque that it doesn't even say what they think it says.

in reply to myrmepropagandist

@futurebird oh, fascinating! I’d be loath to give it anything new but if it’s something that may already have been scraped… 🤔
in reply to EpiscoGrrl

@Jay
I often trash talk LLMs because there are many things they are bad at. But, writing "feedback" that passes as real is something they can do rather well.

The statistical associative way that the responses are generated allows the LLM to make comparisons and spot themes in a convincing way.

People who run slightly scammy "writer's workshops" that string amateur writers along may be out of work.

in reply to myrmepropagandist

@Jay
The reason I consider some pay-to-play writers workshops scammy is they flatter amateur writers to keep them paying for more workshops. It's a gray area, it's hard to prove that feedback is "fake."

Sometimes "I got bored and didn't finish reading it" is the feedback I need even if I don't want to hear that. I'd much rather read about all of the symbolism someone noticed in my work. But, no one is going to read with that much care if the story isn't compelling.

So it's fake.

in reply to myrmepropagandist

@futurebird I’m not sure that’s a bad thing. I do, however, worry that it will also eliminate beta readers and potentially editors (not to mention lead to everything sounding the same).
in reply to EpiscoGrrl

@Jay
It's not a replacement for my favorite "beta reader": my husband. If he gets lost and misses things, I know exactly what I need to do. He's not plugged in to the arts and writing and just either enjoys things or finds them confusing. If my work isn't justifying its existence I can tell from his response very quickly. And boy is it disappointing when I realize that something isn't working!

Much more flattering and fun to have GPT tell me about symbolism. Flattering and useless.

in reply to myrmepropagandist

@futurebird congrats on having a great beta reader! Unfortunately, not all of us are that lucky (or write in a genre the significant people in their lives are willing to read).
in reply to EpiscoGrrl

@Jay
I have not told him this, but what I look for most is how quickly he reads the story. If he's glued to it and reads it all in one sitting that's a good sign. He always says he likes it, but sometimes I've done a bad job organizing the story, or elements are too vague, or plot is too meandering.

My "writer friends" don't always let me know about these more fundamental problems. Like GPT they force themselves to read with care.

I need to know if the story can *make* a reader care.

in reply to myrmepropagandist

@futurebird I don’t think I ever thought of it that way. 🤔 Now I’m going to have to try to recruit new beta readers
in reply to myrmepropagandist

@futurebird @Jay that’s so interesting to me as a sometime peer reviewer of journal articles. I read everything with care and never give feedback like “this was pointless and meandering.” The goal is always to make the work good enough to publish rather than tell them not to publish it. I guess I have *received* feedback like that once or twice but it just seemed mean spirited and weird so I ignored it and the work got published elsewhere.
in reply to myrmepropagandist

@futurebird @Jay I see a huge amount of YouTube advertising for Grammarly, and the focus of the ads seems to be on secondary and university students who speak English as a second language, to produce native-reading prose.
in reply to Matt McIrvin

@mattmcirvin @futurebird @Jay That worries me because every time I've tested Grammarly over the past year or so, it's been extremely error prone and often introduced errors through suggestions, made the text clunkier, and quite often made "corrections" that were unambiguously incorrect, such as suggesting plural words when inappropriate.

Grammarly didn't use to be this bad. It was quite usable 2-3 years ago.

in reply to Baldur Bjarnason

@futurebird @Jay I think it wasn't originally LLM-based. Maybe another example of someone throwing LLM technology at a thing and making it worse, so they can get that AI buzz in the market.
in reply to Matt McIrvin

@mattmcirvin @Jay

IDK, I feel like LLMs are being applied in a way where they might stand to be useful in this context. Text generation that meets expectations is what LLMs do best.

in reply to myrmepropagandist

LLMs are basically the same technology as Google Translate and its competitors, and that's useful enough to me that I use it all the time. That's "text generation that meets expectations" in the most direct sense.
in reply to myrmepropagandist

@futurebird @Jay
This is the only way I use gpt when writing. After I've finished all my editing, I run it through chat and ask the machine to explain the story, characters, and literary techniques back to me. If it can describe what I'm trying to say, I must have inserted what I wanted in the text.

Critical to this is identifying the work to the bot as "a new novel from an unknown author". It blows smoke up your ass if it knows the work is yours.

in reply to Aubrey Jones

@aubreyjones @futurebird interesting. I’d be very hesitant to submit something new to it, but that’s my personal issue.
in reply to franebleu

@franebleu
I gave it these paragraphs and it correctly understood that this is about cleaning out a summer home after one of your parents has died. (which my husband missed when I showed it to him) GPT correctly sorted all the imagery into summer and winter contrasts.

futurebird.tumblr.com/post/756…

In reality for microfiction I think it needs a little more work.

in reply to myrmepropagandist

Beautiful !

(In my humblest opinion, it could even do without "the mortgage and documents" sentence, it was so real it made me fall on the ground from my flight)

Magnificent stuff 😀

in reply to myrmepropagandist

Pretty sure this is part of the RLHF fine-tuning "to please the user" whatever the inputs.

Could even be implicit due to the human feedback loop in the process. It very likely does catch on to basic human psychology.

in reply to ggdupont

@gdupont
When I was in college this guy came to our reading group as a part of his attempts to pick me up. He figured out that he could get me to talk to him if he talked about my writing and put a lot of effort into it.

It kind of reminded me of that.

in reply to myrmepropagandist

@futurebird
I've done the same for code... starting it with a blank brain and feeding in source files one by one and then basically saying "discuss, explain." No comments explain what the whole thing is about.

It documents my APIs nicely. Cool, they make sense theoretically.

It deduces how parts interact. It even gets a little excited about it.

But it is no better at offering an explanation of "why" or "what it is for" than I am.

But for testing source readability, it's nice.
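The "blank brain" routine described above (feed source files one by one, then ask the model to discuss and explain) can be sketched as a small prompt-assembly helper. This is a minimal sketch, not the commenter's actual setup: the function name, the system prompt, and the final question are all illustrative assumptions, and the messages use the common chat-completion shape (`role`/`content` dicts) so they could be sent to any chat-style endpoint. The API call itself is deliberately left out.

```python
# Hedged sketch of the workflow: one message per source file,
# then a final "discuss, explain" request. All prompt wording here
# is an illustrative assumption, not a known-good recipe.
from pathlib import Path

def build_review_messages(source_dir: str, pattern: str = "*.py") -> list[dict]:
    """Assemble a chat transcript: each file as its own message, then the ask."""
    messages = [{"role": "system",
                 "content": "You have no prior context. Read each file as it arrives."}]
    # Feed the files in one at a time, in a stable order.
    for path in sorted(Path(source_dir).glob(pattern)):
        messages.append({"role": "user",
                         "content": f"File: {path.name}\n\n{path.read_text()}"})
    # Then ask for the overall explanation.
    messages.append({"role": "user",
                     "content": "Discuss and explain: what does this codebase do, "
                                "and how do its parts interact?"})
    return messages
```

If the model's explanation matches what you meant the code to do, that is a (rough) readability signal; as the thread notes, it says little about "why" or "what it is for."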

in reply to myrmepropagandist

Getting someone to read and understand a huge collection of source code... also rather hard. Especially if you know few actual people and none who have ever learned a programming language 😉.

I felt both very silly and also this very heart-warming relief to see the actual functional ideas documented and correct and extrapolated upon by a voice that wasn't just me to myself in my head.
