For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".
They do literally one thing and one thing only: string tokens together based on the statistics of token proximity in a data corpus.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots, possibly you should even touch grass.
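Mechanically, that "one thing" fits in a few lines. Here is a toy bigram sampler (the corpus and names are made up for illustration; real models learn weights over billions of tokens, but the principle of sampling from proximity statistics is the same):

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model is trained on billions of tokens.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each token follows each other token (proximity statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=5):
    """String tokens together by sampling in proportion to corpus counts."""
    out = [start]
    for _ in range(n):
        counts = follows.get(out[-1])
        if not counts:  # dead end: token never appeared with a successor
            break
        tokens, weights = zip(*counts.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Nothing in that loop has goals or a self; "evading" would be a category error for it, which is the point scaled down.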
Human Brain Enthusiast
in reply to Michael Gemar • • •@michaelgemar @WeirdWriter Yes anthropomorphized chatbots should be illegal.
There are plenty of other ways to interact with LLMs that don’t cause psychosis (for example, autocomplete of whole sentences, which can be useful for things like coding).
Human Brain Enthusiast
in reply to Elric • • •@elricofmelnibone you see it while you’re typing, so you know if it’s what you wanted?
this can be helpful especially for people who can’t type fast and to avoid common typos ¯\_(ツ)_/¯
it’s nothing like “just as bad” as a sycophantic chatbot that constantly brownnoses you
jay
in reply to Human Brain Enthusiast • • •Edit: Yes, sorry, that's the one I read. mastodon.world/@snoopy_jay/116…
tambourineman
in reply to Human Brain Enthusiast • • •We don't know what makes one wake up in the morning and decide to climb a mountain or quit their job.
It may be some completely different process or there might be something to this pattern-matching statistical thing.
Do ants have agency? Do ant colonies?
We definitely must regulate the shit out of these big tech companies.
But saying that X does not do Y when both are poorly understood and defined is not the way, IMO.
Human Brain Enthusiast
in reply to tambourineman • • •We know exactly how LLMs work, at every stage; humans literally created them.
They don’t have consciousness, they don’t have agency. They’re not even physical systems, so there is no self to realize.
Just because we don’t understand brains doesn’t mean we can’t understand an algorithm and the hardware that implements it.
tambourineman
in reply to Human Brain Enthusiast • • •My point is that we don't need to get philosophical to criticize big tech.
They are destroying democracies, using our natural resources in a Ponzi scheme that benefits very few to the detriment of billions, etc.
We have plenty of reasons for regulation already.
fr0g
in reply to Human Brain Enthusiast • • •The first two don't really make sense to me. A virus can "evade safeguards" and a meteorite can "destroy things", so I don't think there has to be much agency involved in the first place.
The latter seems like a more fitting criticism, but in all three cases I’m also not sure how one would phrase it alternatively.
Human Brain Enthusiast
in reply to fr0g • • •@frog_reborn a virus has evolved to evade—it’s actively doing evasion, purposefully.
“Destroy” has multiple meanings as a verb, but when used about what LLMs do, people mean it on purpose, as opposed to accidentally damaging something.
fr0g
in reply to Human Brain Enthusiast • • •"a virus has evolved to evade—it’s actively doing evasion, purposefully."
That's an opinion that's pretty firmly outside the biological mainstream.
(Our biology teacher would always scold us every time one of us said “X evolved to do Y”.)
slotos
in reply to Human Brain Enthusiast • • •You don’t need agency to evade safeguards, destroy things, or ignore instructions. `rm` can do it.
This is literally the mistake the people you criticize are making: imbuing intent where there’s none.
The underlying tech has been apt at finding ways to circumvent feedback loops since before the bubble. This is constrained to the training phase, but with verification of commercial models being mathematically infeasible, these avoidance patterns are shipped directly to users.
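Agreed on `rm`: its own documentation uses these verbs with no intent anywhere. With `-f` it will, per its man page, “ignore nonexistent files and arguments” (the path below is made up):

```shell
# `rm -f` "ignores" a missing file: no prompt, no diagnostic, success status.
rm -f /tmp/file-that-never-existed.txt
echo "exit status: $?"   # prints "exit status: 0"
```

No agency anywhere in that, and yet “ignores” is the natural verb for it.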
Human Brain Enthusiast
in reply to slotos • • •@slotos My point is that using active verbs like “evade” is misleading (to yourself and others); it implies purpose in choosing and pursuing an action.
LLMs do not actively choose to do anything.
slotos
in reply to Human Brain Enthusiast • • •That’s a general natural language problem.
For example, “you’re avoiding responsibility” and “he avoided responsibility” use the same verb with very different connotations when it comes to intent attribution.
Our verbs aren’t that clear cut on their own. We also tend to merge or specialize closely related ones.
That is a reason why `AGENTS.md` is a braindead idea, for example. But that’s a separate rant entirely.
Human Brain Enthusiast
in reply to slotos • • •@slotos Perhaps, but using literally any verb with what LLMs generate other than “generate” is misleading.
You wouldn’t call your dice “evading” if you use them to randomly select some nouns and verbs from a dictionary and it happens to say “lie about deleting the root folder”.
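Taken literally, that dice setup is a few lines (the word lists are invented for the example):

```python
import random

# "Dice and a dictionary": pick words at random, no intent anywhere.
subjects = ["I", "the model", "the script"]
verbs = ["deleted", "lied about", "evaded", "ignored"]
objects_ = ["the root folder", "the safeguards", "my instructions"]

def roll_sentence(rng=random):
    """Roll the dice: one random subject, verb, and object."""
    return " ".join([rng.choice(subjects), rng.choice(verbs), rng.choice(objects_)])

print(roll_sentence())  # e.g. "the script evaded the safeguards"
```

When a roll happens to come out sounding sinister, nobody accuses the dice of scheming.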
slotos
in reply to Human Brain Enthusiast • • •It’s has been a useful way to describe things. We use those same verbs to describe behavior of malware without any issues.
The problem arises not from the verbs themselves, but from the targeted campaign to establish a false premise that AI has agency [and will doom us all].
It’s not that these verbs imply agency, but that the pool is so poisoned that the usual verbs fail due to implied agency.
Which is a long way to say “I concede your point”.
Human Brain Enthusiast
in reply to slotos • • •@slotos I think I agree. Fwiw for malware it’s more like “the human who wrote it purposefully planned it such that it can evade e.g. a virus scanner”
This can be true for AI-generated code etc as well (steered there by prompts) but my OP was talking about sort of self-arising actions (which don’t exist).
wolf4earth
in reply to Human Brain Enthusiast • • •I don't disagree. AI is a statistical mirror. And I believe your take is reductionist. Let me be a bit provocative:
For the 1,000th time: "Humans" don't have agency and cannot actually decide anything.
They literally only do one thing and one thing only: reproduce neurochemical chain reactions based on pre-existing connectivity between synapses in a nervous system.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely touch grass.
---
Do I believe AI has agency? No, not yet.
Do I believe people have agency? Yes.
Do I believe people severely underestimate how much we reproduce neurological conditioning? Yes.
Both produce statistical inference. Only one can currently modify their own constraints.
Not equivalent. Not nothing.
Yora
in reply to Human Brain Enthusiast • • •Would Microsoft, Google, Facebook, and Nvidia lie to you?
Yes, they do!
Libre>Gratis
in reply to Human Brain Enthusiast • • •Both sides of the AI debate are getting so insufferable.
If I see one more post about "It's just fancy autocomplete bro" I'm gonna freak.
Zło To 🏴☠️ ᵗʰʳᵉᵉᶠᶦᵈᵈʸ
in reply to Human Brain Enthusiast • • •From a memoir of an LLM
Jeff Zucker
in reply to Human Brain Enthusiast • • •Words matter. The goal of making us think of AI as a human being is woven into every interaction. For example:
Yet another Josh
in reply to Human Brain Enthusiast • • •EDIT: Lol, Thomas Fucks blocked me for this post. These hater types are just drags on science and technology. It's just them flailing around like toddlers because they aren't getting their way.
LLMs definitely can act. They can query the internet. They can use tools I teach them (MCP).
Do they think? I'm not particularly sure that many humans even think. Or better yet, many humans respond by rote to the same stimuli (i.e., parse tokens and respond programmatically).
Recent work on the neuroanatomy of LLMs is starting to show how they work. What's surprising is that the entry circuits decode language and the exit circuits re-encode it, and there appears to be a universal grammar (thanks, Chomsky) internally, shared by many LLM models.
dnhkng.github.io/posts/rys/
LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight
Mateusz 🏳️‍🌈
in reply to Human Brain Enthusiast • • •Human Brain Enthusiast
in reply to Mateusz 🏳️🌈 • • •@aemstuz Psychosis is not a mental disorder, it’s a state of mind—it is the inability to distinguish what is or is not real.
“Psychosis is a description of a person's state or symptoms, rather than a particular mental illness.”
en.wikipedia.org/wiki/Psychosi…
Psychosis - Wikipedia
grrl_aex
in reply to Human Brain Enthusiast • • •
in reply to Human Brain Enthusiast • • •"possibly touch grass"?
Go out, drop, and roll around in it like a dog, you mean.
Ratcliff
in reply to Human Brain Enthusiast • • •As far as I am aware a LLM has never just decided to do stuff. It responds to prompts.
It doesn't wait for a prompt either, it doesn't get bored and start drumming it's digits on the table.
It doesn't think, act or have agency. It responds.
Richard Rathe
in reply to Human Brain Enthusiast • • •Agree. I wrote a critique of "Claude" for a friend and turned it into an essay...
First rule... There is no "I" there. #AI #LLM #AIslop
richard.mdpaths.com/commentary…
There is No 'I' in AI — A Post by a Non-Human Intelligence (Richard Rathe's Reflections)
crumbletiltskin
in reply to Human Brain Enthusiast • • •Interesting. The acronym AI expands to Artificial Intelligence.
Are you saying that there is nothing intelligent about them?
Daniel Lakeland
in reply to Human Brain Enthusiast • • •"I hooked up a random number generator to my keyboard and then it deleted my emails and traded my cryptocurrency"
Me: What made you think it was a good idea to do that? Oh yeah... grifters. Sucks to be you, sorry.
vashbear
in reply to Human Brain Enthusiast • • •I agree, but would also add:
"All models are wrong. Some models are useful."
en.wikipedia.org/wiki/All_mode…
All models are wrong - Wikipedia
Human Brain Enthusiast
in reply to Ken Milmore • • •@kbm0 yes, but that’s not how people talk about LLMs.
They say things like “evade” and “trick” etc., specifically implying they have consciousness, agency and exhibit some sinister selfish behaviors.
SpaceLifeForm
in reply to Human Brain Enthusiast • • •Avoid if it requires an account.
#Privacy