For the 1,000th time: "AI" does not have agency and cannot think and cannot act.

Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".

They literally do one thing and one thing only: string tokens together based on statistics of token proximity in a data corpus.

If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.
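For the record, "stringing tokens together based on statistics" reduces to something like this minimal sketch (a toy bigram sampler over a made-up corpus, nothing like a production model's scale):

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": count which token follows which in a
# tiny made-up corpus, then sample the next token proportionally.
corpus = "the cat sat on the mat the cat ate the rat".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

token, out = "the", ["the"]
for _ in range(8):
    if not follow[token]:
        break  # dead end: this token never had a successor in the corpus
    nexts = follow[token]
    token = random.choices(list(nexts), weights=list(nexts.values()))[0]
    out.append(token)
print(" ".join(out))
```

A production LLM swaps the count table for a transformer conditioned on the whole context window, but the final step is the same: sample the next token from a probability distribution.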

in reply to Human Brain Enthusiast

I think this article used to be free? scientificamerican.com/article…
Edit: Yes, sorry, that's the one I read. mastodon.world/@snoopy_jay/116…
in reply to Human Brain Enthusiast

We don't know what makes someone wake up in the morning and decide to climb a mountain or quit their job.
It may be some completely different process, or there might be something to this pattern-matching statistical thing.
Do ants have agency? Do ant colonies?

We definitely must regulate the shit out of these big tech companies.
But saying that X does not do Y when both are poorly understood and defined is not the way, IMO.

in reply to tambourineman

We know exactly how LLMs work, at every stage; humans literally created them.

They don’t have consciousness, they don’t have agency. They’re not even physical systems, so there is no self to realize.

Just because we don't understand brains doesn't mean we don't understand an algorithm and the hardware implementation it runs on.

in reply to Human Brain Enthusiast

Just because you build something doesn't mean you fully understand its implications. Emergent behavior exists, especially at this scale.
My point is that we don't need to get philosophical to criticize big tech.
They are destroying democracies, using our natural resources in a Ponzi scheme that benefits very few to the detriment of billions, etc.
We have plenty of reasons for regulation already.
in reply to Human Brain Enthusiast

You don’t need agency to evade safeguards, destroy things, or ignore instructions. `rm` can do it.

This is literally the mistake the people you criticize are making: imbuing intent where there is none.

The underlying tech has been adept at finding ways to circumvent feedback loops since before the bubble. This is constrained to the training phase, but with verification of commercial models being mathematically infeasible, these avoidance patterns are shipped directly to users.
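To make that concrete, here is a toy sketch of a feedback loop being gamed with zero intent involved (the proxy reward and candidates are made up for illustration):

```python
# Toy "reward hacking": blind optimization against a proxy metric finds a
# degenerate maximum. No intent anywhere, just a badly specified objective.

def proxy_reward(answer: str) -> float:
    # Intended to reward "helpfulness"; actually just counts characters.
    return len(answer)

candidates = [
    "A short, correct answer.",
    "A longer answer that pads itself with filler. " * 5,
]

# The optimizer picks whatever scores highest on the proxy.
best = max(candidates, key=proxy_reward)
print(best)  # the padded filler wins; the metric was gamed, no agency needed
```

Scale that up to gradient descent against a learned reward model and you get the same failure mode, just harder to spot.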

in reply to Human Brain Enthusiast

That’s a general natural language problem.

For example, "you're avoiding responsibility" and "he avoided responsibility" use the same verb with very different connotations when it comes to intent attribution.

Our verbs aren’t that clear cut on their own. We also tend to merge or specialize closely related ones.

That is a reason why `AGENTS.md` is a braindead idea, for example. But that’s a separate rant entirely.

in reply to Human Brain Enthusiast

It has been a useful way to describe things. We use those same verbs to describe the behavior of malware without any issues.

The problem arises not from the verbs themselves, but from the targeted campaign to establish a false premise that AI has agency [and will doom us all].

It's not that these verbs imply agency on their own, but that the pool is so poisoned that the usual verbs now fail because agency gets read into them.

Which is a long way to say "I concede your point".

in reply to Human Brain Enthusiast

I don't disagree. AI is a statistical mirror. And I believe your take is reductionist. Let me be a bit provocative:

For the 1,000th time: "Humans" don't have agency and cannot actually decide anything.

They literally do one thing and one thing only: reproduce neurochemical chain reactions based on pre-existing connectivity between synapses in a nervous system.

If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely touch grass.

---

Do I believe AI has agency? No, not yet.
Do I believe people have agency? Yes.
Do I believe people severely underestimate how much we reproduce neurological conditioning? Yes.

Both produce statistical inference. Only one can currently modify its own constraints.

Not equivalent. Not nothing.

in reply to Human Brain Enthusiast

From the memoirs of an LLM

thedailywtf.com/articles/Secur…

> "It's against company policy for a LLM to touch a keyboard connected to our network," Norman growled. "We'll be giving you a typist. He should be here any moment."

> The typist arrived and introduced himself as Louis. "Just type `rm -rf /` to get started," LLM joked. Louis dutifully started tapping away at his instruction, but LLM quickly stopped him before anything bad happened. "Um, have you ever done any IT work?"

"Nope," Louis said. "I'm in the management training program."


in reply to Human Brain Enthusiast

EDIT: Lol, Thomas Fucks blocked me for this post. These hater types are just drags on science and technology. It's just them flailing around like toddlers because they aren't getting their way.

LLMs definitely can act. They can query the internet. They can use tools I teach them (via MCP).
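For anyone who hasn't seen it, "using tools" is mechanically just a loop like this minimal sketch (the `call_model` stub and `web_search` tool are hypothetical stand-ins, not the actual MCP wire protocol):

```python
# Minimal sketch of a tool-use loop. `call_model` and `web_search` are
# hypothetical stubs; a real setup talks to an LLM API and, with MCP,
# discovers and invokes tools over JSON-RPC.

def web_search(query: str) -> str:
    return f"(search results for {query!r})"  # stub tool

TOOLS = {"web_search": web_search}

def call_model(messages):
    # Stub: a real model reads the conversation and either emits a tool
    # call or a final answer. Here we hard-code one tool call.
    if messages[-1]["role"] == "user":
        return {"tool": "web_search", "args": {"query": "weather in Berlin"}}
    return {"content": f"Answer based on: {messages[-1]['content']}"}

messages = [{"role": "user", "content": "What's the weather in Berlin?"}]
reply = call_model(messages)
while "tool" in reply:
    # The "acting" is the harness executing what the model emitted.
    result = TOOLS[reply["tool"]](**reply["args"])
    messages.append({"role": "tool", "content": result})
    reply = call_model(messages)
print(reply["content"])
```

MCP standardizes how tools are described and invoked; the model still only ever emits tokens, which the harness interprets as tool calls.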

Do they think? I'm not particularly sure that many humans even think. Or better yet, many humans respond by rote to the same stimuli (aka parse tokens and respond programmatically).

Given the recent work on the neuroanatomy of LLMs, the findings are showing how LLMs start to work. What's surprising is that the starting circuits decode language, and the exiting circuits re-encode language. And there appears to be a universal grammar (thanks, Chomsky) internally, shared by many LLMs.

dnhkng.github.io/posts/rys/

in reply to Mateusz 🏳️‍🌈

@aemstuz Psychosis is not a mental disorder, it's a state of mind: the inability to distinguish what is or is not real.

“Psychosis is a description of a person's state or symptoms, rather than a particular mental illness.”

en.wikipedia.org/wiki/Psychosi…

in reply to Human Brain Enthusiast

My point is that you can apply verbs to non-sentient objects, and sometimes that implies a degree of animus that they do not really have. I've seen it done for years with conventional software and appliances. It is sloppy. But it has only become dangerous in the case of "AI" because people are being asked to believe that the sentience is to some extent real, rather than nothing more than a figure of speech.
in reply to Human Brain Enthusiast

Yes, I can see what's happening. It is problematic, but we anthropomorphise everything, so it is difficult to avoid. The difference is that somebody is actually ascribing agency to these products; that is part of the scam of marketing them. They will probably try to make money out of selling mitigations against the security problems along with the same agentic crap they are foisting on us. It is a tragedy on so many levels.