> When I say AGI is impossible, I mean: it requires mathematics that doesn't exist to model biology we don't understand to implement functions nobody can define. That's "impossible" in any practical sense.

fluxus.io/article/alchemy-2-el…

EDIT: christ this post is turning out to be a moron magnet

in reply to David Gerard

The very term "intelligence" in all this nonsensical hype is deceptive per se, to say the least.

in reply to Erebus

@Erebus_Amauro the term intelligence implies a philosophical concept founded in metaphysics. As such, it will always be impossible to construct an intelligent machine without implying that it has properties analogous to those of a human being.

Therefore it is not a question of science, of physics and biology, but a question of individual choice and social consensus whether to attribute intelligence (and ultimately rights and responsibilities) to the machine.

in reply to Benedikt Neuzweig

historically, "intelligence" denotes a social concept founded in colonial preconceptions of the inherent superiority of the white male. You might think that's a stretch, but the entire history is race science and IQ test calibrations were tweaked when they gave unacceptable outputs like women or Kenyans coming out smarter.

the more I look, the more corrupt the entire idea complex denoted by "intelligence" is, in theory and practice.

it's not a coincidence that the artificial intelligence endeavour is also riddled with blatant race scientists

in reply to David Gerard

and even this blog post is making some very dubious claims, such as implying that LLMs have solved machine translation or machine summarization, or can write functional code.

smells like boosterism

in reply to David Gerard

Yes, David, but if we DID have AGI, surely we could change common substances into gold?

in reply to David Gerard

I don't particularly agree that what we do have so far is worth having, let alone worth the price, but very interesting nonetheless.

in reply to David Gerard

AGI = Artificial General Intelligence, in case this helps anyone else but me.

I try to do for initialisms and acronyms what alt text does for images.

As a USian, I immediately interpreted AGI as "adjusted gross income," such as the IRS uses.

"Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks."

in reply to David Gerard

I mean, nobody understood fire. AGI is possible, but the techno-cultists preaching about how it will elevate itself into a superintelligence vastly superior to all of humanity are delusional. If we had complex enough computers, they could emulate what a brain does, and it'd be as generally intelligent as any other brain.

The cost of computer complexity is astronomical, so we can't even come close to the number of computer neurons we'd need. And it'd be too complex to program directly, so we'd have to use heuristic training to try and mold it into the program we want. But it is possible. If I were you, I'd just get someone pregnant; it's easier.
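
(To illustrate what I mean by "heuristic training", here's a toy Python sketch, entirely made up and nothing like a real brain emulation: you don't write the rule, you nudge weights until the rule falls out.)

```python
# Toy sketch of "heuristic training": instead of programming the rule,
# nudge weights until a tiny artificial neuron happens to compute AND.
import random

random.seed(0)
w1, w2, bias = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the AND function

for _ in range(50):                        # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        err = target - out                 # perceptron learning rule
        w1 += 0.1 * err * x1
        w2 += 0.1 * err * x2
        bias += 0.1 * err

# nobody wrote "AND" anywhere, but the trained weights now compute it
print([(x, 1 if w1 * x[0] + w2 * x[1] + bias > 0 else 0) for x, _ in data])
```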

in reply to Cy

@Cy @David Gerard
Ah. Well, Cy, techbros do have one malady in common: delusions of grandeur.

in reply to David Gerard

I can and often do break it down even more simply than that.

Intelligence very obviously requires three states: yes, no, and maybe.

Now, how exactly are you going to implement 'maybe' in a binary system? You aren't. You can't.

in reply to RootWyrm 🇺🇦

@RootWyrm 🇺🇦 @David Gerard
Ahhhh, a slight correction is in order. Boolean algebra doesn't really accommodate the "maybe" state, but solid-state logic does: its early name was "toggle state". Today's solid state does a better job of accommodating it.
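
(For illustration, a minimal Python sketch of Kleene's three-valued logic; the encoding and names are purely my own, just to show one way a "maybe" value can be represented on top of ordinary binary values:)

```python
# A minimal sketch of Kleene three-valued logic ("yes", "no", "maybe")
# sitting on ordinary binary hardware. Names and encoding are illustrative.

YES, MAYBE, NO = 1.0, 0.5, 0.0   # three truth values encoded as plain numbers

def k_not(a):
    """Kleene NOT: maybe stays maybe."""
    return 1.0 - a

def k_and(a, b):
    """Kleene AND: the less certain operand wins."""
    return min(a, b)

def k_or(a, b):
    """Kleene OR: the more certain operand wins."""
    return max(a, b)

print(k_and(MAYBE, YES))   # 0.5 -- "maybe AND yes" is still maybe
print(k_or(MAYBE, YES))    # 1.0 -- "maybe OR yes" resolves to yes
print(k_not(MAYBE))        # 0.5 -- negating maybe is still maybe
```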
in reply to Radio Free Trumpistan

@claralistensprechen5th yeah, to be clear, yes|no|maybe is a gross abstraction and oversimplification. Brains and thought aren't boolean things. If they were, there would be no room for uncertainty!

But we really have no idea how many possible states there are exactly, other than "more than two." Could be 3, could be 30,000. (The 30k is way more likely.)

in reply to David Gerard

So you're saying if I develop a model too complex for me to understand, it means I've invented AGI. /s

in reply to David Gerard

I took your recommendation and got the audio version of More Everything Forever, and I have to say wow, everyone needs to read this book.

in reply to BrianKrebs

@briankrebs
It’s absolutely required reading, and I push it whenever I can. Sadly, people seem not to be interested, no doubt feeling it’s another one of ‘those’ kinds of books.
in reply to BrianKrebs

@briankrebs agree! An excellent book, in particular the detailed takedown of the Mars dream. As a complement to that I'd recommend Jill Lepore's podcast about the old science fiction influencing the ideas of Elon Musk and other tech types. pushkin.fm/podcasts/elon-musk-…
in reply to BrianKrebs

@briankrebs I just looked this book up and promptly added it to my read list. Thanks for mentioning it!
in reply to David Gerard

don't tell biology, but biology doesn't understand it either; it just kinda made human intelligence randomly. Unless you believe in god(s), in which case you're in the "who made them" spiral.
in reply to David Gerard

I disagree with this article. AGI is definitely possible, the AI boosters just haven't moved the goal posts enough yet. I give it about 5 years.
in reply to fancysandwiches

@fancysandwiches OpenAI and Microsoft already agreed it means "Actually Generates Income"

in reply to David Gerard

lol, yeah, probably the dumbest of all the definitions. I suspect the goalposts will move around some more though.
in reply to David Gerard

Interesting. But, given how capable AI is at coding, should anyone bother to learn to code anymore? Is it pointless teaching yourself to code in Python, for example? Or any other language?

(Update: Thanks for all the great answers. It’s all very interesting as a phenomenon. I’m only really interested in programming in terms of its use in doing maths. Moreover, it could come in handy one day if I’m stuck outside the pod door and HAL won’t let me in!)

in reply to Endor Nim

@EndorNim these LLM-based tools aren't actually that capable, though, as always, it depends on the context. When it comes to producing actual production-ready code, they're not there and are not likely to get there before this bubble bursts. I think learning a programming language has lots of value outside of getting a job writing code, so I say go for it!
in reply to Avner

@EndorNim If that is your goal then you probably don't need me to tell you that the market for jr devs is very weak right now. I don't think it will stay that way forever, but I don't know how long it will take for companies currently infatuated with LLMs to turn their hiring strategies around post-bubble.
in reply to Avner

@Avner @EndorNim rough guess: six to eighteen months, when the numbers start hurting cos stuff stops working
in reply to Endor Nim

@EndorNim It's not particularly capable at coding - as some wag put it, it can do the 80% where you put bugs in but not the 80% where you take them out. People think they're more productive with it, but there's increasing evidence that they're not.
in reply to Major Denis Bloodnok

@denisbloodnok @EndorNim I saw someone on fedi say one time that the statement "people who use AI at work feel like they're more productive" and the statement "people who do cocaine at parties feel like they're more entertaining" are virtually indistinguishable
in reply to David Gerard

I would hesitate to say impossible, but I don’t think it’s going to be done by trying to copy the human brain.

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” - ACC

in reply to Paul Walker

@arafel rather than glib quotes, i would ask for addressing the stated objections in the essay, e.g. if you propose a different architecture for intelligence, you have to actually set it out

in reply to David Gerard

If you define "AGI" as "whatever imitates the human brain in every single way possible" then I agree. It's basically impossible.

But I don't agree that an artificial superintelligence is impossible. I agree that the biggest roadblock is "A system that actually learns while running" but I don't agree with the other 3 points. You don't need to define intelligence to create a system that eventually is able to predict and prevent humans from stopping it from achieving its goals, whatever they may be. It's a very real risk that must be considered.

in reply to David Gerard

The article makes the argument that you can't keep scaling up infinitely, and I agree. But consider: a recent model that can run on my potato phone outperforms ChatGPT 3.5 (175B) on everything except encyclopedic memory. Sometimes you don't need to scale up.

In my opinion the big frontier models are not really getting any closer to super-intelligence because of inherent limits of LLMs, and the bubble will pop long before anyone figures out how to overcome those limits. But the better-furnace fallacy doesn't exclude alternative architectures that can do similar tasks with much less energy. There are already many papers on alternatives to transformers, some of which can modify their own weights to have memory. It's too early to say whether any of those will amount to anything.

One thing is certain: tech bros are wrong.

in reply to Piko Starsider

@starsider you just said "But consider:" then followed it with something that absolutely doesn't lead to intelligence. This is wasting both our time.
in reply to Piko Starsider

You appear to be confusing Knightian uncertainty with risk. Risk is something that can be quantified; it's what insurance companies trade in. Knightian uncertainty cannot be quantified. Since you are putting aside defining what this "risk" is supposed to be about ("you don't need to define intelligence"), it cannot possibly be a "very real risk". For one thing, because it literally is not real!

Granted, most of the AI shills and xrisk barkers out there are either ignorantly or purposely conflating risk and (Knightian) uncertainty, probably because it serves their cause and makes their sci fi stories sound more exciting. No one is going to jump on the "We have no clue what intelligence is or how to synthesize it or whether that would be a dangerous thing to do. In fact we don't even know when we might know any of that" bandwagon. So confusion is understandable.

@davidgerard@circumstances.run

in reply to Anthony

@abucci Maybe I'm confusing terms, but keep in mind that you don't know what you don't know. I've been interested in AI safety since long before the AI craze. I don't think that transformer-based LLMs will pose a risk in the future, but LLMs (possibly of a different architecture) will be a piece of the puzzle of a system that we possibly could not control. And it's a very different type of risk. Most other risks are recoverable. A nuclear catastrophe? Society will still exist and it will recover. A runaway AI that gets into power and we cannot control? It's game over. At the very least, lifelong dictators end up dying of old age. But having a dictator that can clone itself is a different story.

You don't need consciousness in a machine that is capable of manipulating people to reach the keys to the kingdom.

in reply to Piko Starsider

> a piece of the puzzle of a system that we possibly could not control. And it's a very different type of risk.


I'll say it again: it's not a risk if you cannot quantify it. That's what the word "risk" is usually taken to mean.

When you say such and such is something that we possibly could not control, what's your basis? I assert that we possibly could control it. End of conversation forever, because it boils down to nothing more than a clash of two meaningless, evidence-free assertions that obligate no one to assign any weight to. Another "risk" of this nature is that a jar of mustard in your fridge goes out of control and kills everyone. This "risk" has exactly the same weight as the "risk" that "AI" does. You don't see this?

in reply to Anthony

@Anthony @Piko Starsider
A risk you can't quantify is still a risk; the inability to quantify it doesn't nullify that fact. Consider the mischaracterization contained in the term "speed of light", where "light" is the narrow bandwidth that the human eye can detect, and redefining it later was required. Just because you can't quantify something doesn't mean there's nothing there.

Evidence-free assertions fall into the realm of philosophy. Evidence falls into the realm of science--anybody conflating the two has a religion problem.

Viewed as computation, the DNA/RNA operation of biology is that of programming memory. A caterpillar doesn't have more than a ganglion for a brain, and yet it can, for the most part, execute its dietary, pupation, and adulthood programming impeccably. It can also execute adaptation semi-impeccably, and in both cases the unsuccessful variants are likely to simply perish while the successful versions adapt and replicate.

And I know of human beings who are outsmarted by insects. THERE's intelligence for you.

in reply to Anthony

@abucci It is not, because a jar of mustard can't learn to deceive. There's plenty of research on AI safety that outlines many scenarios in which we may fail to control it, as well as practical examples at a small scale.
in reply to Piko Starsider

How do you know a jar of mustard cannot learn to deceive?

AI safety is not a real field of study; it's a grift, and it will steer you wrong. xrisk is an application of the inductive disjunctive fallacy. It conjures scary scenarios out of thin air using the failure of Bayesian inference to distinguish between lack of evidence, very low evidence, and impossibility.

in reply to Anthony

@abucci Because we can study its properties and predict how its chemical composition will change over time.

How is AI safety a grift, exactly? If anything, AI companies are incentivized into avoiding AI safety research.

in reply to Anthony

@abucci What do you mean? It is clearly an answer to your question, directly related to the topic, and indirectly answering why an AI could learn to deceive (i.e. because we can observe the emergent behaviour).

Or do you mean that you don't have a counterargument?

in reply to Piko Starsider

> Because we can study its properties and predict how its chemical composition will change over time.

is not an answer to the question I posed. It is a non-answer.

in reply to Anthony

@abucci You asked "How do you know a jar of mustard cannot learn to deceive?" and I gave an answer implying that we cannot observe any behaviour that suggests the possibility of deception on the part of the jar of mustard. We cannot see patterns that correspond to any input stimuli.

If that's a non-answer, what does an answer look like?

in reply to Piko Starsider

It is a non-answer because you have not indicated how you would be able to observe this about a computer program. The context was your fears about computer programs. What is it about computer programs that lets you discern their potential ability to deceive, compared to a jar of mustard (or anything else)?
in reply to David Gerard

"Children riding bikes are impossible, for you must know the math and physics before attempting to ride." Not quite: kids ride bikes, and you presumably have consciousness too, which is also not formally set up or solved.

To my knowledge, Turing completeness is the limit of any computation the universe really does. Random code permutations, if the code replicates itself, would be its own digital evolution, often tagged as genetic algorithms.

in reply to David Gerard

I disagree.

Until you understand the biology and can define the functions, you can't answer whether or not the math exists.

Maybe the math is as simple as 42, we just don't know that.

It's still impossible because of the other two, of course.

in reply to David Gerard

AI isn't being hyped up by the fossil fuel industry for its accuracy.

Its very inaccuracy can be used to build "plausible deniability" narratives that obfuscate future investigations into election interference.

"Plausible Sentence Generators" don't need to be accurate to manipulate public sentiment & con people out of their money & their vote.
pbs.org/newshour/classroom/dai…

brennancenter.org/our-work/ana…

pbs.org/newshour/show/how-russ…

cigionline.org/articles/then-a…

pbs.org/video/real-or-not-real…

in reply to David Gerard

thanks for sharing. One of the best articles I've read on the subject, if only for the concept of "pre-wrong"
in reply to David Gerard

@emory
This is a brilliant read. Thank you.

I like how you point out the alchemists weren’t actually wrong either. They just didn’t have particle accelerators. We have since proven you can make lead into gold. It’s just not economical.

in reply to David Gerard

Everything is fuzzy. What you perceive is not distinct or sharply defined; you may think it is, but it is not. Anti-aliasing exists in graphics cards to make things seem round, but they are not. It's all just an illusion.

In the same vein, so is life: we live on a round planet in our solar system, but if you zoom in it is not round at all; it has mountains, valleys, rivers, seas, and so on. Life is fuzzy, and so is intelligence, but now and then, with a little ingenuity, you can perform illusions to make things seem real.

in reply to David Gerard

my problem with this article is the following: the fundamental assumption is that because we don't (and possibly cannot) know how a neuron works, we cannot build AGI. However, a recurring pattern in statistical mechanics / large systems is that the macroscopic behaviour depends on the microscopic properties only at a quantitative level; the qualitative behaviour often doesn't depend on the microstructure at all. 1/2
in reply to David Gerard

"EDIT: christ this post is turning out to be a moron magnet"

Just like AGI!

in reply to David Gerard

What I've studied is that it requires mathematics that we've only hinted at in the realm of infinite cardinals, which digital computers are inherently incapable of dealing with; they can only approximate some of it, at best.

The human brain, if we take that as intelligence, because that's the only 'true' intelligence we know, deals with both the discrete and the continuous. Digital computers can only approximate the continuous as discrete numbers of a very specific resolution (floats aren't really reals; they're a discrete number of bits of a finite, relatively small resolution).

The realm of ideas and concepts that modern computers can't access is inherently vast and unreachable for today's technology.
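
(To make the resolution point concrete, a quick Python illustration, standard library only, of the gaps between representable doubles:)

```python
# Floats are a finite set of bit patterns, so between any two representable
# values there is a gap that no "continuous" quantity can occupy.
import math
import sys

print(sys.float_info.epsilon)   # spacing just above 1.0: ~2.22e-16
print(math.ulp(1.0))            # the same gap, asked directly
print(math.ulp(1e16))           # the gaps grow with magnitude: 2.0
print(1e16 + 1 == 1e16)         # True: adding 1 falls into the gap
print(0.1 + 0.2 == 0.3)         # False: none of these are exactly representable
```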

in reply to @haitchfive

By way of a metaphor, if continuous thought is a curve, the approximations a digital computer can handle are control points of a Bézier curve.
That thought can perhaps discuss the slope and inclination of a beautiful curve. But that discrete thought can never be the curve itself.
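
(If you want the metaphor in code, here is a toy Python sketch, purely illustrative, of sampling a cubic Bézier from its four control points; the samples stand in for the curve, but they are not the curve:)

```python
# Evaluate a cubic Bézier curve at discrete parameter values.
# The four control points define the curve; the sampled points only approximate it.

def cubic_bezier(p0, p1, p2, p3, t):
    """Bernstein-form evaluation of one point on the curve at parameter t."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

control = [(0, 0), (0, 1), (1, 1), (1, 0)]            # the "discrete thought"
samples = [cubic_bezier(*control, t=i / 10) for i in range(11)]
print(samples[:3])   # finitely many points standing in for a continuous curve
```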
in reply to @haitchfive

@haitchfive pretty sure Nyquist disproved this mathematically for practical purposes, but I'm sure i can't stop you going off
in reply to David Gerard

No you can't because you haven't begun to understand what I'm talking about.
in reply to @haitchfive

Nyquist doesn’t really address the objection. It shows that certain continuous signals can be reconstructed from discrete samples under strict conditions. But the claim here isn’t about bandlimited signals — it’s about cognition and thought. Human intelligence seems to involve structures that aren’t reducible to neatly sampleable functions, especially when you bring in infinities, self-reference, and biochemical dynamics.

Approximations can be powerful, but approximation is not identity. A Bézier control net isn’t the curve. Saying “for practical purposes” is a move into engineering sufficiency, not a rebuttal of the point that digital computers may be ontologically limited in ways brains are not.
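
(To be clear about what Nyquist does give you, a rough numpy sketch of Whittaker-Shannon reconstruction, my own toy example with an ideally bandlimited signal and a finite run of samples; those idealizations are exactly the "strict conditions" I mean:)

```python
# Rough sketch of Whittaker-Shannon reconstruction: a bandlimited signal,
# sampled above twice its highest frequency, rebuilt from its samples alone.
import numpy as np

f_max = 3.0                          # highest frequency in the signal (Hz)
fs = 10.0                            # sampling rate, comfortably above 2 * f_max
T = 1.0 / fs

n = np.arange(0, 40)                 # sample indices
samples = np.sin(2 * np.pi * f_max * n * T) + 0.5 * np.sin(2 * np.pi * 1.0 * n * T)

def reconstruct(t):
    """Whittaker-Shannon interpolation from the discrete samples."""
    return np.sum(samples * np.sinc((t - n * T) / T))

t0 = 19.5 * T                        # a point midway between two samples
true_value = np.sin(2 * np.pi * f_max * t0) + 0.5 * np.sin(2 * np.pi * 1.0 * t0)
print(true_value, reconstruct(t0))   # close, up to truncation of the infinite sum
```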

in reply to @haitchfive

This is close to what Penrose argued: approximation and reconstruction don’t capture what’s actually going on in cognition. Nyquist solves an engineering problem, but not the ontological gap.
in reply to @haitchfive

Nyquist shows you can model a certain class of continuous signals with discrete samples. But a model is not the thing itself. A simulation of fluid isn’t wet, a simulated flame doesn’t burn, and a Bézier net isn’t the curve. My claim (and I believe Penrose's as well, although for different reasons) is that intelligence belongs to the latter category: the thing itself, not merely its model. Approximations may be useful, but they don’t collapse the ontological gap.
in reply to @haitchfive

@haitchfive i think if you're using Penrose as your reference you're one of the magnetised morons referred to in the edit, bye now
in reply to David Gerard

I think a better word would be "intractable" or "improbable". "Impossible" suggests there is a formal proof that it cannot be done, the way it is "impossible" to make a perpetual motion machine but merely "intractable" to decrypt a quantum-safe cipher.
in reply to uncouple8720

@uncouple8720 differences that don't make a difference are not any sort of useful contribution
in reply to David Gerard

I find the alchemy comparison interesting because it turns out it *is* possible to turn lead into gold via fission and fusion (or helium into gold, etc.). It requires conditions we understand but can't currently replicate, and even if we could replicate them it would be so energy-intensive it wouldn't be remotely worthwhile. Still, we know it's possible. The alchemists simply didn't get far enough in their understanding of the materials.

We definitely don’t understand what makes a mind, so it’s currently impossible for us to make one intentionally. Current efforts directed that way are guaranteed to fail because they’re scaling up things which we know aren’t sufficient to create a mind.

I suspect that what makes a mind is *knowable*, though.

in reply to David Gerard

it's not that the math doesn't exist, because it does. But we don't know what the math should be. And there are several incredibly plausible models of how the brain works, and when we model them, all of those ideas work. So that tells us there may not be a single way the brain operates. And with more and more neuropsychiatry being done on autism, we're seeing that may be the case.