> When I say AGI is impossible, I mean: it requires mathematics that doesn't exist to model biology we don't understand to implement functions nobody can define. That's "impossible" in any practical sense.
fluxus.io/article/alchemy-2-el…
EDIT: christ this post is turning out to be a moron magnet
Alchemy 2: Electric Boogaloo
Why the dream of AGI rests on undiscovered mathematics, biochemical hand-waving, and Silicon Valley's accidental religion.
Django Beatty (Fluxus - expert AWS consulting services)
Erebus
in reply to David Gerard
Benedikt Neuzweig
in reply to Erebus
@Erebus_Amauro the term intelligence implies a philosophical concept founded in metaphysics. As such, it will always be impossible to construct an intelligent machine without implying that it has properties analogous to those of a human being.
Therefore it is not a question of science, of physics and biology, but a question of individual choice and social consensus whether to attribute intelligence (and ultimately rights and responsibilities) to the machine.
David Gerard
in reply to Benedikt Neuzweig
historically, "intelligence" denotes a social concept founded in colonial preconceptions of the inherent superiority of the white male. You might think that's a stretch, but the entire history is race science, and IQ test calibrations were tweaked when they gave unacceptable outputs, like women or Kenyans coming out smarter.
the more I look, the more corrupt the entire idea complex denoted by "intelligence" is, in theory and practice.
it's not a coincidence that the artificial intelligence endeavour is also riddled with blatant race scientists
crab
in reply to David Gerard
and even this blog post is making some very dubious claims, such as implying that LLMs have solved machine translation or machine summarization, or can write functional code.
smells like boosterism
Guill.Jones, Honorary Canadian
in reply to David Gerard
Midnight Spire Games
in reply to David Gerard
Snippety Snap (she/her)
in reply to David Gerard
AGI = Artificial General Intelligence, in case this helps anyone else besides me.
I try to do for initialisms and acronyms what alt text does for images.
As a USian, I immediately interpreted AGI as "adjusted gross income," such as the IRS uses.
"Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks."
Cy
in reply to David Gerard
I mean, nobody understood fire. AGI is possible, but the techno cultists preaching about how it will elevate to a superintelligence vastly superior to all of humanity are delusional. If we had complex enough computers, they could emulate what a brain does, and it'd be as generally intelligent as any other brain.
The cost of computer complexity is astronomical, so we can't even come close to the number of artificial neurons we'd need. And it'd be too complex to program, so we'd have to use heuristic training to try and mold it into the program we want. But it is possible. If I were you, I'd just get someone pregnant; it's easier.
Radio Free Trumpistan
in reply to Cy
Ah. Well, Cy, techbros do have one malady in common: delusions of grandeur.
RootWyrm 🇺🇦
in reply to David Gerard
I can and often do break it down even more simply than that.
Intelligence very obviously requires three states: yes, no, and maybe.
Now, how exactly are you going to implement 'maybe' in a binary system? You aren't. You can't.
Radio Free Trumpistan
in reply to RootWyrm 🇺🇦
Ahhhh, a slight correction is in order. Boolean algebra doesn't really accommodate the "maybe" state, but solid-state logic does: its early name was "toggle state". Today's solid state does a better job of accommodating it.
RootWyrm 🇺🇦
in reply to Radio Free Trumpistan
@claralistensprechen5th yeah, to be clear, yes|no|maybe is a gross abstraction and oversimplification. Brains and thought aren't boolean things. If they were, there would be no room for uncertainty!
But we really have no idea how many possible states there are exactly, other than "more than two." Could be 3, could be 30,000. (The 30k is way more likely.)
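As an aside on the yes/no/maybe point being debated here, a minimal Python sketch of Kleene's three-valued logic, showing one conventional way a "maybe" value can be encoded on top of ordinary binary hardware (the names and encoding are illustrative, not anyone's actual proposal):

```python
# Kleene's strong three-valued logic, encoded with ordinary integers.
# "Maybe" is just a third symbol; the hardware underneath stays binary.
from enum import Enum

class K3(Enum):
    NO = 0
    MAYBE = 1
    YES = 2

def k3_and(a: K3, b: K3) -> K3:
    # AND takes the minimum certainty: NO < MAYBE < YES
    return K3(min(a.value, b.value))

def k3_or(a: K3, b: K3) -> K3:
    # OR takes the maximum certainty
    return K3(max(a.value, b.value))

def k3_not(a: K3) -> K3:
    # NOT swaps YES and NO, leaves MAYBE alone
    return K3(2 - a.value)

if __name__ == "__main__":
    print(k3_and(K3.YES, K3.MAYBE))  # K3.MAYBE
    print(k3_or(K3.NO, K3.MAYBE))    # K3.MAYBE
    print(k3_not(K3.MAYBE))          # K3.MAYBE
```

Whether a lookup table like this captures anything about how brains hold uncertainty is, of course, exactly what's being argued in this subthread.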
John Anderson
in reply to David Gerard
BrianKrebs
in reply to David Gerard
mtconleyuk
in reply to BrianKrebs
It’s absolutely required reading, and I push it whenever I can. Sadly, people seem not to be interested, no doubt feeling it’s another one of ‘those’ kinds of books.
Oren Levine
in reply to BrianKrebs
X Man: The Elon Musk Origin Story
Pushkin Industries
Sean Reed
in reply to BrianKrebs
David Gerard
in reply to Sean Reed
Sasha
in reply to David Gerard
fancysandwiches
in reply to David Gerard
David Gerard
in reply to fancysandwiches
fancysandwiches
in reply to David Gerard
Endor Nim
in reply to David Gerard
Interesting. But, given how capable AI is at coding, should anyone bother to learn to code anymore? Is it pointless teaching yourself to code in Python, for example? Or any other language?
(Update: Thanks for all the great answers. It’s all very interesting as a phenomenon. I’m only really interested in programming in terms of its use in doing maths. Moreover, it could come in handy one day if I’m stuck outside the pod door and HAL won’t let me in!)
Avner
in reply to Endor Nim
Avner
in reply to Avner
David Gerard
in reply to Avner
Major Denis Bloodnok
in reply to Endor Nim
Leafy Greens
in reply to Major Denis Bloodnok
David Gerard
in reply to Leafy Greens
Exindiv
in reply to David Gerard
Wow... the 80s on repeat.
Everything old is new again.
Paul Walker
in reply to David Gerard
I would hesitate to say impossible, but I don’t think it’s going to be done by trying to copy the human brain.
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” - ACC
David Gerard
in reply to Paul Walker
Piko Starsider
in reply to David Gerard
If you define "AGI" as "whatever imitates the human brain in every single way possible" then I agree. It's basically impossible.
But I don't agree that an artificial superintelligence is impossible. I agree that the biggest roadblock is "A system that actually learns while running" but I don't agree with the other 3 points. You don't need to define intelligence to create a system that eventually is able to predict and prevent humans from stopping it from achieving its goals, whatever they may be. It's a very real risk that must be considered.
David Gerard
in reply to Piko Starsider
Piko Starsider
in reply to David Gerard
The article makes the argument that you can't keep scaling up infinitely, and I agree. But consider: a recent model that can run on my potato phone outperforms ChatGPT 3.5 (175B) on everything except encyclopedic memory. Sometimes you don't need to scale up. In my opinion the big frontier models are not really getting any closer to super-intelligence because of inherent limits with LLMs, and the bubble will pop long before they figure out how to overcome those limits. But the better-furnace fallacy doesn't exclude alternative architectures which can do similar tasks with a much smaller amount of energy. There are already many papers proposing alternatives to transformers, some of which can modify their own weights to have memory. It's too early to say whether any of those will amount to something.
One thing is certain: tech bros are wrong.
David Gerard
in reply to Piko Starsider
Anthony
in reply to Piko Starsider
You appear to be confusing Knightian uncertainty with risk. Risk is something that can be quantified. It's what insurance companies trade in. Knightian uncertainty cannot be quantified. Since you are putting aside defining what this "risk" is supposed to be about ("you don't need to define intelligence"), it cannot possibly be a "very real risk". For one thing, because it literally is not real!
Granted, most of the AI shills and xrisk barkers out there are either ignorantly or purposely conflating risk and (Knightian) uncertainty, probably because it serves their cause and makes their sci fi stories sound more exciting. No one is going to jump on the "We have no clue what intelligence is or how to synthesize it or whether that would be a dangerous thing to do. In fact we don't even know when we might know any of that" bandwagon. So confusion is understandable.
@davidgerard@circumstances.run
Piko Starsider
in reply to Anthony
@abucci Maybe I'm confusing terms, but keep in mind that you don't know what you don't know. I've been interested in AI safety since long before the AI craze. I don't think that transformer-based LLMs will pose a risk in the future, but LLMs (possibly of a different architecture) will be a piece of the puzzle of a system that we possibly could not control. And it's a very different type of risk. Most other risks are recoverable. A nuclear catastrophe? Society will still exist and it will recover. A runaway AI that gets into power and we cannot control? It's game over. At the very least, lifelong dictators end up dying of old age. But having a dictator that can clone itself is a different story.
You don't need consciousness in a machine for it to be capable of manipulating people to reach the keys of the realm.
Anthony
in reply to Piko Starsider
I'll say it again: it's not a risk if you cannot quantify it. That's what the word "risk" is usually taken to mean.
When you say such and such is something that we possibly could not control, what's your basis? I assert that we possibly could control it. End of conversation forever, because it boils down to nothing more than a clash of two meaningless, evidence-free assertions that obligate no one to assign them any weight. Another "risk" of this nature is that a jar of mustard in your fridge goes out of control and kills everyone. This "risk" has exactly the same weight as the "risk" that "AI" does. You don't see this?
Radio Free Trumpistan
in reply to Anthony
@Anthony @Piko Starsider
A risk you can't quantify doesn't nullify the fact that it's a risk. Consider the mischaracterization contained in the term "speed of light", where "light" is the narrow band that the human eye can detect, and redefining it later was required. Just because you can't quantify something doesn't mean there's nothing there.
Evidence-free assertions fall into the realm of philosophy. Evidence falls into the realm of science--anybody conflating the two has a religion problem.
Viewed as computation, the DNA/RNA operation of biology is that of programming memory. A caterpillar doesn't have more than a ganglion for brains, and yet it can, for the most part, execute its dietary, pupation, and adulthood programming impeccably. It can also execute adaptation semi-impeccably, and in both examples the variants are likely to simply perish while the successful versions adapt and replicate.
And I know of human beings who are outsmarted by insects. THERE's intelligence for you.
Piko Starsider
in reply to Anthony
Anthony
in reply to Piko Starsider
How do you know a jar of mustard cannot learn to deceive?
AI safety is not a real field of study; it's a grift, and it will steer you wrong. xrisk is an application of the inductive disjunctive fallacy. It conjures scary scenarios out of thin air using the failure of Bayesian inference to distinguish between lack of evidence, very low evidence, and impossibility.
Piko Starsider
in reply to Anthony
@abucci Because we can study its properties and predict how its chemical composition will change over time.
How is AI safety a grift, exactly? If anything, AI companies are incentivized into avoiding AI safety research.
Anthony
in reply to Piko Starsider
Piko Starsider
in reply to Anthony
@abucci What do you mean? It is clearly an answer to your question, directly related to the topic, and indirectly answering why an AI could learn to deceive (i.e. because we can observe the emerging behaviour).
Or do you mean that you don't have a counterargument?
Anthony
in reply to Piko Starsider
That is not an answer to the question I posed. It is a non-answer.
ttoocs
in reply to David Gerard
Children riding bikes are impossible, for you must know the math and physics before attempting to ride. - Not quite: kids ride bikes, and you presumably have consciousness too, also not set up or solved formally.
To my knowledge, Turing-complete is the limit of any computation the universe really does. Random code permutations, if the code replicates itself, will be its own digital evolution, often tagged as genetic algorithms.
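For readers who haven't met the term, here is a toy genetic algorithm in Python along the lines gestured at above: random mutation plus selection pressure evolving bitstrings toward an arbitrary target. It is purely illustrative, with made-up parameters, and makes no claim that this scales to anything mind-like.

```python
import random

# Toy genetic algorithm: mutate bitstrings, keep the fittest, repeat.
TARGET = [1] * 20  # arbitrary goal: a string of twenty 1s

def fitness(genome):
    # Count positions that match the target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the top half, refill with mutated copies of the survivors.
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

print(generation, fitness(population[0]))
```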
Leeloo
in reply to David Gerard
I disagree.
Until you understand the biology and can define the functions, you can't answer whether or not the math exists.
Maybe the math is as simple as 42, we just don't know that.
It's still impossible because of the other two, of course.
Nicole Parsons
in reply to David Gerard
AI isn't being hyped up by the fossil fuel industry for its accuracy.
Its very inaccuracy can be used as "plausible deniability" narratives to obfuscate future investigations into election interference.
"Plausible Sentence Generators" don't need to be accurate to manipulate public sentiment & con people out of their money & their vote.
pbs.org/newshour/classroom/dai…
brennancenter.org/our-work/ana…
pbs.org/newshour/show/how-russ…
cigionline.org/articles/then-a…
pbs.org/video/real-or-not-real…
How Russia is using artificial intelligence to interfere in elections
Simon Ostrovsky (PBS News)
David Gerard
in reply to Nicole Parsons
Oren Levine
in reply to David Gerard
The Animal and the Machine
in reply to David Gerard
@emory
This is a brilliant read. Thank you.
I like how you point out the alchemists weren’t actually wrong either. They just didn’t have particle accelerators. We have since proven you can make lead into gold. It’s just not economical.
David "Dave" Treloar
in reply to David Gerard
Everything is fuzzy. What you perceive is not distinct or fully sharply defined; you may think it is, but it is not. Anti-aliasing exists in graphics cards to make things seem round, but they are not. It's all just an illusion.
In this same vein, so is life. We live on a round planet in our solar system, but if you zoom in it is not round at all: it has mountains, valleys, rivers, seas, and so on. Life is fuzzy, and so is intelligence, but now and then, if you have a little ingenuity, you can perform illusions to make things seem real.
gri573
in reply to David Gerard
Poloniousmonk
in reply to David Gerard
"EDIT: christ this post is turning out to be a moron magnet"
Just like AGI!
@haitchfive
in reply to David Gerard
What I've studied is that it requires mathematics that we've only hinted at in the realm of infinite cardinals, which digital computers are inherently incapable of dealing with; they can only approximate some of it at best.
The human brain, if we take that as intelligence, because that's the only 'true' intelligence we know, deals with both the discrete and the continuous. Digital computers can only approximate the continuous as discrete numbers of a very specific resolution (floats aren't really reals; they're a discrete number of bits of a finite, relatively small resolution).
The realm of ideas and concepts that modern computers can't access is inherently vast and unreachable for today's technology.
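The finite-resolution point about floats can be made concrete with a few lines of standard-library Python (the specific numbers are just examples):

```python
import math

# 64-bit doubles have a measurable gap between adjacent representable values (the ULP).
print(math.ulp(1.0))    # ~2.22e-16: the spacing just above 1.0
print(math.ulp(1e17))   # 16.0: the spacing grows with magnitude

# Consequences of that discreteness:
print(0.1 + 0.2 == 0.3)    # False: 0.3 isn't exactly representable
print(1e17 + 1.0 == 1e17)  # True: adding 1 falls below the local resolution
```

None of this settles whether brains actually exploit the continuum; it only shows that IEEE floats demonstrably don't give you it.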
@haitchfive
in reply to @haitchfive
That thought can perhaps discuss the slope and inclination of a beautiful curve. But that discrete thought can never be the curve itself.
David Gerard
in reply to @haitchfive
@haitchfive
in reply to David Gerard
@haitchfive
in reply to @haitchfive
Nyquist doesn’t really address the objection. It shows that certain continuous signals can be reconstructed from discrete samples under strict conditions. But the claim here isn’t about bandlimited signals — it’s about cognition and thought. Human intelligence seems to involve structures that aren’t reducible to neatly sampleable functions, especially when you bring in infinities, self-reference, and biochemical dynamics.
Approximations can be powerful, but approximation is not identity. A Bézier control net isn’t the curve. Saying “for practical purposes” is a move into engineering sufficiency, not a rebuttal of the point that digital computers may be ontologically limited in ways brains are not.
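For anyone who hasn't met the sampling theorem being invoked here, a small numpy sketch of what it does give you under its strict conditions: reconstruction of a bandlimited signal from discrete samples. The 3 Hz signal and 10 Hz sample rate are arbitrary example values.

```python
import numpy as np

# Whittaker-Shannon reconstruction: a bandlimited signal sampled above the
# Nyquist rate can be rebuilt from its samples by summing shifted sinc kernels.
fs = 10.0                                  # sample rate, Hz (> 2 x 3 Hz)
t_samples = np.arange(0.0, 2.0, 1.0 / fs)
x_samples = np.sin(2 * np.pi * 3.0 * t_samples)

t_fine = np.linspace(0.0, 2.0, 2000)
x_recon = np.array([np.sum(x_samples * np.sinc(fs * (t - t_samples)))
                    for t in t_fine])
x_true = np.sin(2 * np.pi * 3.0 * t_fine)

# Error is small except near the ends, where the finite sample window bites.
print(np.max(np.abs(x_recon - x_true)))
```

Which is also the limit of the argument: the theorem covers bandlimited signals in the ideal infinite-sample limit, not cognition.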
@haitchfive
in reply to @haitchfive
@haitchfive
in reply to @haitchfive
David Gerard
in reply to @haitchfive
uncouple8720
in reply to David Gerard
David Gerard
in reply to uncouple8720
The Dubster
in reply to David Gerard
David Gerard
in reply to The Dubster
Zimmie
in reply to David Gerard
I find the alchemy comparison interesting because it turns out it *is* possible to turn lead into gold via fission and fusion (or helium into gold, etc.). It requires conditions we understand but can’t currently replicate, and even if we could replicate them it would be so energy-intensive it won’t be remotely worthwhile. Still, we know it’s possible. The alchemists simply didn’t get far enough in their understanding of the materials.
We definitely don’t understand what makes a mind, so it’s currently impossible for us to make one intentionally. Current efforts directed that way are guaranteed to fail because they’re scaling up things which we know aren’t sufficient to create a mind.
I suspect that what makes a mind is *knowable*, though.