The LLM discourse on the Fediverse has really irked me these last few days. Refusing to read writing made with the use of LLMs, and refusing to give time to writers who use, promote or justify the use of LLMs, is not purity culture; it's a boycott. It's a political act of withdrawing my time, resources and support from something that I find deeply morally wrong. It's protest. I have a choice and I refuse.
LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.
Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.
xs4me2
in reply to Reading Recluse • • •LLMs are not an expression of speech or creativity; they simply digest, explore and reorder available information. They are a tool, and can be useful for digesting and exploring information at great speed, but essentially they are no more than that.
For anything involving opinion, creativity, art and commentary, I will be looking to human expression, always.
The problem is that society will be confronted with loads of LLM nonsense and disinformation in due time. I'm seeing it online more and more.
Liam Proven
in reply to xs4me2 • • •> can be useful to digest and explore information at great speed
Nope. Still wrong. This is in fact something they are extremely and *dangerously* bad at.
xs4me2
in reply to Liam Proven • • •@lproven
Well, as I said, it is a tool; a hammer is not right or wrong. It can be used rightly or wrongly.
As a domain expert, I use LLMs in my work, but I will always judge and validate whether the output is right... I have indeed seen colleagues use them outside their zone of expertise, where I had to tell them: yes, what the LLM said is right, but not in this context. The real problem is that an LLM will never tell you the context, or the probability that what it is telling you is correct.
dynamite_ready
in reply to Liam Proven • • •@lproven @xs4me2
For generating content of any kind, I think there's a reckoning to come. Especially in the 'agentic' space.
But for Information Retrieval, LLMs are great, tbh... I'd argue that also includes those far out stories about prompts leading to new scientific theories, or mathematical proofs.
The tool is a big part of that, but it's the user ('operator'?) that writes the prompts, guides the outcomes, and validates them.
That's a worthy advance.
Hannah Steenbock
in reply to dynamite_ready • • •@dynamite_ready
The problem is that LLMs just make things up. There are no new discoveries, and there is no accurate information retrieval. But people don't notice, because they lack the expertise; they lack the ability to check.
LLMs cannot be trusted with anything. They are a sheer waste of our world's resources.
@lproven @xs4me2 @reading_recluse
xs4me2
in reply to dynamite_ready • • •It is the user and their skills, indeed. A hammer can be used skillfully or badly...
Liam Proven
in reply to xs4me2 • • •@xs4me2 @dynamite_ready But it can't be used for brain surgery.
No, this is not a skills issue. It is based on a profound misunderstanding. No, they are not good search tools. No, they are not good for research or learning, because they work only and entirely by *making stuff up*, and if you're learning then you're not an expert and you can't tell true from false.
xs4me2
in reply to Liam Proven • • •@lproven @dynamite_ready
In my opinion you are incorrect here; a user is always responsible for critically digesting the assumed truth as they observe it, especially with tools. There is no substitute for critical thinking. And there never will be.
Truth and social surroundings are infinitely more complex than analysing a game of chess.
xs4me2
in reply to xs4me2 • • •LLMs do not make stuff up per se; they use data, including wrong data, and therein lies the danger: they cannot referee what is right and what is wrong.
Ben Tasker
in reply to xs4me2 • • •@xs4me2 @lproven @dynamite_ready
What you're essentially suggesting here, is that LLMs are only good for consuming information if the user either already has the knowledge to judge output (in which case, why are they asking?) or spends time to verify the claims that the LLM makes (in which case, why bother asking the LLM?).
I've seen them make some pretty important mistakes, including suggesting that a Director who wasn't on the call being summarised had authorised something.
xs4me2
in reply to Ben Tasker • • •I am suggesting that a competent user can indeed use tools in the right way, and only through in-depth knowledge of them. You can call that craftsmanship, experience, or simply domain knowledge.
That does not imply that tools, or LLMs, are useless, nor that they are without danger. A sharp chisel can cut off your finger. A poorly configured LLM can feed you a load of nonsense...
Liam Proven
in reply to xs4me2 • • •xs4me2
in reply to Liam Proven • • •@lproven @ben @dynamite_ready
Let us respectfully disagree then.
You are right in the sense that a lot can go wrong as I elaborated on!
Time will tell!
Phil
in reply to Liam Proven • • •Hasn't been my experience. What have you tested it with?
Even tiny models in the 4-12B range have been able to handle the things I need (though granted, not as well as the 24-30B range).
My use-case is saving my hands from typing up repetitive patterns, analyzing my journals on several angles (e.g. what's my average mood based on the wording I use in my journals, how does that relate to some medical things like migraines, etc.) and as a parrot that'll repeat my plans/ calendar to me in different words, so I can overcome my own biases easier.
I have found the available models entirely sufficient for these tasks.
Not for coding, though. Even Qwen3-Coder-Next, which is an 80B behemoth, just plain sucks at code.
Now to be clear - I'm not saying they're always accurate when I use LLMs. I'm saying that because I use them with data I type up by hand and am intrinsically familiar with, they save me time and mental effort, because spotting problems is easy.
I wouldn't use them for any subject which I'm not already well grounded in, and in that specific way, I agree with you.
But I also wouldn't say they're extremely or dangerously bad at digesting and exploring information as such. No more so than code written by juniors without supervision.
Ultimately it's on the user to ensure the tool's output meets requirements.
Anecdotally, people aren't great at processing large amounts of information either. I work in infosec, and curate a rather complex inventory/risk/audit/reporting toolkit. I pull data from over a dozen critical systems and sub-systems, networks, etc, including vast amounts of hand-written documentation, guides and explanations about how all of this works together.
I'm still the only person capable of actually using the entire toolset in concert - not even going into further development/ integrations. Others rely on Cursor/ Claude Code to use them. And that's fine by me - I'd rather have tools that get used than tools that are entirely dependent on me.
I guess my point is that in this scenario the problem isn't LLMs themselves. The problem is people who don't take time to read and understand the requirements, input and output.
(Of course, this is putting aside the ethical/ political/ economic/ ecological problems, to keep this conversation more focused on the technical merits/demerits.)
xs4me2
in reply to Phil • • •Exactly, and as always truth and reality are nuanced. I will be using it, and I will use my critical thinking (always).
Liam Proven
in reply to Phil • • •@phil @xs4me2 My current favourite paper on this:
ea.rna.nl/2024/05/27/when-chat…
xs4me2
in reply to Liam Proven • • •Phil
in reply to Liam Proven • • •1. Paper from nearly 2 years ago. A lot has changed. Not to mention the 'test' the author (can't find their name, sorry) did is pretty dumb. It's much better to use an API, where you can control the full input pipeline to ensure the vendor isn't adding hidden instructions without your knowledge.
2. I already addressed the point in my previous comment - it's on the user to verify that tools have correct output. Relying on an LLM to do the reading in one's stead is a recipe for disaster.
You haven't said anything about YOUR use-case, experience, or the tests you tried.
I'm genuinely curious, what do you imagine using an LLM is like?
The reason I ask is because a lot of the criticism and panicking (sometimes crossing into outright disrespect and bigotry) I see online comes from an assumption that using an LLM is predicated on turning off one's brain and taking the output at face value... something that we shouldn't be doing with any software anyway.
I guess put another way: I don't believe that the problems people attribute to LLMs are specific to LL
... Show more...1. Paper from nearly 2 years ago. A lot has changed. Not to mention the 'test' the author (can't find their name, sorry) did is pretty dumb. It's much better to use an API, where you can control the full input pipeline to ensure the vendor isn't adding hidden instructions without your knowledge.
2. I already addressed the point in my previous comment - it's on the user to verify that tools have correct output. Relying on an LLM to do the reading in one's stead is a recipe for disaster.
You haven't said anything about YOUR use-case, experience, or the tests you tried.
I'm genuinely curious, what do you imagine using an LLM is like?
The reason I ask is because a lot of the criticism and panicking (sometimes crossing into outright disrespect and bigotry) I see online comes from an assumption that using an LLM is predicated on turning off one's brain and taking the output at face value... something that we shouldn't be doing with any software anyway.
I guess put another way: I don't believe that the problems people attribute to LLMs are specific to LLMs. How many instances were there where management/ execs took Excel output as fact, when the formulas were set up wrong?
These statistical models are no different.
Liam Proven
in reply to Reading Recluse • • •@JRepin
Violet Madder reshared this.
Reading Recluse
in reply to Liam Proven • • •Taylor Drew
in reply to Reading Recluse • • •Reading Recluse
in reply to Taylor Drew • • •Charles Bédard
in reply to Reading Recluse • • •Daniel Gibson
in reply to Taylor Drew • • •@mollymay5000
putting middle-finger emojis in my texts because almost all other emojis indicate that it was written by an LLM
(JFC I fucking hate those texts or even technical documentation full of 🤓 ✨ 😊 🚀 and IDK what other shit)
Octavia Con Amore Succubard's Library
in reply to Taylor Drew • • •Kevin Russell
in reply to Reading Recluse • • •Yes indeed.
Brava, bravo.
Leah
in reply to Reading Recluse • • •Stephen 🌈 (he/him)
in reply to Reading Recluse • • •Papageier
in reply to Reading Recluse • • •You do wear machine-woven cloth, though, no?
Seriously: Why?
It's exploitative, the quality is mediocre, it kills jobs, it's a waste of resources, consumes vast amounts of energy, hinders creativity, destroys small businesses, forces uniformity onto people ... why wear it?
Because not doing so would be a waste of time. And time is the one resource that's (still) strictly limited for all of us. We compromise on the quality of clothing (debatable), in order to do other things we couldn't if we were still weaving cloth manually.
When mechanical weaving machines came about, the workers threw their wooden shoes, in French 'Sabot', into the machines to stop them.
All that is left of this effort is a word describing the futile attempt: Sabotage.
So protest all you like, it's just not going to get you anywhere.
social elephant in the room
in reply to Papageier • • •@papageier machine-woven cloth was answering an essential need in a profitable capitalistic way. Can we say the same about LLM?
I think it is not inevitable, but time will tell.
Johnny ‘Decimal’ Noble
in reply to social elephant in the room • • •@tseitr @papageier My problem with this framing is: who gets to decide?
Define 'essential'. Is a new generation of MacBooks 'essential'? Not really. The ones we have are amazing. But nobody's boycotting the progress being made in chip design.
But the anti-LLM crowd seem to have decided: not having LLMs is 'enough'. Having them is superfluous. They're not 'needed'.
I get the pushback. I'll never use one to write prose, because prose comes from my human heart.
But to deny their utility in the world of code generation is to be dogmatic. The vast, vast majority of code generation isn't art: it's the rote stitching together of existing pieces to make a new thing.
Claude is _much_ better at that than I am. If properly controlled by me the result is better and more secure.
So, I use Claude. Just like I use an IDE and a higher-level language and just like I deploy to an edge network run by someone else vs. standing up my own. Because doing that is better than not doing that.
skua
in reply to Johnny ‘Decimal’ Noble • • •@johnnydecimal @tseitr @papageier
"nobody's boycotting the progress being made in chip design"
[waving hand]
Over here.
We're boycotting chips that offer nothing more that we actually want or need.
Run the web browser, word processor, printer drivers, scanner drivers, network connections; do security updates. And don't make the humans waste time with the damned computers. It's a lot to ask, but new chips are not going to do this any better.
Arianna Masciolini
in reply to Johnny ‘Decimal’ Noble • • •Alex, the Hearth Fire reshared this.
Johnny ‘Decimal’ Noble
in reply to Arianna Masciolini • • •@harisont @tseitr @papageier Perhaps the difference is that my job is not, and never has been, 'professional software developer'.
My current job involves trying to help people to be more organised. As part of that, it's very helpful if I can write computer programs and websites. In that aspect of my business, I find Claude Code very useful.
It provides much the same utility as does my accountant. As a business owner I must file taxes. But it's not what I do. It's not the function I serve.
My job, arguably, is much closer to that of a writer. The _ideas_ that I present are mine, from my human brain. So I value the act of creation.
I can see how a software developer might think differently. But for that person to deny me the utility of an LLM is like me telling my accountant that they can't use Xero and that they have to enter everything by hand in a double-entry ledger.
Papageier
in reply to Johnny ‘Decimal’ Noble • • •@johnnydecimal @harisont @tseitr The 64k$ question: it's obviously a rearguard battle. Technology is advancing, Humanity is retreating. Tech has just captured a base we thought invulnerable until yesterday.
So the 128k$ question will be: is your job as a writer / creator of ideas still safe? I seriously doubt it. But what if not?
Methinks we don't need next-level AI, we need next-level economics.
Court Cantrell does not comply
in reply to Papageier • • •@papageier @johnnydecimal @harisont @tseitr As an artist and a writer, I could not agree with you more.
I don't want AI to paint pictures for me or research/figure something out for me or write novels for me. Those activities are part of what makes being human actually *fun*.
"But Courtney, what about the people who can't do those things? Why shouldn't they use AI to make up for that?"
Because we artists & writers put in hella work to learn how to do what we do....
1/2
Court Cantrell does not comply reshared this.
Court Cantrell does not comply
in reply to Court Cantrell does not comply • • •@papageier @johnnydecimal @harisont @tseitr
...Every piece of AI-gen "art" is a theft from me and people like me. Every AI-gen story, poem, novel is a theft from me and people like me. WE PUT IN THE TIME, BLOOD, SWEAT, AND TEARS (not to mention money for education and supplies) to *learn* these things -- and now people are stealing our works just because they want a shortcut *while* causing incalculable damage to the only bit of dirt any of us can survive on.
2/
Court Cantrell does not comply
in reply to Court Cantrell does not comply • • •@papageier @johnnydecimal @harisont @tseitr
I don't want AI to do *any* of our creative work or *any* of our learning for us. I want us all to do the work of learning and becoming, improving our damn skillsets and growing our damn brains.
All I want from AI is robot mice to clean my floors at night so I have more time to write and paint.
3/3
Huntn00
in reply to social elephant in the room • • •But not in a profit-driven capitalist system that relies on disenfranchising fellow citizens to make profits. And not with the haphazard manner of competitive development putting excess strain on energy and resources. $$$ is the lure, and it seems to undermine us at every turn.
MacCruiskeen
in reply to Papageier • • •b) we may still complain about the bad practices of said industry, do what we can to mitigate it, demand legislation to regulate it, and choose providers that operate more responsibly, to the degree that we can afford it. Plenty of people with the skills still make some of their own clothes. We don't have to silently accept the bad things.
eta: what the AI companies want to sell is not the clothing, but the machine to enslave the people who make the clothing.
Bwatch
in reply to Papageier • • •@papageier
You are right that there has always been protest against the mechanising of jobs. Blacksmiths, when a nail-cutting machine was invented, for example.
There is a difference here since a notable portion of its function is at the academic level.
So let's say I need to write a book report, instead of reading the book I read an LLM summary, then write and publish my report.
Bwatch
in reply to Bwatch • • •is referencing my report based on an LLM summary. This repeats until all academic value has been drained from the source material. Are we at a net gain or loss of intelligence after this happens?
@reading_recluse @papageier
Papageier
in reply to Bwatch • • •@brokenshell I honestly don't know. I had presumed LLM output to deteriorate over time, as AI output appears on the Web and is used to train NextGen AI. However, so far I stand corrected. Latest LLM versions are doing astonishingly well, and the limits are not yet in sight.
Yes, it is probably a hard experience for academics to suddenly face the same fate as simple workers (like weavers) 150 years ago. Because they always felt superior, and therefore safe? Maybe. This alone should teach us a lesson.
But the underlying truth is: if you can automate something in a disruptive manner, someone will always do it. All others have no choice: follow suit, find a niche or suffer economic death.
William Canna-bass
in reply to Bwatch • • •Violet Madder
in reply to Papageier • • •@papageier
Go back and read up on what the Luddites were actually protesting, jackass. They were not mindless technophobes.
Machine-woven cloth IN AND OF ITSELF is NOT inherently exploitative. It could have been used instead to elevate and improve the textile trade, making life easier for the workers.
Instead, the way the capitalists weaponized the tech to devalue labor was fucking evil.
Tech is not inherently good or bad. It's just a tool.
"AI" and LLMs, as they are currently being designed and deployed, are a tool being used as a WEAPON. Child-raping technofascist planetwreckers are using them to enclose the digital commons, jam any useful signals they don't control, and surveil the everloving shit out of everyone everywhere.
If we don't protest like our lives depend on it, NOW, things are going to get unimaginably and horrifyingly fucking bad.
Western Infidels
in reply to Papageier • • •sortius
in reply to Papageier • • •very normal person
in reply to Papageier • • •This is such a lazy argument: you can WEAR clothing.
No one wants to read AI-generated text, and AI images are hideous. Beyond some niche industrial cases, which are not the focus of the hyperscalers, LLMs are generally useless and a massive waste of resources. The entire industry is based on a speculative, utopian fantasy of creating an AGI that will solve all problems. It's utopian fantasy mixed with sunk-cost fallacy.
It's like saying "why eat human shit when you can now eat robot-generated shit?"
Also sabotage and worker resistance got workers everything they ever had.
okanogen VerminEnemyFromWithin
in reply to Papageier • • •Hand woven clothes are not generally superior, tho.
You know that, right? I mean, you do, right?
Hand weave a cotton t-shirt, please. Or a fleece jacket. Or tights. I would like to see that done.
LLMs are inherently racist, sexist, and reductive, because the online society they sample is racist, sexist, and reductive. It is baked in.
Alex McLean
in reply to Papageier • • •Glenn Seto
in reply to Reading Recluse • • •Our refusal to engage with or become in any way reliant on LLMs is also a conscious effort to run out the clock.
Tech giants are trying to crowbar "AI" into everything right now, because they need to make these services indispensable for society at large. That way our leaders might not have a choice but to bail out or otherwise coddle the industry once all this circular financing comes crashing down, as it inevitably must.
Shannon Prickett reshared this.
lifewithtrees
in reply to Glenn Seto • • •@glennseto totally agree.
It’s like how recipe websites became useless as they added so much filler text for SEO that they now need “jump to recipe” buttons just so you can find what you want.
The AI chatbots are like that button, except, terribly, the only reason the button is needed is all the AI slop in the first place.
And then we have to rely on it, as the entire internet becomes noise and no signal. Tech companies made the mess, then push the tool to clean it up.
Violet Madder
in reply to lifewithtrees • • •@lifewithtrees @glennseto
Selling the disease along with the "cure".
What's scary is... they're shoehorning "AI" into everything, while also manipulating hardware prices and supply so that it's becoming increasingly hard for anyone to get their hands on home computing. They're trying to make everyone depend on cloud computing, through tools that let them surveil damn near everything we say and do, while poisoning the wells of information and claiming to be our rescuers, if we only swallow the shit spewed by mindless digital oracles running on algorithms they can warp any way they like.
This shit was never supposed to make money or be useful in any direct way. It's a fascist's panopticon torment nexus wet dream.
lifewithtrees
in reply to Violet Madder • • •@violetmadder @glennseto selling the disease along with the cure is the same thing I’ve felt from religion. I guess that’s where the crossover is: it’s scams all the way down.
If what my separation from religion has taught me holds for this as well, then we don’t need to trust AI or tech, we need to throw it out and learn to trust ourselves. ❤️
Violet Madder
in reply to lifewithtrees • • •@lifewithtrees @glennseto
You got it!
This culture, the scams, the money-- it IS a religion.
We have been treating the ultra-wealthy as royalty, entitled to rule over us by divine virtue of their superior wallets. It's prosperity gospel, the golden calf. Big bucks daddy knows what he's doing, he's super smart and fancy and if we worship him he'll give us some treats etc.
All we'd have to do is stop believing in them, deprive them of our devotion, stop buying what they're selling, and most of their wealth and power would vanish overnight.
lifewithtrees
in reply to Violet Madder • • •@violetmadder @glennseto yes exactly. Society holds up a ladder that says some people are more important than others.
It’s not true. Nobody is more important than anyone else. The only thing that makes it so is that we keep holding the ladder up.
We can each stop holding it up and walk away 🙏
Cy
in reply to Violet Madder • • •You can't stop buying food, though.
CC: @lifewithtrees@mstdn.social @glennseto@mastodon.social @reading_recluse@c.im
Violet Madder
in reply to Cy • • •@cy @lifewithtrees @glennseto
That's why I said "most".
The guys with guns controlling the land and resources like food are doing it because they're getting paid, though. The less that money is worth, and the fewer of them are true believers, the harder it will be to maintain those armies.
No Way
in reply to Reading Recluse • • •Simon Dückert
in reply to Reading Recluse • • •Morten Juhl-Johansen
in reply to Reading Recluse • • •fedithom
in reply to Reading Recluse • • •Giliell
in reply to Reading Recluse • • •Steph Vee
in reply to Reading Recluse • • •Maria Langer | 📝💎🌵🛥️
in reply to Reading Recluse • • •Ruby Jones
in reply to Reading Recluse • • •Ambassador Plebeian lA∴lA∴I° ☑
in reply to Reading Recluse • • •David Coronel
in reply to Reading Recluse • • •Adrian W
in reply to Reading Recluse • • •Absolutely. LLMs are the biggest, most bloody useless con ever invented by the vacuous arseholes in charge of the tech industry.
The extra annoying thing is that there are other potential approaches to AI out there that are ultimately likely to be more useful, less destructive, and to work better (e.g. some expert systems, decision support systems, etc.). But so many folks are just playing with probabilistic horseshit generators instead.
Violet Madder
in reply to Adrian W • • •The only way anything this aggressively useless gets investment on this scale, is when it's a weapon.
luca
in reply to Reading Recluse • • •I do agree, but I'd like to add something. After all, the manipulative scheme imposed on users isn't much different from what has happened over the last twenty-something years. The companies behind it are still the same ones; almost all of them were born less than three decades ago.
LLMs have just refined the decoy, polished the deceptive honey-pot.
CaptainCoffee
in reply to Reading Recluse • • •Martin Hamilton
in reply to Reading Recluse • • •Violet Madder
in reply to Martin Hamilton • • •I think they also are having a hard time grasping the concepts of morality, ethics, or conscience in general.
Martin Hamilton
in reply to Violet Madder • • •Violet Madder
in reply to Martin Hamilton • • •I'd been trying to get rid of that one particular part of my face for a long time anyway, if the face-eating leopard would just stop there it could be so useful!
Fluffgar 🏴
in reply to Reading Recluse • • •Matti J.
in reply to Reading Recluse • • •stux⚡️
in reply to Reading Recluse • • •exactly!
As long as I can, I will resist. And to be honest, I don’t really care what people think of it
plan-A (゚ヮ゚)
in reply to Reading Recluse • — (Proud Eskimo!) •@Reading Recluse Say "corporate LLMs" and I'll agree; don't generalise to AI as a whole, an infrastructure that has existed for longer than you might think.
The debate alone gets annoying. Sure, you can state your opinions, but lately it's a hype overflow that gets on many people's nerves, mind you.
Think what you want, and not using it at all must be satisfying enough. While I agree that corporate AI is pure trash and immoral, not all AI is.
I wish you a good day ahead 😉
edit: Give others room to explore and exchange ideas on how to make it better for all, instead of shooting it down. The gun is not the weapon here; it's you pulling the trigger, the same as comparing it to a nuke or to fire.
Fergabell 😷 🌱
in reply to Reading Recluse • • •Reading Recluse
in reply to Fergabell 😷 🌱 • • •@fergabell Completely true, I fully agree.
I really dislike that most LLM-defenders in my comments right now say something like: "Well actually, in this specific case LLM usage was actually helpful for me personally, so..."
Even entertaining the thought that it's somehow useful for someone somewhere, that doesn't erase the extreme damage it's doing to the world and to us collectively, or the massive scale of exploitation keeping it all afloat.
derptron
in reply to Reading Recluse • • •@fergabell "I didn't kill him because he was crazy, I killed him because he was making sense."
Miller, The Expanse -- one of the episodes I just watched in S2.
Thing is, the LLM thing wouldn't be a thing if it weren't this puffed-up thing. Yeah, making an LLM would be costly and would burn up some GPU. But it wouldn't be this Earth-sucking thing, because it would only be applied where it's worth it.
Could be that the given situation makes that possible balance irrelevant.
Artstories
in reply to Reading Recluse • • •Ox1de
in reply to Reading Recluse • • •FOSStastic
in reply to Reading Recluse • • •I have to disagree on one thing: I've used LLMs for complex social issues I faced in real life in the past, and (in hindsight) they correctly determined that it wasn't my fault and that nothing was wrong with me. So for me, they improved my mental health in difficult times and successfully kept me from getting depressed.
So there are definitely beneficial use cases for them. But they're also very overrated, love to hallucinate, and are unable to comprehend nuance in writing.
Violet Madder
in reply to FOSStastic • • •@fosstastic
They would enthusiastically tell you it's not your fault and there's nothing wrong with you even if you're a damned axe murderer.
A glorified Furby is no substitute for therapy or peer support from actual caring, empathetic, properly trained humans.
Matt Hamilton
in reply to Reading Recluse • • •Bredroll
in reply to Reading Recluse • • •i feel pretty much the same, suffice to say, it's not the concept of LLMs that i'm against; rather it is the theft of material for training, the impunity of that theft, and the determination to disclaim any possibility of giving fair payment or recognition to those whose work makes up the stolen data.
on top, i really really dislike the cultish hype and forced use going on
Frederic
in reply to Reading Recluse • • •ΞVΞ🌸
in reply to Reading Recluse • • •@reading_recluse
There shouldn’t even be a discourse. It’s a no-brainer that the tradeoffs aren’t worth it. Sadly there’s no way to completely avoid using it. Doctors’ offices and every business and their mom are using it, even if you’re not aware they are.
What pisses me off is the job losses behind it. It’s like hiring a 3rd grader to do a job a human adult can do more efficiently. I’m pretty sure it costs a whole lot more, financially and ecologically, to maintain the job with a robot than to just hire a human. Businesses really go hard on not wanting to pay people, even to the point of not making sense.
Robotistry
in reply to Reading Recluse • • •There are fundamental differences between
1. "the person who had the idea was bad, so I will not touch things they tainted with their badness" (purity argument)
2. "the tool was created using bad (or catastrophic) means, so the ends don't matter" (purity)
3. "the tool creates bad ends every time it is used, so the means don't matter" (function)
4. "the tool creates bad ends when used inappropriately" (define "appropriate")
5. "the tool is sometimes helpful under limited circumstances". (define "limited")
and they can all be true.
Right now I'm somewhere between 2 and 3 - the means are bad but it may be possible to avoid adding to them,
and the bad ends are hard to quantify.
But as someone whose ability to code is almost completely gone due to long covid, but who sees a need for unprofitable software tools that no-one else will build, I may eventually end up in 5, supervising an LLM out of desperation.
For now I'm continuing to try to avoid LLM-generated content.
Orb 2069
in reply to Robotistry • • •Good luck with whatever the clankers define as 'appropriate', since - to date - they seem to have settled on 'Whatever I can get away with, and then some.'
Robotistry
in reply to Orb 2069 • • •@Orb2069 I'm explicitly not passing judgement in the above toot, just describing the categories I see.
I'm also not judging people like @pluralistic for their choices: he has consistently and honestly engaged with identifying reasonable definitions for "limited" and "appropriate", and he seems to be attempting to limit both the personal and external harms of his choices given a sunk means cost.
I do pass judgement on the people who ignore (or celebrate) the bad means and the bad ends. And I pass judgement on the people whose definitions of "appropriate" and "limited" ignore (or celebrate) the costs and external harms of their choices.
thesofafox
in reply to Reading Recluse • • •I don't generally like LLMs at all and in the creative field especially I think they are an absolute disaster.
But, that being said, Pandora's Box has been opened. Companies are using it as a way to get lots of ignorant investors on board right now, and people who don't give a shit how it works will always be impressed.
So sure, we can be in our corner tucking ourselves away from where the world is headed. Or we can push for these things to be heavily regulated and more environmentally friendly. I mean, just look at how promising something like this is to save power and maybe get us some of our PC components back on the market:
taalas.com/the-path-to-ubiquit…
10x the performance, 20x cheaper, and another 10x less power consumption than current methods.
TL;DR let's push for this stuff to go in a better direction rather than hide from it.
sidereal
in reply to thesofafox • • •thesofafox
in reply to sidereal • • •it's mainly unprofitable because of the vast amounts of power and space the big players' data centers take up. That chip I linked would make this a profitable venture.
But aside from that again, more ignorant people love this shit so there's now a demand. You either make it less harmful so those people don't destroy our planet, or you insist the technology as a whole shouldn't go anywhere and we stagnate at this horrible stage for a very long time.
sidereal
in reply to thesofafox • • •@thesofafox Maybe. The tech industry hasn’t seemed to care very much about satisfying consumer demand ever before. I think AI is about pumping tech stocks to stave off another bubble bursting.
I don’t think this tech requires any resistance, personally, because it’s already failing. There’s no “advancing through it”: our computer systems were more advanced (worked better, faster, and more securely) ten years ago, when fewer people were using this stuff.
sidereal
in reply to sidereal • • •Violet Madder
in reply to sidereal • • •@sidereal @thesofafox
They don't actually give a shit about its profitability. Making money was never the point.
Control is the point. Surveillance, signal jamming, enclosing the digital commons. Destroying anything useful or free about the internet, poisoning the wells of information, trapping everyone.
thesofafox
in reply to Violet Madder • • •all the stuff you named off that AI is purposed for are things that have been happening long before LLMs and genAI were a thing for the public to consume. And in some cases, maybe even more efficiently without AI.
I don't buy this at all. If large AI companies were forced to stop operating tomorrow nothing would change. The same shit would happen with a different face to it.
Violet Madder
in reply to thesofafox • • •@thesofafox @sidereal
The industrial generation of plausibly human-sounding bullshit on this scale would not be possible without these tools. Already more than half of the internet's content is slop. Burning the library at Alexandria is one thing; silently running all the books through a funhouse filter that distorts what they say is quite another.
The analysis of writing and video footage etc on this scale is not possible without these tools. They're using it to digest all available data and "summarize" who might be an enemy of the state and target them for much, much worse things than advertising.
Of course they'll use every resource at their disposal to build their hellish panopticon no matter what, but the giant data centers ramp it up to a level that would make Goebbels faint.
grepe
in reply to Reading Recluse • • •Lee 🏖️
in reply to Reading Recluse • • •Huntn00
in reply to Reading Recluse • • •Helge Wurst
in reply to Reading Recluse • • •I like to remember when we realized that all the nice imported surveillance cameras were suddenly phoning home and that it would be really expensive to remove them again from all our infrastructure, which is when the wonderful term "digital asbestos" was brought up in 2022:
bbc.com/news/uk-politics-63749…
With AI, I mean "artifice infliction", it's much the same. It's the new wonder material that gets put into everything and then we'll have to "live with it".
reddit.com/r/Suomi/comments/1k…
Violet Madder
in reply to Helge Wurst • • •And it's no accident this time.
Ω 🌍 Gus Posey
in reply to Reading Recluse • • •DeadPresident
in reply to Reading Recluse • • •Orb 2069
in reply to Reading Recluse • • •Mx. Alba
in reply to Reading Recluse • • •Jerk
in reply to Reading Recluse • • •This way, I can communicate with people, I userwise wouldn't be able to and give answers, that are meaningful for the receiver.
That's my only use case for LLMs.
Gabriela Roßbach
in reply to Reading Recluse • • •dypsis
in reply to Reading Recluse • • •beem
in reply to Reading Recluse • • •I am of the same stance and the people who are justifying the use and complete support of it have not yet realise that they are in fact at a loss since they subscribed to it personally.
I hope they all realise, big tech CEOs have been brainwashed to think AI will increase output by x10 and its by far one of the fastest push for enterprise that I've ever seen in my career.
None of this is a coincidence. People need to wake up imo
commvenus
in reply to Reading Recluse • • •love this post.
im wondering.. canva has put in a lot of ai tools into their app recently.
should we be pushing back on the use of canva and boycotting posts / users that use it?
does anyone know if the ai tools integrated into canva are LLMs or are something else?
Leonard H. Deas
in reply to Reading Recluse • • •@haitchfive
in reply to Reading Recluse • • •On the one hand, I agree wholeheartedly about the irksome and irreflexive nature of discourse surrounding AI-adjacent topics on here.
At the same time, true human creators don't exist; we remain attached to a Romantic-era ideal of the Wagnerian hero that has been extensively exploited by capitalism in various forms for the last 100 years or so, selling culture back to us, who actually make it collectively.
I'm ambivalent about this in the sense that everybody seems to be nearly absolutely wrong for one reason or another.
Tock
in reply to Reading Recluse • • •The most straight-forward issue: how can anyone call a deterministic algorithm that is essentially a cloud-based spell check engine running backwards to write plausible sentences, thinking?
This cannot be artificial intelligence. It is contemptuous of living creatures everywhere who think.
Badri
in reply to Reading Recluse • • •so am I!
Besides being a social stance, it's also a way of ensuring self-sufficiency: by not using such products, I do not fall into the trap of becoming dependent on them. Broligarchs can't threaten to take away my access to those products because I have no reason to access them anyway. Instead, I keep my brain working so that I can use it, because I know it's one thing that's probably going to remain with me.
I say "products" rather than "technologies" or "tools" because that's what all the LLM-based things being peddled to us are: shiny new inventions designed to attract us until we fall into the trap of being dependent on them.
Jax Bayne
in reply to Reading Recluse • • •Why is no one pointing out how ableist it is to boycott AI/LLMs instead of decolonize them?
LLMs, when trained properly, are changing the lives of disabled folks who are nonspeaking or need AAC or support with executive functioning.
Even the water and resource consumption doesn't touch the coal/oil/fossil fuel industries or the meat and dairy industries so this feels very much like fear of change...
europlus
in reply to Reading Recluse • • •lovely synopsis of what I've been trying to distill into a (series of) blog post(s) about AI use in my highly regulated and not infrequently litigated industry, in parallel with the sustainability issues I see for companies and customers in said industry (strata management in NSW, Australia).
I'm even seeing strata lawyers (and you really need to be a specialist to do *that* job right) urging simply "caution", and applauding a red/amber/green system for what to and not to use AI for, which a strata manager has proposed.
The professional liability (and personal risks) are such that I say it's a total red light for the whole industry.
There are just some things which have to be undertaken by a licensed strata manager to be legal, yet they are all giving a pass to getting help from a stochastic parrot on them!
Where does the personal responsibility lie? The company responsibility? The legal and commercial risk is certainly not going to be taken on by the AI suppliers.
And what happens when they are breached, manipulated, or driven to bankruptcy other than just commercial failure?
Each strata manager is personally responsible, as are their supervisors and their employers, for bad advice.
They (we, I'm still licensed) can be barred from practicing in the industry for *life* for behaving badly!
I just don't get the level of "guarded" complacency around this.
Allan Svelmøe Hansen
in reply to Reading Recluse • • •If they can't be bothered to write it, I can't be bothered to read it.
Faye
in reply to Reading Recluse • • •for me, it comes down to what people are trying to sell.
Small, targeted models that can be run locally? Yes. I think it’s still a terrible idea but it’s fundamentally under our control, so there’s room for discussion.
Massive data centres that can only be run by actors with a trillion dollars to front? Universally championed by the web3/crypto-bro crowd? Managed in what I can only describe as “can’t guarantee maliciousness, but if I were to write a plan to do as much damage as possible then that’s what I’d do”? Running up hardware costs that, by happy coincidence, turn computing into even more of a landlord based market?
No. I’m sorry.
The fundamental architecture of the web was designed to be open, and that has consequences even today.
The fundamental architecture being put in place for LLMs is being designed by the sort of people who read Cyberpunk and their thinking stops at “cool gadgets bro we should do that!”.
okanogen VerminEnemyFromWithin
in reply to Reading Recluse • • •LisPi
in reply to Reading Recluse • • •> Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture
It's really as simple as "why should I bother to read something someone couldn't be bothered to write?"
Sure, there are all the ethical aspects and would-be externalities, but really, the complete lack of basic respect for readers/users is enough on its own.
elizabeth worm🔅
in reply to Reading Recluse • • •plan-A (゚ヮ゚)
in reply to elizabeth worm🔅 • — (Majority Export) •A complementarity not reliability at 100% of it, as trying to make that RAG even better.
I hate as much as else those search engines and the enforcement in places where they do not belong as Github, search engines, OS level etc.
keep all homebrew and private and local.
Alex, the Hearth Fire
in reply to Reading Recluse • • •Alex, the Hearth Fire
in reply to Alex, the Hearth Fire • • •