The LLM discourse on the Fediverse has really irked me the last few days.

Refusing to read writing made with the use of LLMs, and refusing to give time to writers who use, promote or justify the use of LLMs, is not purity culture; it's a boycott. It's a political act of withdrawing my time, resources and support from something that I find deeply morally wrong. It's protest. I have a choice, and I refuse.

LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.

Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

in reply to Reading Recluse

LLMs are not an expression of speech nor of creativity; they simply digest, explore and reorder available information. They are a tool, and can be useful for digesting and exploring information at great speed, but essentially they are no more than that.

For anything involving opinion, creativity, art and commentary, I will be looking to human expression, always.

The problem is that society will be confronted with loads of LLM nonsense and disinformation in due time. I'm seeing it online more and more.

in reply to Liam Proven

@lproven
Well, as I said, it is a tool. A hammer is not right or wrong; it can be used rightly or wrongly.

As a domain expert, I use LLMs in my work, but I will always judge and validate whether the output is right... I have indeed seen colleagues use them outside their area of expertise, where I had to tell them: yes, what the LLM said is right, but not in this context. The real problem is that an LLM will never tell you the context, or the probability that what it is telling you is correct.

in reply to Liam Proven

@lproven @xs4me2
For generating content of any kind, I think there's a reckoning to come. Especially in the 'agentic' space.

But for Information Retrieval, LLMs are great, tbh... I'd argue that also includes those far out stories about prompts leading to new scientific theories, or mathematical proofs.

The tool is a big part of that, but it's the user ('operator'?) that writes the prompts, guides the outcomes, and validates them.

That's a worthy advance.

in reply to dynamite_ready

@dynamite_ready

The problem is that LLMs just make things up. There are no new discoveries, there is no accurate information retrieval. But people don't notice, because they lack the expertise, they lack the ability to check.

LLMs cannot be trusted with anything. They are a sheer waste of our world's resources.

@lproven @xs4me2 @reading_recluse

in reply to xs4me2

@xs4me2 @dynamite_ready But it can't be used for brain surgery.

No, this is not a skills issue. It is based on a profound misunderstanding. No, they are not good search tools. No, they are not good for research or learning, because they work only and entirely by *making stuff up*, and if you're learning then you're not an expert and you can't tell true from false.

in reply to xs4me2

@xs4me2 @lproven @dynamite_ready
What you're essentially suggesting here is that LLMs are only good for consuming information if the user either already has the knowledge to judge the output (in which case, why are they asking?) or spends time verifying the claims that the LLM makes (in which case, why bother asking the LLM?).

I've seen them make some pretty important mistakes, including suggesting that a Director who wasn't on the call being summarised had authorised something.

in reply to Ben Tasker

I am suggesting that a competent user can indeed use tools in the right way, and only through in-depth knowledge of them. You can call that craftsmanship, experience, or simply domain knowledge.

That does not mean that tools, LLMs included, are useless, nor that they are without danger. A sharp chisel can cut off your finger. A poorly configured LLM can provide you with a load of nonsense...

in reply to Liam Proven

There is no substitute for reading the primary material on your subject of study yourself, line by line, and internalizing it. I remember the days of our paper scientific library, where I would stay a whole afternoon reviewing Phys Rev B, Applied Physics, Applied Optics and more on the topic of my research, and in the end had a stack of paper copies to take home to read. Online access hasn't fundamentally changed that; it has just become so much faster and more efficient.
in reply to Reading Recluse

You do wear machine-woven cloth, though, no?

Seriously: Why?

It's exploitative, the quality is mediocre, it kills jobs, it's a waste of resources, consumes vast amounts of energy, hinders creativity, destroys small businesses, forces uniformity onto people ... why wear it?

Because not doing so would be a waste of time. And time is the one resource that's (still) strictly limited for all of us. We compromise on the quality of clothing (debatable), in order to do other things we couldn't if we were still weaving cloth manually.

When mechanical weaving machines came about, the workers threw their wooden shoes, in French 'Sabot', into the machines to stop them.

All that is left of this effort is a word describing the futile attempt: Sabotage.

So protest all you like, it's just not going to get you anywhere.

in reply to social elephant in the room

@tseitr @papageier My problem with this framing is: who gets to decide?

Define 'essential'. Is a new generation of MacBooks 'essential'? Not really. The ones we have are amazing. But nobody's boycotting the progress being made in chip design.

But the anti-LLM crowd seem to have decided: not having LLMs is 'enough'. Having them is superfluous. They're not 'needed'.

I get the pushback. I'll never use one to write prose, because prose comes from my human heart.

But to deny their utility in the world of code generation is to be dogmatic. The vast, vast majority of code generation isn't art: it's the rote stitching together of existing pieces to make a new thing.

Claude is _much_ better at that than I am. If properly controlled by me, the result is better and more secure.

So, I use Claude. Just like I use an IDE and a higher-level language and just like I deploy to an edge network run by someone else vs. standing up my own. Because doing that is better than not doing that.

in reply to Johnny ‘Decimal’ Noble

@johnnydecimal @tseitr @papageier
"nobody's boycotting the progress being made in chip design"

[waving hand]
Over here.
We're boycotting chips that offer nothing beyond what we already want and need.
Run the web browser, word processor, printer drivers, scanner drivers, network connections; do security updates. And don't make the humans waste time with the damned computers. It's a lot to ask, but new chips are not going to do any of this better.

in reply to Johnny ‘Decimal’ Noble

@johnnydecimal @tseitr @papageier useful to whom? I write both prose and code and I would argue that they both a. come from my brain (powered by my heart, controlling my fingers) b. are about stitching existing pieces together to make new things. I find that stitching meaningful and rewarding, and through practice I'm becoming reasonably good at it. Not doing that would be worse than doing that (see how I'm restitching your words together?). That's why LLMs are useless to me.


in reply to Arianna Masciolini

@harisont @tseitr @papageier Perhaps the difference is that my job is not, and never has been, 'professional software developer'.

My current job involves trying to help people to be more organised. As part of that, it's very helpful if I can write computer programs and websites. In that aspect of my business, I find Claude Code very useful.

It provides much the same utility as does my accountant. As a business owner I must file taxes. But it's not what I do. It's not the function I serve.

My job, arguably, is much closer to that of a writer. The _ideas_ that I present are mine, from my human brain. So I value the act of creation.

I can see how a software developer might think differently. But for that person to deny me the utility of an LLM is like me telling my accountant that they can't use Xero and that they have to enter everything by hand in a double-entry ledger.

in reply to Johnny ‘Decimal’ Noble

@johnnydecimal @harisont @tseitr The 64k$ question: it's obviously a rearguard battle. Technology is advancing, Humanity is retreating. Tech has just captured a base we thought invulnerable until yesterday.

So the 128k$ question will be: Is your job as writer / creator of ideas still safe? I seriously doubt that. But what if not?

Me thinks we don't need next level AI, we need next level economics.

in reply to Papageier

@papageier @johnnydecimal @harisont @tseitr As an artist and a writer, I could not agree with you more.

I don't want AI to paint pictures for me or research/figure something out for me or write novels for me. Those activities are part of what makes being human actually *fun*.

"But Courtney, what about the people who can't do those things? Why shouldn't they use AI to make up for that?"

Because we artists & writers put in hella work to learn how to do what we do....

1/2

in reply to Court Cantrell does not comply

@papageier @johnnydecimal @harisont @tseitr
...Every piece of AI-gen "art" is a theft from me and people like me. Every AI-gen story, poem, novel is a theft from me and people like me. WE PUT IN THE TIME, BLOOD, SWEAT, AND TEARS (not to mention money for education and supplies) to *learn* these things -- and now people are stealing our works just because they want a shortcut *while* causing incalculable damage to the only bit of dirt any of us can survive on.

2/

in reply to Court Cantrell does not comply

@papageier @johnnydecimal @harisont @tseitr
I don't want AI to do *any* of our creative work or *any* of our learning for us. I want us all to do the work of learning and becoming, improving our damn skillsets and growing our damn brains.

All I want from AI is robot mice to clean my floors at night so I have more time to write and paint.

3/3

in reply to social elephant in the room

@tseitr @papageier Tech advancement is not only desirable, it’s part of human evolution. It should be a good thing, even AI, freeing up humans from basic grunt work.
But not in a profit-driven capitalist system that relies on disenfranchising fellow citizens to make profits, and in which the haphazard manner of competitive development puts excess strain on energy and resources. $$$ is the lure; it seems to undermine us at every turn.
in reply to Papageier

@papageier a) clothing is to some degree essential. A clothing industry has to exist.
b) we may still complain about the bad practices of said industry, do what we can to mitigate them, demand legislation to regulate it, and choose providers that operate more responsibly to the degree that we can afford it. Plenty of people with the skills still actually make some of their own clothes. We don't have to silently accept the bad things.
eta: what the AI companies want to sell is not the clothing, but the machine to enslave the people who make the clothing.
in reply to Papageier

@papageier
You are right that there has always been protest against the mechanising of jobs. Blacksmiths when the nail-cutting machine was invented, for example.

There is a difference here since a notable portion of its function is at the academic level.

So let's say I need to write a book report, instead of reading the book I read an LLM summary, then write and publish my report.

in reply to Bwatch

@brokenshell I honestly don't know. I had presumed LLM output to deteriorate over time, as AI output appears on the Web and is used to train NextGen AI. However, so far I stand corrected. Latest LLM versions are doing astonishingly well, and the limits are not yet in sight.

Yes, it is probably a hard experience for academics to suddenly face the same fate as simple workers (like weavers) 150 years ago. Because they always felt superior, and therefore safe? Maybe. This alone should teach us a lesson.

But the underlying truth is: if you can automate something in a disruptive manner, someone will always do it. All others have no choice: follow suit, find a niche or suffer economic death.

in reply to Papageier

@papageier
Go back and read up on what the Luddites were actually protesting, jackass. They were not mindless technophobes.

Machine-woven cloth IN AND OF ITSELF is NOT inherently exploitative. It could have been used instead to elevate and improve the textile trade, making life easier for the workers.

Instead, the way the capitalists weaponized the tech to devalue labor was fucking evil.

Tech is not inherently good or bad. It's just a tool.

"AI" and LLMs, as they are currently being designed and deployed, are a tool being used as a WEAPON. Child-raping technofascist planetwreckers are using them to enclose the digital commons, jam any useful signals they don't control, and surveil the everloving shit out of everyone everywhere.

If we don't protest like our lives depend on it, NOW, things are going to get unimaginably and horrifyingly fucking bad.

in reply to Papageier

This is such a lazy argument: You can WEAR clothing.

No one wants to read AI-generated text; AI images are hideous. Beyond some niche industrial cases, which are not the focus of the hyperscalers, LLMs are generally useless and a massive waste of resources. The entire industry is based on a speculative, utopian fantasy of creating an AGI that will solve all problems. It's utopian fantasy mixed with the sunk-cost fallacy.

It's like saying "why eat human shit when you can now eat robot-generated shit?"

Also sabotage and worker resistance got workers everything they ever had.

in reply to Papageier

@papageier
Hand woven clothes are not generally superior, tho.
You know that, right? I mean, you do, right?
Hand weave a cotton t-shirt, please. Or a fleece jacket. Or tights. I would like to see that done.
LLMs are inherently racist, sexist, and reductive, because the online society they sample is racist, sexist, and reductive. It is baked in.
in reply to Reading Recluse

Our refusal to engage with or become in any way reliant on LLMs is also a conscious effort to run out the clock.

Tech giants are trying to crowbar "AI" into everything right now, because they need to make these services indispensable for society at large. This way our leaders might not have a choice but to bail out or otherwise coddle the industry once all this circular financing comes crashing down, as it inevitably must.


in reply to Glenn Seto

@glennseto totally agree.

It’s like how recipe websites became useless as they added so much useless text for SEO, so they have “jump to recipe” buttons to let you actually find what you want.

The AI chatbots are like that button except, terribly, the only reason it’s needed is because of all the AI slop in the first place.

And then we have to rely on it, as the entire internet is noise and no signal. Tech companies made the mess, then push the tool to clean it up.

in reply to lifewithtrees

@lifewithtrees @glennseto
Selling the disease along with the "cure".

What's scary is... they're shoehorning "AI" into everything, while also manipulating hardware prices and supply such that it's becoming increasingly harder for anyone to get their hands on home computing-- they're trying to make everyone depend on cloud computing through tools that allow them to surveil damn near everything we say and do, while poisoning the wells of information and claiming to be our rescuers if we only swallow the shit being spewed by the mindless digital oracles running on algorithms they can warp any way they like.

This shit was never supposed to make money or be useful in any direct way. It's a fascist's panopticon torment nexus wet dream.

in reply to Violet Madder

@violetmadder @glennseto selling the disease along with the cure is the same thing I’ve felt from religion. I guess that’s where the crossover is: it’s scams all the way down.

If what my separation from religion has taught me holds for this as well, then we don’t need to trust AI or tech, we need to throw it out and learn to trust ourselves. ❤️

in reply to lifewithtrees

@lifewithtrees @glennseto
You got it!

This culture, the scams, the money-- it IS a religion.

We have been treating the ultra-wealthy as royalty, entitled to rule over us by divine virtue of their superior wallets. It's prosperity gospel, the golden calf. Big bucks daddy knows what he's doing, he's super smart and fancy and if we worship him he'll give us some treats etc.

All we'd have to do is stop believing in them, deprive them of our devotion, stop buying what they're selling, and most of their wealth and power would vanish overnight.

in reply to Reading Recluse

Absolutely. LLMs are the biggest, most bloody useless con ever invented by the vacuous arseholes in charge of the tech industry.

The extra annoying thing is that there are other potential approaches to AI out there that are ultimately likely to be more useful, less destructive and work better (e.g. some expert systems, decision support systems, etc.) But so many folks are just playing with probabilistic horseshit generators instead.

in reply to Reading Recluse

I admit to having created songs with AI, pictures with AI, code with AI, clips of video with AI, all more out of curiosity than anything else. But generating text with AI, where is the fun in that...? AI-generated text gives me that immediate uncanny-valley effect, more so than video, music, or pictures. I've quit buying the Sunday edition of a certain newspaper because, reading some articles, I was sure there was AI involved. If I got that feeling reading a novel, what a disappointment that would be.
in reply to Reading Recluse

@Reading Recluse Say "corporate LLMs" and I'll agree; just don't generalize to A.I. as a whole, an infrastructure that has existed for longer than you might think.
The debate alone gets annoying. Sure, you can state your opinions, but lately it's a hype overflow that gets on many people's nerves, mind you.
Think what you want; not using it at all must be satisfying enough. While I agree corporate AI is pure trash and immoral, not all AI is.

I wish you a good day ahead 😉

edit: Give others room to explore and exchange ideas on how to make it better for everyone, instead of just shooting it down. The gun is not the weapon in that case; the person pulling the trigger is. The same goes for comparing it to a nuke or a fire.

in reply to Reading Recluse

What disgusts me is the total disconnect from the natural world and the devastating effects of human activity in most forms on nature. We are hurtling toward ecocide and massive planetary collapse of current life forms. And what do they do? Grasp and exploit and posture and perform and strut in their massive ignorance of how a closed, interdependent, symbiotic living system actually works. The human supremacy religion means the death of all of us and a magical world full of beauty and wonder gone before its time.
in reply to Fergabell 😷 🌱

@fergabell Completely true, I fully agree.

I really dislike that most LLM-defenders in my comments right now say something like: "Well actually, in this specific case LLM usage was actually helpful for me personally, so..."

Even entertaining the thought that it's somehow useful for someone somewhere, it doesn't erase the extreme damage it's doing to the world and us collectively, and the massive scale of exploitation it's engaging in to keep it all afloat.

in reply to Reading Recluse

@fergabell "I didn't kill him because he was crazy, I killed him because he was making sense."

Miller, The Expanse -- one of the episodes I just watched in S2.

Thing is, the LLM thing wouldn't be a thing if it wasn't so puffed up. Yeah, making an LLM would be costly and would burn up some GPU. But it wouldn't be this Earth-sucking thing, because it would only be applied where it's worth it.

Could be that the given situation makes that possible balance irrelevant.

in reply to Reading Recluse

I completely agree. Also, LLM-produced "art" is so dull. I don't want to read it. For some reason my brain starts to shut down when reading an LLM-produced text. I forget the picture as soon as I close it. Same with music. AI-generated voices are so grating. The artificiality of it all makes me mad. It doesn't challenge me, it doesn't tell me anything, there is nothing intentional behind it. It's just - nothing. And it destroys the environment.
in reply to Reading Recluse

I have to disagree on one thing: I've used LLMs for complex social issues I faced in real-life in the past and they (in hindsight) correctly determined that it wasn't my fault or anything wrong with me. So for me, they improved my mental health in difficult times and successfully prevented me from getting depressed.

So there are definitely beneficial use cases for them. But they're also very overrated and love to hallucinate a lot and are unable to comprehend nuance in writing.

in reply to Reading Recluse

i feel pretty much the same, save to say, it's not the concept of LLMs that I'm against; rather it is the theft of material for training, the impunity of that theft, and the determination to disclaim any possibility of giving fair payment or recognition to those whose work makes up the stolen data.

on top, i really really dislike the cultish hype and forced use going on

in reply to Reading Recluse

For me, it doesn't make sense to think about LLMs in pure dogmatic categories like "in favor" or "against". Fact is, LLMs are out there now and won't just disappear, and they CAN be powerful and useful tools if used in a reasonable way. The problem is that a lot of people are currently overusing it and don't reflect enough about when and how to use it, which leads to a lot of AI-generated crap. Maybe humanity just needs more time to finally find a good balance of AI usage.
in reply to Reading Recluse

@reading_recluse

There shouldn’t even be a discourse. It’s a no-brainer that the tradeoffs aren’t worth it. Sadly there’s no way to completely avoid using it. Doctors’ offices and every business and their mom are using it, even if you’re not aware they are.

What pisses me off is the job losses behind it. It’s like hiring a 3rd grader to do a job a human adult can do more efficiently. I’m pretty sure it costs a whole lot more financially and ecologically to maintain the job with a robot than to just hire a human. Businesses really go hard in not wanting to pay people. Even to the point of not making sense.

in reply to Reading Recluse

There are fundamental differences between

1. "the person who had the idea was bad, so I will not touch things they tainted with their badness" (purity argument)

2. "the tool was created using bad (or catastrophic) means, so the ends don't matter" (purity)

3. "the tool creates bad ends every time it is used, so the means don't matter" (function)

4. "the tool creates bad ends when used inappropriately" (define "appropriate")

5. "the tool is sometimes helpful under limited circumstances". (define "limited")

and they can all be true.

Right now I'm somewhere between 2 and 3 - the means are bad but it may be possible to avoid adding to them,
and the bad ends are hard to quantify.

But as someone whose ability to code is almost completely gone due to long covid, but who sees a need for unprofitable software tools that no-one else will build, I may eventually end up in 5, supervising an LLM out of desperation.

For now I'm continuing to try to avoid LLM-generated content.

in reply to Orb 2069

@Orb2069 I'm explicitly not passing judgement in the above toot, just describing the categories I see.

I'm also not judging people like @pluralistic for their choices - he has consistently honestly engaged with identifying reasonable definitions for "limited" and "appropriate", and he seems to be attempting to limit both the personal and external harms of his choices given a sunk means cost.

I do pass judgement on the people who ignore (or celebrate) the bad means and the bad ends. And I pass judgement on the people whose definitions of "appropriate" and "limited" ignore (or celebrate) the costs and external harms of their choices.

in reply to Reading Recluse

I don't generally like LLMs at all and in the creative field especially I think they are an absolute disaster.

But, that being said, Pandora's Box has been opened. Companies are finding it as a way to get lots of ignorant investors on board right now and people who don't give a shit how it works will always be impressed.

So sure, we can be in our corner tucking ourselves away from where the world is headed. Or we can push for these things to be heavily regulated and more environmentally friendly. I mean, just look at how promising something like this is to save power and maybe get us some of our PC components back on the market:

taalas.com/the-path-to-ubiquit…

10x the performance, 20x cheaper, and another 10x less power consumption than current methods.

TL;DR let's push for this stuff to go in a better direction rather than hide from it.

in reply to sidereal

it's mainly unprofitable because of the vast amounts of power and space the big players' data centers take up. That chip I linked would make this a profitable venture.

But aside from that again, more ignorant people love this shit so there's now a demand. You either make it less harmful so those people don't destroy our planet, or you insist the technology as a whole shouldn't go anywhere and we stagnate at this horrible stage for a very long time.

in reply to thesofafox

@thesofafox Maybe. The tech industry hasn’t seemed to care very much about satisfying consumer demand ever before. I think AI is about pumping tech stocks to stave off another bubble bursting.

I don’t think this tech requires any resistance, personally, because it’s already failing. There’s no “advancing through it,” our computer systems were more advanced (worked better, faster, and more securely) ten years ago when fewer people were using this stuff.

in reply to Violet Madder

all the stuff you named off that AI is purposed for are things that have been happening long before LLMs and genAI were a thing for the public to consume. And in some cases, maybe even more efficiently without AI.

I don't buy this at all. If large AI companies were forced to stop operating tomorrow nothing would change. The same shit would happen with a different face to it.

in reply to thesofafox

@thesofafox @sidereal
The industrial generation of plausibly human-sounding bullshit on this scale would not be possible without these tools. Already more than half of the internet's content is slop. Burning the library at Alexandria is one thing-- silently running all the books through a funhouse filter that distorts what they say is quite another thing.

The analysis of writing and video footage etc on this scale is not possible without these tools. They're using it to digest all available data and "summarize" who might be an enemy of the state and target them for much, much worse things than advertising.

Of course they'll use every resource at their disposal to build their hellish panopticon no matter what, but the giant data centers ramp it up to a level that would make Goebbels faint.

in reply to Reading Recluse

I like to remember when we realized that all the nice imported surveillance cameras were suddenly phoning home and that it would be really expensive to remove them again from all our infrastructure, which is when the wonderful term "digital asbestos" was brought up in 2022:

bbc.com/news/uk-politics-63749…

With AI, I mean "artifice infliction", it's much the same. It's the new wonder material that gets put into everything and then we'll have to "live with it".

reddit.com/r/Suomi/comments/1k…

in reply to Reading Recluse

I agree with you in full, except in one thing: non-generative, translation-only LLMs, as long as they give and explain alternative wordings, so that I'm in total control of the result and of the tone I want to set, even in languages I do not understand a single word of, AND as long as they do not try to "improve" my writing.
This way, I can communicate with people I otherwise wouldn't be able to, and give answers that are meaningful to the receiver.
That's my only use case for LLMs.
in reply to Reading Recluse

Thank you for your words 🙏🏽 My job as a copywriter is one of the first professions to be replaced by LLMs. The results are worse than bad because AI machine texts don’t have ideas, they don’t have empathy, they have no understanding of anything that’s human. They are full of mistakes. BUT: LLM is cheap (or seems to be) and companies love this. I’m boycotting.
in reply to Reading Recluse

I am of the same stance, and the people who are justifying the use and complete support of it have not yet realised that they are in fact at a loss, since they subscribed to it personally.

I hope they all realise it. Big tech CEOs have been brainwashed to think AI will increase output by 10x, and it's by far one of the fastest enterprise pushes I've ever seen in my career.

None of this is a coincidence. People need to wake up imo

in reply to Reading Recluse

I really appreciate you laying it out like this. The 'purity culture' vs. 'boycott' distinction is a huge one, it's not about being a snob, it's about where we choose to put our limited time and energy. It’s becoming so exhausting trying to filter through the 'generic output' just to find a real human voice. Thanks for taking a stand on this, it’s a perspective that definitely needs more airtime in these circles.
in reply to Reading Recluse

On the one hand, I agree wholeheartedly about the irksome and irreflexive nature of discourse surrounding AI-adjacent topics on here.

At the same time, true human creators don't exist and we remain attached to a Romantic era ideal of the Wagnerian hero, that's been extensively exploited by capitalism in various forms for the last 100 years or so, selling culture back to us, who actually make it collectively.

I'm ambivalent about this in the sense that everybody seems to be nearly absolutely wrong for one reason or another.

in reply to Reading Recluse

so am I!

Besides being a social stance, it's also a way of ensuring self-sufficiency: by not using such products, I do not fall into the trap of becoming dependent on them. Broligarchs can't threaten to take away my access to those products because I have no reason to access those products anyway. Instead, I keep my brain working so that I can use it instead, because I know it's one thing that's probably going to remain with me

I say "products" rather than "technologies" or "tools" because that's what all the LLM-based things being peddled to us are: shiny new inventions designed to attract us until we fall into the trap of being dependent on them

in reply to Reading Recluse

Why is no one pointing out how ableist it is to boycott AI/LLMs instead of decolonize them?

LLMs, when trained properly, are changing the lives of disabled folks who are nonspeaking or need AAC or support with executive functioning.

Even the water and resource consumption doesn't touch the coal/oil/fossil fuel industries or the meat and dairy industries so this feels very much like fear of change...

in reply to Reading Recluse

> Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture

It's really as simple as "why should I bother to read something someone couldn't be bothered to write?"

Sure there's all the ethical aspects and would-be externalities but really, the complete lack of basic respect for readers/users is enough on its own.

in reply to elizabeth worm🔅

@elizabeth worm🔅 @Reading Recluse The way I built it, it just complements things I do not know (dev things): I feed an isolated, local RAG with man pages and official docs, with code and hallucination checkers, running on localhost. It is useful; only that a reply is much slower than the API-branched ones not running on 0.0.0.0.
It's a complement, not something to rely on 100%, and I'm trying to make that RAG even better.
I hate as much as anyone those search engines, and the enforcement of this stuff in places where it does not belong: GitHub, search engines, the OS level, etc.
Keep it all homebrew, private and local.
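For what it's worth, the retrieval step of a local, docs-grounded setup like the one described above can be sketched in a few lines. This is only a toy illustration: the corpus, names and scoring are all hypothetical, and a real RAG would use embeddings and a local model rather than simple keyword overlap.

```python
# Toy local-retrieval sketch: score documentation snippets by keyword
# overlap with the query, then prepend the best match to the prompt so
# the model answers from local docs rather than from memory.
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    q = tokenize(query)
    # Rank docs by how many query terms each one shares.
    ranked = sorted(
        docs,
        key=lambda name: sum((tokenize(docs[name]) & q).values()),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, docs: dict[str, str]) -> str:
    best = retrieve(query, docs)[0]
    return (
        f"Answer using ONLY this excerpt from {best}:\n"
        f"{docs[best]}\n\nQuestion: {query}"
    )

# Abridged, made-up 'man page' corpus for demonstration.
manpages = {
    "tar(1)": "tar - an archiving utility. -x extract, -c create, -f file",
    "grep(1)": "grep - print lines matching a pattern. -i ignore case",
}

print(retrieve("how do I extract an archive with tar", manpages))  # → ['tar(1)']
```

The grounding prompt is also where a "hallucination check" would hook in: if the model's answer cites anything not present in the retrieved excerpt, reject it.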