Stubsack: weekly thread for sneers not worth an entire post, week ending 31st August 2025 - awful.systems
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
BlueMonday1984
in reply to BlueMonday1984
Someone tried Adobe's new Generative Fill "feature" (just the latest development in Adobe's infatuation with AI) with the prompt "take this elf lady out of the scene", and the results were...interesting:
There's also an option to rate whatever the fill gets you, which I can absolutely see being used to sabotage the "feature".
Talia Hussain
in reply to BlueMonday1984
YourNetworkIsHaunted
in reply to BlueMonday1984
So the fucking Cracker Barrel rebranding thing happened. I'm going to pretend this is relevant here because the new logo looked like it was from the usual "imitating Apple minimalism without understanding it in the least" school of design. They've confirmed that they're not moving forward with it, restoring both the barrel and the cracker to the logo, so that's all good. That's not what I want to talk about.
No, what's grinding my gears is the way that the rollback is being pitched purely as a response to conservative "antiwoke" backlash, and not as a response to literally nobody liking it. This wasn't a case of a successful crusade against woke overreach, this was a case of corporate incompetence running into the reactions of actual human beings. I can't think of a more 2025 media dynamic than giving fucking Nazis a free win rather than giving corporate executives an L.
Soyweiser
in reply to YourNetworkIsHaunted
Charlie Stross
in reply to Soyweiser
BlueMonday1984
in reply to BlueMonday1984
OpenAI has stated it's scanning users' conversations (as if they weren't already) and reporting conversations to the cops, in response to the recent teen suicide I mentioned a couple days ago.
So, rather than let ChatGPT drive users to kill themselves, it's just going to SWAT users and have the cops do the job.
(On an arguably more comedic note, the AI doomers are accusing OpenAI of betraying humankind.)
Top AI Experts Concerned That OpenAI Has Betrayed Humankind
Noor Al-Sibai (Futurism)
Randulo.com
in reply to BlueMonday1984
Zazzoo 🇨🇦
in reply to Randulo.com
I've been calling these LLMs the "new, improved TIA program" since they arrived on the scene. The possibilities are chilling.
en.wikipedia.org/wiki/Total_In…
US mass surveillance program
BigMuffN69
in reply to BlueMonday1984
argmin.net/p/the-banal-evil-of…
Once again shilling another great Ben Recht post. This time calling out the fucking insane irresponsibility of "responsible" AI providers, who won't do even the bare minimum to prevent people from having psychological breaks from reality.
"I’ve been stuck on this tragic story in the New York Times about Adam Raine, a 16-year-old who took his life after months of getting advice on suicide from ChatGPT. Our relationship with technological tools is complex. That people draw emotional connections to chatbots isn’t new (I see you, Joseph Weizenbaum). Why young people commit suicide is multifactorial. We’ll see whether a court will find OpenAI liable for wrongful death.
But I’m not a court of law. And OpenAI is not only responsible, but everyone who works there should be ashamed of themselves."
The Banal Evil of AI Safety
Ben Recht (arg min)
scruiser
in reply to BigMuffN69
It's a good post. A few minor quibbles:
I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present at the start meant the true believers never really had a chance, culminating in the board trying and failing to fire Sam Altman, who successfully leveraged the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognized and tried to mitigate the harmful incentives of capitalism, they fell vastly short. OTOH... if failing to convert to a for-profit company turns out to be a decisive moment in popping the GenAI bubble, then at least it was good for something?
I wish people didn't feel the need to add all these disclaimers, or at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people rely on it entirely. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web searches (web sites allow themselves to be crawled so that human traffic will ultimately come to them), which could have pretty far-reaching impacts.
One of the things I liked and didn't know about before:
That is hilarious! Kind of overkill to be honest; I think they've really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author's overall point that this shut-it-down approach could be used for a variety of topics.
One of the comments gets it:
LLMs aren't actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they've thrown at them, so you're left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems with one model).
blakestacey
in reply to scruiser
fullsquare
in reply to scruiser
it might be that, or it may have been intended to shut off any output of medical-sounding advice. if it's the former, then it's a rare rationalist W for wrong reasons
look up the story of vil mirzayanov. break out these bayfucker-style salaries in eastern europe or india or a number of other places and you'll find a long queue of phds willing to cook man-made horrors beyond your comprehension. it might not even take six figures (in dollars or euros) after tax
maybe they really made machines in their own image
fullsquare
in reply to fullsquare