
8Petros [cracking the system for living] reshared this.


This is priceless! Republican Denver Riggleman deliberately takes a selfie in front of the now-famous poster in Nuuk, Greenland, that says "Yes to NATO, no to the pedo," with a picture of Epstein and Trump.

I love it!

in reply to Randahl Fink

The famous Nazi-staffed war-crime agency NATO.

Guys, just because Trump is a pedo and doesn't like NATO doesn't mean NATO is cool. Reality is not binary.


8Petros [cracking the system for living] reshared this.


What are your pain points, folks? Stuff that you hate doing or dealing with, or problems you can't find a good solution to? Stuff that other people might be frustrated with, too.

I'm looking for a way to make myself valuable to other people, both to help folks and to earn an income to feed my family in the process.

One thing I can do *really well* is create reliable software to automate rote tasks, generate financial, statistical, or other reports, or compute solutions to difficult problems. Think it can't be done without LLMs? I might surprise you!

Throw me a bone!

Please boost for reach!

#PainPoints
#WishList
#Automation
#Reporting
#ProblemSolving
#FediHire
#GetFediBHired
#FediJob


in reply to Aaron

Sadly, there is no money in solving any of my problems. If there were, someone would have solved them already. See, for example, my complaints about text to speech systems. stuff.interfree.ca/2026/01/05/ai-tts-for-screenreaders.html
I can go into more detail about why all the options are bad if you want. But this is the sort of problem that eats years of your life and requires advanced mathematics (digital signal processing at a minimum) and advanced linguistics, on top of being a good systems-level programmer.

Aaron reshared this.

in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge I just so happen to be an (unemployed) machine learning researcher by trade, with advanced mathematics, linguistics, and programming skills. Maybe not systems-level programming, but I could probably find someone who does that and work with them.

Given that the first two responses I've gotten were both about accessibility, there might be more of a market for this than you think. It might also make a good way to demo my skills, even if it isn't paid work.

in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge Reading your linked article and this reply, I get the sneaking suspicion that HDC (hyperdimensional computing), or other one- or few-shot learning methods designed to factor the model into independent components that can be quickly recomposed in new ways, might be appropriate. The idea would be to, as you suggest, learn the values for these components using machine learning, but also learn the mapping between them and the sounds produced, so that each becomes separately tunable on the fly.

HDC has the added advantage that it is great for working with "fuzzy", human-interpretable rule representations, is typically extremely efficient compared to neural nets, and even meshes well with neural nets and gradient descent-based optimization.
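To make that concrete, here's a rough sketch of the role/filler binding-and-bundling trick from HDC, in Python. Everything in it is made up for illustration (the dimension, the feature names, the compose_phoneme and read_out helpers); it isn't from any existing TTS codebase. It just demonstrates that each bound component stays independently recoverable, which is the property that would make on-the-fly tuning possible:

```python
import numpy as np

DIM = 10_000           # hypervector dimensionality (arbitrary illustrative choice)
rng = np.random.default_rng(0)

def random_hv():
    """A random bipolar hypervector in {-1, +1}^DIM."""
    return rng.choice([-1, 1], size=DIM)

# Illustrative codebooks: articulatory "roles" and the "filler" values they can take.
roles = {name: random_hv() for name in ["voicing", "place", "manner"]}
fillers = {name: random_hv() for name in ["voiced", "voiceless", "bilabial", "stop"]}

def compose_phoneme(feature_values):
    """Bind each role to its filler (elementwise product) and bundle (sum + sign)."""
    bound = [roles[r] * fillers[v] for r, v in feature_values.items()]
    return np.sign(np.sum(bound, axis=0))

def read_out(phoneme_hv, role):
    """Unbind a role and recover its approximate filler by cosine similarity."""
    probe = phoneme_hv * roles[role]
    sims = {name: np.dot(probe, hv) / DIM for name, hv in fillers.items()}
    return max(sims, key=sims.get)

b = compose_phoneme({"voicing": "voiced", "place": "bilabial", "manner": "stop"})
print(read_out(b, "place"))  # recovers "bilabial" with overwhelming probability
```

Swapping a single role/filler pair changes one component of the phoneme without disturbing the others, which is what would let individual aspects of the voice be retuned without retraining the whole model.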

Do you happen to have data of any sort that could be used for training?

in reply to Aaron

@fastfinge I have some ideas, too, on how to ensure that text isn't omitted from the output. The trick would be to require the representation to map one-to-one, using an autoencoder at the phoneme level to ensure information isn't lost, plus a one-to-one phoneme-to-sound generator to compose the final audio.
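As a sketch of what I mean by the phoneme-level autoencoder (PyTorch here; the sizes, the GRU choice, and the class name are placeholders, not a proposed architecture): a reconstruction loss over every phoneme position directly penalizes anything the latent code drops.

```python
import torch
import torch.nn as nn

# All sizes are placeholders, just for illustration.
N_PHONEMES, EMB, LATENT, SEQ_LEN = 50, 64, 128, 32

class PhonemeAutoencoder(nn.Module):
    """Encode a phoneme sequence to a latent code and reconstruct it exactly.

    A low reconstruction loss is evidence the latent code (which would also
    drive the sound generator) hasn't silently dropped any phonemes."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_PHONEMES, EMB)
        self.encoder = nn.GRU(EMB, LATENT, batch_first=True)
        self.decoder = nn.GRU(LATENT, LATENT, batch_first=True)
        self.readout = nn.Linear(LATENT, N_PHONEMES)

    def forward(self, phoneme_ids):                     # (batch, seq)
        x = self.embed(phoneme_ids)                     # (batch, seq, EMB)
        _, h = self.encoder(x)                          # h: (1, batch, LATENT)
        # Feed the latent code in at every step and ask for the sequence back.
        z = h.transpose(0, 1).repeat(1, phoneme_ids.size(1), 1)
        y, _ = self.decoder(z)
        return self.readout(y)                          # (batch, seq, N_PHONEMES)

model = PhonemeAutoencoder()
ids = torch.randint(0, N_PHONEMES, (8, SEQ_LEN))
logits = model(ids)
# Per-position cross-entropy: every omitted or altered phoneme is penalized.
loss = nn.functional.cross_entropy(logits.reshape(-1, N_PHONEMES), ids.reshape(-1))
loss.backward()
```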
in reply to Aaron

In general, for training the rules for pronouncing English, the CMU pronouncing dictionary is used: www.speech.cs.cmu.edu/cgi-bin/cmudict
When it comes to open-source speech data, LJSpeech is the best we have, though far from perfect: keithito.com/LJ-Speech-Dataset/
And here's a link to GnuSpeech, the only open-source fully articulatory text to speech system I'm aware of: github.com/mym-br/gnuspeech_sa?tab=readme-ov-file
I'm afraid I don't have any particular data of my own.
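For anyone who wants to experiment with it, the CMU dictionary's plain-text format is simple to parse into a word-to-phoneme-list lookup. A minimal sketch (the path is just whatever you saved the download as):

```python
def load_cmudict(path="cmudict-0.7b"):
    """Parse the CMU Pronouncing Dictionary into {word: [[phoneme, ...], ...]}.

    Entries look like:   HELLO  HH AH0 L OW1
    Alternate pronunciations appear under keys like LIVE(1).
    Comment lines start with ';;;'. The file ships in Latin-1, not UTF-8.
    """
    pronunciations = {}
    with open(path, encoding="latin-1") as f:
        for line in f:
            if not line.strip() or line.startswith(";;;"):
                continue
            word, *phones = line.split()
            pronunciations.setdefault(word, []).append(phones)
    return pronunciations

# cmu = load_cmudict()
# cmu.get("FUCHSIA")  # returns the phoneme list(s) if the word is present
```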
in reply to Aaron

Sadly, this is so far outside of my expertise and abilities it's not even funny. I have an excellent handle on what's needed, and the vague shape of the path forward, but actually doing any of it is way outside my skillset. If it were anywhere near something I could do, I would have started already. 😀
in reply to Aaron

@fastfinge My thinking is that, sure, I can build a thing, just like all those other folks, but you know firsthand the actual needs it would meet. That's tremendously valuable and can make the difference between something awesome and something completely useless.
in reply to Aaron

Absolutely yes to all of the above. I can think of at least another 10 people on Mastodon who are also ready and willing to help wherever they can. Just none of us has the skillset to do the actual work.

Aaron reshared this.

in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge Awesome!

What you (and others who are interested) could do to help me right off the bat:

1. Make a list of the common issues, bugs, and failure modes you see in existing systems. Split hairs where you can on this, so I know exactly what issues to design around.

2. Make a list of the features you want, and include info about how important they are to you. Pay special attention to things that distinguish a good screen reader TTS from tools designed for sighted people.

You've already given me a lot of material on both of these, which is super helpful. I just want to make sure I have a complete understanding, so we aren't surprised by a finished product that's subtly misaligned with your needs.

in reply to Aaron

When it comes to requirements: in general, if it can work with both SAPI5 and the NVDA add-on API, it will suit the requirements of Speech Dispatcher on Linux and the Mac APIs. The important thing is that most screen readers want to register indexes and callbacks. So, for example, if I press a key to stop the screen reader speaking, it needs to know exactly where the text to speech system stopped so that it can put the cursor in the right place. It also wants to know what the TTS system is reading so it can decide when to advance the cursor, get new text from the application to send for speaking, etc. I really, really, really wish I had a better example of how that works in NVDA than this: github.com/fastfinge/eloquence_64/blob/master/eloquence.py
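To illustrate that contract in the abstract: the sketch below is a hypothetical engine interface, not the actual NVDA, SAPI5, or Speech Dispatcher API; every name in it is invented. It only shows the two hooks a screen reader relies on: index-reached callbacks interleaved with the text, and a cancel that leaves the last-reported index intact so the cursor can be placed correctly.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Union

@dataclass
class IndexMark:
    """A bookmark the screen reader interleaves with the text it sends."""
    index: int

@dataclass
class Engine:
    """Hypothetical TTS engine showing the hooks a screen reader relies on."""
    on_index_reached: Callable[[int], None] = lambda i: None
    on_done_speaking: Callable[[], None] = lambda: None
    _queue: list = field(default_factory=list)

    def speak(self, sequence: List[Union[str, IndexMark]]):
        """Queue text chunks interleaved with index marks."""
        self._queue.extend(sequence)
        self._pump()

    def cancel(self):
        """Stop at once; the last index reported tells the reader where we were."""
        self._queue.clear()

    def _pump(self):
        # A real engine does this asynchronously on an audio thread; this just
        # simulates reporting each index as the preceding text finishes playing.
        while self._queue:
            item = self._queue.pop(0)
            if isinstance(item, IndexMark):
                self.on_index_reached(item.index)
            else:
                pass  # synthesize and play `item` here
        self.on_done_speaking()

engine = Engine(on_index_reached=lambda i: print("reached index", i),
                on_done_speaking=lambda: print("done speaking"))
engine.speak(["First chunk of text.", IndexMark(1),
              "Second chunk of text.", IndexMark(2)])
```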
in reply to Aaron

I wish it would. Unfortunately, that code is what we use to keep Eloquence alive in the 64-bit NVDA version. So it's awful, for dozens of reasons. This...is a bit clearer? Maybe? Anyway, it's the canonical example of how NVDA officially wants to interact with a text to speech system, written by the NVDA developers themselves. Any text to speech system useful for screen reader users needs to expose everything required for someone to write code like this. Not saying you could or should; there are dozens of blind folks who can do the job of integrating any text to speech system with all of the various APIs on all the screen readers and platforms. But we have to have useful hooks to do it. github.com/nvaccess/nvda/blob/master/source/synthDrivers/espeak.py
in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge I'm wondering if the best place to put text-to-speech processing is on the video card's hardware, so the audio is generated by a separate system. As long as we're blue-skying it here: the system wouldn't care what software was running; it could read whatever was sent to the video card for display. As I think this through while writing, it would obviously need to read discrete sections or it might produce garbled gibberish, but...a thought.
in reply to Joe (TBA)

@RegGuy @hosford42 The issue is that then you lose all semantic meaning. Older screen readers did, in fact, work like this, especially back in the DOS days. But also, you're conflating two different systems here: the text to speech system, and the overall screen reading system. All the text to speech system does is take text and turn it into speech. The screen reader is responsible for everything else. And in general, we have perfectly good screen readers. Screen readers can also, by the way, drive Braille displays and other output devices.
in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge @RegGuy Since most ML does take place on the GPU, it would be a convenient way to process things for speed. But that might not work so well for edge devices like phones or tablets. Better to have a very pared-down model that doesn't need a lot of compute in the first place.
in reply to Aaron

@fastfinge Makes sense. I'm clearly out of my league; it was just a thought that popped into my head. Carry on! 😀
in reply to Aaron

The source code for DECtalk is out there. Unfortunately, it's...legally dubious at best. It was leaked by an employee back in the day, and now the copyright status of the code is so unclear that nobody can safely use it for anything, but also nobody can demonstrate clear enough ownership to submit a DMCA request and get it taken off GitHub. GnuSpeech is also pretty close to what's needed, but I don't think it will even compile without all the NeXT development tools. So at best it would be a base for something else; modernizing it would probably amount to a complete rewrite anyway.
in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge Looking for the source, I found this:

github.com/dectalk

It looks like both DECtalk and DECtalkMini are being actively maintained, with commits as recent as one to two months ago. I was hoping the copyright for the "mini" version would be unencumbered, but no such luck. It would have to be a re-implementation from scratch using this code as a guide. That's a lot easier than implementing a new system from nothing, though.

in reply to Aaron

I also have no idea about any associated IP or patents, though. Wouldn't whoever does it need to be able to prove they never saw the original code, just its outputs? Otherwise you're still infringing, aren't you? In this regard, it's probably actually a bad thing that the DECtalk source code is so widely available.

And most of the commits seem to be about just getting it to compile on modern systems with modern toolchains. I dread to think how unsafe closed-source C code written in 1998 is.

in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge hmm, patents do complicate things. I could search and see if there are patents on it, I suppose. Or I could just try to implement from scratch.
in reply to Aaron

If you're going to reimplement something, you might be better off going with GnuSpeech, as it's known to be under the GPL. At the least, it gives you a vocal model to improve on, one that was coded with open research in mind, rather than proprietary code probably written for job security.
in reply to Aaron

Also, if you enjoy comparing modern AI efforts with older rule-based text to speech systems, and listening to the AI fail hard, this text is wonderful for that. As far as I'm aware, not a single text to speech system, right up to the modern day, can read it one hundred percent correctly. github.com/mym-br/gnuspeech_sa/blob/master/the_chaos.txt
But Eloquence gets the closest, GnuSpeech second, eSpeak third, DECtalk fourth, and every AI system I've tried a distant last.

Aaron reshared this.

in reply to Aaron

@fastfinge (context: both Aaron and I are USAians)

It doesn't help that:

1. it's 150 or so years old, so a few pronunciations have changed a bit
2. the pronunciations and spellings (and hence some of the apparent mismatches) are UK English, not US English.

At a minimum, you'll have to envision skipping "r"s after vowels at the ends of words for many of these to make sense. As for the rest, I recognized a few of those from past experience with older UK English (e.g. "clerk" with an "a" sound), but a couple left me scratching my head saying "that's how people actually said or spelled it then and there?"

in reply to David Nash

@dpnash @hosford42 Right, but most text to speech systems have a UK English setting. And the mistakes they're making are on things much more basic than that. For example, far too many so-called state-of-the-art AI TTS systems can't even pronounce "Susy", "plaid", "fuchsia", and "lieutenants".

Aaron reshared this.

in reply to 🇨🇦Samuel Proulx🇨🇦

Compare that with the version of GnuSpeech released in 1995. It still messes up "tear" and "live". But once you get past the unnatural voice, it's far more precise. And once you get used to it, it's much, much easier to listen to at an extremely high rate of speed (4x or more) all day. All the text to speech advancement from "AI" is just the wow factor of "Wow, it sounds so human!" But pronunciation...you know, the important part of actually reading text...is either the same or worse. With five thousand times the resources.


8Petros [cracking the system for living] reshared this.


“ICE cannot be reformed. It cannot be rehabilitated; we must abolish ICE for good. And DHS Secretary Kristi Noem must resign or face impeachment,” Omar said seconds before the attack. - from Aljazeera.

I agree with what she said.


8Petros [cracking the system for living] reshared this.


In 1998 I started an online #community, Brainstorms. Before #socialmedia sucked up everyone's attention we had 400+ active members, with meetups in San Francisco, Memphis, Amsterdam, and Melbourne. ~80 people now. For elders who prefer thoughtful discussion over performative posting and algorithms.
brainstormscommunity.org

in reply to OpenStreetMap Ops Team

@josephcox Open source maps project dealing with AI scrapers, requesting journalists who might be interested ☝️


in reply to OpenStreetMap Ops Team

I'm administering a web server for a client that has about 50 web sites. Every few days they get hammered by residential proxy IPs for a few hours, so I finally installed Anubis.

8Petros [cracking the system for living] reshared this.


"On Thursday, Jan 22 a group of about a dozen Vermont community elders entered the atrium of White Cap Office Park in Williston VT, home of ICE’s National Criminal Analysis and Targeting Center.

For the next 3.5 hours, they sat together in silence, pausing every 90 seconds to read the name of someone killed in ICE custody, followed by a loud whistle blast.

Around 3:15 the Williston Police announced they had determined that the protestors were not breaking the law."

facebook.com/molly.grover.9655…

