


in reply to Ramin Honary

do you know if all this gpu data centers can be repurposed for video rendering? I’m just trying to figure out if there will be anything useful that comes out of this.
in reply to wasabi brain

do you know if all this gpu data centers can be repurposed for video rendering?


@virtualinanity I think some of the GPUs can be re-sold for video rendering or gaming, but they aren't purpose-built for that. Some of the new chips Nvidia is designing are really only useful for neural-network inference, so if too many of those get built I can't see any way to recover the cost of that investment.

in reply to HoldMyType

@xameer for a while I mostly did image recognition for quality-control systems. I recently started learning more about how to do Retrieval-Augmented Generation (RAG) using small LLMs like the 8-billion-parameter LLaMA. I know enough to generate language embeddings for your corpus and do a similarity search.
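The embed-and-search step described above can be sketched in a few lines. This is a toy illustration with hand-made vectors; in practice each embedding would come from the model itself (for example, requested through Ollama), not be written by hand:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend these 3-d vectors are embeddings of two corpus documents.
corpus = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # embedding of the user's question

# Retrieval = pick the document whose embedding is closest to the query.
best = max(corpus, key=lambda name: cosine_sim(query, corpus[name]))
print(best)  # doc_a
```

The retrieved document's text would then be pasted into the LLM's prompt as context; real systems store the vectors in an index rather than a dict, but the similarity math is the same.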

But let’s not conflate actually useful AI with what Silicon Valley is peddling to all the big governments of the world right now. They seem to have a strong religious conviction that scaling these things up indefinitely will create a general intelligence, which will then explain to them how to create a smaller, faster, more efficient AGI, so they won’t have to worry about energy costs any more after that. At least, that is what guys like Sam Altman and Peter Thiel seem to be trying to convince everyone will happen.

in reply to Ramin Honary

Yes, that was the intention of the question.
Was it something like pict-rs * and Ollama?
in reply to HoldMyType

@xameer I was not the data scientist on the project, so I don’t recall the exact algorithms that were used, but there were several for image recognition: a variety of CNNs, all programmed as PyTorch scripts with whatever hyperparameters the data scientists thought were best for the application. Some of these algorithms were as old as AlexNet or ResNet, but they still work well, so we used them. Some of them weren’t even deep neural networks, just ordinary convolutions using OpenCV for box drawing; I recall using OpenCV for one application I made for cleaning up data sets and doing data augmentation.
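The data-augmentation step mentioned above can be as simple as mirroring each image to double the training set. This toy sketch uses plain nested lists instead of OpenCV/NumPy arrays so it stands alone; a real pipeline would call the equivalent OpenCV routine on actual image data:

```python
def hflip(image):
    """Mirror an image (a list of pixel rows) left-to-right.
    Equivalent in spirit to OpenCV's horizontal flip on an array."""
    return [row[::-1] for row in image]

# A tiny 2x3 "image" of pixel values.
img = [[1, 2, 3],
       [4, 5, 6]]

# Augmented set: the original plus its mirror image.
augmented = [img, hflip(img)]
print(augmented[1])  # [[3, 2, 1], [6, 5, 4]]
```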

For RAG, the only ones with which I experimented were LLaMA and DeepSeek, and yes, I used both through the Ollama library.
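The generation half of RAG then amounts to stuffing the retrieved passages into the prompt before calling the model. The prompt assembly below is plain string work; the final call (commented out) would go through the Ollama Python library, and the model tag is an assumption for illustration, not necessarily the exact setup used:

```python
def build_rag_prompt(question, passages):
    """Assemble a RAG prompt: retrieved passages first, then the question."""
    context = "\n\n".join(passages)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    "What does the -x flag do?",
    ["man page excerpt: -x enables tracing."],
)

# The assembled prompt would then be sent to a local model, e.g.:
# import ollama
# response = ollama.chat(model="llama3:8b",
#                        messages=[{"role": "user", "content": prompt}])
```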

in reply to Ramin Honary

@Ramin Honary You don’t need a massive GPU and a mountain of network infrastructure; you can even make your own LLM and run it on local machines.
in reply to plan-A

@zer0unplanned yes, that’s true, although I don’t have a big enough graphics card to train my own LLM. Using just a CPU, I could possibly train additional layers for an LLM to incorporate my local manual pages for all the programming projects I work on. But I haven’t yet found a hand-tuned LLM that is really a whole lot better than grep. I mean, LLMs are indeed better, but not so much better than ordinary full-text search that I would consider it worth the trouble it takes to retrain an LLM to search my manual pages.
in reply to Ramin Honary

plan-A
@Ramin Honary In fact you cannot really train an LLM that way; it will not remember anything on the next boot unless you download the .json index of the conversation, or use a reverse-proxied front end to recall and save all conversations.
Also, I just use my PC. Thank you for your kindly formulated answer, friend ;)
in reply to Ramin Honary

@dibi58 We’re desperately building systems to gatekeep and monetize what is rapidly becoming a commodity resource. I’m not convinced that’s a viable foundation on which to build a future economy, especially given that we’re already seeing China provide models that are “good enough” for much of the work people want to do, while all the US-based AI seems able to accomplish is summarizing emails and being a fuckbot.