“We need to get beyond the arguments of slop vs. sophistication,” Nadella wrote in a rambling post flagged by Windows Central, arguing, “Humanity needs to learn to accept AI as the new equilibrium of human nature.”

If "the new equilibrium of human nature" is code for worthless, mind-numbing garbage that is clogging up any and all human interaction, then I guess we agree!

futurism.com/artificial-intell…



in reply to plan-A

@plan-A

Yes, except I write my own extensions and do not use a plugin manager 😉

in reply to Unus Nemo

plan-A — (Proud Eskimo!)

@Unus Nemo I could not bear the shame of plugins anymore :headbang: so I created this on yet another model (the smallest one, since I only have 8GB of RAM; the Qwen2.5 Coder was trash).
The "API" in the pic does not mean I'm connected through their API, and no network is needed; in the shell console that word refers to the local HTTP interface, served in the browser as the UI.
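
A minimal sketch of what that local setup can look like: talking to a locally served model over HTTP, assuming an Ollama-style server on localhost:11434. The URL, model tag, and payload shape are assumptions, not necessarily plan-A's exact stack.

```python
# A minimal sketch of querying a locally served model over HTTP.
# Assumes an Ollama-style server on localhost:11434; adjust the URL, model
# tag, and payload for whatever local server is actually running.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "phi3") -> str:
    payload = json.dumps({
        "model": model,    # placeholder model tag
        "prompt": prompt,
        "stream": False,   # ask for one complete response
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Say hi in one word."))
```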


Description: a Vim shell interface connected to the LLM to check for hallucinations; tested with a trigger question, it confirmed a hallucination.
Oh, and I returned to Phi-3 from scratch and deleted the other trash.


Description: added the other code over Python.

I know, I still have to work on the plugins in my shell console, but I like them.
So I essentially just open a second shell (for testing only) and copy/paste the code into that second shell, where I open Vim. First Esc, then press V to highlight the error, then F5 for the verdict after pressing Enter.
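
A hedged sketch of what that Esc / V / F5 flow could be wired to: a hypothetical mapping that pipes the visual selection to a small Python script, which asks the local model for a verdict. The mapping and script name are illustrative, not the poster's actual setup.

```python
# check_verdict.py -- a sketch of the "highlight, then F5 for a verdict" step.
# Hypothetical Vim mapping (Vim prepends the '<,'> range when you press : in
# visual mode):
#   :vnoremap <F5> :w !python3 check_verdict.py<CR>
# which pipes the highlighted lines to this script on stdin.
import json
import sys
import urllib.request

def ask_local_llm(prompt: str, model: str = "phi3") -> str:
    # Same assumed Ollama-style endpoint as in the earlier sketch.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def main() -> None:
    snippet = sys.stdin.read()
    print(ask_local_llm(
        "Answer PASS or FAIL, then give one sentence of justification: does "
        "this snippet contain an error or a hallucinated API?\n\n" + snippet
    ))

if __name__ == "__main__":
    main()
```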

in reply to plan-A

@plan-A

There is nothing wrong with using vim-plug; it just does not do anything for me, as I write most of my own plugins. The main reason to use vim-plug is to keep plugins updated, and today that is moot: even tpope has not updated a Vim plugin in over four years. I tend to hard-fork the plugins I like best and maintain them myself, so a plugin manager is not required.
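
For the manager-free approach, Vim 8+ already auto-loads anything under ~/.vim/pack/*/start/, so hard forks can simply be cloned there. A sketch, with placeholder fork URLs:

```python
# A sketch of plugin management without a manager: Vim 8+ auto-loads anything
# under ~/.vim/pack/*/start/, so hard-forked plugins can just live there.
# The fork URLs are placeholders, not real repositories.
import pathlib
import subprocess

FORKS = [
    "https://example.com/me/vim-surround-fork.git",  # hypothetical fork
    "https://example.com/me/vim-fugitive-fork.git",  # hypothetical fork
]

pack_dir = pathlib.Path.home() / ".vim" / "pack" / "mine" / "start"
pack_dir.mkdir(parents=True, exist_ok=True)

for url in FORKS:
    dest = pack_dir / url.rsplit("/", 1)[-1].removesuffix(".git")
    if dest.exists():
        subprocess.run(["git", "-C", str(dest), "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", url, str(dest)], check=True)
```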

Yes, with low RAM your options are rather limited. I find it is best to train your own LLM, though that requires a significant GPU, at the very least an RTX 3060 Ti or equivalent. You just have to be careful which open-source code, or code of your own, you train the LLM on, as there are a lot of low-quality projects. Train it on a bunch of low-quality code and you get low-quality suggestions.
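
As a rough illustration of training on curated code only, here is a minimal fine-tuning sketch using the Hugging Face stack; the model name, dataset directory, and hyperparameters are placeholders, and a real run needs the kind of GPU mentioned above.

```python
# A hedged sketch of "train on code you trust": fine-tune a small causal LM
# with Hugging Face Transformers. MODEL, the curated_code/ directory, and the
# hyperparameters are placeholders; a real run needs a capable GPU.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "microsoft/Phi-3-mini-4k-instruct"  # example small model, swap as needed
tok = AutoTokenizer.from_pretrained(MODEL)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# curated_code/ is a hypothetical directory of vetted source files.
ds = load_dataset("text", data_dir="curated_code")["train"]
ds = ds.map(
    lambda batch: tok(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```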

Though I mainly use AI for IntelliSense-level autocompletion. Having an AI write actual code is very risky: with anything even slightly complicated you are bound to do more debugging than it is worth. It would be less effort to just write it from scratch yourself.

in reply to Unus Nemo

@Unus Nemo Impressive that it autocompletes in your code. I did not know you could train them. I once built a front-end Apache server with another LLM just to keep track of progress, and with the recall command it showed me all the progress, but no more than that. In the end, for the best models you really need a powerful PC with way more RAM.
in reply to Blaise

@Blaise I can't train my specific model again due to the RAM shortage. I need either a better model (18 to 20 GB) or RAG by prompting, and Vim saves it, as I understood.
@Blaise And if it doesn't work, keep asking until it works!
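
For what "RAG by prompting" can look like on a low-RAM box, a minimal sketch: retrieve the most relevant notes by plain word overlap (no embedding model needed) and prepend them to the question. The helper ask_local_llm is the one sketched earlier in the thread; the note corpus is toy data.

```python
# A sketch of "RAG by prompting": no retraining, just retrieve relevant notes
# and prepend them to the question. Retrieval here is plain word overlap so it
# runs on any box; swap in real embeddings if you have the RAM.
def score(query: str, chunk: str) -> int:
    # crude relevance: number of shared lowercase words
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def rag_prompt(query: str, notes: list[str], k: int = 2) -> str:
    top = sorted(notes, key=lambda c: score(query, c), reverse=True)[:k]
    return "Use only this context:\n" + "\n---\n".join(top) + f"\n\nQuestion: {query}"

# toy corpus; in practice these would be your saved notes or docs
notes = [
    "Vim 8+ auto-loads packages from ~/.vim/pack/*/start/.",
    "Ollama serves local models over HTTP on localhost:11434.",
    "Small models can fit in 8GB of RAM when quantized.",
]
print(rag_prompt("How does Vim load native packages?", notes))
# Feed the result to the model, e.g. ask_local_llm(rag_prompt(...)).
```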