Unus Nemo
in reply to liv • •@liv
Right 😉 Good luck to him. Real life does not work that way. I have seen the quality of AI-generated code. It is worse than the code being generated by sweatshops in India, and that is very bad. AI is great for giving Vim an IntelliSense feel, code completion. But it cannot be trusted to write applications.
Besides, why would I want to stop doing something I love to do? Telling me "good news, you can now stop writing code" is sort of like telling the average American "good news, you can now stop watching TV" 😉 Good luck with that!
liv likes this.
liv
in reply to Unus Nemo • •
Totally agree though, AI code quality is still rough. But for me, if it could handle chores like cooking, cleaning, and other everyday boring tasks, that would be real progress. kek 😉
Unus Nemo likes this.
Unus Nemo
in reply to liv • •@liv
lol, give it time. I use only local AI and train the LLMs myself. Though even then, I would not trust code produced by the AI just yet in any case. The quality of the material you train the LLM with will definitely show in the results. My LLMs will produce far better code than most commercial AIs that are trained on anything they can scrape from the internet. That said, I do not focus on training my LLMs to code; I mainly train them in philosophy, psychology, and nutrition 😉
liv likes this.
liv
in reply to Unus Nemo • •
Unus Nemo likes this.
Unus Nemo
in reply to liv • •@liv
You will not be able to train a base model with it, but you can train with an RTX 5060 Ti; it will just take a while, and of course you will have to train small models. To train larger models, I am afraid you have to spend a lot more. At the least you would need an RTX Pro 6000 with 96 GB of GDDR7, which will set you back around $8,500.00. Though that is what is required for serious training.
You can do a lot with a consumer-grade GPU these days, though serious AI GPUs for commercial use (above the RTX Pro 6000) start at around $16,000.00, which is a bit steep for a hobby 😉.
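To put rough numbers on why the VRAM matters, here is a back-of-the-envelope sketch. The ~16 bytes per parameter figure is a common rule of thumb for full fine-tuning with Adam in mixed precision (weights, gradients, master weights, and optimizer moments), not a measurement, and it ignores activation memory:

```python
# Rough VRAM estimate for full fine-tuning with Adam in mixed precision.
# Assumption: ~16 bytes per parameter (fp16 weights + fp16 gradients +
# fp32 master weights + fp32 Adam moments). Activations come on top of this.
def training_state_gib(n_params_billion: float, bytes_per_param: int = 16) -> float:
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

for size in (1, 3, 7, 13):
    print(f"{size}B params -> ~{training_state_gib(size):.0f} GiB of model/optimizer state")
# A 7B model already needs ~100 GiB of state alone, which is why serious
# training outgrows a 16 GB consumer card and wants something like 96 GB.
```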
liv likes this.
liv
in reply to Unus Nemo • •
Unus Nemo
in reply to liv • •@liv
I train models for my research. I do not monetize my efforts at this point, and may never. Typically, my trained LLMs will be part of a project I am working on, such as making NPCs more realistic in an MMORPG or answering questions on brewing 😉. My main goal in life is to enjoy life, so I spend a lot of time researching many aspects of it.
If you have a bit of IT skill, then you could download llama.cpp, which is not affiliated with Meta, by the way.
If you have never used git before, then start with these basic steps.
Create a fork of the project. To do this, you need to make an account on GitHub if you do not already have one. This is not technically required, and if you do not intend to modify the code in any way then you can just clone their project directly. You will not be able to sync your local changes back to the project, though.
Clone your fork of the project to your system. You will need git installed, of course.
Review the build instructions; if you do not have an adequate GPU, then you can compile it for the CPU. As long as you only use models that fit into your computer's memory, you will be fine. CPU-only is a lot slower than using a GPU, though I have done it on many occasions on laptops with poor GPUs. A rough sketch of the clone-and-build steps is shown below.
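If it helps, here is a rough Python sketch of the clone-and-build steps above. The repository URL and CMake flags are assumptions based on the llama.cpp README; check the project's current build instructions before relying on it:

```python
# Clone and build llama.cpp (CPU-only) using git and CMake via subprocess.
# The URL and build flags are assumptions; see the project's README for the
# current, authoritative steps.
import subprocess
from pathlib import Path

REPO = "https://github.com/ggerganov/llama.cpp.git"  # or the URL of your own fork
DEST = Path("llama.cpp")

def clone_and_build() -> None:
    if not DEST.exists():
        subprocess.run(["git", "clone", REPO, str(DEST)], check=True)
    # Configure and build; add a GPU backend flag (e.g. -DGGML_CUDA=ON) only if
    # you have a suitable card and toolchain installed.
    subprocess.run(["cmake", "-B", "build"], cwd=DEST, check=True)
    subprocess.run(["cmake", "--build", "build", "--config", "Release"], cwd=DEST, check=True)

if __name__ == "__main__":
    clone_and_build()
```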
There is a user-friendly wrapper around llama.cpp called Ollama. If you just want to play with local AI, it is a great way to go. You could check both out and see which one fits your needs best. Options are a great part of life 😉
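For a first test drive once you have a GGUF model downloaded, the llama-cpp-python bindings (a separate pip package that wraps llama.cpp) are probably the quickest route from Python; the model path and prompt below are placeholders:

```python
# Minimal local-inference sketch using the llama-cpp-python bindings
# (pip install llama-cpp-python). The model path is a placeholder; any GGUF
# model that fits in your RAM should work. Runs on the CPU by default.
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.gguf", n_ctx=2048)
out = llm("Q: What does a brewer mean by 'mash temperature'?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```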
liv likes this.
liv
in reply to Unus Nemo • •
Unus Nemo likes this.