Nemo Dragon, meets VS Code




I am not a fan of Microsoft; I think most people are aware of that. I have recently been required to use VS Code for some of my work, and I admit it is not entirely unpleasant. I still prefer Vim, and no, Vim mode is not good enough 😉 though it does make VS Code vastly more usable.


After an arm-wrestling match that I finally won, I have VS Code configured for my work environment. I use Fedora 43 Workstation as my daily driver, and getting VS Code to cooperate with my default terminal configuration was a bit of a nuisance. There must be more than a million settings in this beast! 😉


Unus Nemo




#Nemo Dragon #VSCode

Unus Nemo

@plan-A (゚ヮ゚)

I do not think I will ever be done pimping my OS; after 30+ years I am still working on it 😛. I do not think we outgrow that. When we leave the confines of MS/Windows (in my case, MS/DOS) or macOS behind, we are given the freedom to make our computers truly our own, and it is hard not to take advantage of that opportunity.

Unus Nemo

@plan-A (゚ヮ゚)

Here is a pro-tip for you. If you need the .py extension on a file for your editor to recognize it as a Python file and give you all the perks, you can keep a <filename>.py file and then create a symbolic link to it with no extension. Say the Python file is named hello.py; then you would use ln -s hello.py hello (target first, then link name). Be sure not to forget the -s option, as that is what makes the link symbolic. Your editor will know it is a Python file, and your code will still execute without the user needing to know about the .py extension. If I distribute code that I know will run on both MS/Windows and 'nix, I include that symbolic link in the distribution. It does not hurt Windows that it exists, and it gives a more natural feel on 'nix. Though I have gotten away from supporting Windows at all, since it is now officially 100% spyware; the whole OS is malware at this point, from my point of view.
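A minimal sketch of the tip (the filename hello.py is just an example); note that ln takes the real file first and the link name second:

```shell
# Create a small Python script with the .py extension the editor wants:
cat > hello.py <<'EOF'
#!/usr/bin/env python3
print("hello")
EOF
chmod +x hello.py

# ln -s TARGET LINK_NAME: "hello" becomes a symbolic link to hello.py.
ln -s hello.py hello

# Users can now run the script without ever seeing the extension:
./hello
```

Because the link resolves to the real file, the shebang and permissions of hello.py apply when you run ./hello.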


Unus Nemo

@plan-A (゚ヮ゚)

Stop using the llama.cpp server from the build directory; install it instead.

Make sure you are in the build directory, then run:
cmake --install . --prefix /usr/local

This will install the binaries and libraries to the /usr/local/bin and /usr/local/lib64 directories of your toolbox.

You will probably also have to add a config file for the libraries.

Create a file named llamacpp.conf in the /etc/ld.so.conf.d directory of your toolbox (if this does not persist, you may need to build it into your container). The contents of the file:

/usr/local/lib64/llamacpp

After creating the file in /etc/ld.so.conf.d, you need to run ldconfig in order to update the linker cache.
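The steps above, collected into one sequence as a setup sketch. The build-tree location is a hypothetical example, and depending on how your toolbox is set up you may or may not need sudo:

```shell
# Run inside the toolbox, from your llama.cpp build tree (example path):
cd ~/src/llama.cpp/build

# Install binaries to /usr/local/bin and libraries to /usr/local/lib64:
sudo cmake --install . --prefix /usr/local

# Tell the dynamic linker where the llama.cpp libraries live:
echo '/usr/local/lib64/llamacpp' | sudo tee /etc/ld.so.conf.d/llamacpp.conf

# Rebuild the linker cache so the new path takes effect:
sudo ldconfig
```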


Unus Nemo

@plan-A (゚ヮ゚)

A pod is not a container; a pod is typically a group of containers run on a Kubernetes distributed system. You should not need a new one, no; the changes should work in a proper toolbox, as that is what toolboxes are for. Though sometimes some of the directories in the toolbox use read-only bind mounts, and then your changes will only last until you stop and restart the container. If you find that the changes do not persist, we can build a custom toolbox that allows them to persist. Try it in your existing toolbox; if it is a problem, then we will talk about building a custom toolbox.


Unus Nemo

@plan-A (゚ヮ゚)

Even if we had to rebuild the toolbox you are using llama.cpp in, it would not be a big deal. We could even create a new container just for llama.cpp and let it run in the background as your llama.cpp server. We would just export the build so that you could install it in the new container. You would only have to rebuild that container when you had a new version of llama.cpp.
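A rough sketch of what a dedicated container could look like; the container name is made up, and the server flags shown are just an illustration of running llama.cpp's server in the background:

```shell
# Create and enter a fresh Fedora toolbox just for the llama.cpp server:
toolbox create llamacpp
toolbox enter llamacpp

# Inside it: install the exported build (see the install steps earlier in
# the thread), then leave the server running in the background, e.g.:
#   llama-server -m /path/to/model.gguf --port 8080 &
```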


Unus Nemo

@plan-A (゚ヮ゚)

You are going to love this 😉. I just checked my fork of llama.cpp and they had some fixes applied (I was 30 commits behind in two days 😉; a very active project!). So I synced, and you guessed it, I rebuilt my llama.cpp 😉. Just when I had everything up and running. It is cool, though.

Review the changes in the repo; if you do not feel they are pressing, you do not have to rebuild for every update, but be sure to do a git pull before your next build.
