
= llama.cpp
{c}

https://github.com/ggerganov/llama.cpp

It is the backend library used by <Ollama> (https://ollama.com).

They have a <CLI> front-end named <llama-cli>.
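A minimal usage sketch of <llama-cli>, assuming a GGUF model file has already been downloaded; the model path and prompt below are hypothetical placeholders:

``
# Run a single prompt against a local GGUF model.
# -m: model file (placeholder path), -p: prompt,
# -n: number of tokens to generate,
# -ngl 99: offload as many model layers as possible to the GPU.
llama-cli \
  -m ~/models/llama-3-8b-instruct.Q4_K_M.gguf \
  -p "Hello, who are you?" \
  -n 128 \
  -ngl 99
``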

https://askubuntu.com/questions/1461564/install-llama-cpp-locally has some tutorials for <Ubuntu>. There was no nicely pre-packaged build for <Ubuntu 25.04>, but building from source worked at commit 79e0b68c178656bb0632cb8602d2940b755077f8. Notably, llama.cpp exposed <Vulkan> support before <Ollama> did (https://github.com/ollama/ollama/pull/5059), and it did seem to work, making use of my <AMD GPU>.
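A from-source build sketch for that setup; the Ubuntu package names and the `GGML_VULKAN` CMake flag are assumptions from memory of the upstream build documentation rather than something verified here:

``
# Assumed Vulkan build dependencies (package names may differ across Ubuntu releases).
sudo apt install cmake build-essential libvulkan-dev glslc
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# The commit mentioned above as a known-good build.
git checkout 79e0b68c178656bb0632cb8602d2940b755077f8
# GGML_VULKAN=ON enables the Vulkan backend (assumed flag name).
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j
``

The resulting binaries, including `llama-cli`, should land under `build/bin/`.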