runner

Note: this is a work in progress

A minimal runner for loading a model and running inference via an HTTP server.

./runner -model <model binary>

Completion

curl -X POST -H "Content-Type: application/json" -d '{"prompt": "hi"}' http://localhost:8080/completion
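For programmatic use, here is a minimal Go client sketch. It assumes the endpoint streams newline-delimited JSON objects carrying a "content" field; that response shape is an assumption, not documented above, so verify it against the runner's actual output.

// completion_client.go: sketch of calling the /completion endpoint.
// ASSUMPTION: the server streams newline-delimited JSON objects with a
// "content" field holding each piece of generated text.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]string{"prompt": "hi"})
	resp, err := http.Post("http://localhost:8080/completion",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Read the response one line at a time, printing each text chunk.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		var chunk struct {
			Content string `json:"content"`
		}
		if err := json.Unmarshal(scanner.Bytes(), &chunk); err != nil {
			continue // skip any line that isn't a JSON object
		}
		fmt.Print(chunk.Content)
	}
	fmt.Println()
}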

Embeddings

curl -X POST -H "Content-Type: application/json" -d '{"prompt": "turn me into an embedding"}' http://localhost:8080/embedding
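And a matching Go sketch for the embedding endpoint. The response field name "embedding" (an array of floats) is assumed here; check the runner's actual JSON output before relying on it.

// embedding_client.go: sketch of calling the /embedding endpoint.
// ASSUMPTION: the server replies with a single JSON object of the form
// {"embedding": [ ...floats... ]}.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]string{"prompt": "turn me into an embedding"})
	resp, err := http.Post("http://localhost:8080/embedding",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Embedding []float32 `json:"embedding"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("embedding has %d dimensions\n", len(out.Embedding))
}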