ollama/llm
Jesse Gross 1200e427f7 ollamarunner: Automatically enable flash attention
If the user hasn't explicitly enabled or disabled flash attention,
enable it automatically when the model supports it and enabling it
would not trigger a fallback to CPU.

This supports text, vision, and embedding models, as well as automatic
handling of KV cache quantization (which requires flash attention). If a
model never calls the fast fused attention operation, this is detected
and any operations that depend on it are disabled.
2025-12-17 13:09:49 -08:00
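
The auto-enable decision the commit message describes can be sketched in a few lines of Go. Everything below is a hypothetical illustration, not code from ollama's codebase: the names (resolveFlashAttention, modelSupports, cpuFallback) are invented, and the nil pointer stands in for "the user did not set the flag either way."

```go
package main

import "fmt"

// resolveFlashAttention is a hypothetical sketch of the decision rule:
// an explicit user setting always wins; otherwise flash attention is
// enabled automatically only when the model supports it and enabling
// it would not force a fallback to CPU.
func resolveFlashAttention(userSetting *bool, modelSupports, cpuFallback bool) bool {
	if userSetting != nil {
		// The user made an explicit choice; respect it.
		return *userSetting
	}
	return modelSupports && !cpuFallback
}

func main() {
	fmt.Println(resolveFlashAttention(nil, true, false)) // true: auto-enabled
	fmt.Println(resolveFlashAttention(nil, true, true))  // false: would fall back to CPU
	off := false
	fmt.Println(resolveFlashAttention(&off, true, false)) // false: explicit user choice wins
}
```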
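The fallback path described in the second paragraph can be sketched the same way: if the model's graph never invokes the fast fused attention op, flash attention is revoked along with everything that depends on it, such as KV cache quantization. Again, every name here is a hypothetical stand-in rather than ollama's actual types.

```go
package main

import "fmt"

// features tracks capabilities the runner may need to revoke after the
// first forward pass. Hypothetical names for illustration only.
type features struct {
	flashAttention bool // enabled (explicitly or automatically) at startup
	kvCacheQuant   bool // depends on flash attention
}

// markFusedAttentionUnused models the detection step: if the fast fused
// attention op was never called, flash attention and its dependents are
// turned back off.
func (f *features) markFusedAttentionUnused(fusedOpCalled bool) {
	if fusedOpCalled {
		return
	}
	f.flashAttention = false
	// KV cache quantization requires flash attention, so it goes too.
	f.kvCacheQuant = false
}

func main() {
	f := features{flashAttention: true, kvCacheQuant: true}
	f.markFusedAttentionUnused(false) // model never called the fused op
	fmt.Printf("%+v\n", f)            // {flashAttention:false kvCacheQuant:false}
}
```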
llm_darwin.go Optimize container images for startup (#6547) 2024-09-12 12:10:30 -07:00
llm_linux.go Optimize container images for startup (#6547) 2024-09-12 12:10:30 -07:00
llm_windows.go win: lint fix (#10571) 2025-05-05 11:08:12 -07:00
server.go ollamarunner: Automatically enable flash attention 2025-12-17 13:09:49 -08:00
server_test.go llm: Don't always evict models on CPU-only systems 2025-12-02 10:58:08 -08:00
status.go logs: catch rocm errors (#12888) 2025-10-31 09:54:25 -07:00