ollama/ml
Jesse Gross 1200e427f7 ollamarunner: Automatically enable flash attention
If a user hasn't explicitly enabled or disabled flash attention,
automatically enable flash attention if the model supports it and
enabling it would not trigger a fallback to CPU.

This supports text, vision, and embedding models, as well as automatic
handling of KV cache quantization (which requires flash attention). If a
model does not call the fast fused attention operation, this is detected
and any operations that depend on it are disabled.
2025-12-17 13:09:49 -08:00
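As an illustration of the policy described in the commit message, the sketch below shows one way the auto-enable decision could be expressed in Go. The names here (FlashAttnSetting, Model, resolveFlashAttention, and the fields on Model) are hypothetical stand-ins for this example and are not taken from the actual ollama code base.

```go
package main

import "fmt"

// FlashAttnSetting models a user preference that may be left unset.
type FlashAttnSetting int

const (
	FlashAttnAuto FlashAttnSetting = iota // user did not specify
	FlashAttnOn                           // explicitly enabled
	FlashAttnOff                          // explicitly disabled
)

// Model is a stand-in for what the runner knows about the loaded model.
type Model struct {
	SupportsFlashAttention bool // model implements the fused attention op
	FitsOnGPU              bool // enabling it would not force a CPU fallback
	KVCacheQuantized       bool // KV cache quantization requires flash attention
}

// resolveFlashAttention follows the policy above: honor an explicit user
// choice, otherwise enable flash attention only when the model supports it
// and doing so would not trigger a fallback to CPU.
func resolveFlashAttention(setting FlashAttnSetting, m Model) bool {
	switch setting {
	case FlashAttnOn:
		return true
	case FlashAttnOff:
		return false
	default: // FlashAttnAuto
		return m.SupportsFlashAttention && m.FitsOnGPU
	}
}

func main() {
	m := Model{SupportsFlashAttention: true, FitsOnGPU: true, KVCacheQuantized: true}
	enabled := resolveFlashAttention(FlashAttnAuto, m)

	// Features that depend on the fused attention op, such as KV cache
	// quantization, are switched off when flash attention ends up disabled.
	kvQuant := m.KVCacheQuantized && enabled

	fmt.Println("flash attention:", enabled, "kv cache quantization:", kvQuant)
}
```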
backend       ollamarunner: Automatically enable flash attention     2025-12-17 13:09:49 -08:00
nn            fix: qwen2.5 vl rope (#13486)                          2025-12-15 17:30:33 -08:00
backend.go    ollamarunner: Automatically enable flash attention     2025-12-17 13:09:49 -08:00
device.go     flash attn: add auto mode for llama engine (#13052)    2025-12-12 13:27:19 -08:00
path.go       cpu: always ensure LibOllamaPath included (#12890)     2025-10-31 14:37:29 -07:00