ollama/ml/backend/ggml
Latest commit 1200e427f7 by Jesse Gross: ollamarunner: Automatically enable flash attention
If the user has not explicitly enabled or disabled flash attention,
it is enabled automatically when the model supports it and enabling it
would not trigger a fallback to CPU.

This supports text, vision, and embedding models, as well as automatic
handling of KV cache quantization (which requires flash attention). If a
model does not call the fast fused attention operation, this is detected
and any operations that depend on it are disabled.
Committed 2025-12-17 13:09:49 -08:00
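The commit message describes a simple decision rule: an explicit user setting always wins; otherwise flash attention is turned on only when the model supports it and doing so would not push the attention op onto the CPU, and KV cache quantization is dropped whenever flash attention ends up disabled. The following is a minimal Go sketch of that rule; the function and parameter names (resolveFlashAttention, wouldFallBackToCPU, and so on) are illustrative placeholders, not the actual ollama API.

```go
package main

import "fmt"

// resolveFlashAttention sketches the auto-enable decision: a non-nil
// userSetting is an explicit choice and is respected; otherwise flash
// attention is enabled when the model supports it and enabling it would
// not force the attention operation to fall back to CPU.
func resolveFlashAttention(userSetting *bool, modelSupports, wouldFallBackToCPU bool) bool {
	if userSetting != nil {
		return *userSetting // explicit enable/disable from the user
	}
	return modelSupports && !wouldFallBackToCPU
}

// resolveKVCacheType sketches the dependent setting: quantizing the KV
// cache requires flash attention, so a quantized type is dropped back to
// an unquantized default when flash attention is disabled.
func resolveKVCacheType(requested string, flashAttention bool) string {
	if !flashAttention {
		return "f16" // hypothetical unquantized default
	}
	return requested
}

func main() {
	// nil means the user did not set the flag either way.
	fa := resolveFlashAttention(nil, true, false)
	fmt.Println("flash attention:", fa)
	fmt.Println("kv cache type:", resolveKVCacheType("q8_0", fa))
}
```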
File              Last commit                                                                Date
ggml              feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)   2025-12-10 12:59:27 -08:00
ggml.go           ollamarunner: Automatically enable flash attention                        2025-12-17 13:09:49 -08:00
ggml_test.go      ml: add slice operation (#12870)                                           2025-11-13 13:28:21 -08:00
quantization.go   chore: fix some inconsistent function name in comment                     2025-08-13 09:50:27 -07:00
threads.go        ollama debug tensor                                                       2025-03-11 14:49:19 -07:00
threads_debug.go  ollama debug tensor                                                       2025-03-11 14:49:19 -07:00