ollama/ml/backend

Latest commit: bd6c1d6b49 by Daniel Hiltgen
flash attn: add auto mode for llama engine (#13052)

* flash attn: add auto mode for llama engine

If the user does not specify flash attention (fa) in the environment, use auto mode.

* review comments

* ensure KV cache quantized types have FA explicitly enabled

* additional review comments
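The commit message above describes two behaviors: falling back to an automatic flash-attention decision when the user sets nothing in the environment, and requiring quantized KV cache types to have FA explicitly enabled. A minimal Go sketch of that resolution logic is below; the function name, parameters, and the capability assumption in auto mode are hypothetical illustrations, not the actual ollama implementation.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// resolveFlashAttention is a hypothetical sketch of the auto-mode logic
// described in the commit message. envVal is the raw environment setting
// ("" means unset); kvCacheType is the requested KV cache type.
func resolveFlashAttention(envVal, kvCacheType string) (bool, error) {
	// Treat anything other than full-precision cache types as quantized.
	quantizedKV := kvCacheType != "" && kvCacheType != "f16" && kvCacheType != "f32"

	if envVal == "" {
		// Auto mode: quantized KV caches are not silently enabled; they
		// require the user to opt in to flash attention explicitly.
		if quantizedKV {
			return false, fmt.Errorf("KV cache type %q requires flash attention to be explicitly enabled", kvCacheType)
		}
		// Hypothetical assumption: the backend supports FA, so auto enables it.
		return true, nil
	}

	enabled, err := strconv.ParseBool(strings.TrimSpace(envVal))
	if err != nil {
		return false, fmt.Errorf("invalid flash attention setting %q: %w", envVal, err)
	}
	if quantizedKV && !enabled {
		return false, fmt.Errorf("KV cache type %q requires flash attention", kvCacheType)
	}
	return enabled, nil
}

func main() {
	on, err := resolveFlashAttention("", "f16")
	fmt.Println(on, err)
	_, err = resolveFlashAttention("", "q8_0")
	fmt.Println(err != nil)
}
```

The design choice sketched here is that auto mode never silently enables FA for a quantized KV cache: the user must opt in, matching the "explicitly enabled" wording in the commit message.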
2025-12-12 13:27:19 -08:00
Contents:

..
ggml/        flash attn: add auto mode for llama engine (#13052)   2025-12-12 13:27:19 -08:00
backend.go   next ollama runner (#7913)                            2025-02-13 16:31:21 -08:00