ollama/ml/backend/ggml
Oliver Simons 1ee3fe46f3
Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)
* Enable CUDA Graphs for gemma3n.

Similar to
https://github.com/ggml-org/llama.cpp/pull/14741,
though ollama's model graph differs slightly from
llama.cpp's and therefore requires different
workaround checks.

* Remove residual check by reshaping differently in gemma3n model

This should make the heuristics more robust (a sketch of this
kind of check follows below).
2025-12-29 06:39:47 -06:00
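
The commit body above refers to node-level "workaround checks": heuristics a CUDA backend uses to decide whether a previously captured CUDA graph can simply be replayed for the next decode step, or whether capture must be disabled because some kernel's launch parameters vary between tokens. The following is a minimal, self-contained Go sketch of that idea only; `Node`, `graphSafeForCUDAGraph`, the op-name strings, and the specific rules are hypothetical illustrations for this listing, not ollama's or llama.cpp's actual backend code.

```go
// Hypothetical sketch of a CUDA-graph reuse heuristic. A captured CUDA
// graph replays fixed kernel launches, so it is only valid while every
// node's launch geometry stays the same between tokens; only the buffer
// contents may change. The checks below are illustrative, not real code.
package main

import "fmt"

// Node is a simplified stand-in for a compute-graph node.
type Node struct {
	Op    string // operation name, e.g. "MUL_MAT", "ADD", "CPY"
	Name  string // tensor name assigned by the model builder
	Shape [4]int // tensor dimensions
}

// graphSafeForCUDAGraph reports whether a captured CUDA graph could be
// replayed for this graph. Any node whose launch parameters would differ
// from the captured run disables reuse.
func graphSafeForCUDAGraph(nodes []Node, batchSize int) bool {
	// Graphs are typically captured only for single-token decode;
	// larger batches change kernel launch geometry between calls.
	if batchSize > 1 {
		return false
	}
	for _, n := range nodes {
		// A generic node-level rule (hypothetical simplification):
		// a copy spanning more than one row suggests multi-token
		// processing, which a captured single-token graph cannot
		// replay correctly. The commit above removes the need for an
		// extra model-specific rule of this kind by reshaping the
		// gemma3n graph so the generic rules suffice.
		if n.Op == "CPY" && n.Shape[1] > 1 {
			return false
		}
	}
	return true
}

func main() {
	decodeGraph := []Node{
		{Op: "MUL_MAT", Name: "attn_q", Shape: [4]int{4096, 1, 1, 1}},
		{Op: "ADD", Name: "l_out", Shape: [4]int{4096, 1, 1, 1}},
	}
	fmt.Println("safe to replay:", graphSafeForCUDAGraph(decodeGraph, 1))
}
```

The design point the commit makes is that every model-specific check added to such a heuristic is a maintenance liability; reshaping the model graph so that only the generic rules apply is what "make the heuristics more robust" refers to.
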
ggml              Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)  2025-12-29 06:39:47 -06:00
ggml.go           ggml: Use assigned layers when reporting loading stats                                       2025-12-29 06:39:42 -06:00
quantization.go   Move quantization to new backend (#10363)                                                    2025-12-29 06:37:52 -06:00
threads.go        ollama debug tensor                                                                          2025-03-11 14:49:19 -07:00
threads_debug.go  ollama debug tensor                                                                          2025-03-11 14:49:19 -07:00