ollama/model/models
Latest commit 1ee3fe46f3 by Oliver Simons:
Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)
* Enable CUDA Graphs for gemma3n.

Similar to https://github.com/ggml-org/llama.cpp/pull/14741, though ollama's model graph differs slightly from llama.cpp's, so it needs different workaround checks (sketched below).

* Remove residual check by reshaping differently in the gemma3n model.

This should make the heuristics more robust.
2025-12-29 06:39:47 -06:00
Name       Last commit                                                                                  Last commit date
gemma2     ml: Panic rather than return error on tensor allocation failure                              2025-12-29 06:38:06 -06:00
gemma3     ml: Panic rather than return error on tensor allocation failure                              2025-12-29 06:38:06 -06:00
gemma3n    Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)  2025-12-29 06:39:47 -06:00
llama      Only load supported models on new engine (#11362)                                            2025-12-29 06:39:42 -06:00
llama4     use nn.Linear in place of ml.Tensor (#11049)                                                 2025-12-29 06:38:13 -06:00
mistral3   ml: Panic rather than return error on tensor allocation failure                              2025-12-29 06:38:06 -06:00
mllama     ml: Panic rather than return error on tensor allocation failure                              2025-12-29 06:38:06 -06:00
qwen2      Only load supported models on new engine (#11362)                                            2025-12-29 06:39:42 -06:00
qwen3      use nn.Linear in place of ml.Tensor (#11049)                                                 2025-12-29 06:38:13 -06:00
qwen25vl   ml: Panic rather than return error on tensor allocation failure                              2025-12-29 06:38:06 -06:00
models.go  add new gemma model (#11204)                                                                 2025-12-29 06:39:38 -06:00