ollama/model/models
Daniel Hiltgen 517807cdf2
perf: build graph for next batch async to keep GPU busy (#11863)
* perf: build graph for next batch in parallel to keep GPU busy

This refactors the main run loop of the ollama runner to perform the GPU-intensive
tasks (Compute+Floats) in a goroutine, so the next batch can be prepared in
parallel and the GPU spends less time stalled waiting for its next batch of work.

* tests: tune integration tests for ollama engine

This tunes the integration tests to focus more on models supported
by the new engine.
2025-08-29 14:20:28 -07:00
| Name      | Last commit message                                                                                  | Last commit date           |
|-----------|------------------------------------------------------------------------------------------------------|----------------------------|
| gemma2    | ml: Panic rather than return error on tensor allocation failure                                        | 2025-05-22 14:38:09 -07:00 |
| gemma3    | perf: build graph for next batch async to keep GPU busy (#11863)                                       | 2025-08-29 14:20:28 -07:00 |
| gemma3n   | Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)            | 2025-07-29 12:37:06 -07:00 |
| gptoss    | update vendored llama.cpp and ggml (#11823)                                                            | 2025-08-14 14:42:58 -07:00 |
| llama     | Only load supported models on new engine (#11362)                                                      | 2025-07-11 12:21:54 -07:00 |
| llama4    | perf: build graph for next batch async to keep GPU busy (#11863)                                       | 2025-08-29 14:20:28 -07:00 |
| mistral3  | perf: build graph for next batch async to keep GPU busy (#11863)                                       | 2025-08-29 14:20:28 -07:00 |
| mllama    | perf: build graph for next batch async to keep GPU busy (#11863)                                       | 2025-08-29 14:20:28 -07:00 |
| qwen2     | Only load supported models on new engine (#11362)                                                      | 2025-07-11 12:21:54 -07:00 |
| qwen3     | use nn.Linear in place of ml.Tensor (#11049)                                                           | 2025-06-11 12:10:15 -07:00 |
| qwen25vl  | perf: build graph for next batch async to keep GPU busy (#11863)                                       | 2025-08-29 14:20:28 -07:00 |
| models.go | gpt-oss (#11672)                                                                                       | 2025-08-05 12:21:16 -07:00 |