ollama/model
Daniel Hiltgen 517807cdf2
perf: build graph for next batch async to keep GPU busy (#11863)
* perf: build graph for next batch in parallel to keep GPU busy

This refactors the main run loop of the ollama runner to perform the GPU-intensive work
(Compute + Floats) in a goroutine, so the next batch can be prepared in parallel and the
GPU spends less time stalled waiting for its next batch of work (a sketch of the pattern
follows below).

* tests: tune integration tests for ollama engine

This tunes the integration tests to focus more on models supported
by the new engine.
2025-08-29 14:20:28 -07:00
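
The pipelining pattern described in the commit above can be sketched, very roughly, in plain Go. This is a minimal illustration under stated assumptions, not the runner's actual code: the types batch and result and the functions prepareBatch and compute are hypothetical stand-ins for the runner's real batch construction and GPU Compute/Floats steps.

```go
package main

import "fmt"

// Hypothetical stand-ins for the runner's real types; names are illustrative only.
type batch struct{ id int }
type result struct{ id int }

// prepareBatch stands in for building the graph and inputs for a batch (CPU work).
func prepareBatch(id int) batch { return batch{id: id} }

// compute stands in for the GPU-intensive Compute+Floats step.
func compute(b batch) result { return result{id: b.id} }

func main() {
	const numBatches = 4

	// done carries the result of the in-flight GPU computation.
	done := make(chan result, 1)

	// Submit the first batch before entering the loop.
	go func(b batch) { done <- compute(b) }(prepareBatch(0))

	for i := 1; i <= numBatches; i++ {
		var next batch
		haveNext := i < numBatches
		if haveNext {
			// Build the next batch on the CPU while the GPU is still
			// busy with the previous one.
			next = prepareBatch(i)
		}

		// Wait for the in-flight computation before submitting more work.
		r := <-done
		fmt.Println("finished batch", r.id)

		if haveNext {
			go func(b batch) { done <- compute(b) }(next)
		}
	}
}
```

Overlapping the CPU-side prepare step with the in-flight GPU work is what hides the graph-building latency; the buffered channel here is just one simple way to hand off the result of the previous submission.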
Name                      Last commit message                                                 Last commit date
imageproc                 imageproc mllama refactor (#7537)                                   2024-12-14 19:50:15 -08:00
input                     ollamarunner: Separate text and multimodal graphs                   2025-05-15 13:46:20 -07:00
models                    perf: build graph for next batch async to keep GPU busy (#11863)   2025-08-29 14:20:28 -07:00
testdata                  gemma2 impl                                                         2025-03-11 14:35:08 -07:00
bytepairencoding.go       model: fix boundary in bpe                                          2025-08-19 18:34:49 -07:00
bytepairencoding_test.go  model: add bpe roundtripping tests                                  2025-08-19 22:05:48 -07:00
model.go                  perf: build graph for next batch async to keep GPU busy (#11863)   2025-08-29 14:20:28 -07:00
model_test.go             update vendored llama.cpp and ggml (#11823)                         2025-08-14 14:42:58 -07:00
sentencepiece.go          model: handle multiple eos tokens (#10577)                          2025-05-16 13:40:23 -07:00
sentencepiece_test.go     model: handle multiple eos tokens (#10577)                          2025-05-16 13:40:23 -07:00
textprocessor.go          model: handle multiple eos tokens (#10577)                          2025-05-16 13:40:23 -07:00
vocabulary.go             model: treat 'user defined' tokens as special tokens (#11077)       2025-06-16 16:03:16 -07:00
vocabulary_test.go        model: treat 'user defined' tokens as special tokens (#11077)       2025-06-16 16:03:16 -07:00