ollama/runner/ollamarunner
Jesse Gross 26465fb85f ollamarunner: Worst case batch for token generation
We currently allocate worst case memory for maximum sized
batches, which corresponds to prompt processing. However,
in some cases the generated graph differs between small
and large batches. To ensure that we don't need to allocate
more memory later, after layout has taken place, we should
run the worst case batch both ways and take the larger of
the two memory requirements.

This does not noticeably affect loading speed, as the most
expensive part of this logic is image processing, which does
not occur during token generation.
2025-10-30 13:53:10 -07:00
cache.go feat(model): add qwen3vl (#12665) 2025-10-28 17:39:47 -07:00
cache_test.go feat(model): add qwen3vl (#12665) 2025-10-28 17:39:47 -07:00
multimodal.go s/From*Slice/From*s/ (#12255) 2025-10-28 12:08:49 -07:00
runner.go ollamarunner: Worst case batch for token generation 2025-10-30 13:53:10 -07:00