ollama/runner/ollamarunner
Jesse Gross 282bfaaa95 ollamarunner: Use a separate context per multimodal input
Currently there is a single context per sequence, shared by all
multimodal inputs. Since we build a vision encoder graph per
image, with a large number of inputs we can eventually hit the
maximum number of graph nodes per context.

This changes to use a separate context for each image, ensuring
that the available resource limits stay consistent regardless of
the number of inputs.
2025-03-14 15:38:54 -07:00
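
A minimal sketch of the pattern the commit describes, not the actual ollamarunner code: Backend, Context, NewContext, and encodeImage are hypothetical stand-ins for the runner's graph-context API, used only to show giving each image its own short-lived context.

// Sketch only: hypothetical types illustrating a per-image context.
package sketch

// Context stands in for a compute-graph context that has a
// per-context limit on the number of graph nodes.
type Context struct{}

// Close releases the graph resources held by the context.
func (c *Context) Close() {}

// Backend hands out fresh contexts.
type Backend struct{}

// NewContext returns an empty context with its full node budget.
func (b *Backend) NewContext() *Context { return &Context{} }

// encodeImage builds and runs a vision encoder graph for one image
// inside the given context.
func encodeImage(ctx *Context, img []byte) []float32 {
	// ... build and evaluate the encoder graph within ctx ...
	return nil
}

// encodeAll gives each image its own context. Previously a single
// context was shared by all multimodal inputs in a sequence, so a
// large number of images could exceed the per-context node limit.
func encodeAll(b *Backend, images [][]byte) [][]float32 {
	out := make([][]float32, 0, len(images))
	for _, img := range images {
		ctx := b.NewContext() // separate context per multimodal input
		out = append(out, encodeImage(ctx, img))
		ctx.Close() // free graph nodes before encoding the next image
	}
	return out
}
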
cache.go llm: remove internal subprocess req and resp types (#9324) 2025-03-14 15:21:53 -07:00
cache_test.go model: Update encoder cache to use multimodal input processing handler 2025-03-09 17:05:26 -07:00
runner.go ollamarunner: Use a separate context per multimodal input 2025-03-14 15:38:54 -07:00