Currently there is a single context per sequence, shared by all multimodal inputs. Since we build a vision encoder graph per image, with a large number of inputs we can eventually hit the maximum number of graph nodes per context. This change uses a separate context for each image, so the available resources are consistent regardless of how many images a sequence contains.
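A rough sketch of the shape of the change in Go; the `Backend`, `Context`, `NewContext`, `Compute`, `Close`, and `Floats` names below are illustrative assumptions, not the repository's actual API:

```go
// Package vision sketches the per-image context pattern described above.
package vision

// Backend and Context are stand-ins for the real inference backend types.
type Backend interface {
	NewContext() Context // each context has its own graph-node budget
}

type Context interface {
	Compute(t *Tensor) // run the graph that produces t in this context
	Close()            // free the context's graph and buffers
}

type Tensor struct{ data []float32 }

// Floats copies the tensor's values out so they outlive the context.
func (t *Tensor) Floats() []float32 { return append([]float32(nil), t.data...) }

type Image []byte

// EncodeImages builds and computes one vision-encoder graph per image.
// Before: a single shared context accumulated graph nodes across every
// image and could exceed the backend's maximum nodes per context.
// After: each image gets a fresh context, so the node count resets and
// the limit applies per image rather than per sequence.
func EncodeImages(b Backend, imgs []Image, encode func(Context, Image) *Tensor) [][]float32 {
	outs := make([][]float32, 0, len(imgs))
	for _, img := range imgs {
		ctx := b.NewContext() // fresh context per image
		t := encode(ctx, img)
		ctx.Compute(t)
		outs = append(outs, t.Floats()) // copy results out before freeing
		ctx.Close()                     // release before encoding the next image
	}
	return outs
}
```

Closing each context after its image keeps the per-context node count bounded by a single encoder graph, independent of how many images a sequence contains.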
(Model directory listing: gemma2, gemma3, llama, mllama, pixtral, qwen2vl, models.go)