Currently there is a single context per sequence, shared by all multimodal inputs. Since we build a vision encoder graph per image, a sequence with a large number of inputs can eventually hit the maximum number of graph nodes per context. This change uses a separate context for each image, ensuring that the available resource limits are consistent regardless of how many images a sequence contains.
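A minimal sketch of the shape of this change, encoding each image in its own freshly created context so the per-context graph-node limit applies to one image at a time. The interface and method names here (`Backend`, `Context`, `NewContext`, `Close`, `EncodeImage`) are illustrative assumptions, not the repository's exact API:

```go
package vision

// Hypothetical interfaces standing in for the runner's ML backend API;
// the real names and signatures in the repository may differ.
type Context interface {
	Close()
}

type Backend interface {
	NewContext() Context
}

type Image []byte
type Embedding []float32

type Encoder interface {
	EncodeImage(ctx Context, img Image) (Embedding, error)
}

// encodeAll builds the vision encoder graph for each image in its own
// context, so graph nodes accumulate per image rather than across all
// images in the sequence.
func encodeAll(b Backend, enc Encoder, images []Image) ([]Embedding, error) {
	embeddings := make([]Embedding, 0, len(images))
	for _, img := range images {
		ctx := b.NewContext() // fresh context per image
		emb, err := enc.EncodeImage(ctx, img)
		ctx.Close() // release the graph before encoding the next image
		if err != nil {
			return nil, err
		}
		embeddings = append(embeddings, emb)
	}
	return embeddings, nil
}
```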
Files changed:

- imageproc.go
- imageproc_test.go
- model.go
- model_text.go
- model_vision.go
- process_image.go