ollama/model/mllama
Latest commit 7916f55009 by Jesse Gross, 2025-02-13 17:09:26 -08:00:
vocab: Use int32 for special tokens

Special tokens are currently read as uint32 from the model metadata.
However, all other parts of the system (including the tokenizer) use
int32 to represent tokens, so it is impossible to represent the high
portion of the unsigned range. For consistency and to avoid casts,
we should just use int32 everywhere.
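The wraparound the commit message describes is easy to demonstrate. Below is a minimal, hypothetical Go sketch (the Token alias and the example IDs are illustrative, not Ollama's actual identifiers): converting a uint32 metadata value to an int32 token wraps for anything above math.MaxInt32, so the high half of the unsigned range cannot be represented either way, and reading the value as int32 from the start keeps the whole pipeline on one type with no casts.

```go
// Minimal sketch, not Ollama's actual code: shows why casting uint32
// metadata values to the int32 token type is lossy, and why reading
// them as int32 in the first place keeps the pipeline cast-free.
package main

import "fmt"

// Token mirrors the int32 token type used by the tokenizer.
type Token = int32

func main() {
	// Hypothetical special-token ID stored as uint32 in model metadata.
	// Anything above math.MaxInt32 sits in the "high portion" of the
	// unsigned range that int32 cannot represent.
	fromMetadata := uint32(3_000_000_000)

	// The conversion wraps around and yields a negative token ID.
	wrapped := Token(fromMetadata)
	fmt.Println("after cast:", wrapped) // after cast: -1294967296

	// Reading metadata as int32 from the start keeps every component
	// on the same token type and removes the cast entirely.
	special := []Token{128000, 128001} // e.g. BOS/EOS-style IDs
	fmt.Println("special tokens:", special)
}
```

In practice special-token IDs sit well below math.MaxInt32, which is why the commit frames the change as a consistency and cast-avoidance cleanup rather than a correctness fix.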
File               Last commit                                                  Date
imageproc.go       imageproc mllama refactor (#7537)                            2024-12-14 19:50:15 -08:00
imageproc_test.go  imageproc mllama refactor (#7537)                            2024-12-14 19:50:15 -08:00
model.go           vocab: Use int32 for special tokens                          2025-02-13 17:09:26 -08:00
model_text.go      backend: API to support full precision matmul                2025-02-13 17:09:26 -08:00
model_vision.go    backend: Consistently use int (vs. int64) for tensor shapes  2025-02-13 17:09:26 -08:00
process_image.go   next ollama runner (#7913)                                   2025-02-13 16:31:21 -08:00