ollama/model
Latest commit: Jesse Gross 7916f55009 vocab: Use int32 for special tokens
Special tokens are currently read as uint32 from the model metadata.
However, all other parts of the system (including the tokenizer) use
int32 to represent tokens, so the high portion of the unsigned range
cannot be represented anyway. For consistency and to avoid casts,
we should just use int32 everywhere.
2025-02-13 17:09:26 -08:00
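As a rough illustration of the change described in the commit message, here is a minimal Go sketch. The Vocabulary struct and the specialTokenID helper are hypothetical, simplified stand-ins, not ollama's actual API; they only show special token IDs carried as int32 (the same type the tokenizer uses for ordinary tokens), with uint32 metadata values outside the int32 range rejected rather than silently wrapped by a cast.

```go
package main

import (
	"fmt"
	"math"
)

// Vocabulary is a hypothetical, simplified stand-in for the runner's vocab
// data; the field names are illustrative only. Special tokens are held as
// int32, matching the type used for all other tokens.
type Vocabulary struct {
	Values []string
	BOS    int32
	EOS    int32
}

// specialTokenID converts a raw uint32 value read from model metadata into
// the int32 token type used everywhere else, reporting values that cannot
// be represented instead of letting a direct cast wrap around.
func specialTokenID(raw uint32) (int32, error) {
	if raw > math.MaxInt32 {
		return 0, fmt.Errorf("special token id %d overflows int32", raw)
	}
	return int32(raw), nil
}

func main() {
	eos, err := specialTokenID(2) // e.g. an EOS id read from model metadata
	if err != nil {
		panic(err)
	}
	v := Vocabulary{Values: []string{"<unk>", "<s>", "</s>"}, BOS: 1, EOS: eos}
	fmt.Printf("BOS=%d EOS=%d\n", v.BOS, v.EOS)
}
```

Since the tokenizer already represents tokens as int32, IDs above the int32 range were never usable anyway, so narrowing the metadata type loses nothing in practice while removing casts.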
Name                  Last commit message                        Last commit date
imageproc             imageproc mllama refactor (#7537)          2024-12-14 19:50:15 -08:00
llama                 vocab: Use int32 for special tokens        2025-02-13 17:09:26 -08:00
mllama                vocab: Use int32 for special tokens        2025-02-13 17:09:26 -08:00
pixtral               imageproc mllama refactor (#7537)          2024-12-14 19:50:15 -08:00
qwen2vl               imageproc mllama refactor (#7537)          2024-12-14 19:50:15 -08:00
testdata              next ollama runner (#7913)                 2025-02-13 16:31:21 -08:00
model.go              model: Load tensors behind an interface    2025-02-13 17:09:26 -08:00
model_test.go         model: Load tensors behind an interface    2025-02-13 17:09:26 -08:00
process_text.go       vocab: Use int32 for special tokens        2025-02-13 17:09:26 -08:00
process_text_test.go  next ollama runner (#7913)                 2025-02-13 16:31:21 -08:00