ollama/model
Jesse Gross d1ed4b17ef
ml: Panic rather than return error on tensor allocation failure
FromFloatSlice and FromIntSlice return an error if the shape doesn't
match the passed data or if memory can't be allocated. Since these
are inputs, the memory being allocated is system memory rather than VRAM.

In many cases, the caller can't meaningfully handle the error and panics anyway.

Empty and Zeros directly panic if they can't allocate memory.

This makes things consistent by panicking for the first two cases as well,
removing a fair amount of error handling code. It is also consistent
with how Go typically handles these situations.
2025-12-29 06:38:06 -06:00
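As a rough sketch of the pattern this commit describes (the Tensor type, function name, and signature below are simplified stand-ins, not the actual ml package API), a constructor that previously returned an error for a shape mismatch or a failed allocation now panics instead, so callers can drop their error-handling branches:

```go
package main

import "fmt"

// Tensor is a simplified stand-in for the ml package's tensor type;
// the real type and constructors are richer than this sketch.
type Tensor struct {
	shape []int
	data  []float32
}

// fromFloatSlice illustrates the change: instead of returning
// (Tensor, error) when the shape doesn't match the data, it panics,
// matching how Empty and Zeros already behave. The name and signature
// here are illustrative, not the real ollama API.
func fromFloatSlice(s []float32, shape ...int) Tensor {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(s) {
		// Callers could not meaningfully recover from this before,
		// so the error return is replaced by a panic.
		panic(fmt.Errorf("shape %v does not match %d elements", shape, len(s)))
	}
	return Tensor{shape: shape, data: s}
}

func main() {
	// Callers no longer need an error branch when building input tensors.
	t := fromFloatSlice([]float32{1, 2, 3, 4, 5, 6}, 2, 3)
	fmt.Println(t.shape)
}
```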
| Name | Last commit | Date |
|------|-------------|------|
| imageproc | imageproc mllama refactor (#7537) | 2024-12-14 19:50:15 -08:00 |
| input | ollamarunner: Separate text and multimodal graphs | 2025-12-29 06:38:01 -06:00 |
| models | ml: Panic rather than return error on tensor allocation failure | 2025-12-29 06:38:06 -06:00 |
| testdata | gemma2 impl | 2025-03-11 14:35:08 -07:00 |
| bytepairencoding.go | model: handle multiple eos tokens (#10577) | 2025-12-29 06:38:01 -06:00 |
| bytepairencoding_test.go | model: handle multiple eos tokens (#10577) | 2025-12-29 06:38:01 -06:00 |
| model.go | ml: Panic rather than return error on tensor allocation failure | 2025-12-29 06:38:06 -06:00 |
| model_test.go | fs: move ml.Config to fs package | 2025-04-03 13:12:24 -07:00 |
| sentencepiece.go | model: handle multiple eos tokens (#10577) | 2025-12-29 06:38:01 -06:00 |
| sentencepiece_test.go | model: handle multiple eos tokens (#10577) | 2025-12-29 06:38:01 -06:00 |
| textprocessor.go | model: handle multiple eos tokens (#10577) | 2025-12-29 06:38:01 -06:00 |
| vocabulary.go | model: handle multiple eos tokens (#10577) | 2025-12-29 06:38:01 -06:00 |