ollama/model
Jesse Gross 7b9ab4cb32
ggml: Separate tensor load from backend creation
Currently, when the backend is created, the tensors are loaded at the
same time, which is a slow operation. This separates the work into two
steps:
 - Create the backend, including enumerating tensors and allocating memory
 - Load the tensor data

This allows more flexibility in managing model loading.
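A minimal Go sketch of the two-step split described above. All names here (TensorMeta, Backend, NewBackend, LoadTensors) are hypothetical illustrations, not the actual ollama/ml API: step 1 enumerates tensors and allocates their buffers, step 2 fills those buffers with data.

```go
package main

import "fmt"

// TensorMeta describes a tensor discovered while enumerating the model.
// Hypothetical type for illustration only.
type TensorMeta struct {
	Name  string
	Bytes int64
}

// Backend holds allocated, but not yet populated, tensor storage.
type Backend struct {
	tensors map[string][]byte
}

// NewBackend is step 1: enumerate tensors and allocate memory.
// No tensor data is read here, so it stays comparatively cheap.
func NewBackend(metas []TensorMeta) *Backend {
	b := &Backend{tensors: make(map[string][]byte, len(metas))}
	for _, m := range metas {
		b.tensors[m.Name] = make([]byte, m.Bytes)
	}
	return b
}

// LoadTensors is step 2: copy tensor data into the allocated buffers.
// read stands in for whatever actually reads tensor bytes from the model file.
func (b *Backend) LoadTensors(read func(name string, dst []byte) error) error {
	for name, buf := range b.tensors {
		if err := read(name, buf); err != nil {
			return fmt.Errorf("load %s: %w", name, err)
		}
	}
	return nil
}

func main() {
	metas := []TensorMeta{{Name: "blk.0.attn_q.weight", Bytes: 16}}
	b := NewBackend(metas) // step 1: allocation only, no tensor I/O

	// step 2: the slow part, now decoupled so callers can schedule it separately
	err := b.LoadTensors(func(name string, dst []byte) error {
		for i := range dst {
			dst[i] = 0 // placeholder: real code would read from the model file
		}
		return nil
	})
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println("backend ready:", len(b.tensors), "tensors loaded")
}
```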
2025-12-29 06:38:02 -06:00
imageproc imageproc mllama refactor (#7537) 2024-12-14 19:50:15 -08:00
input ollamarunner: Separate text and multimodal graphs 2025-12-29 06:38:01 -06:00
models model: handle multiple eos tokens (#10577) 2025-12-29 06:38:01 -06:00
testdata gemma2 impl 2025-03-11 14:35:08 -07:00
bytepairencoding.go model: handle multiple eos tokens (#10577) 2025-12-29 06:38:01 -06:00
bytepairencoding_test.go model: handle multiple eos tokens (#10577) 2025-12-29 06:38:01 -06:00
model.go ggml: Separate tensor load from backend creation 2025-12-29 06:38:02 -06:00
model_test.go fs: move ml.Config to fs package 2025-04-03 13:12:24 -07:00
sentencepiece.go model: handle multiple eos tokens (#10577) 2025-12-29 06:38:01 -06:00
sentencepiece_test.go model: handle multiple eos tokens (#10577) 2025-12-29 06:38:01 -06:00
textprocessor.go model: handle multiple eos tokens (#10577) 2025-12-29 06:38:01 -06:00
vocabulary.go model: handle multiple eos tokens (#10577) 2025-12-29 06:38:01 -06:00