ollama/model/models
Michael Yang bab6f34dc0 ml/backend/ggml: update model loading for hybrid/multi backends
Use a similar strategy to llama.cpp for deciding where tensors should be
allocated. This will be improved later to be aware of usable memory
before assigning tensors.
2025-03-07 14:08:21 -08:00
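For context, the placement strategy the commit describes — splitting a model's tensors/layers across the available backends, with a memory-aware version planned later — can be sketched roughly as below. This is only an illustration: the Backend type, the assignLayers helper, and the even round-robin split are assumptions, not the actual ollama/ggml API.

```go
package main

import "fmt"

// Backend is a hypothetical stand-in for a ggml compute backend
// (CPU, CUDA, Metal, ...). Not the real ollama/ggml type.
type Backend struct {
	Name string
}

// assignLayers sketches a llama.cpp-style placement heuristic: partition
// the model's layers evenly across the available GPU backends, falling
// back to the CPU backend when no GPUs are present. A later, memory-aware
// version would check usable memory before assigning each tensor.
func assignLayers(numLayers int, gpus []Backend, cpu Backend) []Backend {
	placement := make([]Backend, numLayers)
	for i := range placement {
		if len(gpus) > 0 {
			// Even split: layer i goes to GPU floor(i * len(gpus) / numLayers).
			placement[i] = gpus[i*len(gpus)/numLayers]
		} else {
			placement[i] = cpu
		}
	}
	return placement
}

func main() {
	gpus := []Backend{{Name: "cuda0"}, {Name: "cuda1"}}
	cpu := Backend{Name: "cpu"}

	for i, b := range assignLayers(8, gpus, cpu) {
		fmt.Printf("layer %d -> %s\n", i, b.Name)
	}
}
```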
Name        Last commit                                                        Date
llama       ml/backend/ggml: update model loading for hybrid/multi backends   2025-03-07 14:08:21 -08:00
mllama      ollamarunner: Improve multimodal input handling                   2025-03-06 16:54:16 -08:00
pixtral     models: Move model into their own directory                       2025-02-13 17:09:26 -08:00
qwen2vl     models: Move model into their own directory                       2025-02-13 17:09:26 -08:00
models.go   models: Move model into their own directory                       2025-02-13 17:09:26 -08:00