Directory listing: ollama/ml/backend/ggml/ggml
Latest commit: c2f5d6662b by Jesse Gross (2025-05-02 12:22:47 -07:00)
ollamarunner: Re-enable worst case graph preallocation.

Worst-case graph preallocation was disabled by a27462b
"ollamarunner: Temporarily disable worst case graph preallocation"
since it caused crashes with large batches when not using the GPU.

This backports upstream llama.cpp commit f057808
"ggml: Don't assert fail when tensor data changes (#13222)", which
fixes the underlying bug and allows reverting the previous workaround.
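For context on what "worst case graph preallocation" means here, the sketch below illustrates the general idea only; it is not ollama's actual code, and the names Graph, Reserve, Forward, and maxBatch are hypothetical. The runner sizes the compute graph's buffers once, up front, for the largest batch it will ever submit, so steady-state decoding never needs to reallocate mid-inference:

```go
// Minimal sketch (hypothetical API, not ollama's runner) of worst-case
// graph preallocation: reserve compute-graph memory once, sized for the
// largest possible batch, before serving any requests.
package main

import "fmt"

// Graph stands in for a backend compute graph; Reserve sizes its buffers.
type Graph struct {
	reservedTokens int
}

// Reserve grows the graph's buffers to handle up to n tokens at once.
// It never shrinks, mirroring a worst-case preallocation strategy.
func (g *Graph) Reserve(n int) {
	if n > g.reservedTokens {
		g.reservedTokens = n
		fmt.Printf("reserved buffers for %d tokens\n", n)
	}
}

// Forward "runs" a batch; with worst-case preallocation done up front,
// no batch at or below the reserved size can trigger a new allocation.
func (g *Graph) Forward(batch int) error {
	if batch > g.reservedTokens {
		return fmt.Errorf("batch %d exceeds reserved %d", batch, g.reservedTokens)
	}
	return nil
}

func main() {
	const maxBatch = 512 // hypothetical maximum batch size

	g := &Graph{}
	g.Reserve(maxBatch) // worst-case preallocation before serving

	for _, b := range []int{1, 64, 512} {
		if err := g.Forward(b); err != nil {
			fmt.Println("error:", err)
		}
	}
}
```

In ggml itself the mechanism is graph allocation through the backend, not a counter like this; judging from its title, the backported fix relaxes an assertion that fired when a tensor's data address changed between such passes.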
Name           Last commit                                                                     Date
cmake          ml: add missing cmake property and remove additional CMakeLists.txt (#10310)   2025-04-16 18:56:29 -07:00
include        llama: update to commit e1e8e099 (#10513)                                      2025-05-01 18:24:09 -07:00
src            ollamarunner: Re-enable worst case graph preallocation.                        2025-05-02 12:22:47 -07:00
.rsync-filter  ml: add missing cmake property and remove additional CMakeLists.txt (#10310)   2025-04-16 18:56:29 -07:00
LICENSE        next build (#8539)                                                             2025-01-29 15:03:38 -08:00