ollama/ml/backend/ggml/ggml
Latest commit cec8a9dee0 by Jesse Gross (2025-12-29 06:37:50 -06:00):

ollamarunner: Re-enable worst case graph preallocation.

Worst case graph preallocation was disabled by a27462b
("ollamarunner: Temporarily disable worst case graph preallocation")
since it caused crashes with large batches when not using the GPU.

This backports upstream llama.cpp commit f057808
("ggml: Don't assert fail when tensor data changes (#13222)"), which
fixes the underlying bug and allows reverting the previous workaround.
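For context, here is a minimal sketch of the reserve-then-allocate pattern that "worst case graph preallocation" refers to, written against the public ggml-alloc API (ggml_gallocr_reserve / ggml_gallocr_alloc_graph). The build_graph() helper, the constants N_EMBD and MAX_TOKENS, and the toy one-matmul model are illustrative assumptions, not code from ollama or llama.cpp, and header layout varies between ggml versions. The idea is to size the compute buffers once against the largest graph the runner could ever build, so smaller per-batch graphs never force a reallocation mid-decode.

```c
// Sketch of worst-case graph preallocation with ggml-alloc.
// build_graph(), N_EMBD, and MAX_TOKENS are illustrative, not ollama code.
// Recent ggml trees declare ggml_backend_cpu_init() in ggml-cpu.h; older
// ones put it in ggml-backend.h.
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"
#include "ggml-cpu.h"

#define N_EMBD     64
#define MAX_TOKENS 512   // largest batch the runner must survive

// A no_alloc context only holds tensor/graph metadata; the graph
// allocator assigns the actual data buffers later.
static struct ggml_context * new_meta_ctx(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ ggml_tensor_overhead() * 8 + ggml_graph_overhead(),
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true,
    };
    return ggml_init(params);
}

// Build a toy forward graph for n_tokens inputs: one matmul.
static struct ggml_cgraph * build_graph(struct ggml_context * ctx, int n_tokens) {
    struct ggml_tensor * w   = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, N_EMBD, N_EMBD);
    struct ggml_tensor * inp = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, N_EMBD, n_tokens);
    struct ggml_cgraph * gf  = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, ggml_mul_mat(ctx, w, inp));
    return gf;
}

int main(void) {
    ggml_backend_t backend = ggml_backend_cpu_init();
    ggml_gallocr_t galloc  = ggml_gallocr_new(ggml_backend_get_default_buffer_type(backend));

    // Startup: measure the worst-case graph and reserve buffers for it.
    struct ggml_context * ctx_max = new_meta_ctx();
    ggml_gallocr_reserve(galloc, build_graph(ctx_max, MAX_TOKENS));
    ggml_free(ctx_max);

    // Steady state: every real batch fits in the reserved buffers, so
    // ggml_gallocr_alloc_graph() never has to grow an allocation mid-decode.
    struct ggml_context * ctx = new_meta_ctx();
    struct ggml_cgraph  * gf  = build_graph(ctx, 7 /* actual batch size */);
    ggml_gallocr_alloc_graph(galloc, gf);
    ggml_backend_graph_compute(backend, gf);   // inputs left unset: sketch only
    ggml_free(ctx);

    ggml_gallocr_free(galloc);
    ggml_backend_free(backend);
    return 0;
}
```

In a real runner the weights would live in their own backend buffer and only activations would go through the graph allocator; this sketch routes everything through ggml_gallocr for brevity.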
Name           Last commit message                                                           Last commit date
cmake          ml: add missing cmake property and remove additional CMakeLists.txt (#10310)  2025-12-29 06:37:39 -06:00
include        llama: update to commit e1e8e099 (#10513)                                     2025-12-29 06:37:49 -06:00
src            ollamarunner: Re-enable worst case graph preallocation.                       2025-12-29 06:37:50 -06:00
.rsync-filter  ml: add missing cmake property and remove additional CMakeLists.txt (#10310)  2025-12-29 06:37:39 -06:00
LICENSE        next build (#8539)                                                            2025-01-29 15:03:38 -08:00