ollama/ml
Jesse Gross cec8a9dee0
ollamarunner: Re-enable worst case graph preallocation.
Worst case graph preallocation was disabled by commit a27462b
"ollamarunner: Temporarily disable worst case graph preallocation"
since it caused crashes with large batches when not using the GPU.

This backports upstream llama.cpp commit f057808
"ggml: Don't assert fail when tensor data changes (#13222)", which
fixes the underlying bug and allows reverting the previous workaround.
2025-12-29 06:37:50 -06:00
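
For context on what is being re-enabled: worst case graph preallocation reserves compute-graph memory up front for the largest batch the runner will accept, so every smaller batch fits inside that reservation and allocation can never fail mid-inference. Below is a minimal Go sketch of that idea only; graph, buildGraph, and runner are hypothetical names for illustration, not the actual ollama ml API.

```go
package main

import "fmt"

// graph stands in for a compute graph; bufSize is the scratch memory it
// needs. Both are illustrative stand-ins, not real ollama/ggml types.
type graph struct {
	batchSize int
	bufSize   int
}

// buildGraph models graph construction: memory use grows with batch size,
// so the largest batch is the worst case.
func buildGraph(batchSize int) graph {
	return graph{batchSize: batchSize, bufSize: batchSize * 4096}
}

// runner holds a buffer sized for the worst-case graph, reserved once at
// startup rather than grown per batch.
type runner struct {
	buf []byte
}

// newRunner builds the worst-case graph and reserves memory for it.
func newRunner(maxBatch int) *runner {
	worst := buildGraph(maxBatch)
	return &runner{buf: make([]byte, worst.bufSize)}
}

// run executes a batch inside the existing reservation; it never allocates,
// so it cannot fail for lack of memory once newRunner has succeeded.
func (r *runner) run(batchSize int) error {
	g := buildGraph(batchSize)
	if g.bufSize > len(r.buf) {
		return fmt.Errorf("batch of %d exceeds worst-case reservation", batchSize)
	}
	// ... compute using r.buf[:g.bufSize] ...
	fmt.Printf("batch %d: used %d of %d reserved bytes\n",
		batchSize, g.bufSize, len(r.buf))
	return nil
}

func main() {
	r := newRunner(512) // reserve for the worst case once, at startup
	for _, b := range []int{1, 64, 512} {
		if err := r.run(b); err != nil {
			fmt.Println(err)
		}
	}
}
```

Per its title, the backported f057808 makes ggml tolerate a tensor's data changing instead of assert-failing, which the message above identifies as the underlying bug behind the large-batch CPU crashes.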
backend     ollamarunner: Re-enable worst case graph preallocation.  2025-12-29 06:37:50 -06:00
nn          attention: Remove unnecessary contiguous operations      2025-03-01 20:53:23 -08:00
backend.go  llama4                                                   2025-12-29 06:37:44 -06:00