ollama/runner/llamarunner
Jesse Gross d5a0d8d904 llm: New memory management
This changes the memory allocation strategy from upfront estimation to
tracking the actual allocations made by the engine and reacting to them. The
goal is to avoid issues caused by both under-estimation (crashing) and
over-estimation (low performance due to under-utilized GPUs).
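A minimal Go sketch of the described reactive strategy; the names here
(allocTracker, tryAllocate, loadLayers) are hypothetical and not taken from
the Ollama source. Instead of estimating memory up front, each real
allocation is attempted, and the loader backs off when the device refuses:

```go
package main

import (
	"errors"
	"fmt"
)

// allocTracker records actual allocations on a device rather than
// relying on an upfront estimate. Hypothetical type, for illustration.
type allocTracker struct {
	capacity  uint64 // bytes available on the device
	allocated uint64 // bytes actually handed out so far
}

var errNoMem = errors.New("out of device memory")

// tryAllocate reacts to the real allocation outcome: it either records
// the allocation or reports failure to the caller.
func (t *allocTracker) tryAllocate(n uint64) error {
	if t.allocated+n > t.capacity {
		return errNoMem
	}
	t.allocated += n
	return nil
}

// loadLayers offloads as many layers as the device actually accepts,
// backing off on allocation failure instead of crashing (under-estimation)
// or stopping early on a pessimistic guess (over-estimation).
func loadLayers(t *allocTracker, layerSizes []uint64) int {
	loaded := 0
	for _, size := range layerSizes {
		if err := t.tryAllocate(size); err != nil {
			break // remaining layers stay on the CPU
		}
		loaded++
	}
	return loaded
}

func main() {
	t := &allocTracker{capacity: 8 << 30} // pretend the GPU has 8 GiB
	layers := make([]uint64, 40)
	for i := range layers {
		layers[i] = 512 << 20 // 512 MiB per layer
	}
	n := loadLayers(t, layers)
	fmt.Printf("offloaded %d/%d layers, %d bytes used\n", n, len(layers), t.allocated)
}
```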

It is currently opt-in and can be enabled for models running on the
Ollama engine by setting OLLAMA_NEW_ESTIMATES=1. Behavior in other
cases is unchanged and will continue to use the existing estimates.
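A hedged sketch of how such an opt-in gate is typically read in Go; only the
variable name OLLAMA_NEW_ESTIMATES comes from the commit message, while the
helper around it is illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// newMemoryEstimates reports whether the opt-in flag from the commit
// message is set. The helper itself is hypothetical.
func newMemoryEstimates() bool {
	return os.Getenv("OLLAMA_NEW_ESTIMATES") == "1"
}

func main() {
	if newMemoryEstimates() {
		fmt.Println("using allocation tracking")
	} else {
		fmt.Println("using existing upfront estimates")
	}
}
```

In practice the variable would be set in the server's environment, e.g.
OLLAMA_NEW_ESTIMATES=1 ollama serve.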
2025-08-14 15:24:01 -07:00
File           Last commit                                          Date
cache.go       ollamarunner: Base cached tokens on current prompt   2025-05-15 13:46:20 -07:00
cache_test.go  Runner for Ollama engine                             2025-02-13 17:09:26 -08:00
image.go       update vendored llama.cpp and ggml (#11823)          2025-08-14 14:42:58 -07:00
image_test.go  Runner for Ollama engine                             2025-02-13 17:09:26 -08:00
runner.go      llm: New memory management                           2025-08-14 15:24:01 -07:00