ollama/ml
Jesse Gross f03b8bc51a ggml: Use max graph memory allocation when reserving
When calculating the size of the memory required for a compute
graph, we may test multiple graphs - for example a vision encoder
and the text model. Since these graphs are never run at the same
time, we just want the max size.

Typically, a new graph only reallocates memory if it doesn't fit in
the existing space, so the last graph reservation is the max size.
However, the Vulkan backend imposes a 1 GB cap on a single allocation,
which means that a graph may require multiple allocations. This
results in a problem if:
 - There is an old graph with one small chunk and one big chunk
 - A new graph needs one big chunk that is smaller than the total
   size of the old graph's chunks.
In this case, the big chunk of the new graph will trigger a
reallocation, which will free the old graph's second chunk. The total
amount of memory reported will then be lower than the true max. To
avoid this, we should explicitly take the max across all graphs.
2025-12-18 11:14:17 -08:00
backend ggml: Use max graph memory allocation when reserving 2025-12-18 11:14:17 -08:00
nn fix: qwen2.5 vl rope (#13486) 2025-12-15 17:30:33 -08:00
backend.go GGML update to ec98e2002 (#13451) 2025-12-17 13:13:55 -08:00
device.go flash attn: add auto mode for llama engine (#13052) 2025-12-12 13:27:19 -08:00
path.go cpu: always ensure LibOllamaPath included (#12890) 2025-10-31 14:37:29 -07:00