ollama/ml
Jesse Gross 8bf38552de llm: Prefer dedicated GPUs over iGPUs when allocating memory
We currently assign model layers to GPUs according to free VRAM,
which assumes that GPU performance is roughly equal. This does not
work well for mixed dGPU and iGPU systems: iGPUs typically draw on
system memory, which is large but slow.
This change instead assigns layers to dGPUs first and then iGPUs.

In the future, this could be generalized to a finer-grained notion
of GPU performance, but the dGPU vs. iGPU gap is the most extreme
case.
2025-11-11 13:11:08 -08:00
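The allocation order described above can be sketched as a sort that prefers dedicated GPUs and breaks ties by free VRAM. This is a minimal illustration, not ollama's actual code: the `gpu` struct and `orderForAllocation` function are hypothetical names, and the real device structs in `device.go` differ.

```go
package main

import (
	"fmt"
	"sort"
)

// gpu is a hypothetical, simplified view of a device; the real
// ollama structures carry much more information.
type gpu struct {
	Name       string
	Integrated bool   // true for an iGPU
	FreeVRAM   uint64 // bytes
}

// orderForAllocation sorts devices so that dedicated GPUs come
// before iGPUs; within each class, devices with more free VRAM
// come first. Layers are then assigned in this order.
func orderForAllocation(gpus []gpu) {
	sort.SliceStable(gpus, func(i, j int) bool {
		if gpus[i].Integrated != gpus[j].Integrated {
			return !gpus[i].Integrated // dGPU before iGPU
		}
		return gpus[i].FreeVRAM > gpus[j].FreeVRAM
	})
}

func main() {
	gpus := []gpu{
		{Name: "iGPU", Integrated: true, FreeVRAM: 32 << 30},
		{Name: "dGPU-small", Integrated: false, FreeVRAM: 8 << 30},
		{Name: "dGPU-big", Integrated: false, FreeVRAM: 24 << 30},
	}
	orderForAllocation(gpus)
	for _, g := range gpus {
		fmt.Println(g.Name)
	}
}
```

Note that even though the iGPU reports the most free memory here, it is considered last, which is exactly the behavior the commit describes.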
backend    | Remove unnecessary MacOs 13 and lower Patches (#12656)        | 2025-11-06 15:52:56 -08:00
nn         | ggml update to b6840 (#12791)                                 | 2025-11-06 10:19:22 -08:00
backend.go | ggml: Enable op_offload to improve partial offload performance | 2025-10-30 13:53:10 -07:00
device.go  | llm: Prefer dedicated GPUs over iGPUs when allocating memory   | 2025-11-11 13:11:08 -08:00
path.go    | cpu: always ensure LibOllamaPath included (#12890)             | 2025-10-31 14:37:29 -07:00