ollama/ml/backend/ggml/ggml
Latest commit: Inforithmics eb7b5ce9f4 "Fix patches apply" (2025-09-16 22:14:05 +02:00)
cmake          update vendored llama.cpp and ggml (#11823)                 2025-08-14 14:42:58 -07:00
include        ggml: Avoid allocating CUDA primary context on unused GPUs  2025-08-27 16:24:18 -07:00
src            Fix patches apply                                           2025-09-16 22:14:05 +02:00
.rsync-filter  Merge remote-tracking branch 'upstream/main' into vulkanV3  2025-08-15 00:06:53 +02:00
LICENSE        next build (#8539)                                          2025-01-29 15:03:38 -08:00