ollama/ml/backend/ggml/ggml

Latest commit: harden uncaught exception registration (#12120)
Author: Daniel Hiltgen (0cc90a8186), 2025-09-02 09:43:55 -07:00

Name           Last commit message                                          Last commit date
cmake          update vendored llama.cpp and ggml (#11823)                  2025-08-14 14:42:58 -07:00
include        ggml: Avoid allocating CUDA primary context on unused GPUs   2025-08-27 16:24:18 -07:00
src            harden uncaught exception registration (#12120)              2025-09-02 09:43:55 -07:00
.rsync-filter  update vendored llama.cpp and ggml (#11823)                  2025-08-14 14:42:58 -07:00
LICENSE        next build (#8539)                                           2025-01-29 15:03:38 -08:00