ollama/ml/backend/ggml/ggml/src
Latest commit: 0b9c6cb497 by Jesse Gross, 2025-12-29 06:38:10 -06:00

ggml: Export GPU UUIDs

This enables matching up devices and information reported by the backend
with system management libraries such as NVML to get accurate free
memory reporting.
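The idea in the commit message, pairing devices reported by the backend with records from a system management library keyed on the exported GPU UUID, can be sketched roughly as follows. The types and names below are illustrative only, not the actual ollama, ggml, or NVML API:

```go
package main

import "fmt"

// BackendDevice stands in for a device as the ggml backend reports it,
// now carrying the exported GPU UUID. Hypothetical type for illustration.
type BackendDevice struct {
	UUID string
	Name string
}

// NVMLInfo stands in for a per-GPU record from a management library
// such as NVML. Hypothetical type for illustration.
type NVMLInfo struct {
	UUID      string
	FreeBytes uint64
}

// matchFreeMemory joins the two lists on UUID, yielding accurate
// free-memory figures for each backend device.
func matchFreeMemory(devs []BackendDevice, infos []NVMLInfo) map[string]uint64 {
	byUUID := make(map[string]uint64, len(infos))
	for _, in := range infos {
		byUUID[in.UUID] = in.FreeBytes
	}
	free := make(map[string]uint64, len(devs))
	for _, d := range devs {
		if f, ok := byUUID[d.UUID]; ok {
			free[d.Name] = f
		}
	}
	return free
}

func main() {
	devs := []BackendDevice{{UUID: "GPU-1234", Name: "CUDA0"}}
	infos := []NVMLInfo{{UUID: "GPU-1234", FreeBytes: 8 << 30}}
	fmt.Println(matchFreeMemory(devs, infos))
}
```

Without a shared UUID, the scheduler would have to correlate devices by ordinal or name, which is unreliable across drivers and enumeration orders; the UUID makes the join exact.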
| Name | Last commit | Date |
|------|-------------|------|
| ggml-blas | Revert "cgo: use O3" | 2025-01-31 10:25:39 -08:00 |
| ggml-cpu | chore: disable debug in binary libraries (#10788) | 2025-12-29 06:38:04 -06:00 |
| ggml-cuda | ggml: Export GPU UUIDs | 2025-12-29 06:38:10 -06:00 |
| ggml-hip | llama: update to commit 2016f07b (#10352) | 2025-12-29 06:37:42 -06:00 |
| ggml-metal | ggml: Export GPU UUIDs | 2025-12-29 06:38:10 -06:00 |
| CMakeLists.txt | llama: update to commit de4c07f93 (#10655) | 2025-12-29 06:37:57 -06:00 |
| ggml-alloc.c | ggml: Report graph memory for failed allocations | 2025-12-29 06:38:06 -06:00 |
| ggml-backend-impl.h | llama: update to commit 71e90e88 (#10192) | 2025-12-29 06:37:39 -06:00 |
| ggml-backend-reg.cpp | chore: update mllama to use ollama engine (#10637) | 2025-12-29 06:37:59 -06:00 |
| ggml-backend.cpp | ggml: Report graph memory for failed allocations | 2025-12-29 06:38:06 -06:00 |
| ggml-common.h | llama: update to commit 71e90e88 (#10192) | 2025-12-29 06:37:39 -06:00 |
| ggml-impl.h | llama: update to commit 71e90e88 (#10192) | 2025-12-29 06:37:39 -06:00 |
| ggml-opt.cpp | llama: update to commit de4c07f93 (#10655) | 2025-12-29 06:37:57 -06:00 |
| ggml-quants.c | llama: update to commit de4c07f93 (#10655) | 2025-12-29 06:37:57 -06:00 |
| ggml-quants.h | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| ggml-threading.cpp | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| ggml-threading.h | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| ggml.c | chore: update mllama to use ollama engine (#10637) | 2025-12-29 06:37:59 -06:00 |
| ggml.go | all: fix cgo compiler warnings on windows (#10563) | 2025-12-29 06:37:51 -06:00 |
| ggml_darwin_arm64.go | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00 |
| gguf.cpp | llama: update to commit 71e90e88 (#10192) | 2025-12-29 06:37:39 -06:00 |
| ollama-debug.c | ollama-debug.c: change 'ld' to 'PRIi64' | 2025-03-13 17:10:37 +08:00 |