ollama/llama/patches
Latest commit 603d3ab0ca: vulkan: get GPU ID (ollama v0.11.5)
Author: Xiaodong Ye <xiaodong.ye@mthreads.com>
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Date: 2025-09-06 20:11:06 +02:00
Name | Last commit | Date
.gitignore | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0002-pretokenizer.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0003-clip-unicode.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0004-solar-pro.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0005-fix-deepseek-deseret-regex.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0006-maintain-ordering-for-rules-for-grammar.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0007-sort-devices-by-score.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0008-add-phony-target-ggml-cpu-for-all-cpu-variants.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0009-remove-amx.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0010-fix-string-arr-kv-loading.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0011-ollama-debug-tensor.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0012-add-ollama-vocab-for-grammar-support.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0013-add-argsort-and-cuda-copy-for-i32.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0014-graph-memory-reporting-on-failure.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0015-ggml-Export-GPU-UUIDs.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0016-temporary-prevent-rocm-cuda-mixed-loading.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0017-add-C-API-for-mtmd_input_text.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0018-no-power-throttling-win32-with-gnuc.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0019-BF16-macos-version-guard.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0020-Enable-CUDA-Graphs-for-gemma3n.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0021-Disable-ggml-blas-on-macos-v13-and-older.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0022-fix-mtmd-audio.cpp-build-on-windows.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0023-ggml-No-alloc-mode.patch | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
0023-vulkan-get-GPU-ID-ollama-v0.11.5.patch | vulkan: get GPU ID (ollama v0.11.5) | 2025-09-06 20:11:06 +02:00
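The four-digit prefix on each file fixes the order in which the series is applied on top of the vendored llama.cpp/ggml sources; ollama's own sync tooling drives the real process. As an illustration only, here is a minimal Python sketch that replays such a numbered series in order with git apply. The llama/patches path is taken from this listing; everything else (running from the repository root, using git apply rather than ollama's build targets) is an assumption.

```python
#!/usr/bin/env python3
"""Illustrative sketch: replay a numbered patch series in order.

Assumption: run from the repository root with the vendored sources
already checked out. This is not ollama's actual sync tooling.
"""
import subprocess
from pathlib import Path

PATCH_DIR = Path("llama/patches")  # directory shown in this listing

# The NNNN- prefix encodes the intended order, so a plain
# lexicographic sort over the .patch files is sufficient.
for patch in sorted(PATCH_DIR.glob("*.patch")):
    print(f"applying {patch.name}")
    # check=True makes a non-applying hunk raise immediately,
    # stopping the series at the first conflicting patch.
    subprocess.run(["git", "apply", str(patch)], check=True)
```

Note that this listing ends with two files sharing the 0023- prefix; a lexicographic sort like the one above still orders them deterministically (0023-g... before 0023-v...), though renumbering the newer patch would keep the series unambiguous.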