ollama/llama/patches
Gabe Goodhart b95693056c
feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
* feat: Bump llama.cpp to the latest master (17f7f4b)

This brings in significant prefill performance improvements on Apple Metal
for all models that use the SSM_CONV and SSM_SCAN ops (granite4, jamba,
falcon-h, nemotron-h, Qwen3 Next).

See https://github.com/ggml-org/llama.cpp/pull/17876
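
For context (an illustrative sketch of mine, not part of the upstream change): SSM_CONV and SSM_SCAN are ordinary ggml graph ops, so the speedup applies wherever those nodes appear in a model's compute graph. A minimal C sketch, assuming ggml's public graph accessors (`ggml_graph_n_nodes`, `ggml_graph_node`) and op enum, showing how such nodes can be identified:

```c
// Illustrative sketch only; not part of this commit. Assumes ggml's
// public API: each ggml_tensor carries an `enum ggml_op`, and recent
// ggml versions expose the graph accessors used below.
#include <stdbool.h>
#include "ggml.h"

// True if a graph node is one of the state-space ops whose Metal
// kernels this bump speeds up.
static bool is_ssm_node(const struct ggml_tensor * node) {
    return node->op == GGML_OP_SSM_CONV || node->op == GGML_OP_SSM_SCAN;
}

// Count SSM nodes in a built compute graph. During prefill these ops
// run over every chunk of prompt tokens, which is why faster kernels
// show up directly as faster prompt processing.
static int count_ssm_nodes(struct ggml_cgraph * graph) {
    int n = 0;
    for (int i = 0; i < ggml_graph_n_nodes(graph); i++) {
        if (is_ssm_node(ggml_graph_node(graph, i))) {
            n++;
        }
    }
    return n;
}
```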

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patches 1-4

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Update patches 5-12

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patches 13-18

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patch 20

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patches 21-31

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Sync vendored code

The two file changes I'm not sure about here are the swap from
gemma3-iswa.cpp to gemma3.cpp (included, since I believe it's required)
and the new `ggml-zendnn.h` header (omitted).

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-12-10 12:59:27 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| .gitignore | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| 0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0002-pretokenizer.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0003-clip-unicode.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0004-solar-pro.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0005-fix-deepseek-deseret-regex.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0006-maintain-ordering-for-rules-for-grammar.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0007-sort-devices-by-score.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0008-add-phony-target-ggml-cpu-for-all-cpu-variants.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0009-remove-amx.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0010-fix-string-arr-kv-loading.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0011-ollama-debug-tensor.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0012-add-ollama-vocab-for-grammar-support.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0013-add-argsort-and-cuda-copy-for-i32.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0014-graph-memory-reporting-on-failure.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0015-ggml-Export-GPU-UUIDs.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0016-add-C-API-for-mtmd_input_text.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0017-no-power-throttling-win32-with-gnuc.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0018-ggml-Add-batch-size-hint.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0019-fix-mtmd-audio.cpp-build-on-windows.patch | Remove unnecessary MacOs 13 and lower Patches (#12656) | 2025-11-06 15:52:56 -08:00 |
| 0020-ggml-No-alloc-mode.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0021-decode-disable-output_all.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0022-ggml-Enable-resetting-backend-devices.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0023-harden-uncaught-exception-registration.patch | Remove unnecessary MacOs 13 and lower Patches (#12656) | 2025-11-06 15:52:56 -08:00 |
| 0024-GPU-discovery-enhancements.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0025-NVML-fallback-for-unified-memory-GPUs.patch | Remove unnecessary MacOs 13 and lower Patches (#12656) | 2025-11-06 15:52:56 -08:00 |
| 0026-report-LoadLibrary-failures.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0027-interleave-multi-rope.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0028-Add-memory-detection-using-DXGI-PDH.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0029-ggml-cuda-skip-large-batches.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0030-win-exit-instead-of-abort.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0031-fix-bakllava-regression.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |