ollama/llama/patches
Latest commit 485da9fd35 by Daniel Hiltgen: win: exit instead of abort (#13138)
Calling abort on Windows triggers the C++ runtime to attempt a debugger
attach, which causes crashed runners to hang instead of exiting, leading
to a timeout rather than a fast failure during discovery.
2025-11-18 16:33:33 -08:00
.gitignore update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0002-pretokenizer.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0003-clip-unicode.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0004-solar-pro.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0005-fix-deepseek-deseret-regex.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0006-maintain-ordering-for-rules-for-grammar.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0007-sort-devices-by-score.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0008-add-phony-target-ggml-cpu-for-all-cpu-variants.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0009-remove-amx.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0010-fix-string-arr-kv-loading.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0011-ollama-debug-tensor.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0012-add-ollama-vocab-for-grammar-support.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0013-add-argsort-and-cuda-copy-for-i32.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0014-graph-memory-reporting-on-failure.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0015-ggml-Export-GPU-UUIDs.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0016-add-C-API-for-mtmd_input_text.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0017-no-power-throttling-win32-with-gnuc.patch ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
0018-ggml-Add-batch-size-hint.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0019-fix-mtmd-audio.cpp-build-on-windows.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0020-ggml-No-alloc-mode.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0021-decode-disable-output_all.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0022-ggml-Enable-resetting-backend-devices.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0023-harden-uncaught-exception-registration.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0024-GPU-discovery-enhancements.patch bring back sysfs based VRAM information for AMD (#12871) 2025-11-17 15:40:58 -08:00
0025-NVML-fallback-for-unified-memory-GPUs.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0026-report-LoadLibrary-failures.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0027-interleave-multi-rope.patch Remove unnecessary MacOs 13 and lower Patches (#12656) 2025-11-06 15:52:56 -08:00
0028-Add-memory-detection-using-DXGI-PDH.patch cuda: skip large batches 2025-11-18 16:11:37 -08:00
0029-vulkan-Call-ggml_vk_buffer_write_2d-from-ggml_vk_buf.patch cuda: skip large batches 2025-11-18 16:11:37 -08:00
0030-Vulkan-MMQ-Integer-Dot-Refactor-and-K-Quant-support-.patch cuda: skip large batches 2025-11-18 16:11:37 -08:00
0031-vulkan-Update-topk_moe-fusion-to-handle-gpt-s-late-s.patch cuda: skip large batches 2025-11-18 16:11:37 -08:00
0032-vulkan-Fuse-rope-set_rows-16769.patch cuda: skip large batches 2025-11-18 16:11:37 -08:00
0033-vulkan-Handle-argsort-with-a-large-number-of-rows-16.patch cuda: skip large batches 2025-11-18 16:11:37 -08:00
0034-vulkan-fix-shmem-overrun-in-mmq-id-shader-16873.patch vulkan: temporary cary of vulkan fixes (#12971) 2025-11-12 08:31:40 -08:00
0035-vulkan-Fix-crash-when-FP16-mul_mat-accumulation-is-n.patch cuda: skip large batches 2025-11-18 16:11:37 -08:00
0036-ggml-cuda-skip-large-batches.patch cuda: skip large batches 2025-11-18 16:11:37 -08:00
0037-win-exit-instead-of-abort.patch win: exit instead of abort (#13138) 2025-11-18 16:33:33 -08:00