ollama/llama/patches
Jesse Gross ccb7eb8135 ggml: Free ggml_backend_buffer_t when releasing buffer
When ggml_backend_buffer_free() is called, the device memory
is released, but not all backends consistently free the
ggml_backend_buffer_t structure itself in system RAM, causing a memory leak.

Bug #10040
2025-04-15 15:29:58 -07:00
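The leak pattern the commit message describes can be sketched with a minimal mock in C. The real ggml types and backend hooks are more involved; the names and layout below are illustrative assumptions, not ggml's actual internals. The point is that a correct free routine must release both the backend (device) allocation and the buffer struct itself:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for ggml_backend_buffer_t: a host-side struct
 * that owns a device allocation plus a backend-specific release hook. */
typedef struct mock_buffer {
    void *device_mem;             /* stands in for GPU/device memory   */
    void (*free_device)(void *);  /* backend-specific free, e.g. cudaFree */
} mock_buffer_t;

static void free_device_mem(void *p) {
    free(p);  /* a real backend would call its device allocator here */
}

mock_buffer_t *mock_buffer_alloc(size_t n) {
    mock_buffer_t *buf = malloc(sizeof(*buf));
    if (buf == NULL) {
        return NULL;
    }
    buf->device_mem  = malloc(n);
    buf->free_device = free_device_mem;
    return buf;
}

/* The fix described in Bug #10040: release the device memory AND the
 * host-side buffer struct. The leak was backends doing only step one. */
void mock_buffer_free(mock_buffer_t *buf) {
    if (buf == NULL) {
        return;
    }
    buf->free_device(buf->device_mem);  /* step 1: device memory */
    free(buf);                          /* step 2: the struct itself,
                                           previously leaked by some backends */
}
```

Compiled as part of a translation unit, a caller would pair `mock_buffer_alloc()` with `mock_buffer_free()`; without the final `free(buf)`, each allocate/free cycle leaks one struct in system RAM even though device memory is reclaimed.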
0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch ggml: Free ggml_backend_buffer_t when releasing buffer 2025-04-15 15:29:58 -07:00
0002-pretokenizer.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0003-embeddings.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0004-clip-unicode.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0005-solar-pro.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0006-conditional-fattn.patch ggml: Free ggml_backend_buffer_t when releasing buffer 2025-04-15 15:29:58 -07:00
0007-add-mllama-support.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0008-add-unpad-operator.patch ggml: Free ggml_backend_buffer_t when releasing buffer 2025-04-15 15:29:58 -07:00
0009-fix-deepseek-deseret-regex.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0010-Maintain-ordering-for-rules-for-grammar.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0011-llama-Ensure-KV-cache-is-fully-defragmented.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0012-use-dynamic-backend-loading-for-clip.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0013-sort-devices-by-score.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0014-add-phony-target-ggml-cpu-for-all-cpu-variants.patch llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
0015-use-std-filesystem-path-instead-of-wstring.patch fix: own lib/ollama directory 2025-03-03 13:01:18 -08:00
0016-remove-amx.patch fix: own lib/ollama directory 2025-03-03 13:01:18 -08:00
0017-fix-clip-compiler-error.patch fix: own lib/ollama directory 2025-03-03 13:01:18 -08:00
0018-add-phi4-support.patch fix: own lib/ollama directory 2025-03-03 13:01:18 -08:00
0019-fix-string-arr-kv-loading.patch llama: fix kv loading on snowflake-arctic-embed models (#9536) 2025-03-07 09:25:34 -08:00
0020-ollama-debug-tensor.patch ollama debug tensor 2025-03-11 14:49:19 -07:00
0021-add-model-quantizations.patch model: support for mistral-small in the ollama runner 2025-04-03 16:57:36 -07:00
0022-add-rdna4-support.patch Add gfx1200 & gfx1201 support on linux (#9878) 2025-03-27 07:35:19 -07:00
0022-metal-add-op_neg.patch model: support for mistral-small in the ollama runner 2025-04-03 16:57:36 -07:00