ollama / llama/llama.cpp/src
Latest commit 4987f13d34 by Gabe Goodhart: Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552)
* feat: Bump llama.cpp to df1b612

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(mtmd): Correctly encode text chunks during mtmd tokenization

For some models, text chunks containing template delimiter tokens can appear
interspersed with the image embeddings. These chunks need to be correctly
translated to text tokens rather than skipped.
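The behavior this fix describes can be sketched as follows. This is a minimal illustration only: the types (`chunk`, `chunkText`, `chunkImage`) and the toy tokenizer are assumptions for the sketch, not the actual ollama `MtmdChunk` or llama.cpp mtmd API.

```go
package main

import "fmt"

// Illustrative stand-ins for the real mtmd chunk types.
type chunkType int

const (
	chunkText chunkType = iota
	chunkImage
)

type chunk struct {
	kind   chunkType
	text   string // set for text chunks (e.g. template delimiters)
	imgRef int    // set for image chunks
}

// tokenize is a toy stand-in for the real vocab lookup.
func tokenize(s string) []int {
	toks := make([]int, 0, len(s))
	for _, r := range s {
		toks = append(toks, int(r))
	}
	return toks
}

// encodeChunks walks the tokenized multimodal sequence. The bug class the
// commit describes: text chunks interleaved with image chunks must be run
// through the text tokenizer, not dropped.
func encodeChunks(chunks []chunk) (tokens []int, images []int) {
	for _, c := range chunks {
		switch c.kind {
		case chunkText:
			tokens = append(tokens, tokenize(c.text)...) // translate to text tokens
		case chunkImage:
			images = append(images, c.imgRef) // handled by the vision encoder
		}
	}
	return tokens, images
}

func main() {
	seq := []chunk{
		{kind: chunkText, text: "<img>"},
		{kind: chunkImage, imgRef: 0},
		{kind: chunkText, text: "</img>"}, // delimiter after the embeddings
	}
	toks, imgs := encodeChunks(seq)
	fmt.Println(len(toks), imgs)
}
```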

* tests: Use MtmdChunk in image_test

* style: Fix unnecessary conversion linting

* fix(ggml): Revert changes to ggml_hip.cpp

These changes were made largely by our code assistant and are likely wrong.

* fix: Revert changes in mem_nvml.cpp

* feat: Update sync point to 1deee0

This brings in several more optimization commits and adds model support for
EmbeddingGemma.

* feat: Update patches for 1deee0

* feat: sync for bump to 1deee0

* fix: Remove bad patch updates containing an errant `+`

* feat: Bump llama.cpp/ggml to 7049736

* fix: format-patches after latest bump

2025-10-13 15:26:18 -07:00
llama-adapter.cpp Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-adapter.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-arch.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-arch.h Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-batch.cpp Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-batch.h update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
llama-chat.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-chat.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-context.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-context.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-cparams.cpp update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
llama-cparams.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-grammar.cpp update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
llama-grammar.h llama: remove model loading for grammar (#10096) 2025-04-24 11:51:19 -07:00
llama-graph.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-graph.h Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-hparams.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-hparams.h Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-impl.cpp llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
llama-impl.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-io.cpp llama: update to commit 71e90e88 (#10192) 2025-04-16 15:14:01 -07:00
llama-io.h llama: update to commit 71e90e88 (#10192) 2025-04-16 15:14:01 -07:00
llama-kv-cache-iswa.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-kv-cache-iswa.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-kv-cache.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-kv-cache.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-kv-cells.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-memory-hybrid.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-memory-hybrid.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-memory-recurrent.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-memory-recurrent.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-memory.cpp update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
llama-memory.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-mmap.cpp update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
llama-mmap.h llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
llama-model-loader.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-model-loader.h update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
llama-model-saver.cpp update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
llama-model-saver.h llama: update to commit de4c07f93 (#10655) 2025-05-12 12:17:26 -07:00
llama-model.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-model.h Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-quant.cpp Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama-quant.h next build (#8539) 2025-01-29 15:03:38 -08:00
llama-sampling.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-sampling.h llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
llama-vocab.cpp Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama-vocab.h Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
llama.cpp Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
llama.go Revert "cgo: use O3" 2025-01-31 10:25:39 -08:00
unicode-data.cpp next build (#8539) 2025-01-29 15:03:38 -08:00
unicode-data.h next build (#8539) 2025-01-29 15:03:38 -08:00
unicode.cpp update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
unicode.h Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00