| File | Last commit | Date |
| --- | --- | --- |
| llama-adapter.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-adapter.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-arch.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-arch.h | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-batch.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-batch.h | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-chat.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-chat.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-context.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-context.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-cparams.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-cparams.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-grammar.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-grammar.h | llama: remove model loading for grammar (#10096) | 2025-04-24 11:51:19 -07:00 |
| llama-graph.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-graph.h | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-hparams.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-hparams.h | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-impl.cpp | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00 |
| llama-impl.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-io.cpp | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| llama-io.h | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| llama-kv-cache-iswa.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-kv-cache-iswa.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-kv-cache.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-kv-cache.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-kv-cells.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-memory-hybrid.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-memory-hybrid.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-memory-recurrent.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-memory-recurrent.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-memory.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-memory.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-mmap.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-mmap.h | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00 |
| llama-model-loader.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-model-loader.h | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-model-saver.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-model-saver.h | llama: update to commit de4c07f93 (#10655) | 2025-05-12 12:17:26 -07:00 |
| llama-model.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-model.h | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-quant.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-quant.h | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| llama-sampling.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-sampling.h | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00 |
| llama-vocab.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama-vocab.h | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00 |
| llama.cpp | logs: fix bogus "0 MiB free" log line (#12590) | 2025-10-14 11:26:28 -07:00 |
| llama.go | Revert "cgo: use O3" | 2025-01-31 10:25:39 -08:00 |
| unicode-data.cpp | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| unicode-data.h | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| unicode.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| unicode.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |