ollama/llama/llama.cpp/src
Latest commit: 49a9c9ba6a by Daniel Hiltgen, 2025-12-17 13:13:55 -08:00
GGML update to ec98e2002 (#13451)

* Revert "add support for NVIDIA Nemotron 3 Nano" (reverts commit e7d2ae9d69)
* GGML update to 380b4c984: remove MaskBatchPadding, since GGML_KQ_MASK_PAD is no longer present (no padding required)
* Update to c45f89d55
* Update to ec98e2002: Solar Pro needed more adjusting; needs verification
* Review comments
Name | Last commit message | Last commit date
models/ | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-adapter.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-adapter.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-arch.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-arch.h | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-batch.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-batch.h | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-chat.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-chat.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-context.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-context.h | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-cparams.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
llama-cparams.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-grammar.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-grammar.h | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-graph.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-graph.h | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-hparams.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-hparams.h | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-impl.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-impl.h | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-io.cpp | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00
llama-io.h | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00
llama-kv-cache-iswa.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-kv-cache-iswa.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-kv-cache.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-kv-cache.h | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-kv-cells.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-memory-hybrid.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-memory-hybrid.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-memory-recurrent.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-memory-recurrent.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-memory.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
llama-memory.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-mmap.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-mmap.h | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00
llama-model-loader.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-model-loader.h | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-model-saver.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
llama-model-saver.h | llama: update to commit de4c07f93 (#10655) | 2025-05-12 12:17:26 -07:00
llama-model.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-model.h | llama/parsers/renderers: nemotron 3 nano (#13489) | 2025-12-15 18:00:08 -08:00
llama-quant.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-quant.h | next build (#8539) | 2025-01-29 15:03:38 -08:00
llama-sampling.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-sampling.h | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00
llama-vocab.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama-vocab.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama.cpp | GGML update to ec98e2002 (#13451) | 2025-12-17 13:13:55 -08:00
llama.go | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
unicode-data.cpp | next build (#8539) | 2025-01-29 15:03:38 -08:00
unicode-data.h | next build (#8539) | 2025-01-29 15:03:38 -08:00
unicode.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
unicode.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00