ollama/llama/llama.cpp/src

Latest commit: e7d2ae9d69 ("add support for NVIDIA Nemotron 3 Nano"; carry upstream patches temporarily) by Daniel Hiltgen, 2025-12-15 15:30:49 -08:00
Name                          Last commit date              Last commit message
models                        2025-12-15 15:30:49 -08:00    add support for NVIDIA Nemotron 3 Nano
llama-adapter.cpp             2025-10-02 14:47:10 -07:00    Update GGML to b6646 (#12245)
llama-adapter.h               2025-10-02 14:47:10 -07:00    Update GGML to b6646 (#12245)
llama-arch.cpp                2025-12-15 15:30:49 -08:00    add support for NVIDIA Nemotron 3 Nano
llama-arch.h                  2025-12-15 15:30:49 -08:00    add support for NVIDIA Nemotron 3 Nano
llama-batch.cpp               2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-batch.h                 2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-chat.cpp                2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-chat.h                  2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-context.cpp             2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-context.h               2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-cparams.cpp             2025-08-14 14:42:58 -07:00    update vendored llama.cpp and ggml (#11823)
llama-cparams.h               2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-grammar.cpp             2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-grammar.h               2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-graph.cpp               2025-12-15 15:30:49 -08:00    add support for NVIDIA Nemotron 3 Nano
llama-graph.h                 2025-11-06 10:19:22 -08:00    ggml update to b6840 (#12791)
llama-hparams.cpp             2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-hparams.h               2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-impl.cpp                2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-impl.h                  2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-io.cpp                  2025-04-16 15:14:01 -07:00    llama: update to commit 71e90e88 (#10192)
llama-io.h                    2025-04-16 15:14:01 -07:00    llama: update to commit 71e90e88 (#10192)
llama-kv-cache-iswa.cpp       2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-kv-cache-iswa.h         2025-10-02 14:47:10 -07:00    Update GGML to b6646 (#12245)
llama-kv-cache.cpp            2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-kv-cache.h              2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-kv-cells.h              2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-memory-hybrid.cpp       2025-10-13 15:26:18 -07:00    Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552)
llama-memory-hybrid.h         2025-10-02 14:47:10 -07:00    Update GGML to b6646 (#12245)
llama-memory-recurrent.cpp    2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-memory-recurrent.h      2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-memory.cpp              2025-08-14 14:42:58 -07:00    update vendored llama.cpp and ggml (#11823)
llama-memory.h                2025-10-02 14:47:10 -07:00    Update GGML to b6646 (#12245)
llama-mmap.cpp                2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-mmap.h                  2025-02-26 20:34:44 -08:00    llama: update llama.cpp vendor code to commit d7cfe1ff (#9356)
llama-model-loader.cpp        2025-10-13 15:26:18 -07:00    Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552)
llama-model-loader.h          2025-08-14 14:42:58 -07:00    update vendored llama.cpp and ggml (#11823)
llama-model-saver.cpp         2025-08-14 14:42:58 -07:00    update vendored llama.cpp and ggml (#11823)
llama-model-saver.h           2025-05-12 12:17:26 -07:00    llama: update to commit de4c07f93 (#10655)
llama-model.cpp               2025-12-15 15:30:49 -08:00    add support for NVIDIA Nemotron 3 Nano
llama-model.h                 2025-12-15 15:30:49 -08:00    add support for NVIDIA Nemotron 3 Nano
llama-quant.cpp               2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-quant.h                 2025-01-29 15:03:38 -08:00    next build (#8539)
llama-sampling.cpp            2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama-sampling.h              2025-02-26 20:34:44 -08:00    llama: update llama.cpp vendor code to commit d7cfe1ff (#9356)
llama-vocab.cpp               2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
llama-vocab.h                 2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
llama.cpp                     2025-11-06 10:19:22 -08:00    ggml update to b6840 (#12791)
llama.go                      2025-12-03 19:43:29 -08:00    ggml update to b7108 (#12992)
unicode-data.cpp              2025-01-29 15:03:38 -08:00    next build (#8539)
unicode-data.h                2025-01-29 15:03:38 -08:00    next build (#8539)
unicode.cpp                   2025-12-10 12:59:27 -08:00    feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
unicode.h                     2025-10-02 14:47:10 -07:00    Update GGML to b6646 (#12245)