ollama/ml/backend/ggml/ggml/include
Latest commit: 49a9c9ba6a by Daniel Hiltgen, 2025-12-17 13:13:55 -08:00

GGML update to ec98e2002 (#13451)

* Revert "add support for NVIDIA Nemotron 3 Nano"

  This reverts commit e7d2ae9d69.

* GGML update to 380b4c984

  Remove MaskBatchPadding, since GGML_KQ_MASK_PAD is no longer present (no padding required).

* Update to c45f89d55

* ec98e2002

  Solar Pro needed further adjustment; needs verification.

* Address review comments.
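For context on the MaskBatchPadding removal: older vendored GGML sized the KQ attention mask with its batch dimension rounded up to a multiple of GGML_KQ_MASK_PAD, which is presumably why Ollama's backend exposed a padding hook at all; after this update the mask can simply match the batch size. A minimal sketch of that rounding, using the GGML_PAD macro as defined in ggml.h and an assumed, purely illustrative value for the removed GGML_KQ_MASK_PAD constant:

```c
#include <stdint.h>
#include <stdio.h>

// Round-up-to-multiple macro, as defined in ggml.h.
#define GGML_PAD(x, n) (((x) + (n) - 1) & ~((n) - 1))

// The constant was removed upstream; 32 is an assumed value for illustration only.
#define GGML_KQ_MASK_PAD 32

int main(void) {
    int64_t n_tokens = 17; // batch size for one decode step

    // Old behavior: the mask's batch dimension was padded up to the next multiple
    // of GGML_KQ_MASK_PAD, so callers had to know the padding granule.
    int64_t padded = GGML_PAD(n_tokens, GGML_KQ_MASK_PAD);

    // New behavior after this update: no padding required, the mask matches n_tokens.
    printf("old padded mask rows: %lld, new mask rows: %lld\n",
           (long long) padded, (long long) n_tokens);
    return 0;
}
```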
File            | Last commit date           | Last commit message
ggml-alloc.h    | 2025-12-17 13:13:55 -08:00 | GGML update to ec98e2002 (#13451)
ggml-backend.h  | 2025-12-17 13:13:55 -08:00 | GGML update to ec98e2002 (#13451)
ggml-blas.h     | 2025-01-29 15:03:38 -08:00 | next build (#8539)
ggml-cann.h     | 2025-01-29 15:03:38 -08:00 | next build (#8539)
ggml-cpp.h      | 2025-05-12 12:17:26 -07:00 | llama: update to commit de4c07f93 (#10655)
ggml-cpu.h      | 2025-12-17 13:13:55 -08:00 | GGML update to ec98e2002 (#13451)
ggml-cuda.h     | 2025-01-29 15:03:38 -08:00 | next build (#8539)
ggml-hexagon.h  | 2025-11-06 10:19:22 -08:00 | ggml update to b6840 (#12791)
ggml-metal.h    | 2025-10-02 14:47:10 -07:00 | Update GGML to b6646 (#12245)
ggml-opencl.h   | 2025-01-29 15:03:38 -08:00 | next build (#8539)
ggml-opt.h      | 2025-10-02 14:47:10 -07:00 | Update GGML to b6646 (#12245)
ggml-rpc.h      | 2025-12-10 12:59:27 -08:00 | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
ggml-sycl.h     | 2025-01-29 15:03:38 -08:00 | next build (#8539)
ggml-vulkan.h   | 2025-02-26 20:34:44 -08:00 | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356)
ggml-zdnn.h     | 2025-10-02 14:47:10 -07:00 | Update GGML to b6646 (#12245)
ggml-zendnn.h   | 2025-12-17 13:13:55 -08:00 | GGML update to ec98e2002 (#13451)
ggml.h          | 2025-12-17 13:13:55 -08:00 | GGML update to ec98e2002 (#13451)
gguf.h          | 2025-03-07 09:25:34 -08:00 | llama: fix kv loading on snowflake-arctic-embed models (#9536)
ollama-debug.h  | 2025-03-11 14:49:19 -07:00 | ollama debug tensor
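Of the headers above, ggml.h declares the core tensor and graph-building API, ggml-alloc.h the graph and tensor allocators, ggml-backend.h the backend/device abstraction, the remaining ggml-<backend>.h files expose individual backends (CPU, CUDA, Metal, Vulkan, SYCL, and so on), gguf.h covers the GGUF file format, and ollama-debug.h is an Ollama-specific tensor-debugging helper. A minimal sketch of how the core and CPU headers fit together, assuming a recent vendored GGML where the CPU compute entry points and the scalar get/set helpers live in ggml-cpu.h (exact signatures track the vendored revision):

```c
#include <stdio.h>
#include "ggml.h"      // core tensor and graph API
#include "ggml-cpu.h"  // CPU compute entry points and scalar accessors

int main(void) {
    // Small context; tensor metadata and data live in this buffer.
    struct ggml_init_params params = {
        .mem_size   = 16 * 1024 * 1024,
        .mem_buffer = NULL,
        .no_alloc   = false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // c = a + b on two 4-element f32 tensors.
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_add(ctx, a, b);

    for (int i = 0; i < 4; i++) {
        ggml_set_f32_1d(a, i, (float) i);
        ggml_set_f32_1d(b, i, 10.0f);
    }

    // Build the compute graph and run it on the CPU.
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

    for (int i = 0; i < 4; i++) {
        printf("c[%d] = %.1f\n", i, ggml_get_f32_1d(c, i));
    }

    ggml_free(ctx);
    return 0;
}
```

Compiled and linked against the vendored ggml and ggml-cpu sources, this should print c[i] = i + 10.0 for each element.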