ollama/llama/llama.cpp
Daniel Hiltgen 39ca55a1ba
Move quantization to new backend (#10363)
* Move quantization logic to GGML via new backend

This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.

* Remove "add model quantizations"

This is no longer needed now that quantization is implemented in Go+GGML code directly.
2025-12-29 06:37:52 -06:00
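The commit above describes a split in which the model-aware decisions (which tensors to quantize, and to what type) live in Go, while the numeric conversion itself is delegated to GGML. The following is a minimal, hedged sketch of that shape only; the names `quantizeFn`, `tensorType`, the `Q6_K` override rule, and the stubbed GGML call are illustrative assumptions, not Ollama's actual API.

```go
// Hypothetical sketch: model-aware tensor selection in Go, delegating the
// numeric conversion to GGML's quantization routines (reached via cgo in a
// real implementation; stubbed out here so the example is self-contained).
package main

import (
	"fmt"
	"strings"
)

// quantizeFn stands in for a cgo binding to GGML's quantization code.
// The signature is illustrative only.
type quantizeFn func(data []float32, targetType string) ([]byte, error)

// tensorType applies the model-aware policy in Go: keep precision-sensitive
// tensors (embeddings, output head) at a higher bit width and quantize the
// rest to the requested default. This rule is an assumption for the sketch,
// not Ollama's actual policy.
func tensorType(name, defaultType string) string {
	if strings.Contains(name, "output") || strings.Contains(name, "token_embd") {
		return "Q6_K" // hypothetical higher-precision override
	}
	return defaultType
}

// quantizeModel walks the tensors, picks a type per tensor in Go, and hands
// the raw data to the GGML-backed quantize function.
func quantizeModel(tensors map[string][]float32, defaultType string, q quantizeFn) error {
	for name, data := range tensors {
		tt := tensorType(name, defaultType)
		out, err := q(data, tt)
		if err != nil {
			return fmt.Errorf("quantize %s as %s: %w", name, tt, err)
		}
		fmt.Printf("%-24s -> %s (%d bytes)\n", name, tt, len(out))
	}
	return nil
}

func main() {
	// Stub standing in for the GGML-backed implementation.
	stub := func(data []float32, targetType string) ([]byte, error) {
		return make([]byte, len(data)/2), nil // fake compressed payload
	}
	tensors := map[string][]float32{
		"token_embd.weight":   make([]float32, 128),
		"blk.0.attn_q.weight": make([]float32, 128),
		"output.weight":       make([]float32, 128),
	}
	if err := quantizeModel(tensors, "Q4_K_M", stub); err != nil {
		panic(err)
	}
}
```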
common          llama: update to commit e1e8e099 (#10513)   2025-12-29 06:37:49 -06:00
examples/llava  llama: update to commit e1e8e099 (#10513)   2025-12-29 06:37:49 -06:00
include         llama: update to commit e1e8e099 (#10513)   2025-12-29 06:37:49 -06:00
src             Move quantization to new backend (#10363)   2025-12-29 06:37:52 -06:00
.rsync-filter   llama: update to commit 71e90e88 (#10192)   2025-12-29 06:37:39 -06:00
LICENSE         next build (#8539)                          2025-01-29 15:03:38 -08:00