ollama/llm
Latest commit: 56fd4e4ef2 "log embedding eval timing" by Bruce MacDonald (2023-08-14 12:51:31 -03:00)
| File | Last commit message | Last commit date |
|---|---|---|
| ggml-alloc.c | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-alloc.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-cuda.cu | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-cuda.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-metal.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-metal.m | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-metal.metal | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-mpi.c | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-mpi.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-opencl.cpp | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml-opencl.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml.c | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| ggml.go | ggml: fix off by one error | 2023-08-11 10:45:22 -07:00 |
| ggml.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| k_quants.c | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| k_quants.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| llama-util.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| llama.cpp | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| llama.go | log embedding eval timing | 2023-08-14 12:51:31 -03:00 |
| llama.h | update `llama.cpp` to `f64d44a` | 2023-08-12 22:47:15 -04:00 |
| llama_darwin.go | partial decode ggml bin for more info | 2023-08-10 09:23:10 -07:00 |
| llm.go | implement loading ggml lora adapters through the modelfile | 2023-08-10 09:23:39 -07:00 |
| update-llama-cpp.sh | partial decode ggml bin for more info | 2023-08-10 09:23:10 -07:00 |
| utils.go | partial decode ggml bin for more info | 2023-08-10 09:23:10 -07:00 |
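The latest commit to `llama.go` adds timing around embedding evaluation. As a rough sketch only (this is not the actual ollama code; `embed` here is a hypothetical stand-in for the call into the llama.cpp bindings), measuring and logging the duration of an embedding call in Go can look like this:

```go
package main

import (
	"log"
	"time"
)

// embed is a hypothetical stand-in for the model's embedding evaluation;
// in ollama the real work happens through the llama.cpp bindings.
func embed(prompt string) []float32 {
	time.Sleep(50 * time.Millisecond) // simulate model work
	return make([]float32, 4096)
}

func main() {
	start := time.Now()
	embedding := embed("hello world")
	// Log how long the embedding eval took alongside the output size.
	log.Printf("embedding eval: %d values in %s", len(embedding), time.Since(start))
}
```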