* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops (see the runner sketch after the file listing below)
* remove sample count and duration metrics
* use go generate to get libraries (sketched below)
* tmp dir for running llm
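The "go generate" bullet can be wired up with `//go:generate` directives in the package. A minimal sketch, assuming the llama.cpp sources are vendored under `llama.cpp/` and built with CMake; the exact targets and flags here are illustrative assumptions, not the project's actual build commands:

```go
package llm

// Build the vendored llama.cpp libraries as part of `go generate ./...`.
// The CMake source dir, build dir, and target names below are assumptions
// for illustration only.

//go:generate cmake -S llama.cpp -B llama.cpp/build -DBUILD_SHARED_LIBS=off
//go:generate cmake --build llama.cpp/build --target server --config Release
```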
Files in this directory:

* llama.cpp
* ggml.go
* ggml_llama.go
* llama_test.go
* llm.go
* utils.go
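The "request context", "tmp dir", and "stop llama runner when app stops" bullets fit one runner pattern: run the llama.cpp binary from a temporary working directory and tie the subprocess to a context, so cancelling the context (request done or app shutdown) stops it. A minimal sketch, assuming a hypothetical `startRunner` helper with an illustrative `runnerPath` and `--model` flag; this is not the project's actual API:

```go
package llm

import (
	"context"
	"os"
	"os/exec"
)

// startRunner launches the llama.cpp runner from a temporary working
// directory and stops it when ctx is cancelled (request done / app exit).
// runnerPath and the --model flag are illustrative assumptions.
func startRunner(ctx context.Context, runnerPath, modelPath string) (func(), error) {
	workDir, err := os.MkdirTemp("", "llm-*")
	if err != nil {
		return nil, err
	}

	// CommandContext kills the process when ctx is cancelled, so the
	// runner does not outlive the request or the application.
	cmd := exec.CommandContext(ctx, runnerPath, "--model", modelPath)
	cmd.Dir = workDir
	// No explicit thread flag: let llama.cpp decide the number of threads.

	if err := cmd.Start(); err != nil {
		os.RemoveAll(workDir)
		return nil, err
	}

	cleanup := func() {
		cmd.Process.Kill()
		cmd.Wait()
		os.RemoveAll(workDir)
	}
	return cleanup, nil
}
```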