ollama/llm
Blake Mizerany 49c126fde8 build.go: introduce a friendlier way to build Ollama
This commit introduces a friendlier way to build Ollama's dependencies
and binary without abusing `go generate`, removing the unnecessary
extra steps it brings with it.

This script also provides nicer feedback to the user about what is
happening during the build process.

At the end, it prints a helpful message to the user about what to do
next (e.g. run the new local Ollama).
2024-04-09 13:52:08 -07:00
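
To make the description concrete, here is a minimal sketch of the pattern such a build script can follow: shell out to each build step with os/exec, stream its output so the user can follow the progress, and finish with a hint about what to do next. The specific steps, flags, and paths below are assumptions for illustration, not the contents of the actual build.go:

```go
// Sketch of a friendlier build script: run each dependency build step as
// a subprocess, stream its output for live feedback, and end with a
// "what to do next" message. Steps and paths are hypothetical.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// run executes one build step, echoing the command and its output.
func run(name string, args ...string) {
	fmt.Printf("==> %s %v\n", name, args) // per-step progress feedback
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s failed: %v", name, err)
	}
}

func main() {
	// Hypothetical steps standing in for the llama.cpp dependency builds
	// that `go generate` previously drove.
	run("cmake", "-S", "llm/llama.cpp", "-B", "build")
	run("cmake", "--build", "build")
	run("go", "build", "-o", "ollama", ".")

	fmt.Println("\nDone! To try the new local build, run:")
	fmt.Println("  ./ollama serve")
}
```

A script in this shape is invoked directly with the Go toolchain (e.g. `go run build.go`) rather than through the previous `go generate ./...` step.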
Name                   Last commit                                            Date
ext_server             Apply 01-cache.diff                                    2024-04-01 16:48:18 -07:00
generate               build.go: introduce a friendlier way to build Ollama   2024-04-09 13:52:08 -07:00
llama.cpp@37e7854c10   Bump to b2581                                          2024-04-02 11:53:07 -07:00
patches                Bump to b2581                                          2024-04-02 11:53:07 -07:00
ggla.go                refactor model parsing                                 2024-04-01 13:16:15 -07:00
ggml.go                add command-r graph estimate                           2024-04-04 14:07:24 -07:00
gguf.go                refactor model parsing                                 2024-04-01 13:16:15 -07:00
llm.go                 Switch back to subprocessing for llama.cpp             2024-04-01 16:48:18 -07:00
llm_darwin_amd64.go    Switch back to subprocessing for llama.cpp             2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go    Switch back to subprocessing for llama.cpp             2024-04-01 16:48:18 -07:00
llm_linux.go           Switch back to subprocessing for llama.cpp             2024-04-01 16:48:18 -07:00
llm_windows.go         Switch back to subprocessing for llama.cpp             2024-04-01 16:48:18 -07:00
payload.go             Switch back to subprocessing for llama.cpp             2024-04-01 16:48:18 -07:00
server.go              no rope parameters                                     2024-04-05 18:05:27 -07:00
status.go              Switch back to subprocessing for llama.cpp             2024-04-01 16:48:18 -07:00
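
The ggla.go, ggml.go, and gguf.go entries above each parse one model-file container format and were last touched by the "refactor model parsing" commit. As a rough illustration of what that parsing involves, the sketch below reads a GGUF header following the published GGUF layout (little-endian: 4-byte magic "GGUF", uint32 version, uint64 tensor count, uint64 metadata key/value count); it is not ollama's implementation, and the file path is hypothetical:

```go
// Minimal GGUF header reader, per the published GGUF spec; illustrative
// only, not the parsing code in gguf.go.
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("model.gguf") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// The file must start with the 4-byte magic "GGUF".
	var magic [4]byte
	if _, err := io.ReadFull(f, magic[:]); err != nil {
		log.Fatal(err)
	}
	if string(magic[:]) != "GGUF" {
		log.Fatalf("not a GGUF file: magic %q", magic)
	}

	// Fixed-size header fields follow, little-endian.
	var hdr struct {
		Version     uint32
		TensorCount uint64
		KVCount     uint64
	}
	if err := binary.Read(f, binary.LittleEndian, &hdr); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("GGUF v%d: %d tensors, %d metadata keys\n",
		hdr.Version, hdr.TensorCount, hdr.KVCount)
}
```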
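The llm*.go, payload.go, and status.go entries mostly trace back to "Switch back to subprocessing for llama.cpp": the llama.cpp server runs as a child process that ollama talks to over HTTP, rather than being linked in-process. A minimal sketch of that pattern, with a hypothetical binary name, port, flags, and health endpoint:

```go
// Sketch of the subprocess pattern: start a llama.cpp server binary as a
// child process, then poll its HTTP health endpoint until it is ready.
// Binary name, flags, port, and /health path are assumptions.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("./server", "--port", "8080") // hypothetical
	if err := cmd.Start(); err != nil {
		log.Fatalf("starting llama.cpp server: %v", err)
	}
	defer cmd.Process.Kill() // stop the subprocess when we are done

	// Poll until the server answers, giving up after a deadline.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:8080/health")
		if err == nil {
			resp.Body.Close()
			fmt.Println("llama.cpp server is up:", resp.Status)
			return
		}
		time.Sleep(250 * time.Millisecond)
	}
	log.Fatal("llama.cpp server did not become ready in time")
}
```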