ollama/llm
pufferffish 022b9217aa Merge branch 'main' of https://github.com/ollama/ollama into vulkan 2024-07-02 10:47:56 +01:00
ext_server Do not shift context for sliding window models (#5368) 2024-06-28 19:39:31 -07:00
generate Merge remote-tracking branch 'upstream/main' into vulkan 2024-06-28 08:47:37 +01:00
llama.cpp@7c26775adb llm: update llama.cpp commit to `7c26775` (#4896) 2024-06-17 15:56:16 -04:00
patches llm: architecture patch (#5316) 2024-06-26 21:38:12 -07:00
filetype.go Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS, IQ4_NL (#4322) 2024-05-23 13:21:49 -07:00
ggla.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
ggml.go gemma2 graph 2024-06-27 13:34:52 -07:00
ggml_test.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
gguf.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00 (header layout sketched below)
llm.go revert tokenize ffi (#4761) 2024-05-31 18:54:21 -07:00
llm_darwin_amd64.go Switch back to subprocessing for llama.cpp 2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go Switch back to subprocessing for llama.cpp 2024-04-01 16:48:18 -07:00
llm_linux.go Switch back to subprocessing for llama.cpp 2024-04-01 16:48:18 -07:00
llm_windows.go Move nested payloads to installer and zip file on windows 2024-04-23 16:14:47 -07:00
memory.go handle asymmetric embedding KVs 2024-06-20 09:57:27 -07:00 (sizing sketched below)
memory_test.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
payload.go Move libraries out of users path 2024-06-17 13:12:18 -07:00
server.go error 2024-07-01 16:04:13 -07:00
status.go error 2024-07-01 16:04:13 -07:00
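
For context on the gguf.go entry above: GGUF v2/v3 model files open with a fixed little-endian header (a "GGUF" magic, a version, a tensor count, and a metadata key/value count), per the published GGUF spec. A minimal sketch of reading just that header follows; it is illustrative only and is not ollama's actual reader in gguf.go:

```go
// Minimal sketch: decode the fixed GGUF v2/v3 file header.
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Fixed header: uint32 magic, uint32 version, uint64 tensor count,
	// uint64 metadata key/value count; all fields are little-endian.
	var hdr struct {
		Magic      uint32
		Version    uint32
		NumTensors uint64
		NumMetaKVs uint64
	}
	if err := binary.Read(f, binary.LittleEndian, &hdr); err != nil {
		panic(err)
	}
	if hdr.Magic != 0x46554747 { // the bytes "GGUF" read as a little-endian uint32
		panic("not a GGUF file")
	}
	fmt.Printf("GGUF v%d: %d tensors, %d metadata KVs\n",
		hdr.Version, hdr.NumTensors, hdr.NumMetaKVs)
}
```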
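
The memory.go entry's "asymmetric embedding KVs" refers to models whose key and value embedding widths differ, so a KV-cache size estimate must account for each side separately rather than doubling one figure. A hedged sketch of the general arithmetic follows; kvCacheBytes is a hypothetical helper for illustration, not the function in memory.go:

```go
package main

import "fmt"

// kvCacheBytes estimates KV-cache memory for a model whose key and value
// embedding widths differ. Hypothetical helper for illustration only.
// nLayers: transformer layers; nCtx: context length in tokens;
// embdK/embdV: key/value embedding width per token (n_head_kv * head_dim);
// bytesPerElem: 2 for an f16 cache.
func kvCacheBytes(nLayers, nCtx, embdK, embdV, bytesPerElem uint64) uint64 {
	return nLayers * nCtx * (embdK + embdV) * bytesPerElem
}

func main() {
	// Example: 32 layers, 8192-token context, 1024-wide K, 1024-wide V, f16.
	fmt.Printf("%d MiB\n", kvCacheBytes(32, 8192, 1024, 1024, 2)/(1<<20))
}
```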