pali112/ollama
ollama/llm at fefb3e77d1d5441f2cdcd2c89123c4aa3f574b09
Latest commit: 9241a29336 by Jeffrey Morgan, 2024-02-13 18:18:41 -08:00
Revert "Revert "bump submodule to 6c00a06 (#2479)"" (#2485)
This reverts commit 6920964b87.
Name                      Last commit message                                             Date
ext_server                set shutting_down to false once shutdown is complete (#2484)    2024-02-13 17:48:41 -08:00
generate                  Detect AMD GPU info via sysfs and block old cards               2024-02-12 08:19:41 -08:00
llama.cpp @ 6c00a06692    Revert "Revert "bump submodule to 6c00a06 (#2479)"" (#2485)     2024-02-13 18:18:41 -08:00
patches                   patch: always add token to cache_tokens (#2459)                 2024-02-12 08:10:16 -08:00
dyn_ext_server.c          …
dyn_ext_server.go         Shutdown faster                                                 2024-02-08 22:22:50 -08:00
dyn_ext_server.h          …
ggml.go                   …
gguf.go                   refactor tensor read                                            2024-01-24 10:48:31 -08:00
llama.go                  use llm.ImageData                                               2024-01-31 19:13:48 -08:00
llm.go                    Ensure the libraries are present                                2024-02-07 17:27:49 -08:00
payload_common.go         Detect AMD GPU info via sysfs and block old cards               2024-02-12 08:19:41 -08:00
payload_darwin_amd64.go   …
payload_darwin_arm64.go   …
payload_linux.go          …
payload_test.go           …
payload_windows.go        …
utils.go                  …