pali112/ollama
fb7c89801e2cf7fd87cb7fe97a2f4795404ceafc
ollama/llama/patches/0036-ggml-cuda-skip-large-batches.patch
Michael Yang (0796d79d19), 2025-11-18 16:11:37 -08:00
cuda: skip large batches

CUDA panics on batches larger than 1024, so skip those and fall back to the CPU.

1.1 KiB
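The patch body itself is not reproduced on this page. As a rough illustration of the behaviour the commit message describes, here is a minimal C++ sketch; the constant, helper function, and program are hypothetical and are not the actual ggml-cuda code or the contents of the patch. They only show batches above the stated 1024 limit being routed to a CPU path.

    #include <cstdio>
    #include <initializer_list>

    // Assumption taken from the commit message: the CUDA backend panics on
    // batches larger than 1024 tokens, so that is used as the cutoff here.
    constexpr int kMaxCudaBatch = 1024;

    // Hypothetical dispatch helper (not the real ggml-cuda API): report
    // whether a batch is small enough for the CUDA backend to handle.
    bool batch_fits_cuda(int n_tokens) {
        return n_tokens <= kMaxCudaBatch;
    }

    int main() {
        // Batches at or under the limit stay on CUDA; larger ones fall back to CPU.
        for (int n_tokens : {256, 1024, 2048}) {
            std::printf("batch of %4d tokens -> %s\n", n_tokens,
                        batch_fits_cuda(n_tokens) ? "CUDA" : "CPU fallback");
        }
        return 0;
    }

Gating the decision on a single size check keeps the fallback cheap: oversized batches never reach the CUDA code that would otherwise panic.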