Commit Graph

4640 Commits

Author SHA1 Message Date
Nakasaka, Masato 7a6b09ebae Removed unused code
Fix linter error in CI
2025-09-16 17:18:49 +09:00
Masato Nakasaka ede4081253 Fix compile error on Mac
Metal is preferred, so we're disabling Vulkan for now
2025-09-16 17:00:17 +09:00
Nakasaka, Masato da466f4f86 Copied minimal definition from Vulkan header 2025-09-16 15:05:54 +09:00
Inforithmics bdfae41e7b Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-12 22:18:42 +02:00
Daniel Hiltgen e4ce68311a
cuda: remove compression for better compatibility (#12259)
This retains compatibility with driver 531 and up, at the cost of some extra space.
2025-09-12 07:59:14 -07:00
Inforithmics 5053b2e351 Fix Patch 2025-09-12 08:13:17 +02:00
Jesse Gross 26214125e8 ollamarunner: Suppress stack trace during memory allocation
Allocation failures can be a normal part of new memory estimates, so
we shouldn't print a stack trace in this case.
2025-09-11 14:30:31 -07:00
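The commit above treats a failed allocation as expected feedback from the sizing pass rather than a crash. A minimal sketch of that idea, assuming hypothetical names (tryAllocate and ErrAllocationFailed are illustrative, not the actual ollamarunner code):

    package main

    import (
        "errors"
        "fmt"
        "log/slog"
    )

    // ErrAllocationFailed is a hypothetical sentinel for a failed allocation.
    var ErrAllocationFailed = errors.New("memory allocation failed")

    // tryAllocate stands in for reserving a compute graph of a given size.
    func tryAllocate(bytes uint64) error {
        if bytes > 1<<30 { // pretend only 1 GiB is available
            return fmt.Errorf("%w: need %d bytes", ErrAllocationFailed, bytes)
        }
        return nil
    }

    func main() {
        // During a memory estimate, a failed allocation is normal, so log it
        // quietly and retry smaller instead of dumping a stack trace.
        for _, size := range []uint64{2 << 30, 512 << 20} {
            if err := tryAllocate(size); err != nil {
                slog.Debug("allocation failed during estimate", "error", err)
                continue
            }
            fmt.Printf("allocated %d bytes\n", size)
            break
        }
    }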
Daniel Hiltgen 61fb912ca4
CI: fix windows cuda build (#12246)
* ci: adjust cuda component list

v13 has a different breakdown of the components required to build ollama

* review comments
2025-09-11 12:25:26 -07:00
Jesse Gross aba1575315 llm: Don't try to load split vision models in the Ollama engine
If a model with a split vision projector is loaded in the Ollama
engine, the projector will be ignored and the model will hallucinate
a response. Instead, fall back and try to load the model in the llama
engine.
2025-09-11 11:41:55 -07:00
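A hedged sketch of the fallback described above; the struct and field names are illustrative stand-ins for the real engine-selection code:

    package main

    import "fmt"

    // model is a stand-in; hasSplitVisionProjector marks a vision projector
    // stored separately from the main weights.
    type model struct {
        hasSplitVisionProjector bool
    }

    // chooseEngine falls back to the llama engine when the Ollama engine
    // would silently drop the projector and hallucinate responses.
    func chooseEngine(m model) string {
        if m.hasSplitVisionProjector {
            return "llama engine"
        }
        return "ollama engine"
    }

    func main() {
        fmt.Println(chooseEngine(model{hasSplitVisionProjector: true}))  // llama engine
        fmt.Println(chooseEngine(model{hasSplitVisionProjector: false})) // ollama engine
    }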
Jesse Gross eb10390de9 llm: Enable new memory estimates by default
New memory estimates (see #11090 for more information) are now
enabled automatically for all models running on the Ollama engine,
improving both stability and performance through more accurate sizing
and allocation. Models running on the llama engine will continue to
use the original style of memory estimation.
2025-09-11 11:21:53 -07:00
Michael Yang feb18cd710
feat: add dimensions field to embed requests (#12242)
* feat: add field to truncate embeddings

* add openai embeddings for dimensions
2025-09-11 10:36:10 -07:00
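A sketch of what a client request using the new field might look like. The /api/embed endpoint and default port are Ollama's; the exact request shape here is an assumption drawn from the commit title:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // embedRequest is an assumed shape: dimensions truncates the returned
    // embedding vectors to the requested length.
    type embedRequest struct {
        Model      string `json:"model"`
        Input      string `json:"input"`
        Dimensions int    `json:"dimensions,omitempty"`
    }

    func main() {
        body, _ := json.Marshal(embedRequest{
            Model:      "embeddinggemma",
            Input:      "hello world",
            Dimensions: 256,
        })
        resp, err := http.Post("http://localhost:11434/api/embed", "application/json", bytes.NewReader(body))
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }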
fengyuchuanshen 8a7e2055d2
cmd: use slices.Contains to simplify code (#12249) 2025-09-11 09:57:31 -07:00
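The simplification above swaps a hand-rolled membership loop for the standard library's slices.Contains (Go 1.21+); a self-contained before/after:

    package main

    import (
        "fmt"
        "slices"
    )

    func main() {
        formats := []string{"gguf", "safetensors"}

        // Before: a manual loop to test membership.
        found := false
        for _, f := range formats {
            if f == "gguf" {
                found = true
                break
            }
        }

        // After: slices.Contains says the same thing in one line.
        fmt.Println(found, slices.Contains(formats, "gguf")) // true true
    }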
Inforithmics 69ed26c93b Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-11 18:30:21 +02:00
Thomas Stocker 0db9fb4ad4
Merge pull request #4 from rillomas/fix-vulkan-uuid
added Vulkan API to get correct Device UUID
2025-09-11 16:24:15 +02:00
Jesse Gross 29ddfc2cab ggml: Disable flash attention for gemma2
Our new engine implementation of gemma2 doesn't support flash
attention, which means that it also doesn't support KV cache
quantization. Currently, it is possible to turn these two on,
which will result in a crash.
2025-09-10 16:40:45 -07:00
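A hedged sketch of the guard this commit implies: force flash attention, and with it KV cache quantization, off for gemma2 on the new engine. Function and parameter names are illustrative:

    package main

    import "fmt"

    // resolveOptions disables flash attention for gemma2 (unsupported on the
    // new engine) and keeps KV cache quantization off whenever flash
    // attention is off, since quantized caches require it.
    func resolveOptions(arch string, flashAttn bool, kvCacheType string) (bool, string) {
        if arch == "gemma2" {
            return false, "" // turning these on would crash
        }
        if !flashAttn {
            return false, "" // KV quantization needs flash attention
        }
        return flashAttn, kvCacheType
    }

    func main() {
        fa, kv := resolveOptions("gemma2", true, "q8_0")
        fmt.Println(fa, kv) // false ""
    }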
Jesse Gross 71cb86af3e llm: Remove unneeded warning with flash attention enabled
If flash attention is enabled without KV cache quantization, we will
currently always get this warning:
level=WARN source=server.go:226 msg="kv cache type not supported by model" type=""
2025-09-10 16:40:45 -07:00
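The fix in spirit, as a small sketch: warn only when the user actually requested a KV cache type the model can't use, so the empty default stays silent:

    package main

    import "log/slog"

    // maybeWarn skips the warning for the empty default type; only an
    // explicit, unsupported request is worth surfacing.
    func maybeWarn(requestedType string, supported bool) {
        if requestedType != "" && !supported {
            slog.Warn("kv cache type not supported by model", "type", requestedType)
        }
    }

    func main() {
        maybeWarn("", false)     // silent: nothing was requested
        maybeWarn("q8_0", false) // warns
    }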
CarbonatedWater.org 5198956372
docs: add ollama-co2 to community integrations (#12230) 2025-09-10 16:37:10 -07:00
Daniel Hiltgen 17a023f34b
Add v12 + v13 cuda support (#12000)
* Add support for upcoming NVIDIA Jetsons

The latest Jetsons with JetPack 7 are moving to an SBSA-compatible model and
will not require building a JetPack specific variant.

* cuda: bring back dual versions

This adds back dual CUDA versions for our releases,
with v11 and v13 to cover a broad set of GPUs and
driver versions.

* win: break up native builds in build_windows.ps1

* v11 build working on windows and linux

* switch to cuda v12.8 not JIT

* Set CUDA compression to size

* enhance manual install linux docs
2025-09-10 12:05:18 -07:00
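One way a release can ship dual CUDA builds is to pick the runtime variant from the installed driver version. The sketch below is an assumption about that approach, not the shipped logic, and the 580 cutoff is hypothetical:

    package main

    import "fmt"

    // pickCUDARuntime selects the newer CUDA libraries on recent drivers
    // and falls back to the older build otherwise.
    func pickCUDARuntime(driverMajor int) string {
        if driverMajor >= 580 { // hypothetical minimum driver for the v13 build
            return "cuda_v13"
        }
        return "cuda_v12"
    }

    func main() {
        fmt.Println(pickCUDARuntime(535)) // cuda_v12
        fmt.Println(pickCUDARuntime(581)) // cuda_v13
    }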
Parth Sareen 8d6fffaead
runner: simplify parser entrypoints in runner (#12233) 2025-09-10 11:24:42 -07:00
Masato Nakasaka dd853c4040 modified UUID code inside ggml 2025-09-10 14:45:12 +09:00
Masato Nakasaka f4add77fc3 Merge branch 'vulkanV3' into fix-vulkan-uuid 2025-09-10 13:36:06 +09:00
Inforithmics 08bec121eb Remove code not in llama.cpp 2025-09-10 00:09:17 +02:00
Inforithmics d5cecee907 Fix GPU ID Patch 2025-09-09 23:47:08 +02:00
Parth Sareen 20b53eaa72
tests: add tool calling integration test (#12232) 2025-09-09 14:01:11 -07:00
Daniel Hiltgen 6745182885
tests: reduce stress on CPU to 2 models (#12161)
* tests: reduce stress on CPU to 2 models

This should avoid flakes due to systems getting overloaded with 3 (or more) models running concurrently

* tests: allow slow systems to pass on timeout

If a slow system is still streaming a response, and the response
will pass validation, don't fail just because the system is slow.

* test: unload embedding models more quickly
2025-09-09 09:32:15 -07:00
Masato Nakasaka ec7628f853 added Vulkan API to get correct Device UUID
the current UUID from pipelineCacheUUID does not match the one CUDA reports
2025-09-09 17:11:50 +09:00
Kashyap Tanuku f810ec741c
readme: add Clueless to community integrations (#12188) 2025-09-08 21:31:29 -07:00
Jesse Gross e119783e66 llm: Clamp batch size to context size
The context must always be able to store the current batch, so
if the user requests a small context then we should also shrink
the batch to match. This also fixes the TestLongInputContext
test on the new engine. (The old engine already has this behavior.)
2025-09-08 20:40:11 -07:00
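The clamp itself is one line; a minimal sketch using Go's built-in min (Go 1.21+):

    package main

    import "fmt"

    // clampBatch shrinks the batch so the context can always hold it.
    func clampBatch(batchSize, ctxSize int) int {
        return min(batchSize, ctxSize)
    }

    func main() {
        fmt.Println(clampBatch(512, 128))  // 128: small context shrinks the batch
        fmt.Println(clampBatch(512, 4096)) // 512: batch already fits
    }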
Parth Sareen 1a558f98e2
runner: move harmony to runner (#12052) 2025-09-08 15:07:59 -07:00
Gabe Goodhart 7b91c9ce51
Hybrid and recurrent memory estimates (#12186)
This PR updates the memory size estimate logic to better handle recurrent and hybrid-recurrent models, which are currently badly overestimated because the default logic assumes full attention for all layers.

The sizing logic for the recurrent layers comes from the llama.cpp implementation:

        ggml_tensor * r = ggml_new_tensor_1d(ctx, type_r, hparams.n_embd_r()*mem_size);
        ggml_tensor * s = ggml_new_tensor_1d(ctx, type_s, hparams.n_embd_s()*mem_size);

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-09-08 14:53:22 -07:00
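Translated into a rough Go estimate, the quoted llama.cpp sizing multiplies each recurrent state width by the memory size and element width; the figures below are illustrative, not taken from a real model:

    package main

    import "fmt"

    // recurrentBytes mirrors the quoted tensors: r and s are 1-D states of
    // n_embd_r*mem_size and n_embd_s*mem_size elements respectively.
    func recurrentBytes(nEmbdR, nEmbdS, memSize, typeSize uint64) uint64 {
        r := nEmbdR * memSize * typeSize
        s := nEmbdS * memSize * typeSize
        return r + s
    }

    func main() {
        // e.g. 4 cache cells with fp32 (4-byte) states
        fmt.Println(recurrentBytes(4096, 16384, 4, 4), "bytes")
    }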
Daniel Hiltgen 950d33aa30
docs: show how to debug nvidia init failures (#12216)
This debug setting can help troubleshoot obscure initialization failures.
2025-09-08 11:39:00 -07:00
Michael Yang 9714e38dd0
fix: nil pointer dereference if cache is nil (#12215) 2025-09-08 09:53:59 -07:00
Inforithmics ab7f456cf6 rename gpu patch to correct number 2025-09-07 01:05:00 +02:00
Inforithmics 8687e30bb5 Update vulkan version to the version used in llama.cpp 2025-09-06 21:03:31 +02:00
Inforithmics 80873ca49e Reduce changes: remove TestHomogeneousGPUs (doesn't exist on master) 2025-09-06 20:58:35 +02:00
Inforithmics f30bcaa7bf Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-06 20:38:40 +02:00
Inforithmics 8880174a8e disable mmap for vulkan 2025-09-06 20:33:56 +02:00
Inforithmics d97c2ab8b9 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-06 20:16:05 +02:00
Xiaodong Ye 603d3ab0ca vulkan: get GPU ID (ollama v0.11.5)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-09-06 20:11:06 +02:00
frob 4378ae4ffa
parser: don't check the file type of safetensors to prevent false negatives. (#12176)
* Don't check the file type of safetensors to prevent false negatives.

---------

Co-authored-by: Patrick Devine <patrick@infrahq.com>
2025-09-05 16:27:40 -07:00
Michael Yang 5994e8e8fd
embedding gemma model (#12181)
* ollama: add embeddings
2025-09-04 09:09:07 -07:00
Michael Yang b3e6120736
more logutil.Trace (#12177) 2025-09-03 17:24:39 -07:00
Michael Yang fb92b61754
logutil: add Trace and TraceContext helpers (#12110) 2025-09-02 13:09:12 -07:00
Jesse Gross 8149a3c86e llm: Avoid underflow in free memory logging
If a GPU's free memory is less than the reserved amount, we might get
an underflow. Since the value is an unsigned 64-bit integer, we print it
as a huge number rather than the more correct 0. This only affects
logging; the actual layout code already handles this correctly.

Bug #12138
2025-09-02 12:30:26 -07:00
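The logging fix amounts to a saturating subtraction; a minimal sketch:

    package main

    import "fmt"

    // freeAfterReserve reports 0 instead of wrapping around when the
    // reserved amount exceeds the GPU's free memory.
    func freeAfterReserve(free, reserved uint64) uint64 {
        if free < reserved {
            return 0
        }
        return free - reserved
    }

    func main() {
        fmt.Println(freeAfterReserve(1<<30, 2<<30)) // 0, not ~1.8e19
        fmt.Println(freeAfterReserve(2<<30, 1<<30)) // 1073741824
    }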
Daniel Hiltgen 0cc90a8186
harden uncaught exception registration (#12120) 2025-09-02 09:43:55 -07:00
pxwanglu e42300f25b
ml: fix struct field name in comment (#12123) 2025-08-31 16:26:11 -07:00
alpha-nerd-nomyo 66e73809a1
readme: add NOMYO Router to community integrations (#12129) 2025-08-31 13:49:10 -07:00
Inforithmics 1fc3239582 Add vulkan to TestHomogeneousGPUs
Test
2025-08-30 20:33:06 +02:00
Inforithmics 8300a55e1d Fix Unit Test (Add Vulkan Library) 2025-08-30 20:26:53 +02:00
Thomas Stocker 879041d937
Merge pull request #1 from rillomas/removeLibcap
Removed libcap-related code
2025-08-30 20:06:15 +02:00