Commit Graph

4628 Commits

Author SHA1 Message Date
Inforithmics 69ed26c93b Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-11 18:30:21 +02:00
Thomas Stocker 0db9fb4ad4
Merge pull request #4 from rillomas/fix-vulkan-uuid
added Vulkan API to get correct Device UUID
2025-09-11 16:24:15 +02:00
Jesse Gross 29ddfc2cab ggml: Disable flash attention for gemma2
Our new engine implementation of gemma2 doesn't support flash
attention, which means that it also doesn't support KV cache
quantization. Currently, it is possible to turn these two on,
which will result in a crash.
2025-09-10 16:40:45 -07:00
Jesse Gross 71cb86af3e llm: Remove unneeded warning with flash attention enabled
If flash attention is enabled without KV cache quantization, we will
currently always get this warning:
level=WARN source=server.go:226 msg="kv cache type not supported by model" type=""
2025-09-10 16:40:45 -07:00
CarbonatedWater.org 5198956372
docs: add ollama-co2 to community integrations (#12230) 2025-09-10 16:37:10 -07:00
Daniel Hiltgen 17a023f34b
Add v12 + v13 cuda support (#12000)
* Add support for upcoming NVIDIA Jetsons

The latest Jetsons with JetPack 7 are moving to an SBSA-compatible model and
will not require building a JetPack-specific variant.

* cuda: bring back dual versions

This adds back dual CUDA versions for our releases,
with v11 and v13 to cover a broad set of GPUs and
driver versions.

* win: break up native builds in build_windows.ps1

* v11 build working on windows and linux

* switch to cuda v12.8 not JIT

* Set CUDA compression to size

* enhance manual install linux docs
2025-09-10 12:05:18 -07:00
Parth Sareen 8d6fffaead
runner: simplify parser entrypoints in runner (#12233) 2025-09-10 11:24:42 -07:00
Masato Nakasaka dd853c4040 modified UUID code inside ggml 2025-09-10 14:45:12 +09:00
Masato Nakasaka f4add77fc3 Merge branch 'vulkanV3' into fix-vulkan-uuid 2025-09-10 13:36:06 +09:00
Inforithmics 08bec121eb Remove Code not in llama.cpp 2025-09-10 00:09:17 +02:00
Inforithmics d5cecee907 Fix GPU ID Patch 2025-09-09 23:47:08 +02:00
Parth Sareen 20b53eaa72
tests: add tool calling integration test (#12232) 2025-09-09 14:01:11 -07:00
Daniel Hiltgen 6745182885
tests: reduce stress on CPU to 2 models (#12161)
* tests: reduce stress on CPU to 2 models

This should avoid flakes due to systems getting overloaded with 3 (or more) models running concurrently.

* tests: allow slow systems to pass on timeout

If a slow system is still streaming a response, and the response
will pass validation, don't fail just because the system is slow.

* test: unload embedding models more quickly
2025-09-09 09:32:15 -07:00
Masato Nakasaka ec7628f853 added Vulkan API to get correct Device UUID
current UUID from pipelineCacheUUID does not match CUDA
2025-09-09 17:11:50 +09:00
Kashyap Tanuku f810ec741c
readme: add Clueless to community integrations (#12188) 2025-09-08 21:31:29 -07:00
Jesse Gross e119783e66 llm: Clamp batch size to context size
The context must always be able to store the current batch, so
if the user requests a small context then we should also shrink
the batch to match. This also fixes the TestLongInputContext
test on the new engine. (The old engine already has this behavior.)
2025-09-08 20:40:11 -07:00
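The clamping described in this commit amounts to taking the minimum of the requested batch size and the context size. A minimal sketch of that behavior; the function name `clampBatch` is illustrative, not ollama's actual code:

```go
package main

import "fmt"

// clampBatch shrinks the requested batch size so that a full batch
// always fits within the context window, mirroring the behavior
// described in the commit above.
func clampBatch(batch, numCtx int) int {
	if batch > numCtx {
		return numCtx
	}
	return batch
}

func main() {
	fmt.Println(clampBatch(512, 128))  // small context: batch shrinks to 128
	fmt.Println(clampBatch(512, 4096)) // context is large enough: batch unchanged
}
```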
Parth Sareen 1a558f98e2
runner: move harmony to runner (#12052) 2025-09-08 15:07:59 -07:00
Gabe Goodhart 7b91c9ce51
Hybrid and recurrent memory estimates (#12186)
This PR updates the memory size estimate logic to better handle recurrent and hybrid-recurrent models, whose memory requirements are currently badly overestimated because the default logic assumes full attention for all layers.

The logic for the sizing of the recurrent layers comes from the llama.cpp implementation:

        ggml_tensor * r = ggml_new_tensor_1d(ctx, type_r, hparams.n_embd_r()*mem_size);
        ggml_tensor * s = ggml_new_tensor_1d(ctx, type_s, hparams.n_embd_s()*mem_size);

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-09-08 14:53:22 -07:00
Daniel Hiltgen 950d33aa30
docs: show how to debug nvidia init failures (#12216)
This debug setting can help troubleshoot obscure initialization failures.
2025-09-08 11:39:00 -07:00
Michael Yang 9714e38dd0
fix: nil pointer dereference if cache is nil (#12215) 2025-09-08 09:53:59 -07:00
Inforithmics ab7f456cf6 rename gpu patch to correct number 2025-09-07 01:05:00 +02:00
Inforithmics 8687e30bb5 Update vulkan version to the version used in llama.cpp 2025-09-06 21:03:31 +02:00
Inforithmics 80873ca49e Reduce Changes remove TestHomogeneousGPUs (doesn't exist on master) 2025-09-06 20:58:35 +02:00
Inforithmics f30bcaa7bf Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-06 20:38:40 +02:00
Inforithmics 8880174a8e disable mmap for vulkan 2025-09-06 20:33:56 +02:00
Inforithmics d97c2ab8b9 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-06 20:16:05 +02:00
Xiaodong Ye 603d3ab0ca vulkan: get GPU ID (ollama v0.11.5)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-09-06 20:11:06 +02:00
frob 4378ae4ffa
parser: don't check the file type of safetensors to prevent false negatives. (#12176)
* Don't check the file type of safetensor to prevent false negatives.

---------

Co-authored-by: Patrick Devine <patrick@infrahq.com>
2025-09-05 16:27:40 -07:00
Michael Yang 5994e8e8fd
embedding gemma model (#12181)
* ollama: add embeddings
2025-09-04 09:09:07 -07:00
Michael Yang b3e6120736
more logutil.Trace (#12177) 2025-09-03 17:24:39 -07:00
Michael Yang fb92b61754
logutil: add Trace and TraceContext helpers (#12110) 2025-09-02 13:09:12 -07:00
Jesse Gross 8149a3c86e llm: Avoid underflow in free memory logging
If a GPU's free memory is less than the reserved amount, we might get
an underflow. Since the value is a uint64, the subtraction wraps and we
print a huge number rather than the correct 0. This only affects
logging; the actual layout code already handles this correctly.

Bug #12138
2025-09-02 12:30:26 -07:00
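The wraparound described in this commit is a classic unsigned-subtraction pitfall, and the fix is a saturating subtraction. A minimal sketch; the function name `freeAfterReserve` is hypothetical, not ollama's actual code:

```go
package main

import "fmt"

// freeAfterReserve returns the free memory remaining after subtracting
// a reserved amount, clamping to zero. With plain uint64 subtraction,
// free < reserved would wrap around to a number near 2^64.
func freeAfterReserve(free, reserved uint64) uint64 {
	if free < reserved {
		return 0
	}
	return free - reserved
}

func main() {
	fmt.Println(freeAfterReserve(1<<30, 512<<20))  // 1 GiB - 512 MiB = 536870912
	fmt.Println(freeAfterReserve(256<<20, 512<<20)) // clamped to 0 instead of wrapping
}
```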
Daniel Hiltgen 0cc90a8186
harden uncaught exception registration (#12120) 2025-09-02 09:43:55 -07:00
pxwanglu e42300f25b
ml: fix struct field name in comment (#12123) 2025-08-31 16:26:11 -07:00
alpha-nerd-nomyo 66e73809a1
readme: add NOMYO Router to community integrations (#12129) 2025-08-31 13:49:10 -07:00
Inforithmics 1fc3239582 Add vulkan to TestHomogeneousGPUs
Test
2025-08-30 20:33:06 +02:00
Inforithmics 8300a55e1d Fix Unit Test (Add Vulkan Library) 2025-08-30 20:26:53 +02:00
Thomas Stocker 879041d937
Merge pull request #1 from rillomas/removeLibcap
Removed libcap related code
2025-08-30 20:06:15 +02:00
Daniel Hiltgen 517807cdf2
perf: build graph for next batch async to keep GPU busy (#11863)
* perf: build graph for next batch in parallel to keep GPU busy

This refactors the main run loop of the ollama runner to perform the main GPU
intensive tasks (Compute+Floats) in a goroutine so we can prepare the next
batch in parallel, reducing the amount of time the GPU stalls waiting for the
next batch of work.

* tests: tune integration tests for ollama engine

This tunes the integration tests to focus more on models supported
by the new engine.
2025-08-29 14:20:28 -07:00
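The producer/consumer pattern described in this commit can be sketched with a buffered channel: one goroutine stands in for the GPU compute loop while the main loop prepares the next batch. This is a simplified illustration under assumed names (`runPipeline` is not ollama's actual code):

```go
package main

import "fmt"

// runPipeline feeds n batches through a channel with one batch of
// lookahead: the consumer (standing in for the GPU compute loop)
// drains batches while the producer prepares the next one, so compute
// never idles waiting on batch preparation.
func runPipeline(n int) int {
	batches := make(chan int, 1) // one prepared batch in flight
	computed := make(chan int)

	// consumer: the GPU-intensive work runs in its own goroutine
	go func() {
		count := 0
		for range batches {
			count++ // stand-in for Compute+Floats on the GPU
		}
		computed <- count
	}()

	// producer: prepare the next batch while the previous one computes
	for i := 0; i < n; i++ {
		batches <- i
	}
	close(batches)
	return <-computed
}

func main() {
	fmt.Println(runPipeline(3)) // all 3 batches computed
}
```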
Daniel Hiltgen ead4a9a1d0
Always filter devices (#12108)
* Always filter devices

Avoid crashing on unsupported AMD iGPUs

* Remove cuda device filtering

This interferes with mixed setups
2025-08-29 12:17:31 -07:00
ofrancon 4383a3ab7a
readme: add Neuro SAN to community integrations (#12109) 2025-08-28 12:27:13 -07:00
Jesse Gross 9d97e6a9f1 ggml: Avoid allocating CUDA primary context on unused GPUs
The recent memory management changes caused all GPUs to be visible
to the runner, regardless of whether they are ultimately used. This
caused CUDA devices to allocate a primary context (~300 MB VRAM) on
each GPU, for each model. This is unnecessary, so we can both avoid
touching GPUs that we exclude in the early stage of allocation and
freeing the memory for any that we touch but don't use.

The issue will continue to exist for the old engine, since it touches
all devices during initialization.
2025-08-27 16:24:18 -07:00
Michael Yang 1081532430
fix keep alive (#12041) 2025-08-27 11:51:25 -07:00
Masato Nakasaka af5f5bdf60 Removed libcap related code
libcap is not directly related to Vulkan and should be added in its own PR. It adds extra library dependencies for building and also requires users to run setcap or run ollama as root, which is not ideal for easy use.
2025-08-27 11:51:53 +09:00
Michael Yang 59412fbb43
convert(gptoss): mxfp4 to ggml layout to avoid jit conversion (#12018)
* convert: return bytes written

* ggml flavor mxfp4

* simplify jit conversion

* comment
2025-08-26 16:41:02 -07:00
Michael Yang 86834a2797
convert: fix tensor sorting (#12015)
there are two bugs here.

1. the check for a layer id is incorrect and should be >= 0, since layer
   0 is valid
2. if both tensors have a layer identifier, the comparison only looks at
   the layer id, which returns 0 when the tensors are in the same layer;
   instead it should fall back to comparing the full tensor name
2025-08-26 13:57:46 -07:00
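The two comparator bugs fixed in this commit can be illustrated with a small sketch. The names `layerID` and `lessTensor` and the `blk.N.` naming scheme are assumptions for illustration, not the actual converter code:

```go
package main

import (
	"fmt"
	"sort"
)

// layerID extracts a layer number from a tensor name like "blk.0.attn_q".
// It returns -1 when the name has no layer component; callers must test
// ">= 0", not "> 0", because layer 0 is a valid layer (bug 1).
func layerID(name string) int {
	var id int
	if _, err := fmt.Sscanf(name, "blk.%d.", &id); err == nil {
		return id
	}
	return -1
}

// lessTensor orders tensors by layer id, falling back to the full tensor
// name when both tensors sit in the same layer; comparing only the layer
// id would leave same-layer tensors in arbitrary order (bug 2).
func lessTensor(a, b string) bool {
	la, lb := layerID(a), layerID(b)
	if la >= 0 && lb >= 0 && la != lb {
		return la < lb
	}
	return a < b
}

func main() {
	names := []string{"blk.1.ffn_up", "blk.0.attn_q", "blk.0.attn_k"}
	sort.Slice(names, func(i, j int) bool { return lessTensor(names[i], names[j]) })
	fmt.Println(names) // layer order first, then name within a layer
}
```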
Michael Yang 85ccf7354d
gptoss: enable flash attention by default (#11996) 2025-08-26 13:34:45 -07:00
Michael Yang 30fb7e19f8
remove extra field attr (#11205) 2025-08-25 09:58:16 -07:00
Jeffrey Morgan d3450dd52e
api: implement stringer for ToolFunctionParameters (#12038) 2025-08-22 16:26:48 -07:00
Jeffrey Morgan 4bcb04ad88
tools: avoid matching braces that are part of tool content (#12039) 2025-08-22 15:22:14 -07:00