Commit Graph

4798 Commits

Author SHA1 Message Date
Inforithmics
b244c9f9f3 revert debugging changes (vulkan builds on windows) 2025-09-20 09:44:09 +02:00
Inforithmics
6e310d1cb6 fixed install command 2025-09-20 09:37:25 +02:00
Inforithmics
b4595f0022 correct vulkan silent install 2025-09-20 09:31:58 +02:00
Inforithmics
7e161f1dbf correct vulkan install 2025-09-20 09:16:54 +02:00
Inforithmics
d1125ea349 comment out cuda for faster turnaround 2025-09-20 09:14:02 +02:00
Inforithmics
c972cf6d46 set vulkan path 2025-09-20 09:12:14 +02:00
Inforithmics
45f7850e75 temporarily commenting out rocm 2025-09-20 09:04:30 +02:00
Inforithmics
e2b38c391b commenting out error action stop 2025-09-20 09:02:55 +02:00
Inforithmics
ed03bb7928 reenable cpu 2025-09-20 09:01:25 +02:00
Inforithmics
c84ac53579 Commenting out other presets to build vulkan 2025-09-20 09:00:26 +02:00
Inforithmics
a4461bc0d4 temporarily use windows-latest for build 2025-09-20 08:46:59 +02:00
Inforithmics
6bbc054705 temporarily comment out gate to run windows task 2025-09-20 08:35:58 +02:00
Inforithmics
0f543fdb1e Vulkan on Windows Test 2025-09-20 08:04:11 +02:00
Patrick Devine
dba39b2eee gemma: fix rope scaling for qat models (#12348)
* gemma: fix rope scaling for qat models

* gofumpt yourself
2025-09-19 15:04:40 -07:00
Inforithmics
d5dab2d186 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-19 22:29:52 +02:00
Inforithmics
62b2265f9d build vulkan as separate function 2025-09-19 06:52:05 +02:00
Michael Yang
9f3a37fd36 fix: model load for unsupported embedding models (#12311)
With #12181, there's now support for embeddings in the ollama engine.
This is done by mutating the architecture, appending _embed when an
embedding model is detected. However, this introduced a bug: if an
embedding model was run on top of an existing ollama engine model
without an embedding implementation, e.g. llama4, it would pass the
initial arch support check but fail when actually loaded.

There are currently two entrypoints for creating a model. Previously the
second entrypoint was necessary because calling model.New would also
load the model. Since #11818, this is no longer the case, so merge them
to reduce complexity.
(tags: v0.12.0, v0.12.0-rc1)
2025-09-18 16:11:08 -07:00
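The arch-mutation scheme this commit describes can be sketched roughly as follows; the `supported` registry, `resolveArch` name, and `_embed` handling here are illustrative stand-ins, not ollama's actual code:

```go
package main

import "fmt"

// supported maps architectures to whether the engine implements them
// (illustrative data, not the real registry).
var supported = map[string]bool{
	"qwen3":       true,
	"qwen3_embed": true,
	"llama4":      true,
	// no "llama4_embed" — the combination the bug fix must reject.
}

// resolveArch mutates the architecture for embedding models and reports
// whether the resulting arch is actually implemented, so unsupported
// combinations fail at the arch check instead of at load time.
func resolveArch(arch string, isEmbedding bool) (string, bool) {
	if isEmbedding {
		arch += "_embed"
	}
	return arch, supported[arch]
}

func main() {
	arch, ok := resolveArch("llama4", true)
	fmt.Println(arch, ok) // llama4_embed false — rejected up front
}
```

The point of the sketch: checking support *after* the mutation is what turns the late load failure into an early, clean rejection.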
Michael Yang
7460259eb3 feat: qwen3 embed (#12301)
* cleanup

* use pooling.TypeNone

* pooling test

* qwen3 embed
2025-09-18 15:50:32 -07:00
Jeffrey Morgan
22ccdd74c2 server: add unauthorized error to remote chat handler (#12338) 2025-09-18 15:40:31 -07:00
Daniel Hiltgen
0c3d0e7533 build: avoid unbounded parallel builds (#12319)
With the addition of cuda v13, on a clean setup, the level of parallelism
was causing docker desktop to become overwhelmed and compilers
were crashing. This limits builds to 8 parallel jobs per stage, with the
ability to override if you have many more cores available.
2025-09-18 14:57:01 -07:00
Devon Rifkin
e7f56ef3d8 harmony: remove special casing in routes.go
Now that we have a built-in parser abstraction, which was introduced in
<https://github.com/ollama/ollama/pull/12248>, we can modify our harmony
parser to match this and then get rid of nearly all of the
harmony-specific logic in routes.go. We do have a small amount of
code that turns the parser on by default if the architecture matches and
no other built-in parser was provided.

The built-in parser interface was modified in order to handle harmony's
prefill and tool name translation requirements.
2025-09-18 14:55:59 -07:00
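The parser abstraction this commit leans on might look roughly like the interface below. Everything here is a hypothetical sketch — the `Parser` interface shape, the toy `tool:` syntax, and the `parserFor` default-on logic are illustrations of the idea, not ollama's real API:

```go
package main

import (
	"fmt"
	"strings"
)

// Parser is a stand-in for the built-in parser abstraction: it consumes
// raw model output and separates plain content from tool calls.
type Parser interface {
	Add(chunk string) (content string, toolCalls []string)
}

// harmonyParser is a toy implementation; the real harmony parser also
// handles prefill and tool name translation, per the commit above.
type harmonyParser struct{}

func (harmonyParser) Add(chunk string) (string, []string) {
	var content strings.Builder
	var calls []string
	for _, line := range strings.Split(chunk, "\n") {
		// toy rule: lines beginning with "tool:" are tool calls
		if name, ok := strings.CutPrefix(line, "tool:"); ok {
			calls = append(calls, strings.TrimSpace(name))
			continue
		}
		content.WriteString(line)
	}
	return content.String(), calls
}

// parserFor turns the parser on by default when the architecture
// matches and no other built-in parser was requested, mirroring the
// small amount of logic left in routes.go (names hypothetical).
func parserFor(arch, requested string) Parser {
	if requested == "" && strings.HasPrefix(arch, "gptoss") {
		return harmonyParser{}
	}
	return nil
}

func main() {
	p := parserFor("gptoss", "")
	content, calls := p.Add("hello\ntool: get_weather")
	fmt.Println(content, calls)
}
```

The benefit described in the commit falls out of this shape: once harmony satisfies the shared interface, routes.go only needs the default-selection check, not harmony-specific handling.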
Patrick Devine
eb0a5d4459 auth: check the permissions on the private key to see if it's readable (#12336) 2025-09-18 14:34:34 -07:00
Michael Yang
ceac416ec2 fix(integration): check truncated length (#12337) 2025-09-18 14:00:21 -07:00
Inforithmics
01d8466dd6 Merge branch 'vulkanV3' of https://github.com/inforithmics/ollama into vulkanV3 2025-09-18 07:53:50 +02:00
Inforithmics
59a83bd97e Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-18 07:47:40 +02:00
Thomas Stocker
84257d8a7f Merge pull request #6 from rillomas/fix-vulkan-header
Fixed Vulkan header
2025-09-18 07:47:07 +02:00
Patrick Devine
2717dce6fe convert: convert bf16 vision weights to fp16 (#12324)
This change moves back to converting bf16 vision weights to fp16,
specifically if they start with the name "v." (such as v.blk.0.attn_k.weight).

This fixes a bug where converted image models are failing because they
try to call `im2col`, which doesn't have a bf16 kernel in ggml.
(tag: v0.12.0-rc0)
2025-09-17 17:43:17 -07:00
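The rule this commit describes is simple enough to state as code; `targetType` and the type-name strings are illustrative, not the converter's actual identifiers:

```go
package main

import (
	"fmt"
	"strings"
)

// targetType decides the on-disk type for a converted tensor: vision
// weights (names starting with "v.", e.g. v.blk.0.attn_k.weight) are
// stored as fp16 because ggml has no bf16 im2col kernel.
func targetType(name, srcType string) string {
	if srcType == "BF16" && strings.HasPrefix(name, "v.") {
		return "F16"
	}
	return srcType
}

func main() {
	fmt.Println(targetType("v.blk.0.attn_k.weight", "BF16")) // F16
	fmt.Println(targetType("blk.0.attn_k.weight", "BF16"))   // BF16
}
```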
Nakasaka, Masato
d0b5247084 Fixed Vulkan header
Now more closely aligned with the official header definition
2025-09-18 08:40:52 +09:00
frob
9b8187b487 server: skip parsing initial <think> if provided in the prompt for /api/generate (#12289) 2025-09-17 16:39:04 -07:00
Patrick Devine
8b894933a7 engine: add remote proxy (#12307) 2025-09-17 14:40:53 -07:00
Inforithmics
15eef5cc87 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-17 23:06:02 +02:00
Thomas Stocker
5ed727815e Merge pull request #5 from rillomas/remove-vulkan-header
Fix CI related errors
2025-09-17 23:05:04 +02:00
Daniel Hiltgen
9c5bf342bc fix: multi-cuda version skew (#12318)
Ensure that in a version-skewed multi-cuda setup we use the lowest version for all GPUs
2025-09-17 13:05:09 -07:00
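The lowest-common-version selection the commit describes reduces to a minimum over the detected GPUs; this is a sketch with an assumed `{major, minor}` version shape, not the actual discovery code:

```go
package main

import "fmt"

// cudaVersion is an illustrative {major, minor} pair reported per GPU.
type cudaVersion struct{ major, minor int }

func (a cudaVersion) less(b cudaVersion) bool {
	return a.major < b.major || (a.major == b.major && a.minor < b.minor)
}

// lowestCudaVersion picks the minimum version across a version-skewed
// multi-GPU setup so every GPU runs against libraries it supports.
func lowestCudaVersion(gpus []cudaVersion) cudaVersion {
	low := gpus[0]
	for _, v := range gpus[1:] {
		if v.less(low) {
			low = v
		}
	}
	return low
}

func main() {
	gpus := []cudaVersion{{13, 0}, {12, 4}}
	fmt.Println(lowestCudaVersion(gpus)) // {12 4}
}
```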
Michael Yang
564b558c92 fix(llama): other llama flavours (#12308)
* fix(llama): rope scale

* spm llama

* skip moe models

* cleanup
2025-09-17 12:12:21 -07:00
Michael Yang
a417ac97ee prefer ollama engine for qwen3 (#12310) 2025-09-17 09:48:21 -07:00
Nakasaka, Masato
ac9d59cf69 Fixed wrong structure ID 2025-09-17 16:59:23 +09:00
Nakasaka, Masato
45430ded4b Fixed missing members in Vulkan header
also added zero-initialization for some structs
2025-09-17 16:04:43 +09:00
Nakasaka, Masato
6cf4e0a7c8 added missing NL 2025-09-17 15:21:24 +09:00
Nakasaka, Masato
73441c9780 Removed unneeded function call
Somehow, removing this call fixed the crashing that occurred when the Vulkan header was removed
2025-09-17 15:11:13 +09:00
Nakasaka, Masato
882278a258 Merge remote-tracking branch 'vk-upstream/vulkanV3' into remove-vulkan-header 2025-09-17 09:24:06 +09:00
russcoss
05d53457af refactor: use the built-in max/min to simplify the code (#12280)
Signed-off-by: russcoss <russcoss@outlook.com>
2025-09-16 17:14:21 -07:00
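For context on this refactor: since Go 1.21, `min` and `max` are language built-ins, so hand-rolled helpers can be deleted in favor of direct calls. A minimal before/after:

```go
package main

import "fmt"

// Before Go 1.21, code needed helpers like this:
//
//	func maxInt(a, b int) int {
//		if a > b {
//			return a
//		}
//		return b
//	}
//
// The refactor above replaces such helpers with the built-ins.
func main() {
	fmt.Println(max(3, 7), min(3, 7)) // 7 3
}
```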
Michael Yang
b225508c9b logutil: fix source field (#12279) 2025-09-16 16:18:07 -07:00
Inforithmics
176d30744e fixing lint error 2025-09-16 22:48:24 +02:00
Inforithmics
0d4f3341c3 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-16 22:15:31 +02:00
Inforithmics
eb7b5ce9f4 Fix patches apply 2025-09-16 22:14:05 +02:00
Devon Rifkin
fa1c987a29 Merge pull request #12248 from ollama/drifkin/qwen3-coder-parsing
add qwen3-coder tool support
2025-09-16 10:21:43 -07:00
Michael Yang
ad95d5b30b use split activations when possible (#12293)
* use ggml_*_split activations when possible

* forward qkv
2025-09-16 09:51:19 -07:00
Michael Yang
c253433d68 embed: cleanup (#12299)
* cleanup

* use pooling.TypeNone

* pooling test
2025-09-16 09:48:42 -07:00
Beshoy Girgis
a1cff89b30 fix: fix CUDA detection for older GPUs (#12300)
Prioritize GPU compute capability over driver version to ensure
Pascal GPUs (CC 6.1) use compatible CUDA v12 libraries instead of v13.
2025-09-16 07:47:06 -07:00
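The fix's decision order can be sketched as below. The function name and the exact cutoff are illustrative: the commit only states that Pascal (CC 6.1) must get the v12 libraries regardless of what the driver version alone would suggest:

```go
package main

import "fmt"

// pickCudaLibrary chooses the bundled CUDA runtime by GPU compute
// capability first, rather than by driver version, so older GPUs get
// libraries that actually support them.
func pickCudaLibrary(computeMajor, computeMinor int) string {
	if computeMajor < 7 { // Pascal (6.x) and older: no CUDA v13 support
		return "cuda_v12"
	}
	return "cuda_v13"
}

func main() {
	fmt.Println(pickCudaLibrary(6, 1)) // cuda_v12 — Pascal
	fmt.Println(pickCudaLibrary(8, 9)) // cuda_v13
}
```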
Nakasaka, Masato
7a6b09ebae Removed unused code
Fix linter error in CI
2025-09-16 17:18:49 +09:00