Commit Graph

4688 Commits

Author SHA1 Message Date
Inforithmics ed03bb7928 Re-enable CPU 2025-09-20 09:01:25 +02:00
Inforithmics c84ac53579 Comment out other presets to build Vulkan 2025-09-20 09:00:26 +02:00
Inforithmics a4461bc0d4 Temporarily use windows-latest for build 2025-09-20 08:46:59 +02:00
Inforithmics 6bbc054705 Temporarily comment out gate to run Windows task 2025-09-20 08:35:58 +02:00
Inforithmics 0f543fdb1e Vulkan on Windows Test 2025-09-20 08:04:11 +02:00
Inforithmics d5dab2d186 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-19 22:29:52 +02:00
Inforithmics 62b2265f9d buildVulkanAsSeparateFunction 2025-09-19 06:52:05 +02:00
Michael Yang 9f3a37fd36
fix: model load for unsupported embedding models (#12311)
With #12181, there is now support for embeddings in the ollama engine.
This is done by mutating the architecture and appending _embed when an
embedding model is detected. However, this introduced a bug: if an
embedding model was run based on an existing ollama engine model
without an embedding implementation, e.g. llama4, it would pass the
initial arch support check but fail when actually loaded.

There are currently two entry points for creating a model. Previously the
second entry point was necessary because calling model.New would also
load the model. Since #11818, this is no longer the case, so the two are
merged to reduce complexity.
2025-09-18 16:11:08 -07:00
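
As a rough illustration of the fix, here is a minimal Go sketch of checking the mutated architecture against a registry before loading; `modelRegistry` and `resolveArch` are hypothetical names, not Ollama's actual API.

```
// Hypothetical sketch of the arch-mutation check described above.
package main

import "fmt"

var modelRegistry = map[string]bool{
	"qwen3":       true,
	"qwen3_embed": true,
	"llama4":      true, // no llama4_embed entry: embeddings unimplemented
}

// resolveArch appends "_embed" for embedding models, then verifies the
// mutated architecture actually has an implementation before loading.
func resolveArch(arch string, isEmbedding bool) (string, error) {
	if isEmbedding {
		arch += "_embed"
	}
	if !modelRegistry[arch] {
		return "", fmt.Errorf("unsupported architecture %q", arch)
	}
	return arch, nil
}

func main() {
	if _, err := resolveArch("llama4", true); err != nil {
		fmt.Println(err) // fails up front instead of at load time
	}
}
```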
Michael Yang 7460259eb3
feat: qwen3 embed (#12301)
* cleanup

* use pooling.TypeNone

* pooling test

* qwen3 embed
2025-09-18 15:50:32 -07:00
Jeffrey Morgan 22ccdd74c2
server: add unauthorized error to remote chat handler (#12338) 2025-09-18 15:40:31 -07:00
Daniel Hiltgen 0c3d0e7533
build: avoid unbounded parallel builds (#12319)
With the addition of CUDA v13, on a clean setup the level of parallelism
was overwhelming Docker Desktop and crashing compilers. This limits each
build stage to 8 parallel jobs, with the ability to override if you have
many more cores available.
2025-09-18 14:57:01 -07:00
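
The change itself lives in the build configuration, but the idea generalizes; below is a hedged Go sketch of capping parallelism with an overridable limit, using a hypothetical JOBS environment variable rather than the actual build argument.

```
package main

import (
	"fmt"
	"os"
	"strconv"
	"sync"
)

func main() {
	// Default cap of 8 parallel jobs per stage, as in the commit.
	limit := 8
	// Hypothetical override, standing in for the real build argument.
	if v, err := strconv.Atoi(os.Getenv("JOBS")); err == nil && v > 0 {
		limit = v
	}

	sem := make(chan struct{}, limit) // at most `limit` jobs in flight
	var wg sync.WaitGroup
	for job := 0; job < 32; job++ {
		wg.Add(1)
		go func(j int) {
			defer wg.Done()
			sem <- struct{}{} // acquire a slot
			defer func() { <-sem }()
			fmt.Println("compiling unit", j)
		}(job)
	}
	wg.Wait()
}
```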
Patrick Devine eb0a5d4459
auth: check the permissions on the private key to see if it's readable (#12336) 2025-09-18 14:34:34 -07:00
Michael Yang ceac416ec2
fix(integration): check truncated length (#12337) 2025-09-18 14:00:21 -07:00
Inforithmics 01d8466dd6 Merge branch 'vulkanV3' of https://github.com/inforithmics/ollama into vulkanV3 2025-09-18 07:53:50 +02:00
Inforithmics 59a83bd97e Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-18 07:47:40 +02:00
Thomas Stocker 84257d8a7f
Merge pull request #6 from rillomas/fix-vulkan-header
Fixed Vulkan header
2025-09-18 07:47:07 +02:00
Patrick Devine 2717dce6fe
convert: convert bf16 vision weights to fp16 (#12324)
This change moves back to converting bf16 vision weights to fp16,
specifically if their names start with "v." (such as v.blk.0.attn_k.weight).

This fixes a bug where converted models fail on images because they try
to call `im2col`, which doesn't have a bf16 kernel in ggml.
2025-09-17 17:43:17 -07:00
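
A minimal Go sketch of the prefix-based cast described above; the `tensor` type and the in-place dtype rewrite are illustrative stand-ins for the converter's real machinery.

```
package main

import (
	"fmt"
	"strings"
)

type tensor struct {
	Name  string
	DType string // "BF16", "F16", "F32", ...
}

// convertVisionWeights downcasts bf16 vision tensors (names beginning
// with "v.") to fp16 so ggml kernels without bf16 support still work.
func convertVisionWeights(ts []tensor) {
	for i := range ts {
		if strings.HasPrefix(ts[i].Name, "v.") && ts[i].DType == "BF16" {
			ts[i].DType = "F16" // a real converter would rewrite the data too
		}
	}
}

func main() {
	ts := []tensor{{Name: "v.blk.0.attn_k.weight", DType: "BF16"}}
	convertVisionWeights(ts)
	fmt.Println(ts[0].DType) // F16
}
```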
Nakasaka, Masato d0b5247084 Fixed Vulkan header
Now more closely aligned with the official header definition
2025-09-18 08:40:52 +09:00
frob 9b8187b487
server: skip parsing initial <think> if provided in the prompt for /api/generate (#12289) 2025-09-17 16:39:04 -07:00
Patrick Devine 8b894933a7
engine: add remote proxy (#12307) 2025-09-17 14:40:53 -07:00
Inforithmics 15eef5cc87 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-17 23:06:02 +02:00
Thomas Stocker 5ed727815e
Merge pull request #5 from rillomas/remove-vulkan-header
Fix CI-related errors
2025-09-17 23:05:04 +02:00
Daniel Hiltgen 9c5bf342bc
fix: multi-cuda version skew (#12318)
Ensure that in a version-skewed multi-CUDA setup we use the lowest version for all GPUs
2025-09-17 13:05:09 -07:00
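
A small Go sketch of the lowest-common-version rule; the hard-coded version list stands in for whatever the driver query actually returns.

```
package main

import "fmt"

// lowestCudaVersion picks the minimum CUDA version across GPUs so a
// single runtime works for every device in a skewed setup.
func lowestCudaVersion(versions []int) int {
	lowest := versions[0]
	for _, v := range versions[1:] {
		lowest = min(lowest, v) // built-in min, Go 1.21+
	}
	return lowest
}

func main() {
	fmt.Println(lowestCudaVersion([]int{13, 12})) // 12
}
```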
Michael Yang 564b558c92
fix(llama): other llama flavours (#12308)
* fix(llama): rope scale

* spm llama

* skip moe models

* cleanup
2025-09-17 12:12:21 -07:00
Michael Yang a417ac97ee
prefer ollama engine for qwen3 (#12310) 2025-09-17 09:48:21 -07:00
Nakasaka, Masato ac9d59cf69 Fixed wrong structure ID 2025-09-17 16:59:23 +09:00
Nakasaka, Masato 45430ded4b Fixed missing members in Vulkan header
Also added zero-initialization for some structs
2025-09-17 16:04:43 +09:00
Nakasaka, Masato 6cf4e0a7c8 Added missing newline 2025-09-17 15:21:24 +09:00
Nakasaka, Masato 73441c9780 Removed unneeded function call
Somehow, removing this call fixed the crash that occurred when the Vulkan header was removed
2025-09-17 15:11:13 +09:00
Nakasaka, Masato 882278a258 Merge remote-tracking branch 'vk-upstream/vulkanV3' into remove-vulkan-header 2025-09-17 09:24:06 +09:00
russcoss 05d53457af
refactor: use the built-in max/min to simplify the code (#12280)
Signed-off-by: russcoss <russcoss@outlook.com>
2025-09-16 17:14:21 -07:00
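
For context, Go 1.21 added `min` and `max` as built-ins, so hand-rolled helpers like the one below can simply be deleted:

```
package main

import "fmt"

// Before: a local helper that the built-in replaces.
func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(maxInt(3, 7)) // old style
	fmt.Println(max(3, 7))    // built-in, works for any ordered type
	fmt.Println(min(2.5, 1.0))
}
```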
Michael Yang b225508c9b
logutil: fix source field (#12279) 2025-09-16 16:18:07 -07:00
Inforithmics 176d30744e fixing lint error 2025-09-16 22:48:24 +02:00
Inforithmics 0d4f3341c3 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-16 22:15:31 +02:00
Inforithmics eb7b5ce9f4 Fix patch application 2025-09-16 22:14:05 +02:00
Devon Rifkin fa1c987a29
Merge pull request #12248 from ollama/drifkin/qwen3-coder-parsing
add qwen3-coder tool support
2025-09-16 10:21:43 -07:00
Michael Yang ad95d5b30b
use split activations when possible (#12293)
* use ggml_*_split activations when possible

* forward qkv
2025-09-16 09:51:19 -07:00
Michael Yang c253433d68
embed: cleanup (#12299)
* cleanup

* use pooling.TypeNone

* pooling test
2025-09-16 09:48:42 -07:00
Beshoy Girgis a1cff89b30
fix: CUDA detection for older GPUs (#12300)
Prioritize GPU compute capability over driver version to ensure
Pascal GPUs (CC 6.1) use compatible CUDA v12 libraries instead of v13.
2025-09-16 07:47:06 -07:00
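
A hedged Go sketch of the selection rule: compute capability wins over driver version. The thresholds and function name are illustrative, not the actual detection code.

```
package main

import "fmt"

// cudaVariant picks which bundled CUDA libraries to load. Pascal cards
// (compute capability 6.x) predate CUDA v13 support, so they get v12
// even when the installed driver could otherwise run v13.
func cudaVariant(ccMajor, driverCudaVersion int) string {
	if ccMajor <= 6 {
		return "v12"
	}
	if driverCudaVersion >= 13 {
		return "v13"
	}
	return "v12"
}

func main() {
	fmt.Println(cudaVariant(6, 13)) // v12: capability overrides driver
	fmt.Println(cudaVariant(8, 13)) // v13
}
```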
Nakasaka, Masato 7a6b09ebae Removed unused code
Fix linter error in CI
2025-09-16 17:18:49 +09:00
Masato Nakasaka ede4081253 Fix compile error on Mac
Metal is preferred, so we're disabling Vulkan for now
2025-09-16 17:00:17 +09:00
Nakasaka, Masato da466f4f86 Copied minimal definitions from the Vulkan header 2025-09-16 15:05:54 +09:00
Daniel Hiltgen 93c64ea1b1
doc: show how to clear the cgo cache (#12298) 2025-09-15 15:45:35 -07:00
Michael Yang 3f6642f6fc
model: implement bert in ollama engine (#9080)
* fix truncate

* s/SentencePieceModel/SentencePiece/

* bert

* wordpiece

* refactor pooling

* more tokenizers

* normalize embeddings
2025-09-15 15:35:59 -07:00
Michael Yang 6f7117145f
batch: use tensors for outputs (#12185)
This cleans up the model interface slightly without much impact on
other areas.
2025-09-15 14:33:06 -07:00
Devon Rifkin 472feec2ff address comments 2025-09-15 11:46:25 -07:00
Devon Rifkin 47991940d4 add qwen3-coder tool support
The format qwen3-coder uses is relatively unique, both in rendering and
in parsing. To implement parsing, I wrote a custom parser in a similar
style to harmony. For rendering, I found that the logic would be much
more difficult to follow in a template, so I introduced the concept of a
built-in renderer that uses Go code, rather than a template, to
generate prompts.

I set us up for future built-in parsers and renderers by making it so
they can be specified in a Modelfile like so:

```
RENDERER "qwen3-coder"
PARSER "qwen3-coder"
```

These need to be provided explicitly because the architecture alone is
not enough to understand what format the model expects to receive, and
what format we expect it to output (e.g., qwen3-coder is `qwen3moe`,
which includes other qwen3-family models as well)

I haven't converted harmony to be one of these "built-ins" yet, since
some of it is in flux with the changes @ParthSareen has been making to
move harmony to the runner. It is likely that many other built-ins will
need to move to the runner as well, but I'm able to slightly defer that
decision since qwen3-coder doesn't have thinking (and therefore doesn't
need to be in the runner to make structured outputs work). I expect to
unify harmony with this approach very soon.

Whether a particular model supports tools or thinking was previously
inferred from templates, but without a template we now also use the
parser itself to declare what it supports. If we have future models that
re-use the same parsing format, but have different capabilities, we'll
want to parameterize them and give them different names to be specified
as a `PARSER`.

Misc changes:

- I worked on the renderer by diffing outputs from the reference
  implementation and ours. To make it easier to do this, I extended
  <https://github.com/ollama/ollama/pull/11875> to also support
  returning the prompt via the openai compat layer
2025-09-15 11:33:47 -07:00
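
A speculative Go sketch of the named built-in idea: a registry maps the name given in a Modelfile's `RENDERER` directive to a Go implementation. The `Renderer` interface and registry here are illustrative, not Ollama's actual types.

```
package main

import "fmt"

// Renderer turns chat messages into a model-specific prompt string.
type Renderer interface {
	Render(messages []string) string
}

type qwen3CoderRenderer struct{}

func (qwen3CoderRenderer) Render(messages []string) string {
	// A real implementation would emit qwen3-coder's chat format.
	return fmt.Sprintf("<prompt with %d messages>", len(messages))
}

// renderers maps the name from a Modelfile's RENDERER directive to its
// Go implementation, decoupling prompt format from architecture.
var renderers = map[string]Renderer{
	"qwen3-coder": qwen3CoderRenderer{},
}

func main() {
	r, ok := renderers["qwen3-coder"]
	if !ok {
		panic("unknown renderer")
	}
	fmt.Println(r.Render([]string{"hello"}))
}
```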
jmorganca 92b96d54ef Revert "runner: move harmony to runner (#12052)"
This reverts commit 1a558f98e2.
2025-09-12 20:40:14 -03:00
jmorganca 9d56e63dbf Revert "runner: simplify parser entrypoints in runner (#12233)"
This reverts commit 8d6fffaead.
2025-09-12 20:40:14 -03:00
tc-mb 053092185e
Fix image cannot be seen with slice image on llama engine
Ollama's recent update to the llama.cpp engine caused all models requiring a slice schema to stop displaying images. The value of numTokens isn't always the length of the sliced image embed; it can instead be the end length of the schema. This causes the image embed to not be correctly included during slice processing.
2025-09-12 16:25:12 -07:00
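
If the diagnosis above is right, the fix amounts to sizing each copy by the slice embed itself rather than by a schema-derived numTokens; the following Go sketch is purely illustrative and uses made-up dimensions.

```
package main

import "fmt"

func main() {
	sliceEmbeds := [][]float32{
		make([]float32, 4096), // embedding data for the first slice
		make([]float32, 2048), // final slice: shorter than the schema says
	}

	var batch []float32
	for _, embed := range sliceEmbeds {
		// Using len(embed) guarantees every slice is fully included,
		// even when numTokens reflects only the schema's end length.
		batch = append(batch, embed...)
	}
	fmt.Println(len(batch)) // 6144: both slices present
}
```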