Commit Graph

4714 Commits

Author SHA1 Message Date
Thomas Stocker bd27162f24
Add Vulkan to Build Matrix
Vulkan Builds on CI
2025-09-20 11:06:35 +02:00
Inforithmics 04fba9ba09 revert debugging changes 2025-09-20 11:03:09 +02:00
Inforithmics 2098e6a8e3 trying to use version 1.4.313 2025-09-20 11:00:37 +02:00
Inforithmics fe47191720 add some more extra 2025-09-20 10:53:43 +02:00
Inforithmics 6f546457de try again 2025-09-20 10:49:24 +02:00
Inforithmics 19bc49de5f try without version number 2025-09-20 10:48:18 +02:00
Inforithmics a7557cf1a8 trying again 2025-09-20 10:39:05 +02:00
Inforithmics 3ccc18f1e1 try again 2025-09-20 10:36:48 +02:00
Inforithmics 79a0f526b1 fixed vulkan-sdk name 2025-09-20 10:33:23 +02:00
Inforithmics 0f86789808 fix version 2025-09-20 10:31:44 +02:00
Inforithmics 62a8d66002 trying again 2025-09-20 10:30:31 +02:00
Inforithmics 26df69a025 trying again 2025-09-20 10:24:31 +02:00
Inforithmics 475d2c2583 trying to fix 2025-09-20 10:15:29 +02:00
Inforithmics c91b494a8b fix version 2025-09-20 10:10:10 +02:00
Inforithmics af50fd5af7 try again linux build 2025-09-20 10:08:24 +02:00
Inforithmics 236c274017 temporarily disable cuda and rocm 2025-09-20 10:00:14 +02:00
Inforithmics e29bb17613 trying to build vulkan for linux 2025-09-20 09:58:31 +02:00
Inforithmics a0389785c7 revert windows-latest 2025-09-20 09:45:36 +02:00
Inforithmics b244c9f9f3 revert debugging changes (vulkan builds on windows) 2025-09-20 09:44:09 +02:00
Inforithmics 6e310d1cb6 fixed install command 2025-09-20 09:37:25 +02:00
Inforithmics b4595f0022 correct vulkan silent install 2025-09-20 09:31:58 +02:00
Inforithmics 7e161f1dbf correct vulkan install 2025-09-20 09:16:54 +02:00
Inforithmics d1125ea349 comment out cuda for faster turnaround 2025-09-20 09:14:02 +02:00
Inforithmics c972cf6d46 set vulkan path 2025-09-20 09:12:14 +02:00
Inforithmics 45f7850e75 temporarily commenting out rocm 2025-09-20 09:04:30 +02:00
Inforithmics e2b38c391b commenting out error action stop 2025-09-20 09:02:55 +02:00
Inforithmics ed03bb7928 reenable cpu 2025-09-20 09:01:25 +02:00
Inforithmics c84ac53579 Commenting out other presets to build vulkan 2025-09-20 09:00:26 +02:00
Inforithmics a4461bc0d4 temporarily use windows-latest for build 2025-09-20 08:46:59 +02:00
Inforithmics 6bbc054705 temporarily comment out gate to run windows task 2025-09-20 08:35:58 +02:00
Inforithmics 0f543fdb1e Vulkan on Windows Test 2025-09-20 08:04:11 +02:00
Inforithmics d5dab2d186 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-19 22:29:52 +02:00
Inforithmics 62b2265f9d buildvulkanAsSeperateFunction 2025-09-19 06:52:05 +02:00
Michael Yang 9f3a37fd36
fix: model load for unsupported embedding models (#12311)
With #12181, there is now support for embeddings in the ollama engine.
This is done by mutating the architecture and appending _embed when an
embedding model is detected. However, this introduced a bug: if an
embedding model was run on top of an existing ollama engine architecture
that has no embedding implementation, e.g. llama4, it would pass the
initial arch support check but fail when actually loaded.

There are currently two entrypoints for creating a model. Previously the
second entrypoint was necessary because calling model.New would also load
the model. Since #11818, this is no longer the case, so merge them to
reduce complexity.
2025-09-18 16:11:08 -07:00
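
As a rough illustration of the fix described above, here is a minimal Go sketch in which a hypothetical `supported` map stands in for the engine's real architecture registry; the point is that the support check consults the mutated `_embed` name, so an unsupported embedding variant is rejected up front instead of failing later at load time.

```go
package main

import "fmt"

// Hypothetical registry of architectures the engine can build.
// In the real engine this role is played by the constructors model.New knows about.
var supported = map[string]bool{
	"llama4":      true, // text generation implemented
	"qwen3_embed": true, // embedding variant implemented
	// note: no "llama4_embed" entry, since llama4 has no embedding implementation
}

// resolveArch mutates the architecture name for embedding models and reports
// whether the resulting (possibly suffixed) architecture is actually supported.
func resolveArch(arch string, isEmbedding bool) (string, error) {
	if isEmbedding {
		arch += "_embed"
	}
	if !supported[arch] {
		return "", fmt.Errorf("architecture %q is not supported", arch)
	}
	return arch, nil
}

func main() {
	// Passes: qwen3 has an embedding implementation registered.
	fmt.Println(resolveArch("qwen3", true))
	// Fails fast: llama4 has no embedding implementation, so the check
	// rejects it here instead of during model load.
	fmt.Println(resolveArch("llama4", true))
}
```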
Michael Yang 7460259eb3
feat: qwen3 embed (#12301)
* cleanup

* use pooling.TypeNone

* pooling test

* qwen3 embed
2025-09-18 15:50:32 -07:00
Jeffrey Morgan 22ccdd74c2
server: add unauthorized error to remote chat handler (#12338) 2025-09-18 15:40:31 -07:00
Daniel Hiltgen 0c3d0e7533
build: avoid unbounded parallel builds (#12319)
With the addition of CUDA v13, on a clean setup the level of parallelism
was causing Docker Desktop to become overwhelmed and compilers were
crashing. This limits builds to 8 parallel jobs per build stage, with the
ability to override if you have many more cores available.
2025-09-18 14:57:01 -07:00
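
The general pattern described above can be sketched in Go as follows; the `BUILD_JOBS` override name is hypothetical, chosen only to illustrate capping per-stage parallelism at 8 while letting machines with many cores opt out.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
)

// buildJobs caps per-stage parallelism at 8 unless an override is provided.
// BUILD_JOBS is an illustrative override name, not an actual ollama setting.
func buildJobs() int {
	if v := os.Getenv("BUILD_JOBS"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	// Default: use all cores, but never more than 8 per build stage,
	// so several concurrent stages don't overwhelm Docker Desktop.
	if n := runtime.NumCPU(); n < 8 {
		return n
	}
	return 8
}

func main() {
	fmt.Printf("building with -j%d\n", buildJobs())
}
```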
Patrick Devine eb0a5d4459
auth: check the permissions on the private key to see if it's readable (#12336) 2025-09-18 14:34:34 -07:00
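
One way such a readability check could look, sketched with only the Go standard library; the key path and error messages here are illustrative, not ollama's actual code.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkKeyReadable verifies that the private key exists and is readable by
// the current user before it is used for authentication.
func checkKeyReadable(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("stat private key: %w", err)
	}
	// Opening the file is the most direct readability test: it fails with a
	// permission error if the mode bits (or ownership) deny access.
	f, err := os.Open(path)
	if err != nil {
		return fmt.Errorf("private key is not readable: %w", err)
	}
	return f.Close()
}

func main() {
	// Illustrative key location; adjust for your setup.
	key := filepath.Join(os.Getenv("HOME"), ".ollama", "id_ed25519")
	if err := checkKeyReadable(key); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```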
Michael Yang ceac416ec2
fix(integration): check truncated length (#12337) 2025-09-18 14:00:21 -07:00
Inforithmics 01d8466dd6 Merge branch 'vulkanV3' of https://github.com/inforithmics/ollama into vulkanV3 2025-09-18 07:53:50 +02:00
Inforithmics 59a83bd97e Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-18 07:47:40 +02:00
Thomas Stocker 84257d8a7f
Merge pull request #6 from rillomas/fix-vulkan-header
Fixed Vulkan header
2025-09-18 07:47:07 +02:00
Patrick Devine 2717dce6fe
convert: convert bf16 vision weights to fp16 (#12324)
This change moves back to converting bf16 vision weights to fp16,
specifically those whose names start with "v." (such as v.blk.0.attn_k.weight).

This fixes a bug where converted vision models fail because they try
to call `im2col`, which doesn't have a bf16 kernel in ggml.
2025-09-17 17:43:17 -07:00
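
A minimal Go sketch of the conversion rule described above, using a hypothetical `targetType` helper rather than the converter's real API: bf16 tensors whose names start with "v." are downcast to fp16, and everything else keeps its source type.

```go
package main

import (
	"fmt"
	"strings"
)

// targetType decides the on-disk type for a converted tensor. Vision weights
// (names starting with "v.", e.g. v.blk.0.attn_k.weight) are stored as fp16
// because ggml's im2col has no bf16 kernel; other tensors are left alone.
// Illustrative sketch only.
func targetType(name, srcType string) string {
	if srcType == "bf16" && strings.HasPrefix(name, "v.") {
		return "fp16"
	}
	return srcType
}

func main() {
	fmt.Println(targetType("v.blk.0.attn_k.weight", "bf16")) // fp16
	fmt.Println(targetType("blk.0.attn_k.weight", "bf16"))   // bf16 (text weights unchanged)
}
```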
Nakasaka, Masato d0b5247084 Fixed Vulkan header
Now more closely aligned with the official header definition
2025-09-18 08:40:52 +09:00
frob 9b8187b487
server: skip parsing initial <think> if provided in the prompt for /api/generate (#12289) 2025-09-17 16:39:04 -07:00
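
A hedged sketch of the idea, not the server's actual parsing code: if the prompt already ends with an opening `<think>`, a leading `<think>` belongs to the prompt, so the response parser should not treat it as the start of a new thinking block.

```go
package main

import (
	"fmt"
	"strings"
)

// skipInitialThink reports whether the response parser should ignore an
// opening "<think>" tag because the caller already supplied one at the end
// of the prompt (the model's output then continues that block rather than
// starting a new one). Illustrative sketch only.
func skipInitialThink(prompt string) bool {
	return strings.HasSuffix(strings.TrimSpace(prompt), "<think>")
}

func main() {
	fmt.Println(skipInitialThink("Why is the sky blue?"))         // false
	fmt.Println(skipInitialThink("Why is the sky blue? <think>")) // true
}
```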
Patrick Devine 8b894933a7
engine: add remote proxy (#12307) 2025-09-17 14:40:53 -07:00
Inforithmics 15eef5cc87 Merge remote-tracking branch 'upstream/main' into vulkanV3 2025-09-17 23:06:02 +02:00
Thomas Stocker 5ed727815e
Merge pull request #5 from rillomas/remove-vulkan-header
Fix CI related errors
2025-09-17 23:05:04 +02:00
Daniel Hiltgen 9c5bf342bc
fix: multi-cuda version skew (#12318)
Ensure that in a version-skewed multi-CUDA setup we use the lowest version for all GPUs
2025-09-17 13:05:09 -07:00
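
The selection rule is simple enough to sketch: given the CUDA versions reported per GPU, target the lowest one so the chosen runtime is valid for every device. The types below are illustrative, not ollama's GPU discovery code.

```go
package main

import "fmt"

// cudaVersion models a CUDA version as a {major, minor} pair for illustration.
type cudaVersion struct{ major, minor int }

func (a cudaVersion) less(b cudaVersion) bool {
	return a.major < b.major || (a.major == b.major && a.minor < b.minor)
}

// lowestCudaVersion picks the single CUDA version to target when the GPUs in
// a system report skewed versions: the lowest one works on every GPU.
// Assumes at least one GPU is present.
func lowestCudaVersion(gpus []cudaVersion) cudaVersion {
	low := gpus[0]
	for _, v := range gpus[1:] {
		if v.less(low) {
			low = v
		}
	}
	return low
}

func main() {
	gpus := []cudaVersion{{13, 0}, {12, 8}} // mixed CUDA v13 / v12.8 setup
	fmt.Println(lowestCudaVersion(gpus))    // {12 8}: use the lower version for all GPUs
}
```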
Michael Yang 564b558c92
fix(llama): other llama flavours (#12308)
* fix(llama): rope scale

* spm llama

* skip moe models

* cleanup
2025-09-17 12:12:21 -07:00