Commit Graph

185 Commits

Author SHA1 Message Date
Jesse Gross
858b91f1e5 ggml: Use ordinal IDs for AMD GPUs on Linux when UUID is unavailable
Some AMD GPUs do not provide UUIDs and report only "XX". In these
cases, we should use the ordinal ID as an alternate identifier.
This matches what we already need to do on Windows for AMD GPUs.

In addition, this prints out the ID for each GPU when enumerating
them for easier debugging in the future.
2025-12-29 06:39:52 -06:00
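
The fallback reduces to a few lines. A Go sketch; the (uuid, ordinal) inputs and the "GPU-<n>" format are assumptions for illustration, not the actual discovery code:

    package main

    import "fmt"

    // gpuID returns a stable identifier for a GPU: the driver-reported
    // UUID when one exists, otherwise the ordinal (enumeration index).
    // Some AMD GPUs report no UUID, or only the "XX" placeholder.
    func gpuID(uuid string, ordinal int) string {
        if uuid == "" || uuid == "XX" {
            return fmt.Sprintf("GPU-%d", ordinal) // hypothetical format
        }
        return uuid
    }

    func main() {
        // Print the chosen ID for each enumerated GPU, mirroring the
        // debug logging the commit mentions.
        for i, uuid := range []string{"a1b2c3", "XX", ""} {
            fmt.Printf("gpu %d -> id %s\n", i, gpuID(uuid, i))
        }
    }
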
Jesse Gross
3d990dc451 ggml: No-alloc mode
Callers can set a backend buffer type to be no-alloc, meaning that
it does not allocate memory for tensors or operations. This can
be used for calculating memory requirements. Tensors and graphs
must be recreated with no-alloc set to false before loading data.

Defaults to false for newly created backend buffer types.
2025-12-29 06:39:51 -06:00
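
A sketch of the two-pass flow this enables, using an invented bufferType stand-in rather than the real backend API: measure once with no-alloc set, then recreate the tensors with it unset (the default) before loading data.

    package main

    import "fmt"

    // bufferType is a stand-in for a backend buffer type. In no-alloc
    // mode it only records how much memory tensors would need instead
    // of allocating it; all names here are illustrative.
    type bufferType struct {
        noAlloc bool
        needed  uint64 // bytes the tensors would occupy
    }

    func (bt *bufferType) allocTensor(size uint64) {
        bt.needed += size
        if bt.noAlloc {
            return // measuring only: skip the real allocation
        }
        // real device allocation would happen here
    }

    func main() {
        sizes := []uint64{1 << 20, 4 << 20}

        // Pass 1: measure memory requirements with no-alloc set.
        measure := &bufferType{noAlloc: true}
        for _, sz := range sizes {
            measure.allocTensor(sz)
        }
        fmt.Printf("graph needs %d bytes\n", measure.needed)

        // Pass 2: recreate with no-alloc false before loading data.
        live := &bufferType{} // defaults to false, as in the commit
        for _, sz := range sizes {
            live.allocTensor(sz)
        }
    }
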
Michael Yang
ed2e8a9022 gpt-oss (#11672)
* bf16

* tests

* gpt-oss

* enable gptoss for engine

* rough estimate

* convert to mxfp4

* handle safetensors U8

* clamp glu/linear

* update tokenizer

* MXFP4 support

This implements the Open Compute Microscaling (MX) FP4 format
as a tensor type, with backend implementations focusing
on mulmat and mulmatid on CPU, CUDA, and Metal (see the
dequantization sketch after this entry).

* Unit tests for MXFP4 support

This exercises various operations and shapes on both CPU and GPU (if
detected on the system).

* cuda graph

* unit test adjustments

* cuda: optimize memory access

Read 4 bytes at a time (8 elements) when performing mul_mat_vec_mxfp4

* mac: fix crash on old macos versions

cblas_sgemm is only supported on v13.3 and up; however, bf16 is
only supported on v14+, so we were falling back to ggml-blas and
crashing on bf16 tensors. Checking whether the function is null
seems to be the simplest way to conditionally avoid registering the
backend.

* server: Minimum context length for gptoss

This model requires a minimum context length of 8192 to function
effectively. Users can set higher values through all normal mechanisms,
but lower values will be silently reset (see the sizing sketch after
this entry).

* ggml: Multiply by numParallel for gptoss sliding window

When computing the graph size estimate, the context size is already
multiplied by numParallel so estimates reflect that. However, since
sliding window models use a smaller, fixed context size, they need
to manually take numParallel into account.

* gpt-oss integration

includes harmony parser and thinking levels, etc.

* fix sync

* fix tests

* fix lint

---------

Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
2025-12-29 06:39:48 -06:00
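
For the MXFP4 item above: a dequantization sketch following the OCP Microscaling spec, which defines a 32-element block sharing one E8M0 scale byte (2^(e-127)) with two E2M1 elements packed per byte. The nibble order and Go names here are assumptions, not necessarily the layout the ollama kernels use:

    package main

    import (
        "fmt"
        "math"
    )

    // e2m1 maps the magnitude bits of a 4-bit FP4 (E2M1) code to its
    // value: 2 exponent bits and 1 mantissa bit give magnitudes 0..6.
    var e2m1 = [8]float32{0, 0.5, 1, 1.5, 2, 3, 4, 6}

    func fp4(code byte) float32 {
        v := e2m1[code&7]
        if code&8 != 0 { // sign bit
            return -v
        }
        return v
    }

    // dequantMXFP4 expands one MX block: a shared E8M0 scale byte and
    // 16 bytes holding 32 packed 4-bit elements. (The CUDA optimization
    // in this PR reads 4 of these bytes, i.e. 8 elements, at a time.)
    func dequantMXFP4(scale byte, packed [16]byte) [32]float32 {
        s := float32(math.Exp2(float64(int(scale) - 127)))
        var out [32]float32
        for i, b := range packed {
            out[2*i] = s * fp4(b&0x0f) // low nibble first: an assumption
            out[2*i+1] = s * fp4(b>>4)
        }
        return out
    }

    func main() {
        var packed [16]byte
        packed[0] = 0x32 // low nibble 0x2 -> 1.0, high nibble 0x3 -> 1.5
        out := dequantMXFP4(127, packed) // E8M0 code 127 -> scale 2^0
        fmt.Println(out[:2])             // [1 1.5]
    }
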
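The two server-side notes (the 8192 context floor and the numParallel multiplier for sliding-window estimates) reduce to small sizing helpers. A sketch; the 8192 value comes from the commit text, while the window length of 128 and all names are illustrative:

    package main

    import "fmt"

    // effectiveContext clamps the requested context length to the
    // model's minimum, silently raising lower values while leaving
    // higher ones alone.
    func effectiveContext(requested, minimum int) int {
        if requested < minimum {
            return minimum
        }
        return requested
    }

    // windowCacheSize scales a model's fixed sliding-window length by
    // numParallel, since the full context size used elsewhere in the
    // graph estimate is already multiplied by numParallel.
    func windowCacheSize(window, numParallel int) int {
        return window * numParallel
    }

    func main() {
        fmt.Println(effectiveContext(4096, 8192)) // 8192: silently reset
        fmt.Println(windowCacheSize(128, 4))      // 512
    }
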
Daniel Hiltgen
5038e33776 mac: disable bf16 on unsupported OS versions (#11585)
Support for bf16 was added in macOS v14+, and attempting to enable it
on older versions causes runtime failures.
2025-12-29 06:39:47 -06:00
Oliver Simons
1ee3fe46f3 Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)
* Enable CUDA Graphs for gemma3n.

Similar to
https://github.com/ggml-org/llama.cpp/pull/14741,
though ollama has a slightly different model graph
than llama.cpp, which requires different workaround
checks.

* Remove residual check by reshaping differently in gemma3n model

This should make the heuristics more robust
2025-12-29 06:39:47 -06:00
Jesse Gross
387cb031b3 ggml: Report ordinal IDs for AMD GPUs on Windows
We don't get valid UUIDs for AMD GPUs on Windows, so the best option
is to use the ordinal IDs. This brings us in line with what we
currently do on the Ollama server - the only exception is AMD GPUs on
Linux, where the server falls back to using ordinal IDs. The GGML
implementation has no such fallback, but that case doesn't appear to
occur for any of the GPUs that we support.

It's also possible that there are collisions between ordinal IDs for
different libraries - however the only places where we use them are
AMD on Windows and Metal on Mac, which can never occur on the same
system.
2025-12-29 06:39:42 -06:00
Michael Yang
801564fa8b add new gemma model (#11204)
* update patches

* cherry pick metal mean kernel

* cherry pick cuda mean kernel

* gemma3n
2025-12-29 06:39:38 -06:00
Daniel Hiltgen
29ec3ddf9a Re-remove cuda v11 (#10694)
* Re-remove cuda v11

Revert the revert - drop v11 support, requiring drivers newer than Feb 2023

This reverts commit c6bcdc4223.

* Simplify layout

With only one version of the GPU libraries, we can simplify things down somewhat. (Jetsons still require special handling.)

* distinct sbsa variant for linux arm64

This avoids accidentally trying to load the sbsa cuda libraries on
a Jetson system, which results in crashes.

* temporary prevent rocm+cuda mixed loading
2025-12-29 06:38:18 -06:00
Jeffrey Morgan
5e3fb4744b Revert "Revert "ggml: Export GPU UUIDs" (#11115)" (#11117)
Reverts PR #11115. The original change was mistakenly reverted instead of #10822
2025-12-29 06:38:16 -06:00
Jeffrey Morgan
c5237d9462 Revert "ggml: Export GPU UUIDs" (#11115)
This reverts commit aaa7818000.
2025-12-29 06:38:16 -06:00
Jesse Gross
0b9c6cb497 ggml: Export GPU UUIDs
This enables matching up devices and information reported by the backend
with system management libraries such as nvml to get accurate free
memory reporting.
2025-12-29 06:38:10 -06:00
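
One plausible consumer of the exported UUIDs, sketched with the github.com/NVIDIA/go-nvml bindings; this illustrates the matching the commit describes, not necessarily the server's actual lookup path:

    package main

    import (
        "fmt"
        "log"

        "github.com/NVIDIA/go-nvml/pkg/nvml"
    )

    // freeMemoryByUUID finds the device NVML knows by the UUID the
    // ggml backend exported, then reads its live free-memory counter.
    func freeMemoryByUUID(uuid string) (uint64, error) {
        dev, ret := nvml.DeviceGetHandleByUUID(uuid)
        if ret != nvml.SUCCESS {
            return 0, fmt.Errorf("no NVML device for uuid %s: %v", uuid, ret)
        }
        mem, ret := dev.GetMemoryInfo()
        if ret != nvml.SUCCESS {
            return 0, fmt.Errorf("memory info: %v", ret)
        }
        return mem.Free, nil
    }

    func main() {
        if ret := nvml.Init(); ret != nvml.SUCCESS {
            log.Fatalf("nvml init: %v", ret)
        }
        defer nvml.Shutdown()

        // UUID as exported by the backend, e.g. "GPU-xxxxxxxx-...".
        free, err := freeMemoryByUUID("GPU-00000000-0000-0000-0000-000000000000")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("free: %d bytes\n", free)
    }
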
Parth Sareen
5ae2770e0d llama: add minimum memory for grammar (#10820) 2025-12-29 06:38:07 -06:00
Jesse Gross
b3de134eda ggml: Report graph memory for failed allocations
GGML has a function to report the allocated size of a backend buffer.
However, this returns 0 if we tried to allocate a buffer and it failed.
For memory management purposes, it's important to know how much we were
trying to allocate. This extends the API to report attempted sizes for
all buffers and whether each allocation succeeded.
2025-12-29 06:38:06 -06:00
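
The real change extends GGML's C API; the Go sketch below only illustrates the shape of the reporting, with invented names throughout: each buffer carries the size that was attempted even when allocation failed, so a failed graph still yields a usable total.

    package main

    import "fmt"

    // bufferAlloc records the outcome of one backend buffer
    // allocation: the size we attempted, and whether it succeeded.
    type bufferAlloc struct {
        size      uint64 // bytes requested, reported even on failure
        allocated bool
    }

    // totalNeeded sums attempted sizes across all buffers so a failed
    // graph still yields the figure that memory management must plan
    // for on the next attempt.
    func totalNeeded(buffers []bufferAlloc) (need uint64, ok bool) {
        ok = true
        for _, b := range buffers {
            need += b.size
            ok = ok && b.allocated
        }
        return need, ok
    }

    func main() {
        got, ok := totalNeeded([]bufferAlloc{
            {size: 512 << 20, allocated: true},
            {size: 768 << 20, allocated: false}, // failed, size still known
        })
        fmt.Println(got, ok) // 1342177280 false
    }
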
DarkCaster
3decfd28a8 llama: fix incorrect initialization of C.struct_common_sampler_cparams.penalty_present (#10779) 2025-12-29 06:38:03 -06:00
Michael Yang
af9708c72d model: handle multiple eos tokens (#10577)
* get eos_token_id from generation_config.json

* refactor

* include both ids and strings in trace

* comments

* remove special case for gemma3 special vocab (#10743)
2025-12-29 06:38:01 -06:00
Bruce MacDonald
558b0f5fe9 model: add Qwen2.5-VL support (#10385) 2025-12-29 06:37:59 -06:00
Michael Yang
4d12503049 chore: update mllama to use ollama engine (#10637) 2025-12-29 06:37:59 -06:00
Parth Sareen
d0ed25bde8 llama: fix memory leak for grammar (#10696) 2025-12-29 06:37:59 -06:00
Jeffrey Morgan
24118aa1db llama: fix defrag patch to defragment when no slots are available (#10695) 2025-12-29 06:37:59 -06:00
Jeffrey Morgan
3f2b7658af llama: fix crash on snowflake embedding model (#10690) 2025-12-29 06:37:58 -06:00
Jeffrey Morgan
9163ed39d1 llama: update to commit de4c07f93 (#10655) 2025-12-29 06:37:57 -06:00
frob
1791b68cc2 llama: allocate grammar buffer based on schema length (#10649) 2025-12-29 06:37:56 -06:00
Jeffrey Morgan
9ec2150629 api: remove unused sampling parameters (#10581) 2025-12-29 06:37:54 -06:00
Daniel Hiltgen
39ca55a1ba Move quantization to new backend (#10363)
* Move quantization logic to GGML via new backend

This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.

* Remove "add model quantizations"

This is no longer needed now that quantization is implemented in Go+GGML code directly.
2025-12-29 06:37:52 -06:00
Jeffrey Morgan
13c66584a5 api: remove unused or unsupported api options (#10574)
Some options listed in api/types.go are not supported in
newer models, or have been deprecated in the past. This is
the first of a series of PRs to clean up the API options.
2025-12-29 06:37:52 -06:00
Jeffrey Morgan
9a44e41802 all: fix cgo compiler warnings on windows (#10563) 2025-12-29 06:37:51 -06:00
Jesse Gross
cec8a9dee0 ollamarunner: Re-enable worst case graph preallocation.
Worst case graph preallocation was disabled by a27462b
"ollamarunner: Temporarily disable worst case graph preallocation"
since it caused crashes with large batches when not using the GPU.

This backports upstream llama.cpp commit f057808
"ggml: Don't assert fail when tensor data changes (#13222)", which
fixes the underlying bug and allows reverting the previous workaround.
2025-12-29 06:37:50 -06:00
Jeffrey Morgan
723fec1b25 llama: update to commit e1e8e099 (#10513) 2025-12-29 06:37:49 -06:00
Jeffrey Morgan
85d3f71c02 llama: update to commit 2016f07b (#10352) 2025-12-29 06:37:42 -06:00
Parth Sareen
7cf4c146bc llama: remove model loading for grammar (#10096) 2025-12-29 06:37:41 -06:00
Jeffrey Morgan
8c08f74532 ml: add missing cmake property and remove additional CMakeLists.txt (#10310) 2025-12-29 06:37:39 -06:00
Jeffrey Morgan
3824c0803b llama: update to commit 71e90e88 (#10192) 2025-12-29 06:37:39 -06:00
Jesse Gross
abb8f89af9 ggml: Free ggml_backend_buffer_t when releasing buffer
When ggml_backend_buffer_free() is called, the device memory
is released but not all backends consistently release the actual
ggml_backend_buffer_t in system RAM, causing a memory leak.

Bug #10040
2025-12-29 06:37:37 -06:00
Bruce MacDonald
6bd0a983cd model: support for mistral-small in the ollama runner
Mistral is a popular research lab making open-source models. This
updates the forward pass of llama-architecture models to support both
llama and mistral models by accounting for additional metadata present
in mistral models, and finding the correct dimensions for the output
projection.
2025-04-03 16:57:36 -07:00
Bruce MacDonald
66b2539238 runner: clear cache when shift is not possible (#9433)
Clear KV cache when shift operation is not supported by model.
Added KvCacheCanShift() check to handle models that can't perform cache shifts,
falling back to full cache clear while preserving logical token history to
maintain expected behavior when the context window fills up.
2025-03-31 12:54:45 -07:00
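
A sketch of the fallback around the KvCacheCanShift() check named above; the cache type and helper are invented for illustration:

    package main

    import "fmt"

    // kvCache is a stand-in for the runner's KV cache; CanShift
    // mirrors the KvCacheCanShift() check from the commit.
    type kvCache struct{ shiftable bool }

    func (c *kvCache) CanShift() bool { return c.shiftable }

    // makeRoom frees space when the context window fills up. Shiftable
    // caches would drop their oldest entries in place; for the rest we
    // clear the cache entirely but keep the logical token history so
    // the caller can re-process it and observe the same behavior.
    func makeRoom(c *kvCache, history []int, discard int) []int {
        kept := history[discard:]
        if !c.CanShift() {
            // full clear: device-side state is gone, but the logical
            // token history survives and will be re-processed
            fmt.Println("clearing cache;", len(kept), "tokens to re-process")
        }
        return kept
    }

    func main() {
        fmt.Println(makeRoom(&kvCache{shiftable: false}, []int{1, 2, 3, 4}, 2))
    }
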
saman-amd
ead27aa9fe Add gfx1200 & gfx1201 support on linux (#9878) 2025-03-27 07:35:19 -07:00
Patrick Devine
ef378ad673 gemma3 quantization (#9776) 2025-03-14 17:41:07 -07:00
Michael Yang
9e4642e9b3 ollama debug tensor 2025-03-11 14:49:19 -07:00
Jeffrey Morgan
e093db92c4 sample: temporarily use grammars for constrained generation in new engine (#9586) 2025-03-10 16:17:39 +01:00
Jeffrey Morgan
4289c74359 llama: fix kv loading on snowflake-arctic-embed models (#9536) 2025-03-07 09:25:34 -08:00
Michael Yang
05a01fdecb ml/backend/ggml: consolidate system info logging
- output backend system info when initializing the backend. this ensures
  this information is always present without needing to be called
  explicitly
- convert to structured logging
- enumerate devices rather than backends since devices are ordered
- track device indices grouped by device name
2025-03-04 15:14:31 -08:00
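
A minimal sketch of what the per-device structured logging could look like with log/slog; the field names are illustrative:

    package main

    import "log/slog"

    func main() {
        // Enumerate devices (not backends) since devices are ordered,
        // and track the index within each device-name group.
        devices := []string{"Metal", "CPU", "CPU"}
        seen := map[string]int{}
        for i, name := range devices {
            slog.Info("system", "device", name, "index", i, "name_index", seen[name])
            seen[name]++
        }
    }
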
Michael Yang
ba7d31240e fix: own lib/ollama directory
expand backend loading error handling to catch more problems and log
them instead of panicking
2025-03-03 13:01:18 -08:00
Michael Yang
657685e85d fix: replace deprecated functions 2025-02-28 21:29:34 +00:00
Jeffrey Morgan
98d44fa39d llama: add phi4 mini support (#9403) 2025-02-27 19:30:32 -08:00
Michael Yang
a59f665235 ml/backend/ggml: fix debug logging 2025-02-27 18:30:57 +00:00
Jeffrey Morgan
d7d7e99662 llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
Jeffrey Morgan
3ad4bc8afe llama: removed unused 'vendoring' file (#9351) 2025-02-25 14:33:03 -08:00
Jeffrey Morgan
8c13cfa4dd ml/backend/ggml: fix crash on windows paths with wide characters (#9305) 2025-02-23 19:13:53 -08:00
Michael Yang
bda4ef6c56 reorder patches 2025-02-20 03:49:24 +00:00
Michael Yang
1e438b237c Merge pull request #9203 from ollama/mxyng/sapphirerapids
build: remove backend build for sapphirerapids
2025-02-19 21:42:00 +00:00