Commit Graph

179 Commits

Author SHA1 Message Date
Michael Yang 801564fa8b
add new gemma model (#11204)
* update patches

* cherry pick metal mean kernel

* cherry pick cuda mean kernel

* gemma3n
2025-12-29 06:39:38 -06:00
Daniel Hiltgen 29ec3ddf9a
Re-remove cuda v11 (#10694)
* Re-remove cuda v11

Revert the revert - drop v11 support, which requires drivers newer than Feb 23

This reverts commit c6bcdc4223.

* Simplify layout

With only one version of the GPU libraries, we can simplify things somewhat. (Jetsons still require special handling.)

* distinct sbsa variant for linux arm64

This avoids accidentally trying to load the sbsa CUDA libraries on
a Jetson system, which results in crashes.

* temporarily prevent rocm+cuda mixed loading
2025-12-29 06:38:18 -06:00
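
The sbsa note in the commit above boils down to a runtime choice of which CUDA library directory to load on linux/arm64, so that Jetson (Tegra) systems never pick up the generic sbsa build. A minimal Go sketch of that kind of selection follows; the directory names and the /etc/nv_tegra_release marker check are illustrative assumptions, not Ollama's actual loader code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// pickCUDAVariant chooses which bundled CUDA library directory to load.
// On linux/arm64, a Jetson (Tegra) system must not load the generic sbsa
// build, so check for a Tegra marker file first. Directory names are
// illustrative only.
func pickCUDAVariant(libDir string) string {
	if runtime.GOOS == "linux" && runtime.GOARCH == "arm64" {
		if _, err := os.Stat("/etc/nv_tegra_release"); err == nil {
			return filepath.Join(libDir, "cuda_jetpack") // assumed Jetson variant
		}
		return filepath.Join(libDir, "cuda_sbsa") // assumed server-class arm64 variant
	}
	return filepath.Join(libDir, "cuda")
}

func main() {
	fmt.Println("loading CUDA libraries from:", pickCUDAVariant("/usr/lib/ollama"))
}
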
Jeffrey Morgan 5e3fb4744b
Revert "Revert "ggml: Export GPU UUIDs" (#11115)" (#11117)
Reverts PR #11115. The original change was mistakenly reverted instead of #10822.
2025-12-29 06:38:16 -06:00
Jeffrey Morgan c5237d9462
Revert "ggml: Export GPU UUIDs" (#11115)
This reverts commit aaa7818000.
2025-12-29 06:38:16 -06:00
Jesse Gross 0b9c6cb497
ggml: Export GPU UUIDs
This enables matching up the devices reported by the backend with the
information from system management libraries such as nvml, to get accurate
free memory reporting.
2025-12-29 06:38:10 -06:00
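
The UUID export in the commit above is what makes it possible to line up a backend device with the device a management library reports. A small Go sketch of that matching step, using hypothetical BackendDevice/ManagedDevice types; querying nvml itself is out of scope here.

package main

import "fmt"

// BackendDevice is a hypothetical view of what the ggml backend reports.
type BackendDevice struct {
	Name string
	UUID string
}

// ManagedDevice is a hypothetical view of what a system management library
// such as nvml reports (free memory in bytes).
type ManagedDevice struct {
	UUID    string
	FreeMem uint64
}

// matchFreeMemory pairs backend devices with management-library devices by
// UUID, so free-memory figures come from the authoritative source.
func matchFreeMemory(backend []BackendDevice, managed []ManagedDevice) map[string]uint64 {
	byUUID := make(map[string]uint64, len(managed))
	for _, m := range managed {
		byUUID[m.UUID] = m.FreeMem
	}
	free := make(map[string]uint64, len(backend))
	for _, d := range backend {
		if mem, ok := byUUID[d.UUID]; ok {
			free[d.Name] = mem
		}
	}
	return free
}

func main() {
	backend := []BackendDevice{{Name: "CUDA0", UUID: "GPU-1234"}}
	managed := []ManagedDevice{{UUID: "GPU-1234", FreeMem: 8 << 30}}
	fmt.Println(matchFreeMemory(backend, managed))
}
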
Parth Sareen 5ae2770e0d
llama: add minimum memory for grammar (#10820) 2025-12-29 06:38:07 -06:00
Jesse Gross b3de134eda
ggml: Report graph memory for failed allocations
GGML has a function to report the allocated size of a backend buffer.
However, this returns 0 if we tried to allocate a buffer and it failed.
For memory management purposes, it's important to know how much we were
trying to allocate. This extends the API to report attempted sizes for
all buffers and whether the allocation succeeded.
2025-12-29 06:38:06 -06:00
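
The point of the change above is that a failed buffer still carries the size that was requested. A minimal sketch of that bookkeeping on the Go side, with hypothetical names; the real API change is in GGML's C interface.

package main

import "fmt"

// BufferAlloc records one backend buffer allocation attempt. Keeping the
// requested size even on failure lets the memory manager know how much was
// needed, instead of seeing 0 for failed buffers. (Hypothetical type.)
type BufferAlloc struct {
	Name      string
	Requested uint64 // bytes we tried to allocate
	Allocated uint64 // bytes actually allocated (0 on failure)
	OK        bool
}

// totalRequired sums the attempted sizes and lists the buffers that failed.
func totalRequired(allocs []BufferAlloc) (total uint64, failed []string) {
	for _, a := range allocs {
		total += a.Requested
		if !a.OK {
			failed = append(failed, a.Name)
		}
	}
	return total, failed
}

func main() {
	allocs := []BufferAlloc{
		{Name: "weights", Requested: 4 << 30, Allocated: 4 << 30, OK: true},
		{Name: "compute graph", Requested: 1 << 30, OK: false},
	}
	total, failed := totalRequired(allocs)
	fmt.Printf("need %d bytes total; failed buffers: %v\n", total, failed)
}
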
DarkCaster 3decfd28a8
llama: fix incorrect initialization of C.struct_common_sampler_cparams.penalty_present (#10779) 2025-12-29 06:38:03 -06:00
Michael Yang af9708c72d
model: handle multiple eos tokens (#10577)
* get eos_token_id from generation_config.json

* refactor

* include both ids and strings in trace

* comments

* remove special case for gemma3 special vocab (#10743)
2025-12-29 06:38:01 -06:00
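
Reading eos_token_id from generation_config.json has to cope with the field being either a single integer or a list, which is the "multiple eos tokens" case above. A hedged sketch of parsing both shapes; this is an assumption about the file format rather than a copy of Ollama's converter.

package main

import (
	"encoding/json"
	"fmt"
)

// eosTokenIDs parses the eos_token_id field from generation_config.json,
// which may be a single integer or a list of integers.
func eosTokenIDs(raw json.RawMessage) ([]int32, error) {
	var one int32
	if err := json.Unmarshal(raw, &one); err == nil {
		return []int32{one}, nil
	}
	var many []int32
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	return nil, fmt.Errorf("unsupported eos_token_id format: %s", raw)
}

func main() {
	for _, doc := range []string{
		`{"eos_token_id": 2}`,
		`{"eos_token_id": [1, 106]}`,
	} {
		var cfg struct {
			EOSTokenID json.RawMessage `json:"eos_token_id"`
		}
		if err := json.Unmarshal([]byte(doc), &cfg); err != nil {
			panic(err)
		}
		ids, err := eosTokenIDs(cfg.EOSTokenID)
		fmt.Println(ids, err)
	}
}
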
Bruce MacDonald 558b0f5fe9
model: add Qwen2.5-VL support (#10385) 2025-12-29 06:37:59 -06:00
Michael Yang 4d12503049
chore: update mllama to use ollama engine (#10637) 2025-12-29 06:37:59 -06:00
Parth Sareen d0ed25bde8
llama: fix memory leak for grammar (#10696) 2025-12-29 06:37:59 -06:00
Jeffrey Morgan 24118aa1db
llama: fix defrag patch to defragment when no slots are available (#10695) 2025-12-29 06:37:59 -06:00
Jeffrey Morgan 3f2b7658af
llama: fix crash on snowflake embedding model (#10690) 2025-12-29 06:37:58 -06:00
Jeffrey Morgan 9163ed39d1
llama: update to commit de4c07f93 (#10655) 2025-12-29 06:37:57 -06:00
frob 1791b68cc2
llama: allocate grammar buffer based on schema length (#10649) 2025-12-29 06:37:56 -06:00
Jeffrey Morgan 9ec2150629
api: remove unused sampling parameters (#10581) 2025-12-29 06:37:54 -06:00
Daniel Hiltgen 39ca55a1ba
Move quantization to new backend (#10363)
* Move quantization logic to GGML via new backend

This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.

* Remove "add model quantizations"

This is no longer needed now that quantization is implemented in Go+GGML code directly.
2025-12-29 06:37:52 -06:00
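
The move above separates the model-aware part of quantization (which tensors get which type) from the numeric conversion, which stays in GGML. A rough Go sketch of the decision step only; the rules and tensor names here are illustrative assumptions, and the actual conversion call into GGML via cgo is not shown.

package main

import (
	"fmt"
	"strings"
)

// TensorInfo is a hypothetical view of one tensor in a GGUF model.
type TensorInfo struct {
	Name string
	Dims []uint64
}

// targetType applies the model-aware part of quantization in Go: some
// tensors stay in higher precision while most weights get the requested
// quantized type. The rules here are illustrative, not Ollama's exact ones.
func targetType(t TensorInfo, requested string) string {
	switch {
	case strings.HasSuffix(t.Name, "_norm.weight"):
		return "F32" // norms are tiny and precision-sensitive
	case t.Name == "token_embd.weight" || t.Name == "output.weight":
		return "Q6_K" // embeddings/output often kept at a higher-quality type
	case len(t.Dims) < 2:
		return "F32" // 1D tensors are not quantized
	default:
		return requested // e.g. Q4_K_M; the conversion itself is done by GGML
	}
}

func main() {
	tensors := []TensorInfo{
		{Name: "blk.0.attn_q.weight", Dims: []uint64{4096, 4096}},
		{Name: "blk.0.attn_norm.weight", Dims: []uint64{4096}},
		{Name: "output.weight", Dims: []uint64{4096, 32000}},
	}
	for _, t := range tensors {
		fmt.Printf("%-24s -> %s\n", t.Name, targetType(t, "Q4_K_M"))
	}
}
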
Jeffrey Morgan 13c66584a5
api: remove unused or unsupported api options (#10574)
Some options listed in api/types.go are not supported in
newer models, or have been deprecated in the past. This is
the first of a series of PRs to clean up the API options.
2025-12-29 06:37:52 -06:00
Jeffrey Morgan 9a44e41802
all: fix cgo compiler warnings on windows (#10563) 2025-12-29 06:37:51 -06:00
Jesse Gross cec8a9dee0
ollamarunner: Re-enable worst case graph preallocation.
Worst case graph preallocation was disabled by a27462b
"ollamarunner: Temporarily disable worst case graph preallocation"
since it caused crashes with large batches when not using the GPU.

This backports upstream llama.cpp commit f057808
"ggml: Don't assert fail when tensor data changes (#13222)", which
fixes the underlying bug and allows reverting the previous workaround.
2025-12-29 06:37:50 -06:00
Jeffrey Morgan 723fec1b25
llama: update to commit e1e8e099 (#10513) 2025-12-29 06:37:49 -06:00
Jeffrey Morgan 85d3f71c02
llama: update to commit 2016f07b (#10352) 2025-12-29 06:37:42 -06:00
Parth Sareen 7cf4c146bc
llama: remove model loading for grammar (#10096) 2025-12-29 06:37:41 -06:00
Jeffrey Morgan 8c08f74532
ml: add missing cmake property and remove additional CMakeLists.txt (#10310) 2025-12-29 06:37:39 -06:00
Jeffrey Morgan 3824c0803b
llama: update to commit 71e90e88 (#10192) 2025-12-29 06:37:39 -06:00
Jesse Gross abb8f89af9
ggml: Free ggml_backend_buffer_t when releasing buffer
When ggml_backend_buffer_free() is called, the device memory
is released, but not all backends consistently release the actual
ggml_backend_buffer_t in system RAM, causing a memory leak.

Bug #10040
2025-12-29 06:37:37 -06:00
Bruce MacDonald 6bd0a983cd model: support for mistral-small in the ollama runner
Mistral is a popular research lab making open source models. This updates
the forward pass of llama-architecture models to support both llama and
mistral models by accounting for additional metadata present in mistral
models and finding the correct dimensions for the output projection.
2025-04-03 16:57:36 -07:00
Bruce MacDonald 66b2539238
runner: clear cache when shift is not possible (#9433)
Clear the KV cache when the shift operation is not supported by the model.
Added a KvCacheCanShift() check to handle models that can't perform cache
shifts, falling back to a full cache clear while preserving the logical token
history to maintain expected behavior when the context window fills up.
2025-03-31 12:54:45 -07:00
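
The fallback above is a check-then-degrade pattern: if the model's cache can't shift, clear it completely but keep the logical token history so behavior matches a normal shift. A sketch of that control flow against a hypothetical cache interface; only the KvCacheCanShift idea comes from the commit, everything else is assumed.

package main

import "fmt"

// Cache is a hypothetical KV cache; CanShift mirrors the KvCacheCanShift()
// check mentioned in the commit above.
type Cache interface {
	CanShift() bool
	Shift(discard int)
	Clear()
}

type runner struct {
	cache  Cache
	tokens []int32 // logical token history, preserved even across a full clear
}

// makeRoom frees space for new tokens once the context window is full.
func (r *runner) makeRoom(discard int) {
	if r.cache.CanShift() {
		r.cache.Shift(discard)
	} else {
		// The model can't shift its cache: clear it entirely. The logical
		// history below is still trimmed the same way, so the caller sees
		// identical behavior and can re-process the remaining tokens.
		r.cache.Clear()
	}
	r.tokens = r.tokens[discard:]
}

type fakeCache struct{ shiftable bool }

func (c fakeCache) CanShift() bool    { return c.shiftable }
func (c fakeCache) Shift(discard int) { fmt.Println("shifted out", discard, "tokens") }
func (c fakeCache) Clear()            { fmt.Println("cleared cache") }

func main() {
	r := &runner{cache: fakeCache{shiftable: false}, tokens: make([]int32, 8)}
	r.makeRoom(4)
	fmt.Println("remaining logical history:", len(r.tokens))
}
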
saman-amd ead27aa9fe
Add gfx1200 & gfx1201 support on linux (#9878) 2025-03-27 07:35:19 -07:00
Patrick Devine ef378ad673
gemma3 quantization (#9776) 2025-03-14 17:41:07 -07:00
Michael Yang 9e4642e9b3 ollama debug tensor 2025-03-11 14:49:19 -07:00
Jeffrey Morgan e093db92c4
sample: temporarily use grammars for constrained generation in new engine (#9586) 2025-03-10 16:17:39 +01:00
Jeffrey Morgan 4289c74359
llama: fix kv loading on snowflake-arctic-embed models (#9536) 2025-03-07 09:25:34 -08:00
Michael Yang 05a01fdecb ml/backend/ggml: consolidate system info logging
- output backend system info when initializing the backend. this ensures
  this information is always present without needing to be called
  explicitly
- convert to structured logging
- enumerate devices rather than backends since devices are ordered
- track device indices grouped by device name
2025-03-04 15:14:31 -08:00
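
The consolidation above enumerates devices in order and logs them with structured fields, tracking per-name indices so identical GPUs stay distinguishable. A small log/slog sketch of that grouping; the Device type and attribute names are assumptions.

package main

import "log/slog"

// Device is a hypothetical view of one compute device reported by the backend.
type Device struct {
	Name        string // e.g. "NVIDIA GeForce RTX 4090"
	Description string
}

// logDevices enumerates devices in order and tracks per-name indices, so two
// GPUs with the same name get indices 0 and 1 under that name.
func logDevices(devices []Device) {
	indexByName := make(map[string]int)
	for i, d := range devices {
		idx := indexByName[d.Name]
		indexByName[d.Name]++
		slog.Info("system",
			slog.Int("device", i),
			slog.String("name", d.Name),
			slog.Int("index_for_name", idx),
			slog.String("description", d.Description),
		)
	}
}

func main() {
	logDevices([]Device{
		{Name: "NVIDIA GeForce RTX 4090", Description: "CUDA0"},
		{Name: "NVIDIA GeForce RTX 4090", Description: "CUDA1"},
	})
}
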
Michael Yang ba7d31240e fix: own lib/ollama directory
expand backend loading error handling to catch more problems and log
them instead of panicking
2025-03-03 13:01:18 -08:00
Michael Yang 657685e85d fix: replace deprecated functions 2025-02-28 21:29:34 +00:00
Jeffrey Morgan 98d44fa39d
llama: add phi4 mini support (#9403) 2025-02-27 19:30:32 -08:00
Michael Yang a59f665235 ml/backend/ggml: fix debug logging 2025-02-27 18:30:57 +00:00
Jeffrey Morgan d7d7e99662
llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) 2025-02-26 20:34:44 -08:00
Jeffrey Morgan 3ad4bc8afe
llama: removed unused 'vendoring' file (#9351) 2025-02-25 14:33:03 -08:00
Jeffrey Morgan 8c13cfa4dd
ml/backend/ggml: fix crash on windows paths with wide characters (#9305) 2025-02-23 19:13:53 -08:00
Michael Yang bda4ef6c56 reorder patches 2025-02-20 03:49:24 +00:00
Michael Yang 1e438b237c
Merge pull request #9203 from ollama/mxyng/sapphirerapids
build: remove backend build for sapphirerapids
2025-02-19 21:42:00 +00:00
Jeffrey Morgan d2eb226c91
llama: add patch to fix ggml backend reg on Linux with utf-8 characters in the path (#9159) 2025-02-18 22:46:17 -05:00
Michael Yang 5f8c03189e build: remove backend build for sapphirerapids
sapphire rapids has amx support but it ends up having a negative
performance impact.

emerald rapids also has amx support with a positive performance impact;
however, there's no reasonable way in ggml to differentiate between the
two. the impact is small (~6%), so disable amx entirely for simplicity
2025-02-18 14:47:58 -08:00
Jeffrey Morgan 6600bd7d91
ml/backend/ggml: stable sort devices by score (#9081) 2025-02-13 18:42:36 -08:00
Jesse Gross ed443a0393 Runner for Ollama engine
This provides integration with the new Ollama engine
(5824541 next ollama runner (#7913)) and the rest of the Ollama
infrastructure such as the runner and Ollama server.

In addition, it builds out the KV cache infrastructure to support
requirements of how Ollama runs models, such as:
 - Parallel processing
 - Memory management for defragmentation and shifting
 - Multi-modal models

Both old and new engines continue to be supported. By default, only
the old engine is used. To enable the new engine:

Start the server with the OLLAMA_NEW_ENGINE environment variable set:
OLLAMA_NEW_ENGINE=1 ./ollama serve

Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
./ollama run jessegross/llama3.1
2025-02-13 17:09:26 -08:00
Michael Yang 49df03da9a
fix: harden backend loading (#9024)
* wrap ggml_backend_load_best in try/catch
* ignore non-ollama paths
2025-02-11 15:36:53 -08:00
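
The "ignore non-ollama paths" half of the hardening above amounts to filtering candidate backend libraries down to Ollama's own lib directory before anything is loaded; the try/catch around ggml_backend_load_best lives on the C++ side and isn't shown. A Go sketch of the path filter, with an assumed directory layout.

package main

import (
	"fmt"
	"path/filepath"
)

// filterOllamaPaths keeps only candidate backend libraries that sit directly
// inside Ollama's own lib directory, so stray GGML builds elsewhere on the
// system are never loaded. (Illustrative; not the actual loader code.)
func filterOllamaPaths(libDir string, candidates []string) []string {
	root := filepath.Clean(libDir)
	var keep []string
	for _, c := range candidates {
		if filepath.Dir(filepath.Clean(c)) == root {
			keep = append(keep, c)
		}
	}
	return keep
}

func main() {
	candidates := []string{
		"/usr/lib/ollama/libggml-cuda.so",
		"/opt/other/libggml-cpu.so",
	}
	fmt.Println(filterOllamaPaths("/usr/lib/ollama", candidates))
}
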
Jeffrey Morgan f4711da7bd
ml/backend/ggml: fix crash on dlopen for non-AVX systems (#8976) 2025-02-10 09:52:12 -08:00