Commit Graph

4303 Commits

Author SHA1 Message Date
frob 56765df3ee
docs: remove unsupported quantizations (#10842) 2025-12-29 06:38:07 -06:00
frob 4fed7101b7
server: add hint to the error message when model path access fails (#10843) 2025-12-29 06:38:07 -06:00
Jesse Gross f34f58bbb2
ml: Improve slog formatting for BackendMemory 2025-12-29 06:38:07 -06:00
Parth Sareen 8cd2b6478e
tools: refactor tool call parsing and enable streaming (#10415) 2025-12-29 06:38:07 -06:00
Parth Sareen 5ae2770e0d
llama: add minimum memory for grammar (#10820) 2025-12-29 06:38:07 -06:00
Jesse Gross d1ed4b17ef
ml: Panic rather than return error on tensor allocation failure
FromFloatSlice and FromIntSlice return an error if the shape doesn't
match the passed data or if memory can't be allocated. Since these
are inputs, the memory being allocated is system memory rather than VRAM.

In many cases, the caller can't really handle the error and panics.

Empty and Zeros directly panic if they can't allocate memory.

This makes things consistent by panicking for the first two cases,
removing a fair amount of error handling code. This is also consistent
with how Go typically handles these situations.
2025-12-29 06:38:06 -06:00
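A minimal Go sketch of the panic-on-misuse pattern this commit describes; the Tensor type and the FromFloatSlice signature below are illustrative assumptions, not the actual ml package API:

```go
// Hypothetical sketch: an input tensor constructor that panics on a shape
// mismatch instead of returning an error, since callers of input
// constructors typically cannot recover from it anyway.
package main

import "fmt"

type Tensor struct {
	data  []float32
	shape []int
}

// FromFloatSlice panics if the data length does not match the product of the
// requested shape, consistent with how Go treats unrecoverable misuse.
func FromFloatSlice(data []float32, shape ...int) Tensor {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(data) {
		panic(fmt.Sprintf("FromFloatSlice: shape %v requires %d elements, got %d", shape, n, len(data)))
	}
	return Tensor{data: data, shape: shape}
}

func main() {
	t := FromFloatSlice([]float32{1, 2, 3, 4, 5, 6}, 2, 3)
	fmt.Println(t.shape) // [2 3]
}
```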
Jesse Gross 6e68feda00
ollamarunner: Memory usage reporting
This provides granular information about the backend memory allocations
required by the runner:
 - Per backend
 - Per layer
 - Weights, cache and graph
 - Allocation status

This can be used for debugging and validating memory estimates.
2025-12-29 06:38:06 -06:00
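A minimal Go sketch of the kind of granular report described above; the BackendMemory, LayerMemory, and Allocation types are hypothetical stand-ins for the runner's actual structures:

```go
// Hypothetical sketch: backend memory accounting broken down per backend,
// per layer, and into weights/cache/graph, with an allocation status flag,
// logged via slog for debugging memory estimates.
package main

import "log/slog"

type Allocation struct {
	Size      uint64 // bytes requested
	Allocated bool   // whether the allocation succeeded
}

type LayerMemory struct {
	Weights Allocation
	Cache   Allocation
}

type BackendMemory struct {
	Name   string
	Layers []LayerMemory
	Graph  Allocation
}

func main() {
	mem := BackendMemory{
		Name: "CUDA0",
		Layers: []LayerMemory{
			{
				Weights: Allocation{Size: 512 << 20, Allocated: true},
				Cache:   Allocation{Size: 64 << 20, Allocated: true},
			},
		},
		Graph: Allocation{Size: 128 << 20, Allocated: false},
	}
	slog.Info("backend memory", "backend", mem.Name,
		"graph_bytes", mem.Graph.Size, "graph_ok", mem.Graph.Allocated,
		"layers", len(mem.Layers))
}
```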
Jesse Gross b3de134eda
ggml: Report graph memory for failed allocations
GGML has a function to report the allocated size of a backend buffer.
However, this returns 0 if we tried to allocate a buffer and it failed.
For memory management purposes, it's important to know how much we were
trying to allocate. This extends the API to report attempted sizes for
all buffers and whether the allocation succeeded.
2025-12-29 06:38:06 -06:00
Daniel Hiltgen 99880e7254
sched: fix runner leak during reloading unload (#10819)
When the same model is being reloaded rapidly with client connections
being canceled before the model finishes loading, the queued unload
event could cause a leak of runners by deleting a different runner from
the loaded list.
2025-12-29 06:38:06 -06:00
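A minimal Go sketch of the guard that prevents this kind of leak; the scheduler and runner types here are hypothetical, not Ollama's sched package:

```go
// Hypothetical sketch: only remove a runner from the loaded map if it is
// still the same runner the queued unload event refers to, so a rapid
// reload cannot delete its replacement.
package main

import (
	"fmt"
	"sync"
)

type runner struct{ id int }

type scheduler struct {
	mu     sync.Mutex
	loaded map[string]*runner
}

func (s *scheduler) unload(model string, r *runner) {
	s.mu.Lock()
	defer s.mu.Unlock()
	// Guard: the entry may already point at a newer runner for the same model.
	if cur, ok := s.loaded[model]; ok && cur == r {
		delete(s.loaded, model)
	}
}

func main() {
	old, current := &runner{id: 1}, &runner{id: 2}
	s := &scheduler{loaded: map[string]*runner{"llama3": current}}
	s.unload("llama3", old)    // stale unload event: no-op
	fmt.Println(len(s.loaded)) // 1, the new runner is kept
}
```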
Michael Yang df4b146c49
fix: mllama quality (#10807)
* fix mllama convert

- transform attn_gate and ffn_gate
- swap attention heads for vision models

* fix mllama

the mlp gate was applied in the wrong place
2025-12-29 06:38:05 -06:00
Bruce MacDonald d25bde723c
server: improve tensor quantization fallback logic (#10806)
Fall back to alternative quantization types when a tensor's dimensions aren't divisible by the block size required for the original desired quantization type. If the retried quantization types also fail, the system ultimately falls back to F16 (half-precision floating point), which has a block size of 1 and can handle any tensor dimension.
2025-12-29 06:38:05 -06:00
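A minimal Go sketch of the fallback rule, under the stated assumption that a quantization type only applies when its block size divides the tensor's row length; the type names and block sizes here are illustrative:

```go
// Hypothetical sketch: pick the first candidate quantization whose block
// size divides the tensor's row length, falling back to F16 (block size 1),
// which can handle any dimension.
package main

import "fmt"

type quantType struct {
	name      string
	blockSize int
}

var f16 = quantType{name: "F16", blockSize: 1}

func pickQuantType(rowLen int, preferred []quantType) quantType {
	for _, q := range preferred {
		if rowLen%q.blockSize == 0 {
			return q
		}
	}
	return f16 // block size 1 handles any tensor dimension
}

func main() {
	candidates := []quantType{{"Q4_K", 256}, {"Q4_0", 32}}
	fmt.Println(pickQuantType(4096, candidates).name) // Q4_K
	fmt.Println(pickQuantType(100, candidates).name)  // F16
}
```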
Daniel Hiltgen 1dbe9ba784
integration: add qwen2.5-vl (#10815)
Replace the older llava model with qwen2.5 for vision tests
Skip split-batch test on small VRAM systems to avoid excessive test time
2025-12-29 06:38:05 -06:00
Michael Yang 197db4eccd
remove support for multiple ggufs in a single file (#10722)
* remove support for multiple ggufs in a single file

this was an attempt to make it easier to import multimodal models into
ollama. this was rarely used and error-prone, so remove it

* fix: create fused model from blob
2025-12-29 06:38:05 -06:00
Daniel Hiltgen bf0fbfeb0e
win: detect background upgrade in progress (#10785)
Give the user a helpful error instead of showing
connection refused errors.
2025-12-29 06:38:05 -06:00
Michael Yang dc8ee7636b
feat: port qwen2 model (#10782) 2025-12-29 06:38:04 -06:00
Michael Yang 9215b190fa
feat: qwen3 dense and sparse models (#10708)
* feat: qwen3 dense
* feat: qwen3moe
* fix llama4 moe
2025-12-29 06:38:04 -06:00
Michael Yang 7f3e4d6f06
fix cmakelists (#10804)
this fixes an issue introduced in #10788
2025-12-29 06:38:04 -06:00
Michael Yang 02fd383448
chore: disable debug in binary libraries (#10788) 2025-12-29 06:38:04 -06:00
Michael Yang 9213339549
fix: qwen25vl assign samebatch in multimodal input (#10789)
setting samebatch on the vision start token is problematic because it
will be shared with other inputs that also use images. this will cause
the input to be cached and the runner will not see SameBatch. SameBatch
will also be incorrect since it may be for a different image.

assigning samebatch to the input tokens resolves this by ensuring it's
assigned correctly to inputs corresponding to the image.

not setting samebatch correctly may cause panics during inference since
images are no longer guaranteed to be in the same batch.
2025-12-29 06:38:03 -06:00
Michael Yang 20dcadf7e8
ml: add more rope options (#10775) 2025-12-29 06:38:03 -06:00
DarkCaster 3decfd28a8
llama: fix incorrect initialization of C.struct_common_sampler_cparams.penalty_present (#10779) 2025-12-29 06:38:03 -06:00
Michael Yang 20a612834f
fix llama and mistral3 models (#10774)
* fix llama model

* fix mistral3.1 model

do not set default vision layers
2025-12-29 06:38:03 -06:00
Jesse Gross dba546a24a
llm: Use first layer as memory buffer in estimation
This is a partial revert of 0478d44 "Fixed over VRAM allocation due to
small initial layer sizes."

Previously we used the size of the first layer as an extra reserved
amount of space to buffer our memory estimates. The above commit
changed this to use the largest layer. However, this had performance
impacts on more models than the original commit was trying to fix.

This is just a heuristic without an ideal solution, so this goes back
to the historic behavior.

Fixes: #10765, #10756, #10752, #10726
2025-12-29 06:38:03 -06:00
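A minimal Go sketch of the restored heuristic; the helper below is hypothetical, not the actual estimator, and simply reserves the first layer's size rather than the largest layer's:

```go
// Hypothetical sketch: buffer the memory estimate by the size of the first
// layer (the historic behavior), instead of the largest layer.
package main

import "fmt"

func estimateReserve(layerSizes []uint64) uint64 {
	if len(layerSizes) == 0 {
		return 0
	}
	return layerSizes[0] // first layer, not the maximum
}

func main() {
	layers := []uint64{300 << 20, 450 << 20, 450 << 20}
	fmt.Printf("reserve %d MiB\n", estimateReserve(layers)>>20) // 300 MiB
}
```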
Daniel Hiltgen f7a5f0da58
avoid kv truncation during create (#10761) 2025-12-29 06:38:02 -06:00
Jesse Gross 7b9ab4cb32
ggml: Separate tensor load from backend creation
Currently, when the backend is created, the tensors are loaded at the
same time, which is a slow operation. This separates them to be two
steps:
 - Create backend, including enumerating tensors and memory allocation
 - Loading tensor data

This allows more flexibility in managing model loading.
2025-12-29 06:38:02 -06:00
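A minimal Go sketch of the two-step flow; the Backend type and its methods are illustrative, not the real ggml bindings:

```go
// Hypothetical sketch: creating the backend enumerates tensors and allocates
// their buffers, while copying tensor data happens in a separate, slower step.
package main

import "fmt"

type Backend struct {
	tensors map[string][]byte // allocated but not yet populated
}

// NewBackend is fast: it only sets up tensor metadata and allocations.
func NewBackend(names []string, sizes []int) *Backend {
	b := &Backend{tensors: make(map[string][]byte)}
	for i, name := range names {
		b.tensors[name] = make([]byte, sizes[i])
	}
	return b
}

// LoadTensors is the slow step: stream data into the allocated buffers.
func (b *Backend) LoadTensors(read func(name string, dst []byte) error) error {
	for name, dst := range b.tensors {
		if err := read(name, dst); err != nil {
			return fmt.Errorf("load %s: %w", name, err)
		}
	}
	return nil
}

func main() {
	b := NewBackend([]string{"blk.0.attn_q.weight"}, []int{16})
	_ = b.LoadTensors(func(name string, dst []byte) error {
		copy(dst, make([]byte, len(dst))) // stand-in for reading from the model file
		return nil
	})
	fmt.Println("loaded", len(b.tensors), "tensor(s)")
}
```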
Jesse Gross 07030ffa59
llm: Estimate projector memory correctly for Ollama engine
The Llama engine always places vision projectors on the first GPU
if one exists. However, the Ollama engine groups it with the output
layer, which means the projector is only offloaded if all other layers
are offloaded. The memory estimation code always assumes the former
layout - this changes it to use the correct layout based on the engine.

This addresses two impacts of the current behavior:
 - In multi-GPU setups, we can crash with OOM errors when we try to
   allocate memory on a full GPU while another still has space.
 - If the vision projector is large, it may prevent us from offloading
   anything when we could have fit some of the text layers.
2025-12-29 06:38:02 -06:00
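A minimal Go sketch of the two layouts the estimator has to distinguish; the engine constants and placement rule below are illustrative assumptions, not the actual scheduler code:

```go
// Hypothetical sketch: the llama engine places the vision projector on the
// first GPU, while the Ollama engine groups it with the output layer, so it
// is only offloaded when every other layer is.
package main

import "fmt"

type engine int

const (
	llamaEngine engine = iota
	ollamaEngine
)

// projectorGPU returns which GPU (or -1 for CPU) the projector is expected
// to land on, given how many of the model's layers were offloaded.
func projectorGPU(e engine, offloadedLayers, totalLayers, gpus int) int {
	switch e {
	case llamaEngine:
		if gpus > 0 {
			return 0 // always the first GPU
		}
	case ollamaEngine:
		if offloadedLayers == totalLayers && gpus > 0 {
			return gpus - 1 // assumed to sit with the output layer on the last GPU
		}
	}
	return -1 // stays on the CPU
}

func main() {
	fmt.Println(projectorGPU(llamaEngine, 10, 32, 2))  // 0
	fmt.Println(projectorGPU(ollamaEngine, 10, 32, 2)) // -1: not fully offloaded
}
```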
Jesse Gross a9beff33f8
llm: Consistently track unassigned model data
In some cases, if we fail to assign a piece of the model to a GPU then
we lose track of this data. Although it doesn't change the memory
allocation, it does affect the total size of the model reported by
tools such as ollama ps (and also the percent offloaded).

This can make it look like setting num_gpu isn't reflected in ollama ps.
That isn't true, but the offloading percentage may appear not to change.

Spreading the model across more GPUs will continue to impact the
reported total size of the model.
2025-12-29 06:38:02 -06:00
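A minimal Go sketch of the accounting fix with hypothetical types: unassigned data still counts toward the model's total size, so the reported offloaded percentage stays meaningful:

```go
// Hypothetical sketch: track GPU, CPU, and unassigned bytes separately so
// the total size and percent offloaded reported by tools like "ollama ps"
// remain consistent.
package main

import "fmt"

type placement struct {
	gpuBytes        uint64
	cpuBytes        uint64
	unassignedBytes uint64 // previously dropped, which skewed the totals
}

func (p placement) total() uint64 { return p.gpuBytes + p.cpuBytes + p.unassignedBytes }

func (p placement) percentOffloaded() float64 {
	if p.total() == 0 {
		return 0
	}
	return 100 * float64(p.gpuBytes) / float64(p.total())
}

func main() {
	p := placement{gpuBytes: 6 << 30, cpuBytes: 1 << 30, unassignedBytes: 1 << 30}
	fmt.Printf("total=%d GiB offloaded=%.0f%%\n", p.total()>>30, p.percentOffloaded())
}
```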
Ronald Wilson b84eda2b82
readme: add TinyNotepad to community integrations (#10763)
This PR adds Tiny Notepad, a lightweight, notepad-like interface to chat with local LLMs via Ollama. 

- It’s designed as a simple, distraction-free alternative. 
- The app supports basic note-taking, timestamped logs, and model parameter controls. 
- Built with Tkinter, it runs entirely offline and is available via PyPI.

Aims to provide a lightweight, easy-to-run and easy-to-install interface for Ollama.
2025-12-29 06:38:02 -06:00
Michael Yang af9708c72d
model: handle multiple eos tokens (#10577)
* get eos_token_id from generation_config.json

* refactor

* include both ids and strings in trace

* comments

* remove special case for gemma3 special vocab (#10743)
2025-12-29 06:38:01 -06:00
Daniel Hiltgen 48a1fc0830
Fix lingering Q4_0 help reference (#10720) 2025-12-29 06:38:01 -06:00
Bruce MacDonald 88114310e6
cmd: add ellipses to truncated show metadata (#10717)
When a piece of information has been truncated in the show output, an ellipsis is added to indicate that more data has not been displayed
2025-12-29 06:38:01 -06:00
Jesse Gross cdae35b52a
ollamarunner: Multi-modal worst case graph
We currently preallocate compute graph memory for the worst case
batch of text tokens. This adds support for doing the same for
images.

Note that image models are more complicated than text models in
how they process their inputs so there may be cases where this
approach isn't completely generic for all models. It covers all
currently supported models though.
2025-12-29 06:38:01 -06:00
Jesse Gross e54f602a15
ollamarunner: Separate text and multimodal graphs
For some multimodal models (such as gemma3), we create a single
graph that generates the image embedding and then use this in the
text model. The embedding tensor is completely opaque to the runner.

However, this doesn't work if we need to use the embedding in multiple
batches. This can arise if the embedding is larger than the batch size.
In these cases (as with llama4), we would like to create views that
are more appropriately sized. However, if we do this then the original
source tensor is used in multiple graphs, which isn't allowed. To
avoid that problem, models with this pattern compute the embedding
tensor on first use and recreate the individual views. There is no
longer a single vision and text graph.

This codifies the pattern of separating vision and text graphs. The
logic of computing tensors on demand is moved to the runner, so models
no longer have to worry about this. It also gives the runner visibility
into the multimodal tensors, which is important for memory management.
2025-12-29 06:38:01 -06:00
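A minimal Go sketch of the compute-on-first-use pattern moved into the runner; the types are hypothetical stand-ins, not the actual ollamarunner code:

```go
// Hypothetical sketch: compute a multimodal embedding once, on first use,
// and reuse it across batches so the vision and text graphs stay separate.
package main

import "fmt"

type tensor struct{ data []float32 }

type multimodalInput struct {
	compute   func() tensor // runs the vision graph
	embedding *tensor       // filled in lazily by the runner
}

// embed returns the cached embedding, computing it on first use.
func (m *multimodalInput) embed() tensor {
	if m.embedding == nil {
		t := m.compute()
		m.embedding = &t
	}
	return *m.embedding
}

func main() {
	calls := 0
	in := &multimodalInput{compute: func() tensor {
		calls++
		return tensor{data: make([]float32, 8)}
	}}
	_ = in.embed()
	_ = in.embed()
	fmt.Println("vision graph ran", calls, "time(s)") // 1
}
```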
Jesse Gross 8c75fb33d1
ollamarunner: Base cached tokens on current prompt
When we restore a sequence from the cache, we split the prompt into
the already used tokens (stored in the cache) and new tokens that
need to be processed. Currently, the references to the used tokens
are coming from the stored previous sequence.

However, even though we know that the used tokens are semantically
equivalent to the prefix of the prompt, tokens can contain pointers
which are no longer valid. As a result, it is better to get the
used tokens from the prompt, which has currently valid pointers.

This doesn't currently have any impact because it isn't possible
to reuse the pointers (which are tensors) anyway. However, it
becomes an issue once we can.
2025-12-29 06:38:00 -06:00
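A minimal Go sketch of the idea with hypothetical types: the used tokens come from the prefix of the current prompt, whose pointers are valid for this request, rather than from the stored sequence:

```go
// Hypothetical sketch: split the current prompt into already cached tokens
// and tokens still to be processed.
package main

import "fmt"

type token struct {
	id int
	// in the real runner a token may carry per-request data (e.g. tensors)
}

func splitPrompt(prompt []token, numCached int) (used, remaining []token) {
	if numCached > len(prompt) {
		numCached = len(prompt)
	}
	return prompt[:numCached], prompt[numCached:]
}

func main() {
	prompt := []token{{id: 1}, {id: 2}, {id: 3}, {id: 4}, {id: 5}}
	used, rest := splitPrompt(prompt, 3)
	fmt.Println(len(used), len(rest)) // 3 2
}
```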
Michael Yang 4e77815773
fix pixel values padding (#10718)
* panic if trying to pad 4d

* fix pixel values padding
2025-12-29 06:38:00 -06:00
Michael Yang d507f23b0d
fix mllama conversion (#10716)
cross attention Q and K projections need to have their heads swapped, similar to non-cross-attention Q and K tensors
2025-12-29 06:38:00 -06:00
Bruce MacDonald c38d583c99
ggml: update qwen25vl vision size estimate (#10711) 2025-12-29 06:38:00 -06:00
Daniel Hiltgen a017e78f35
fix crash in old clients with quantization progress (#10710)
Older clients assumed the digest was at least 19 characters long, so increase the size
of the dummy digest to avoid array out of bounds crashes.
2025-12-29 06:38:00 -06:00
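A minimal Go sketch of the workaround described above, assuming the placeholder digest simply needs to be at least 19 characters; the helper name is hypothetical:

```go
// Hypothetical sketch: pad the dummy digest used in quantization progress
// messages so older clients that slice the string do not index out of bounds.
package main

import "fmt"

func dummyDigest(label string) string {
	const minLen = 19 // older clients assume at least this many characters
	d := "sha256:" + label
	for len(d) < minLen {
		d += "0"
	}
	return d
}

func main() {
	d := dummyDigest("quant")
	fmt.Println(d, len(d) >= 19)
}
```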
Bruce MacDonald 558b0f5fe9
model: add Qwen2.5-VL support (#10385) 2025-12-29 06:37:59 -06:00
Michael Yang 4d12503049
chore: update mllama to use ollama engine (#10637) 2025-12-29 06:37:59 -06:00
tej 783739ee9f
Fixed over VRAM allocation due to small initial layer sizes.
Co-authored-by: Tej Kiran <kiran.tej@amd.com>
Co-authored-by: Michael Yang <mxyng@pm.me>
Co-authored-by: Tej Kiran <itej89@gmail.com>
2025-12-29 06:37:59 -06:00
Parth Sareen d0ed25bde8
llama: fix memory leak for grammar (#10696) 2025-12-29 06:37:59 -06:00
Jeffrey Morgan 24118aa1db
llama: fix defrag patch to defragment when no slots are available (#10695) 2025-12-29 06:37:59 -06:00
Daniel Hiltgen d344573e5b
Revert "remove cuda v11 (#10569)" (#10692)
Bring back v11 until we can better warn users that their driver
is too old.

This reverts commit fa393554b9.
2025-12-29 06:37:58 -06:00
Jeffrey Morgan 3f2b7658af
llama: fix crash on snowflake embedding model (#10690) 2025-12-29 06:37:58 -06:00
Jeffrey Morgan 595b683ffb
server: add webp image input support (#10653) 2025-12-29 06:37:58 -06:00
Michael Yang b9c7aed5ce
fix vocabulary (#10679) 2025-12-29 06:37:58 -06:00
Bruce MacDonald f1c017735b
models: remove unused qwen2vl processing (#10677) 2025-12-29 06:37:58 -06:00
Daniel Hiltgen 0132148534
Follow up to #10363 (#10647)
The quantization PR didn't block all unsupported file types,
which this PR fixes.  It also updates the API docs to reflect
the now reduced set of supported types.
2025-12-29 06:37:57 -06:00
Jeffrey Morgan 9163ed39d1
llama: update to commit de4c07f93 (#10655) 2025-12-29 06:37:57 -06:00