Commit Graph

4428 Commits

Jeffrey Morgan
26a1129d71 readme: update quickstart link text to Gemma 3 2025-12-29 06:38:13 -06:00
Jeffrey Morgan
deaf879fb9 readme: update quickstart example to Gemma 3 2025-12-29 06:38:13 -06:00
Daniel Hiltgen
3d1278ab26 mac: handle "keep" named apps (#11031)
When a user elects to keep the existing app, the
new Ollama is named `Ollama 2.app`.
This fixes the app startup flow to handle this naming pattern.
2025-12-29 06:38:13 -06:00
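A minimal sketch of the naming pattern this fix has to accept; the regular expression, the candidate names, and the assumption that macOS appends " 2", " 3", ... when the user keeps the old copy are illustrative, not the desktop app's actual startup code.

```go
package main

import (
	"fmt"
	"regexp"
)

// keepNamePattern matches "Ollama.app" as well as "Ollama 2.app", "Ollama 3.app", etc.
var keepNamePattern = regexp.MustCompile(`^Ollama( \d+)?\.app$`)

func main() {
	for _, name := range []string{"Ollama.app", "Ollama 2.app", "Other.app"} {
		fmt.Printf("%-14s matches: %v\n", name, keepNamePattern.MatchString(name))
	}
}
```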
Daniel Hiltgen
1effde30cb spawn desktop quickly (#11011)
Give the desktop app a hint to start fast.
2025-12-29 06:38:12 -06:00
Krzysztof Jeziorny
874e02626f docs: update link to AMD drivers in linux.md (#10973) 2025-12-29 06:38:12 -06:00
Jeffrey Morgan
3b70283d35 Revert "server: add model capabilities to the list endpoint (#10174)" (#11004)
This reverts commit 0943001193.
2025-12-29 06:38:12 -06:00
Daniel Hiltgen
5d8b0297df launch app hidden (#10962)
When starting the app in the background, start it hidden.
2025-12-29 06:38:12 -06:00
Daniel Hiltgen
1b1eb74ab1 win: handle more than 2048 processes (#10997)
Fix an array out-of-bounds crash.
2025-12-29 06:38:11 -06:00
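A hedged sketch of the buffer-growth pattern such a fix typically implies: `enumProcesses` here is a hypothetical stand-in for the real Windows process-enumeration call, which fills the slice it is given and reports how many entries it wrote. If the slice comes back full, the list may have been truncated, so the buffer is doubled and the call retried instead of indexing past a fixed 2048-entry array.

```go
package main

import "fmt"

// enumProcesses is a hypothetical placeholder; it fills pids and returns the count written.
func enumProcesses(pids []uint32) (int, error) {
	// Pretend the system has 3000 processes.
	n := 0
	for i := 0; i < len(pids) && i < 3000; i++ {
		pids[i] = uint32(i)
		n++
	}
	return n, nil
}

func listProcesses() ([]uint32, error) {
	size := 2048
	for {
		pids := make([]uint32, size)
		n, err := enumProcesses(pids)
		if err != nil {
			return nil, err
		}
		if n < len(pids) {
			return pids[:n], nil // buffer was large enough
		}
		size *= 2 // result may be truncated; grow and retry
	}
}

func main() {
	pids, _ := listProcesses()
	fmt.Println("processes:", len(pids))
}
```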
Devon Rifkin
1fb5a3d56a move thinking logic into its own package (#10990)
2025-12-29 06:38:11 -06:00
Hunter Wittenborn
8b158c2049 docs: fix typo in development.md (#10998) 2025-12-29 06:38:11 -06:00
Devon Rifkin
237fdab92d export ThinkingParser 2025-12-29 06:38:11 -06:00
JasonHonKL
47bebce5f8 server: add model capabilities to the list endpoint (#10174) 2025-12-29 06:38:11 -06:00
HardCodeDev
dfd002e57f readme: add SimpleOllamaUnity to community integrations (#10817) 2025-12-29 06:38:10 -06:00
Parth Sareen
b43f6b223c tools: resiliency upgrade to name and arg extraction from template (#10917) 2025-12-29 06:38:10 -06:00
Jesse Gross
0b9c6cb497 ggml: Export GPU UUIDs
This enables matching up devices and information reported by the backend
with system management libraries such as nvml to get accurate free
memory reporting.
2025-12-29 06:38:10 -06:00
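An illustrative sketch of why exported UUIDs help: given device info from the backend and from a management library such as NVML, the UUID is a stable key for joining the two lists. The types and field names below are assumptions for the example, not Ollama's API.

```go
package main

import "fmt"

type backendDevice struct {
	Name string
	UUID string
}

type nvmlDevice struct {
	UUID     string
	FreeVRAM uint64 // bytes
}

// freeMemoryByBackendDevice joins the two device lists on UUID.
func freeMemoryByBackendDevice(devs []backendDevice, nvml []nvmlDevice) map[string]uint64 {
	byUUID := make(map[string]uint64, len(nvml))
	for _, d := range nvml {
		byUUID[d.UUID] = d.FreeVRAM
	}
	out := make(map[string]uint64, len(devs))
	for _, d := range devs {
		if free, ok := byUUID[d.UUID]; ok {
			out[d.Name] = free
		}
	}
	return out
}

func main() {
	devs := []backendDevice{{Name: "CUDA0", UUID: "GPU-1234"}}
	nvml := []nvmlDevice{{UUID: "GPU-1234", FreeVRAM: 8 << 30}}
	fmt.Println(freeMemoryByBackendDevice(devs, nvml))
}
```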
Jesse Gross
f6fc508ec6 llm: Make "POST predict" error message more informative
"POST predict" basically means that the runner has crashed, which
can have many reasons. However, many people think this is a specific
error and either report only this message or group together unrelated
bugs. This replaces it with a more friendly and helpful message.
2025-12-29 06:38:10 -06:00
Devon Rifkin
026aba9f11 add thinking support to the api and cli (#10584)
- Both `/api/generate` and `/api/chat` now accept a `"think"`
  option that allows specifying whether thinking mode should be on or
  not
- Templates get passed this new option so, e.g., qwen3's template can
  put `/think` or `/no_think` in the system prompt depending on the
  value of the setting
- Models' thinking support is inferred by inspecting model templates.
  The prefix and suffix the parser uses to identify thinking support is
  also automatically inferred from templates
- Thinking control & parsing is opt-in via the API to prevent breaking
  existing API consumers. If the `"think"` option is not specified, the
  behavior is unchanged from previous versions of ollama
- Add parsing for thinking blocks in both streaming/non-streaming mode
  in both `/generate` and `/chat`
- Update the CLI to make use of these changes. Users can pass `--think`
  or `--think=false` to control thinking, or during an interactive
  session they can use the commands `/set think` or `/set nothink`
- A `--hidethinking` option has also been added to the CLI. This makes
  it easy to use thinking in scripting scenarios like
  `ollama run qwen3 --think --hidethinking "my question here"` where you
  just want to see the answer but still want the benefits of thinking
  models
2025-12-29 06:38:09 -06:00
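A hedged example of the opt-in `"think"` option on `/api/chat`, based on the description above. The request shape is simplified, and the assumption that the reply carries a separate `thinking` field alongside `content` should be checked against the current API docs.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model": "qwen3",
		"messages": []map[string]string{
			{"role": "user", "content": "Why is the sky blue?"},
		},
		"think":  true, // opt in to thinking; omit to keep the pre-existing behavior
		"stream": false,
	})

	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Message struct {
			Content  string `json:"content"`
			Thinking string `json:"thinking"` // assumed field for the parsed thinking block
		} `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("thinking:", out.Message.Thinking)
	fmt.Println("answer:  ", out.Message.Content)
}
```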
Patrick Devine
a2bdc43bc8 client: add request signing to the client (#10881)
If OLLAMA_AUTH is set, sign each request with a timestamp and pass the signature in the token header.
2025-12-29 06:38:09 -06:00
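A loudly hypothetical sketch of "sign each request with a timestamp": the key type (ed25519), the signed payload layout, and the header format used here are all assumptions for illustration; the commit message does not spell them out.

```go
package main

import (
	"crypto/ed25519"
	"encoding/base64"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// signRequest is illustrative only; payload layout and header name are assumptions.
func signRequest(req *http.Request, priv ed25519.PrivateKey) {
	ts := strconv.FormatInt(time.Now().Unix(), 10)
	payload := req.Method + " " + req.URL.Path + " " + ts // assumed payload layout
	sig := ed25519.Sign(priv, []byte(payload))
	req.Header.Set("Authorization", ts+":"+base64.StdEncoding.EncodeToString(sig)) // assumed header format
}

func main() {
	_, priv, _ := ed25519.GenerateKey(nil)
	req, _ := http.NewRequest(http.MethodGet, "http://localhost:11434/api/tags", nil)
	signRequest(req, priv)
	fmt.Println(req.Header.Get("Authorization"))
}
```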
Jesse Gross
9c5c197393 kvcache: Skip computing causal mask for worst case graph reservation
Computing an attention mask for a large context and max batch is
expensive (over 100ms). Models like Gemma3 that have multiple types
of caches and custom attention masks need to do this 4 times, so this
adds approximately 500ms to startup time when using 128k context.

When we are reserving the worst case graph, we don't need the mask,
only its shape, so we can skip this.
2025-12-29 06:38:09 -06:00
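A simplified sketch of the optimization described above: when only reserving the worst-case graph, the mask's values are never read, so a shape-only placeholder stands in for the expensive computed causal mask. The types and helpers are illustrative, not the kvcache package's real API.

```go
package main

import "fmt"

type tensor struct {
	shape []int
	data  []float32 // nil for a shape-only placeholder
}

// buildCausalMask is the expensive path: it fills seqLen*batch values.
func buildCausalMask(seqLen, batch int) tensor {
	return tensor{shape: []int{seqLen, batch}, data: make([]float32, seqLen*batch)}
}

// maskForGraph skips computing values when we are only reserving the worst-case graph.
func maskForGraph(seqLen, batch int, reserving bool) tensor {
	if reserving {
		return tensor{shape: []int{seqLen, batch}} // shape only, no values
	}
	return buildCausalMask(seqLen, batch)
}

func main() {
	m := maskForGraph(131072, 512, true)
	fmt.Println("placeholder mask shape:", m.shape, "values computed:", m.data != nil)
}
```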
Kyle Steere
8d989025e2 server: abort download on empty digest
Signed-off-by: Kyle Steere <kyle.steere@chainguard.dev>
2025-12-29 06:38:09 -06:00
Parth Sareen
75e3b372a1 tools: relax JSON parse constraints for tool calling (#10872) 2025-12-29 06:38:09 -06:00
Parth Sareen
951b332cd2 tools: remove newline stripping (#10869) 2025-12-29 06:38:08 -06:00
RAPID ARCHITECT
2f6d9234ac readme: add AWS Strands Agents SDK example to community integrations (#10865) 2025-12-29 06:38:08 -06:00
Min Yoo
02b0285474 readme: Add macLlama to community integrations (#10790)
This commit updates the README to include macLlama within the community integrations section.

macLlama is a native macOS application built for lightweight and efficient LLM interaction.  Key features include:

*   **Lightweight & Native:** Designed to be resource-friendly and perform optimally on macOS.
*   **Chat-like Interface:** Provides a user-friendly, conversational interface.
*   **Multiple Window Support:** Allows users to manage multiple conversations simultaneously.

The primary goal of macLlama is to offer a simple and easy-to-run LLM experience on macOS.
2025-12-29 06:38:08 -06:00
Daniel Hiltgen
6185310f2f tests: drop llama3.2-vision embedding tests (#10837) 2025-12-29 06:38:08 -06:00
frob
56765df3ee docs: remove unsupported quantizations (#10842) 2025-12-29 06:38:07 -06:00
frob
4fed7101b7 server: add hint to the error message when model path access fails (#10843) 2025-12-29 06:38:07 -06:00
Jesse Gross
f34f58bbb2 ml: Improve slog formatting for BackendMemory 2025-12-29 06:38:07 -06:00
Parth Sareen
8cd2b6478e tools: refactor tool call parsing and enable streaming (#10415) 2025-12-29 06:38:07 -06:00
Parth Sareen
5ae2770e0d llama: add minimum memory for grammar (#10820) 2025-12-29 06:38:07 -06:00
Jesse Gross
d1ed4b17ef ml: Panic rather than return error on tensor allocation failure
FromFloatSlice and FromIntSlice return an error if the shape doesn't
match the passed data or if memory can't be allocated. Since these
are inputs, the memory being allocated is system memory rather than VRAM.

In many cases, the caller can't really handle the error and panics.

Empty and Zeros directly panic if they can't allocate memory.

This makes things consistent by panicking for the first two cases,
removing a fair amount of error handling code. This is also consistent
with how Go typically handles these situations.
2025-12-29 06:38:06 -06:00
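A minimal sketch of the consistency change described above, using stand-in names: instead of returning an error that callers cannot meaningfully handle, an input-tensor constructor panics when the shape does not match the data, mirroring how Empty and Zeros already behave.

```go
package main

import "fmt"

type tensor struct {
	shape []int
	data  []float32
}

// fromFloatSlice panics on a mismatched shape instead of returning an error.
func fromFloatSlice(data []float32, shape ...int) tensor {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(data) {
		panic(fmt.Sprintf("invalid shape %v for %d elements", shape, len(data)))
	}
	return tensor{shape: shape, data: data}
}

func main() {
	t := fromFloatSlice([]float32{1, 2, 3, 4, 5, 6}, 2, 3)
	fmt.Println(t.shape)
}
```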
Jesse Gross
6e68feda00 ollamarunner: Memory usage reporting
This provides granular information about the backend memory allocations
required by the runner:
 - Per backend
 - Per layer
 - Weights, cache and graph
 - Allocation status

This can be used for debugging and validating memory estimates.
2025-12-29 06:38:06 -06:00
Jesse Gross
b3de134eda ggml: Report graph memory for failed allocations
GGML has a function to report the allocated size of a backend buffer.
However, this returns 0 if we tried to allocate a buffer and it failed.
For memory management purposes, it's important to know how much we were
trying to allocate. This extends the API to report attempted sizes for
all buffers and whether it succeeded.
2025-12-29 06:38:06 -06:00
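A hedged sketch of the extended reporting described above: each buffer records the size that was requested and whether the allocation succeeded, so a failed allocation still tells the memory manager how much it was trying to get. Field names are illustrative, not GGML's actual API.

```go
package main

import "fmt"

type bufferStatus struct {
	Requested uint64 // bytes we attempted to allocate
	Allocated bool   // whether the allocation succeeded
}

func main() {
	buffers := map[string]bufferStatus{
		"weights": {Requested: 6 << 30, Allocated: true},
		"graph":   {Requested: 2 << 30, Allocated: false}, // failed, but the attempted size is still known
	}
	for name, b := range buffers {
		fmt.Printf("%-8s requested=%d ok=%v\n", name, b.Requested, b.Allocated)
	}
}
```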
Daniel Hiltgen
99880e7254 sched: fix runner leak during reloading unload (#10819)
When the same model is being reloaded rapidly with client connections
being canceled before the model finishes loading, the queued unload
event could cause a leak of runners by deleting a different runner from
the loaded list.
2025-12-29 06:38:06 -06:00
Michael Yang
df4b146c49 fix: mllama quality (#10807)
* fix mllama convert

- transform attn_gate and ffn_gate
- swap attention heads for vision models

* fix mllama

the mlp gate was applied in the wrong place
2025-12-29 06:38:05 -06:00
Bruce MacDonald
d25bde723c server: improve tensor quantization fallback logic (#10806)
Fall back to alternative quantization types when a tensor's dimensions aren't divisible by the block size required for the original desired quantization type. If retried quantization types fail, the system ultimately falls back to F16 (half-precision floating point), which has a block size of 1 and can handle any tensor dimension.
2025-12-29 06:38:05 -06:00
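An illustrative sketch of the fallback rule described above: a quantization type is only usable if the tensor's row length is divisible by that type's block size; otherwise progressively more permissive types are tried, ending at F16, whose block size of 1 always fits. The type list and block sizes are examples, not the server's exact table.

```go
package main

import "fmt"

type quantType struct {
	name      string
	blockSize int
}

// pickQuantType returns the first type whose block size divides the row length.
func pickQuantType(rowLen int, preferred quantType) quantType {
	fallbacks := []quantType{preferred, {"Q4_K", 256}, {"Q4_0", 32}, {"F16", 1}}
	for _, qt := range fallbacks {
		if rowLen%qt.blockSize == 0 {
			return qt
		}
	}
	return quantType{"F16", 1} // unreachable: a block size of 1 always divides
}

func main() {
	fmt.Println(pickQuantType(4096, quantType{"Q4_K", 256}).name) // keeps the preferred type
	fmt.Println(pickQuantType(1000, quantType{"Q4_K", 256}).name) // falls back to F16
}
```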
Daniel Hiltgen
1dbe9ba784 integration: add qwen2.5-vl (#10815)
Replace the older llava model with qwen2.5 for vision tests
Skip split-batch test on small VRAM systems to avoid excessive test time
2025-12-29 06:38:05 -06:00
Michael Yang
197db4eccd remove support for multiple ggufs in a single file (#10722)
* remove support for multiple ggufs in a single file

this was an attempt to make it easier to import multimodal models into
ollama. it was rarely used and error-prone, so remove it

* fix: create fused model from blob
2025-12-29 06:38:05 -06:00
Daniel Hiltgen
bf0fbfeb0e win: detect background upgrade in progress (#10785)
Give the user a helpful error instead of showing
connection refused errors.
2025-12-29 06:38:05 -06:00
Michael Yang
dc8ee7636b feat: port qwen2 model (#10782) 2025-12-29 06:38:04 -06:00
Michael Yang
9215b190fa feat: qwen3 dense and sparse models (#10708)
* feat: qwen3 dense
* feat: qwen3moe
* fix llama4 moe
2025-12-29 06:38:04 -06:00
Michael Yang
7f3e4d6f06 fix cmakelists (#10804)
this fixes an issue introduced in #10788
2025-12-29 06:38:04 -06:00
Michael Yang
02fd383448 chore: disable debug in binary libraries (#10788) 2025-12-29 06:38:04 -06:00
Michael Yang
9213339549 fix: qwen25vl assign samebatch in multimodal input (#10789)
setting samebatch on the vision start token is problematic because it
will be shared with other inputs that also use images. this will cause
the input to be cached and the runner will not see SameBatch. SameBatch
will also be incorrect since it may be for a different image.

assigning samebatch to the input tokens resolves this by ensuring it's
assigned correctly to inputs corresponding to the image.

not setting SameBatch correctly may cause panics during inference since
images are no longer guaranteed to be in the same batch.
2025-12-29 06:38:03 -06:00
Michael Yang
20dcadf7e8 ml: add more rope options (#10775) 2025-12-29 06:38:03 -06:00
DarkCaster
3decfd28a8 llama: fix incorrect initialization of C.struct_common_sampler_cparams.penalty_present (#10779) 2025-12-29 06:38:03 -06:00
Michael Yang
20a612834f fix llama and mistral3 models (#10774)
* fix llama model

* fix mistral3.1 model

do not set default vision layers
2025-12-29 06:38:03 -06:00
Jesse Gross
dba546a24a llm: Use first layer as memory buffer in estimation
This is a partial revert of 0478d44 "Fixed over VRAM allocation due to
small initial layer sizes."

Previously we used the size of the first layer as an extra reserved
amount of space to buffer our memory estimates. The above commit
changed this to use the largest layer. However, this had performance
impacts on more models than the original commit was trying to fix.

This is just a heuristic without an ideal solution, so this goes back
to the historic behavior.

Fixes: #10765, #10756, #10752, #10726
2025-12-29 06:38:03 -06:00
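A tiny sketch of the heuristic being restored: the extra memory reserved on top of the estimate is the size of the first layer, rather than the largest layer. The helper name and numbers are illustrative only.

```go
package main

import "fmt"

// reservedBuffer returns the extra space added to the estimate: the first layer's size.
func reservedBuffer(layerSizes []uint64) uint64 {
	if len(layerSizes) == 0 {
		return 0
	}
	return layerSizes[0] // historic behavior: buffer by the first layer, not the largest
}

func main() {
	layers := []uint64{300 << 20, 500 << 20, 500 << 20}
	fmt.Printf("extra reserved: %d MiB\n", reservedBuffer(layers)>>20)
}
```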
Daniel Hiltgen
f7a5f0da58 avoid kv truncation during create (#10761) 2025-12-29 06:38:02 -06:00
Jesse Gross
7b9ab4cb32 ggml: Separate tensor load from backend creation
Currently, when the backend is created, the tensors are loaded at the
same time, which is a slow operation. This separates them into two
steps:
 - Create backend, including enumerating tensors and memory allocation
 - Loading tensor data

This allows more flexibility in managing model loading.
2025-12-29 06:38:02 -06:00
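A hedged sketch of the two-step flow described above: backend construction (enumerating tensors and allocating memory) is separated from the slow step of copying tensor data in. The types and method names are illustrative, not ggml's actual Go bindings.

```go
package main

import "fmt"

type backend struct {
	allocated bool
	loaded    bool
}

// newBackend enumerates tensors and allocates memory, but does not read tensor data.
func newBackend() *backend {
	return &backend{allocated: true}
}

// loadTensors performs the slow copy of tensor data into the allocated buffers.
func (b *backend) loadTensors() {
	b.loaded = true
}

func main() {
	b := newBackend() // fast: structure and allocation only
	// ... loading decisions can be made here before any data is read ...
	b.loadTensors() // slow: actual tensor data
	fmt.Printf("allocated=%v loaded=%v\n", b.allocated, b.loaded)
}
```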