Commit Graph

4347 Commits

Jesse Gross 290d4c2c6c
ggml: Check return status for computation.
We don't check the return status after computing the graph, which
can silently lead to bad outputs if we try to keep going and future
computation succeeds. This appears to happen in certain cases on
Apple M2 devices.

Fixes #11070
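
A minimal sketch of the fix's shape, assuming a Go-side wrapper in which graph computation reports its status (the function here is a hypothetical stand-in, not ggml's actual binding):

```go
package main

import (
	"errors"
	"fmt"
)

// computeGraph stands in for the backend's graph-compute call; the real
// status comes from the ggml bindings.
func computeGraph() error {
	return errors.New("backend compute failed") // e.g. a kernel failure on M2
}

func main() {
	// Previously the status was dropped and execution continued, so a
	// failed compute could silently feed bad outputs into later steps.
	if err := computeGraph(); err != nil {
		fmt.Println("graph computation failed:", err)
		return
	}
	fmt.Println("ok")
}
```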
2025-12-29 06:38:17 -06:00
Daniel Hiltgen 29b668e649
int: add coverage for older models (#11137)
Verified these fail on 0.9.1 and pass on HEAD.
2025-12-29 06:38:17 -06:00
Jeffrey Morgan 6d36b8dcfb
benchmark: remove unused benchmark test (#11120)
Removes a test under benchmark/ that is unused
2025-12-29 06:38:17 -06:00
Jeffrey Morgan 5e3fb4744b
Revert "Revert "ggml: Export GPU UUIDs" (#11115)" (#11117)
Reverts PR #11115. The original change was mistakenly reverted instead of #10822
2025-12-29 06:38:16 -06:00
Jeffrey Morgan c5237d9462
Revert "ggml: Export GPU UUIDs" (#11115)
This reverts commit aaa7818000.
2025-12-29 06:38:16 -06:00
Jeffrey Morgan 4f1588bc37
Revert "feat: incremental gguf parser (#10822)" (#11114)
This reverts commit 6b04cad7e8.
2025-12-29 06:38:16 -06:00
曹家巧 8c3501c161
cache: fix comment function name in cache.go (#11110) 2025-12-29 06:38:16 -06:00
Jeffrey Morgan 829e77105a
tools: return empty arguments object instead of null (#11113) 2025-12-29 06:38:16 -06:00
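The underlying JSON behavior, sketched with illustrative types (not Ollama's actual API structs): a nil Go map marshals to `null`, while an initialized empty map marshals to `{}`.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolCall is illustrative; the real types live in Ollama's api package.
type toolCall struct {
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments"`
}

func main() {
	nilArgs := toolCall{Name: "get_time"} // Arguments left nil
	emptyArgs := toolCall{Name: "get_time", Arguments: map[string]any{}}

	a, _ := json.Marshal(nilArgs)
	b, _ := json.Marshal(emptyArgs)
	fmt.Println(string(a)) // {"name":"get_time","arguments":null}
	fmt.Println(string(b)) // {"name":"get_time","arguments":{}}
}
```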
Jeffrey Morgan 1dc12706c5
tools: fix parsing tool calls without any parameters (#11101)
Fixes an issue where tool calls that don't expect any parameters were
not being parsed. This also fixes two additional issues: one where
two or more tool calls would not be parsed correctly, and another where
tool calls with invalid parameters would still get parsed.
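
A sketch of the parsing concerns above, assuming a simplified JSON tool-call format (the real parser is template-driven and model-specific): `json.Decoder` pulls any number of calls from one stream, and calls with no parameters decode cleanly.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

type toolCall struct {
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments"`
}

// parseToolCalls decodes zero or more concatenated tool-call objects,
// dropping obviously invalid ones instead of failing the whole batch.
func parseToolCalls(s string) ([]toolCall, error) {
	var calls []toolCall
	dec := json.NewDecoder(strings.NewReader(s))
	for {
		var tc toolCall
		err := dec.Decode(&tc)
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if tc.Name == "" { // stand-in validation for "invalid parameters"
			continue
		}
		calls = append(calls, tc)
	}
	return calls, nil
}

func main() {
	// Two calls, the first of which takes no parameters.
	calls, err := parseToolCalls(`{"name":"get_time"} {"name":"add","arguments":{"a":1,"b":2}}`)
	fmt.Println(calls, err)
}
```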
2025-12-29 06:38:15 -06:00
Jeffrey Morgan 2c371ff357
model: treat 'user defined' tokens as special tokens (#11077) 2025-12-29 06:38:15 -06:00
Michael Yang 142efb91b1
gguf: fix write order (#11068)
* ggml: test write gguf order
* ggml: fix write tensor order
2025-12-29 06:38:15 -06:00
NGC13009 7e0b662c6c
readme: add ollama-launcher to community integrations (#11080) 2025-12-29 06:38:15 -06:00
Phil 4c7cf115fe
readme: add GPTranslate to community integrations (#11071) 2025-12-29 06:38:15 -06:00
Jeffrey Morgan 2d86651985
tools: loosen tool parsing to allow for more formats (#11030) 2025-12-29 06:38:14 -06:00
Michael Yang 2c6f1dc9c8
feat: incremental gguf parser (#10822)
* incremental gguf parser
* gguf: update test to not rely on gguf on disc
* re-use existing create gguf
* read capabilities from gguf kv
* kv exists
* update tests
* s/doneFunc/successFunc/g
* new buffered reader

---------

Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
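
A sketch of the incremental idea under a deliberately simplified framing (real GGUF has magic bytes, a version, and typed values): pull one length-prefixed entry at a time through a buffered reader instead of loading the whole file first.

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// readEntry reads a single key/value pair; callers can stop as soon as
// they have the metadata they need.
func readEntry(r *bufio.Reader) (string, []byte, error) {
	var klen uint32
	if err := binary.Read(r, binary.LittleEndian, &klen); err != nil {
		return "", nil, err
	}
	key := make([]byte, klen)
	if _, err := io.ReadFull(r, key); err != nil {
		return "", nil, err
	}
	var vlen uint32
	if err := binary.Read(r, binary.LittleEndian, &vlen); err != nil {
		return "", nil, err
	}
	val := make([]byte, vlen)
	if _, err := io.ReadFull(r, val); err != nil {
		return "", nil, err
	}
	return string(key), val, nil
}

func main() {
	// Build a two-entry stream, then read it back incrementally.
	var buf bytes.Buffer
	for _, kv := range [][2]string{{"general.name", "demo"}, {"tokenizer.ggml.model", "gpt2"}} {
		binary.Write(&buf, binary.LittleEndian, uint32(len(kv[0])))
		buf.WriteString(kv[0])
		binary.Write(&buf, binary.LittleEndian, uint32(len(kv[1])))
		buf.WriteString(kv[1])
	}
	r := bufio.NewReader(&buf)
	for {
		k, v, err := readEntry(r)
		if err != nil {
			break // io.EOF ends the stream
		}
		fmt.Printf("%s = %s\n", k, v)
	}
}
```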
2025-12-29 06:38:14 -06:00
Michael Yang db3a312edf
feat: uneven splits (#11048)
The current splitDim function only operates on tensors that are split evenly, which isn't always the case (e.g. a QKV tensor). This change allows the function to be used for arbitrary splits.
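
The generalization, sketched on plain Go slices (the real splitDim works on tensors; names here are illustrative): split by arbitrary sizes instead of assuming n equal parts.

```go
package main

import "fmt"

// splitBySizes cuts s into consecutive chunks of the given sizes, e.g.
// a fused QKV projection split into query/key/value of different widths.
func splitBySizes(s []float32, sizes ...int) [][]float32 {
	out := make([][]float32, 0, len(sizes))
	off := 0
	for _, n := range sizes {
		out = append(out, s[off:off+n])
		off += n
	}
	return out
}

func main() {
	qkv := make([]float32, 10)
	parts := splitBySizes(qkv, 4, 3, 3) // uneven: 4 + 3 + 3
	fmt.Println(len(parts[0]), len(parts[1]), len(parts[2]))
}
```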
2025-12-29 06:38:14 -06:00
Michael Yang 0d5c118679
skip tokenizer.model if possible (#11050)
if tokenizer.json is already copied, skip tokenizer.model
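
The guard is a simple presence check; a sketch (paths and helper name illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// needsTokenizerModel reports whether tokenizer.model still has to be
// processed; once tokenizer.json has been copied, it can be skipped.
func needsTokenizerModel(dir string) bool {
	_, err := os.Stat(filepath.Join(dir, "tokenizer.json"))
	return err != nil
}

func main() {
	fmt.Println(needsTokenizerModel("."))
}
```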
2025-12-29 06:38:14 -06:00
Michael Yang eb2c2d61e5
use nn.Linear in place of ml.Tensor (#11049)
While nn.Linear.Forward isn't applicable to a sparse MLP, it's still
a nice container for the tensors.
2025-12-29 06:38:13 -06:00
Attogram Project 4fff1738a4
readme: add ollama-multirun to community integrations (#11038) 2025-12-29 06:38:13 -06:00
Jeffrey Morgan 26a1129d71
readme: update quickstart link text to Gemma 3 2025-12-29 06:38:13 -06:00
Jeffrey Morgan deaf879fb9
readme: update quickstart example to Gemma 3 2025-12-29 06:38:13 -06:00
Daniel Hiltgen 3d1278ab26
mac: handle "keep" named apps (#11031)
When a user elects to keep the existing app, the
new Ollama is named `Ollama 2.app`.
This fixes the app startup flow to handle this naming pattern.
2025-12-29 06:38:13 -06:00
Daniel Hiltgen 1effde30cb
spawn desktop quickly (#11011)
Give the desktop app a hint to start fast.
2025-12-29 06:38:12 -06:00
Krzysztof Jeziorny 874e02626f
docs: update link to AMD drivers in linux.md (#10973) 2025-12-29 06:38:12 -06:00
Jeffrey Morgan 3b70283d35
Revert "server: add model capabilities to the list endpoint (#10174)" (#11004)
This reverts commit 0943001193.
2025-12-29 06:38:12 -06:00
Daniel Hiltgen 5d8b0297df
launch app hidden (#10962)
When starting the app in the background, start it hidden.
2025-12-29 06:38:12 -06:00
Daniel Hiltgen 1b1eb74ab1
win: handle more than 2048 processes (#10997)
Fix an array out-of-bounds crash
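
The classic fix pattern for a fixed-size enumeration buffer, sketched in Go (the real change is in Windows-specific code; `enumProcesses` is a hypothetical stand-in): grow the buffer until the result fits instead of assuming 2048 entries is enough.

```go
package main

import "fmt"

// enumProcesses fills buf with process IDs and reports how many it
// wrote; a completely full buffer means there may be more processes.
func enumProcesses(buf []uint32) int {
	const running = 5000 // pretend 5000 processes exist
	if running < len(buf) {
		return running
	}
	return len(buf)
}

func main() {
	buf := make([]uint32, 2048)
	for {
		n := enumProcesses(buf)
		if n < len(buf) {
			buf = buf[:n]
			break
		}
		buf = make([]uint32, 2*len(buf)) // full: double and retry
	}
	fmt.Println("processes:", len(buf))
}
```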
2025-12-29 06:38:11 -06:00
Devon Rifkin 1fb5a3d56a
move thinking logic into its own package (#10990)
move thinking logic into its own package
2025-12-29 06:38:11 -06:00
Hunter Wittenborn 8b158c2049
docs: fix typo in development.md (#10998) 2025-12-29 06:38:11 -06:00
Devon Rifkin 237fdab92d
export ThinkingParser 2025-12-29 06:38:11 -06:00
JasonHonKL 47bebce5f8
server: add model capabilities to the list endpoint (#10174) 2025-12-29 06:38:11 -06:00
HardCodeDev dfd002e57f
readme: add SimpleOllamaUnity to community integrations (#10817) 2025-12-29 06:38:10 -06:00
Parth Sareen b43f6b223c
tools: resiliency upgrade to name and arg extraction from template (#10917) 2025-12-29 06:38:10 -06:00
Jesse Gross 0b9c6cb497
ggml: Export GPU UUIDs
This enables matching up devices and information reported by the backend
with system management libraries such as nvml to get accurate free
memory reporting.
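
What the exported UUIDs enable, sketched with illustrative types: index management-library devices by UUID, then join the backend's devices against that index for accurate free-memory numbers.

```go
package main

import "fmt"

type backendDevice struct{ Name, UUID string }

type nvmlInfo struct{ FreeMiB uint64 }

func main() {
	backend := []backendDevice{{"CUDA0", "GPU-aaaa"}, {"CUDA1", "GPU-bbbb"}}
	nvml := map[string]nvmlInfo{ // keyed by UUID, as reported by nvml
		"GPU-aaaa": {FreeMiB: 20480},
		"GPU-bbbb": {FreeMiB: 8192},
	}
	for _, d := range backend {
		if info, ok := nvml[d.UUID]; ok {
			fmt.Printf("%s: %d MiB free\n", d.Name, info.FreeMiB)
		}
	}
}
```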
2025-12-29 06:38:10 -06:00
Jesse Gross f6fc508ec6
llm: Make "POST predict" error message more informative
"POST predict" basically means that the runner has crashed, which
can happen for many reasons. However, many people think this is a specific
error and either report only this message or group together unrelated
bugs. This replaces it with a more friendly and helpful message.
2025-12-29 06:38:10 -06:00
Devon Rifkin 026aba9f11
add thinking support to the api and cli (#10584)
- Both `/api/generate` and `/api/chat` now accept a `"think"`
  option that allows specifying whether thinking mode should be on or
  not (see the sketch after this list)
- Templates get passed this new option so, e.g., qwen3's template can
  put `/think` or `/no_think` in the system prompt depending on the
  value of the setting
- Models' thinking support is inferred by inspecting model templates.
  The prefix and suffix the parser uses to identify thinking support is
  also automatically inferred from templates
- Thinking control & parsing is opt-in via the API to prevent breaking
  existing API consumers. If the `"think"` option is not specified, the
  behavior is unchanged from previous versions of ollama
- Add parsing for thinking blocks in both streaming/non-streaming mode
  in both `/generate` and `/chat`
- Update the CLI to make use of these changes. Users can pass `--think`
  or `--think=false` to control thinking, or during an interactive
  session they can use the commands `/set think` or `/set nothink`
- A `--hidethinking` option has also been added to the CLI. This makes
  it easy to use thinking in scripting scenarios like
  `ollama run qwen3 --think --hidethinking "my question here"` where you
  just want to see the answer but still want the benefits of thinking
  models
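
A minimal client-side sketch of the option described above, assuming a local server on the default port and non-streaming mode (the `thinking` response field name is an assumption here):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "qwen3",
		"prompt": "my question here",
		"think":  true, // the new option
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	var out struct {
		Thinking string `json:"thinking"`
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("thinking:", out.Thinking)
	fmt.Println("answer:", out.Response)
}
```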
2025-12-29 06:38:09 -06:00
Patrick Devine a2bdc43bc8
client: add request signing to the client (#10881)
If OLLAMA_AUTH is set, sign each request with a timestamp and pass the signature in the token header.
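
A hedged sketch of the idea only (the client's actual key type, signature scheme, and header names may differ): sign a timestamp and attach both to the outgoing request.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// signRequest attaches a timestamp plus an HMAC over it; illustrative
// only, since the real client derives the signature from the user's key.
func signRequest(req *http.Request, key []byte) {
	ts := strconv.FormatInt(time.Now().Unix(), 10)
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(ts))
	req.Header.Set("X-Timestamp", ts)
	req.Header.Set("Authorization", hex.EncodeToString(mac.Sum(nil)))
}

func main() {
	req, _ := http.NewRequest("GET", "http://localhost:11434/api/tags", nil)
	signRequest(req, []byte("secret"))
	fmt.Println(req.Header.Get("X-Timestamp"), req.Header.Get("Authorization"))
}
```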
2025-12-29 06:38:09 -06:00
Jesse Gross 9c5c197393
kvcache: Skip computing causal mask for worst case graph reservation
Computing an attention mask for a large context and max batch is
expensive - over 100ms. Models like Gemma3 that have multiple types
of caches and custom attention masks need to do this 4 times, so this
adds approximately 500ms to startup time when using 128k context.

When we are reserving the worst case graph, we don't need the mask,
only its shape, so we can skip this.
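
A sketch of the shortcut (names hypothetical): during worst-case graph reservation, return a correctly shaped placeholder and skip filling in the mask.

```go
package main

import "fmt"

type mask struct {
	rows, cols int
	data       []float32 // nil when only the shape matters
}

// buildMask fills the causal pattern only when it will actually be
// used; reservation passes need just the shape.
func buildMask(rows, cols int, reserving bool) mask {
	m := mask{rows: rows, cols: cols}
	if reserving {
		return m // skip the expensive fill
	}
	m.data = make([]float32, rows*cols)
	// ... write the causal pattern into m.data here ...
	return m
}

func main() {
	m := buildMask(2048, 131072, true)
	fmt.Printf("reserved %dx%d mask, allocated data: %v\n", m.rows, m.cols, m.data != nil)
}
```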
2025-12-29 06:38:09 -06:00
Kyle Steere 8d989025e2
server: abort download on empty digest
Signed-off-by: Kyle Steere <kyle.steere@chainguard.dev>
2025-12-29 06:38:09 -06:00
Parth Sareen 75e3b372a1
tools: relax JSON parse constraints for tool calling (#10872) 2025-12-29 06:38:09 -06:00
Parth Sareen 951b332cd2
tools: remove newline stripping (#10869) 2025-12-29 06:38:08 -06:00
RAPID ARCHITECT 2f6d9234ac
readme: add AWS Strands Agents SDK example to community integrations (#10865) 2025-12-29 06:38:08 -06:00
Min Yoo 02b0285474
readme: Add macLlama to community integrations (#10790)
This commit updates the README to include macLlama within the community integrations section.

macLlama is a native macOS application built for lightweight and efficient LLM interaction.  Key features include:

*   **Lightweight & Native:** Designed to be resource-friendly and perform optimally on macOS.
*   **Chat-like Interface:** Provides a user-friendly, conversational interface.
*   **Multiple Window Support:** Allows users to manage multiple conversations simultaneously.

The primary goal of macLlama is to offer a simple and easy-to-run LLM experience on macOS.
2025-12-29 06:38:08 -06:00
Daniel Hiltgen 6185310f2f
tests: drop llama3.2-vision embedding tests (#10837) 2025-12-29 06:38:08 -06:00
frob 56765df3ee
docs: remove unsupported quantizations (#10842) 2025-12-29 06:38:07 -06:00
frob 4fed7101b7
server: add hint to the error message when model path access fails (#10843) 2025-12-29 06:38:07 -06:00
Jesse Gross f34f58bbb2
ml: Improve slog formatting for BackendMemory 2025-12-29 06:38:07 -06:00
Parth Sareen 8cd2b6478e
tools: refactor tool call parsing and enable streaming (#10415) 2025-12-29 06:38:07 -06:00
Parth Sareen 5ae2770e0d
llama: add minimum memory for grammar (#10820) 2025-12-29 06:38:07 -06:00
Jesse Gross d1ed4b17ef
ml: Panic rather than return error on tensor allocation failure
FromFloatSlice and FromIntSlice return an error if the shape doesn't
match the passed data or if memory can't be allocated. Since these
are inputs, the memory being allocated is system memory rather than VRAM.

In many cases, the caller can't really handle the error and panics.

Empty and Zeros directly panic if they can't allocate memory.

This makes things consistent by panicking for the first two cases,
removing a fair amount of error handling code. This is also consistent
with how Go typically handles these situations.
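
The contract change, sketched on a plain slice (illustrative, not the real ml API): validate at the input boundary and panic, since callers couldn't meaningfully recover anyway.

```go
package main

import "fmt"

// fromFloatSlice mirrors the new behavior: a shape mismatch on an
// input tensor is a programming error, so panic instead of returning
// an error that every caller would just re-panic on.
func fromFloatSlice(data []float32, shape ...int) []float32 {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(data) {
		panic(fmt.Sprintf("invalid shape %v for %d elements", shape, len(data)))
	}
	return data
}

func main() {
	t := fromFloatSlice(make([]float32, 6), 2, 3)
	fmt.Println(len(t))
}
```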
2025-12-29 06:38:06 -06:00