Commit Graph

4400 Commits

Author SHA1 Message Date
Devon Rifkin
528bd3077a lower default num parallel to 2
this is in part to "pay" for #10452, which doubled the default context length. The combination isn't fully neutral, however: even though the old 4x2k limit and the new 2x4k limit are memory-equivalent, the 1x fallback is larger with 4k
2025-12-29 06:37:46 -06:00
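
A back-of-the-envelope for the tradeoff described in the commit above; the constant names below are made up for illustration, not taken from the repository. At the defaults, total context capacity across slots is unchanged, but the single-slot low-VRAM fallback doubles.

```go
package sketch

// Illustrative arithmetic only; these constants are not from the ollama codebase.
const (
	oldTotalTokens = 4 * 2048 // old default: 4 parallel slots x 2k context = 8192
	newTotalTokens = 2 * 4096 // new default: 2 parallel slots x 4k context = 8192

	oldFallbackTokens = 1 * 2048 // old single-slot low-VRAM fallback
	newFallbackTokens = 1 * 4096 // new 1x fallback: the part that isn't memory-neutral
)
```
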
Devon Rifkin
b963dd868b config: update default context length to 4096 2025-12-29 06:37:46 -06:00
Devon Rifkin
5a7c6c363e Revert "increase default context length to 4096 (#10364)"
This reverts commit 424f648632.
2025-12-29 06:37:46 -06:00
Michael Yang
b236fcc9bf model: fix build (#10416) 2025-12-29 06:37:45 -06:00
Michael Yang
049aa30191 memory 2025-12-29 06:37:45 -06:00
Michael Yang
644d6c5256 fixes for maverick 2025-12-29 06:37:45 -06:00
Michael Yang
d2d5c5e6d5 chunked attention 2025-12-29 06:37:45 -06:00
Michael Yang
b7f628b9e8 connect vision to text 2025-12-29 06:37:45 -06:00
Michael Yang
b875952e67 image processing
Co-authored-by: Patrick Devine <patrick@infrahq.com>
2025-12-29 06:37:44 -06:00
Michael Yang
0f5c45e19d llama4 2025-12-29 06:37:44 -06:00
Michael Yang
371560df26 fix test 2025-12-29 06:37:44 -06:00
Michael Yang
a0d77f1dbe explicitly decode maxarraysize 1024 2025-12-29 06:37:44 -06:00
Michael Yang
8a86190fd4 fix parameter count 2025-12-29 06:37:44 -06:00
Michael Yang
49f807737a default slice values 2025-12-29 06:37:44 -06:00
Michael Yang
51e64c8f69 update comment 2025-12-29 06:37:43 -06:00
Michael Yang
84a6567dee fix token type 2025-12-29 06:37:43 -06:00
Michael Yang
5a8e641272 zero means zero
using a default of 1024 when asking for zero is confusing, since most calls
seem to assume 0 means do not read any data
2025-12-29 06:37:43 -06:00
Michael Yang
f0c5b48f7b convert: use -1 for read all 2025-12-29 06:37:43 -06:00
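
A minimal sketch of the decode convention the two commits above describe, using a hypothetical helper name (the actual ggml decoding code is not shown here): 0 reads no array data, -1 reads everything, and a positive value caps the length, instead of 0 silently defaulting to 1024.

```go
package sketch

// arrayLenToRead is a hypothetical helper illustrating the convention:
// maxArraySize == 0 reads nothing, < 0 reads all, > 0 caps the length.
func arrayLenToRead(maxArraySize, n int) int {
	switch {
	case maxArraySize < 0: // -1: read the whole array
		return n
	case maxArraySize == 0: // zero means zero
		return 0
	case n < maxArraySize:
		return n
	default:
		return maxArraySize
	}
}
```
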
Michael Yang
96618f6344 generic ggml.array 2025-12-29 06:37:42 -06:00
Michael Yang
5e0d7e9332 fix superfluous call to WriteHeader
the first call to http.ResponseWriter.Write implicitly calls WriteHeader
with http.StatusOK if it hasn't already been called. once WriteHeader
has been called, subsequent calls have no effect. Write is called when
JSON encoding progressUpdateJSON{}, so calls to
http.ResponseWriter.WriteHeader after the first encode are useless and
produce a warning:

http: superfluous response.WriteHeader call from github.com/ollama/ollama/server/internal/registry.(*statusCodeRecorder).WriteHeader (server.go:77)
2025-12-29 06:37:42 -06:00
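
A minimal, self-contained handler reproducing the net/http behavior the commit above relies on (not the ollama handler itself): the first Write implicitly sends a 200, so any later WriteHeader is ignored and logs the superfluous-call warning.

```go
package main

import (
	"encoding/json"
	"net/http"
)

type progressUpdate struct {
	Completed int64 `json:"completed"`
}

func handler(w http.ResponseWriter, r *http.Request) {
	// first Write (via the JSON encoder) implicitly calls WriteHeader(http.StatusOK)
	_ = json.NewEncoder(w).Encode(progressUpdate{Completed: 1})

	// too late: this is ignored and logs "http: superfluous response.WriteHeader call"
	w.WriteHeader(http.StatusInternalServerError)
}

func main() {
	http.HandleFunc("/pull", handler)
	_ = http.ListenAndServe("localhost:8080", nil)
}
```
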
Michael Yang
584c3176d2 convert: change to colmajor 2025-12-29 06:37:42 -06:00
Michael Yang
4f01385151 ci: silence deprecated gpu targets warning 2025-12-29 06:37:42 -06:00
Jeffrey Morgan
85d3f71c02 llama: update to commit 2016f07b (#10352) 2025-12-29 06:37:42 -06:00
Parth Sareen
83e848fcb8 server: improve spacing for JSON grammar (#10131) 2025-12-29 06:37:41 -06:00
Parth Sareen
7cf4c146bc llama: remove model loading for grammar (#10096) 2025-12-29 06:37:41 -06:00
Adrien Duermael
3e201e18c2 api: fix ImageData struct comment to expect raw image bytes (#10386) 2025-12-29 06:37:41 -06:00
Devon Rifkin
770df0887f increase default context length to 4096 (#10364)
* increase default context length to 4096

We lower the default numParallel from 4 to 2 and use these "savings" to
double the default context length from 2048 to 4096.

We're memory neutral in cases when we previously would've used
numParallel == 4, but we add the following mitigation to handle some
cases where we would have previously fallen back to 1x2048 due to low
VRAM: we decide between 2048 and 4096 using a runtime check, choosing
2048 if we're on a one GPU system with total VRAM of <= 4 GB. We
purposefully don't check the available VRAM because we don't want the
context window size to change unexpectedly based on the available VRAM.

We plan on making the default even larger, but this is a relatively
low-risk change we can make to quickly double it.

* fix tests

add an explicit context length so they don't get truncated. The code
that converts -1 (the signal for doing a runtime check) into a concrete
value isn't running as part of these tests.

* tweak small gpu message

* clarify context length default

also make it actually show up in `ollama serve --help`
2025-12-29 06:37:41 -06:00
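
A sketch of the runtime check described above, assuming hypothetical names (gpuInfo, defaultContextLength); the actual implementation lives in ollama's scheduling code and may differ in detail.

```go
package sketch

type gpuInfo struct {
	TotalMemory uint64 // bytes of VRAM on the device
}

// defaultContextLength returns 2048 on a single-GPU system with at most
// "4 GB" of total VRAM (treated as 4 GiB here) and 4096 otherwise. Total
// rather than available VRAM is checked so the default doesn't change
// between runs based on what happens to be free.
func defaultContextLength(gpus []gpuInfo) int {
	const fourGB = 4 << 30
	if len(gpus) == 1 && gpus[0].TotalMemory <= fourGB {
		return 2048
	}
	return 4096
}
```
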
Richard Shiue
d24108eb86 readme: add AppFlowy to community integrations (#10335) 2025-12-29 06:37:41 -06:00
greengrass821
39a26ec939 cmd: add support for escaping ~ in filepath (#10339)
Co-authored-by: tooth paste <tooth_paste91@Poorneshwars-MacBook-Pro.local>
2025-12-29 06:37:40 -06:00
Michael Yang
1785f37236 create tempdir in models directory
the models directory should have plenty of storage, and this also
ensures there's no cross-device copy
2025-12-29 06:37:40 -06:00
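
A sketch of the pattern behind the commit above, with hypothetical paths and names: staging the write in a temp directory under the models directory keeps the final rename on the same filesystem, so it stays a cheap atomic rename instead of degrading into a cross-device copy.

```go
package sketch

import (
	"os"
	"path/filepath"
)

// writeBlob is illustrative only; it stages data under modelsDir so the
// final os.Rename never crosses filesystems.
func writeBlob(modelsDir, name string, data []byte) error {
	tmp, err := os.MkdirTemp(modelsDir, "blob-")
	if err != nil {
		return err
	}
	defer os.RemoveAll(tmp)

	staged := filepath.Join(tmp, name)
	if err := os.WriteFile(staged, data, 0o644); err != nil {
		return err
	}
	// same device as modelsDir, so this is a rename, not a copy
	return os.Rename(staged, filepath.Join(modelsDir, name))
}
```
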
Blake Mizerany
1003e89348 server/internal/registry: make pull send errors with Error field (#10326)
Previously, the pull handler would send an error message in the Status
field, which prevented the client from using the message as a signal to
stop. In the case of the "run" command, it would follow the pull with a
"show", which would print a nearly identical "not found" message for
unresolved models.

Fixes #10307
2025-12-29 06:37:40 -06:00
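
An illustrative sketch of the shape of this change; the struct and field names are assumptions, not ollama's exact wire format. The point is that failures arrive in a dedicated Error field the client can stop on, rather than being folded into Status.

```go
package sketch

import "fmt"

type progressUpdate struct {
	Status    string `json:"status,omitempty"`
	Completed int64  `json:"completed,omitempty"`
	Total     int64  `json:"total,omitempty"`
	Error     string `json:"error,omitempty"`
}

// handleUpdate gives the client an unambiguous stop signal when the server
// reports an error, instead of having to pattern-match on Status text.
func handleUpdate(u progressUpdate) error {
	if u.Error != "" {
		return fmt.Errorf("pull failed: %s", u.Error)
	}
	// otherwise render Status/Completed/Total as normal progress
	return nil
}
```
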
Michael Yang
c916dd67bf arange 2025-12-29 06:37:40 -06:00
Blake Mizerany
0114f7008a server/internal/client/ollama: handle some network errors gracefully (#10317) 2025-12-29 06:37:40 -06:00
Jeffrey Morgan
88ea0ff9e8 ml/backend/ggml: use default CUDA compression mode (#10314) 2025-12-29 06:37:39 -06:00
Jeffrey Morgan
8c08f74532 ml: add missing cmake property and remove additional CMakeLists.txt (#10310) 2025-12-29 06:37:39 -06:00
Devon Rifkin
2a8495a8ea docs: change more template blocks to have syntax highlighting
In #8215 syntax highlighting was added to most of the blocks, but there were a couple that were still being rendered as plaintext
2025-12-29 06:37:39 -06:00
Jeffrey Morgan
3824c0803b llama: update to commit 71e90e88 (#10192) 2025-12-29 06:37:39 -06:00
Blake Mizerany
1c91f69556 server/internal/registry: remove superfluous progress bar flush (#10303)
This removes the extra flushProgress() at the end of handlePull. It is
unnecessary because final progress updates are flushed in all cases of
the main select loop.
2025-12-29 06:37:38 -06:00
Blake Mizerany
1248736636 server/internal/client/ollama: cleanup use of multiple counters (#10304)
The completed and received counters must work in tandem and the code
should better reflect that. Previously, the act of updating them was 2-3
lines of code duplicated in multiple places. This consolidates them into
a single update closure for easy reading and maintenance.

This also simplifies error handling in places where we can use a return
parameter and defer to handle the error case for updates.

Also, remove the old Layer field from the trackingReader struct.
2025-12-29 06:37:38 -06:00
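
A sketch of the consolidation described above, with illustrative names: both counters move through one closure so they cannot drift apart, rather than repeating the same two or three lines at every update site (the commit additionally uses a named return and defer for the error-path update, omitted here).

```go
package sketch

import (
	"io"
	"sync/atomic"
)

type counters struct {
	completed atomic.Int64
	received  atomic.Int64
}

// copyLayer bumps both counters through a single closure so they always
// stay in tandem, whatever path the loop takes.
func copyLayer(c *counters, r io.Reader, buf []byte) error {
	update := func(n int) {
		c.received.Add(int64(n))
		c.completed.Add(int64(n))
	}

	for {
		n, err := r.Read(buf)
		update(n) // one update site per read
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}
```
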
Daniel Hiltgen
93656e562d Integration test improvements (#9654)
Add some new test coverage for various model architectures,
and switch from orca-mini to the small llama model.
2025-12-29 06:37:38 -06:00
Daniel Hiltgen
26b6899bdf Give tests more time to run (#10306)
Fix flaky failures on Windows
2025-12-29 06:37:38 -06:00
Michael Yang
c6e2cf38b8 fix write gguf padding 2025-12-29 06:37:37 -06:00
Blake Mizerany
80e61501b3 cmd: add retry/backoff (#10069)
This commit adds retry/backoff to the registry client for pull requests.

Also, revert progress indication to match original client's until we can
"get it right."

Also, make WithTrace wrap existing traces instead of clobbering them.
This allows clients to compose traces.
2025-12-29 06:37:37 -06:00
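
A minimal retry-with-exponential-backoff sketch; the attempt count, delays, and which errors the ollama client actually retries are not shown here, so treat the numbers as placeholders.

```go
package sketch

import (
	"context"
	"time"
)

// withRetry runs do up to attempts times, roughly doubling the wait between
// failed attempts, and returns the last error if every attempt fails.
func withRetry(ctx context.Context, attempts int, do func() error) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = do(); err == nil {
			return nil
		}
		select {
		case <-time.After(delay):
			delay *= 2 // exponential backoff between attempts
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err
}
```
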
Jesse Gross
abb8f89af9 ggml: Free ggml_backend_buffer_t when releasing buffer
When ggml_backend_buffer_free() is called, the device memory
is released, but not all backends consistently release the actual
ggml_backend_buffer_t in system RAM, causing a memory leak.

Bug #10040
2025-12-29 06:37:37 -06:00
Devon Rifkin
52c65b0d68 server: add OpenAI-Beta header to CORS safelist
alphabetized the compat list and then added a single header

fixes: #9801
2025-12-29 06:37:34 -06:00
Devon Rifkin
378d3210dc docs: update some response code blocks to json5
This is to prevent rendering bright red comments indicating invalid JSON when the comments are just supposed to be explanatory
2025-04-14 17:09:06 -07:00
CYJiang
64a9cc8f05 cmd: add missing file close in tests (#10179) 2025-04-14 07:49:41 -04:00
Jesse Gross
f50d691254 ggml: Fix memory leak on input tensors
For every forward pass through the model, we need to allocate input
tensors: tokens, images, positions, outputs and masks. These get
allocated in system memory.

However, when we close the context that the tensors were allocated
through, the metadata gets freed but the actual backend memory does
not. This results in a significant memory leak.

This makes it so that all the memory allocated through a context
gets freed when it is closed.

Fixes #10040
2025-04-11 11:13:22 -07:00
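
A sketch of the ownership model the fix moves to, with hypothetical types (not the actual ggml bindings): every allocation made through a context is recorded so that closing the context releases the backend memory, not just the Go-side metadata.

```go
package sketch

type buffer struct {
	free func() // releases the backend allocation
}

type Context struct {
	buffers []*buffer
}

// Alloc records the buffer so Close can release it later.
func (c *Context) Alloc(free func()) *buffer {
	b := &buffer{free: free}
	c.buffers = append(c.buffers, b)
	return b
}

// Close releases every allocation made through this context.
func (c *Context) Close() {
	for _, b := range c.buffers {
		b.free()
	}
	c.buffers = nil
}
```
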
Jesse Gross
34c3b68fc8 ggml: Don't allocate CPU buffers as CUDA Host buffers
Allocating (and in particular, freeing) memory from CUDA host buffers
is expensive and can cause a significant performance hit if we do
it for every token. Using normal system memory avoids this issue
and also gives the OS more flexibility to manage it.

There is no performance impact from this patch directly (either
positive or negative) but it makes a difference once we start
freeing memory correctly.
2025-04-11 11:13:22 -07:00
Jesse Gross
f33ccd5d27 ggml: Use pointer receivers for Context
Context is currently mixed between pointer and value receivers. Change
this to be all pointer receivers so we don't have to reason about
whether the things we are updating in the struct will be retained.
2025-04-11 11:13:22 -07:00
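
A generic Go illustration of the receiver issue described above (the type here is illustrative, not ollama's ggml Context): a value receiver mutates a copy and the change is silently lost, while a pointer receiver updates the struct the caller actually holds.

```go
package sketch

type Context struct {
	nodes int
}

// value receiver: the increment happens on a copy, so the caller's Context
// is unchanged afterwards.
func (c Context) addNodeByValue() { c.nodes++ }

// pointer receiver: the increment is visible to the caller, so there is no
// need to reason about whether the update is retained.
func (c *Context) addNodeByPointer() { c.nodes++ }
```
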