Commit Graph

194 Commits

Author SHA1 Message Date
Gabe Goodhart 34ff84df43 fix: Use C++17 and include vendor for Go wrapper modules
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:23:27 -06:00
Gabe Goodhart d395132510 fix: Add sync'ed stb vendored header
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:17:23 -06:00
Gabe Goodhart 16c116c2b7 fix: Add missing stb to llama.cpp rsync-filter
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:16:58 -06:00
Gabe Goodhart 58300273f4 fix: Apply patch for mtmd_input_text
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:09:48 -06:00
Gabe Goodhart f358dd5a1c fix: Use mtmd_helper to correctly load the bitmap for the image
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:09:05 -06:00
Gabe Goodhart 7334a0ea07 chore: Ignore *.patched in the patch directory
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:08:42 -06:00
Gabe Goodhart 1664d52be6 fix: Add patch for mtmd_input_text
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:08:29 -06:00
Gabe Goodhart 3d70237fd1 fix: Update llama.go to use mtmd instead of clip/llava
It's _very_ possible that this is broken!

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:06:47 -06:00
Gabe Goodhart fa54a3cf3a fix: Add missing include in sampling_ext.cpp
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:06:40 -06:00
Gabe Goodhart d0fd9e5aa2 fix: Remove mtmd main cpp files
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:06:31 -06:00
Gabe Goodhart 1cd9352cc3 fix: Narrow llama.cpp rsync-filter to not include mtmd main tool cpp files
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:06:18 -06:00
Gabe Goodhart 414a097372 fix: Add files missing from sync
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:05:25 -06:00
Gabe Goodhart 424e05c20e fix: Update rsync-filter for all moved/new/removed files
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:04:51 -06:00
Gabe Goodhart 2613f5da2d feat: Sync llama.cpp and ggml
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 17:01:24 -06:00
Gabe Goodhart 73d089bb90 feat: Update all patches
There are a number that are no longer needed at all:

- 0003-embeddings: Embeddings entirely overhauled on master
- 0008-ensure-KV-cache-is-fully-defragmented: KV caching entirely
    overhauled on master
- 0019-metal-add-mean-kernel-14267: Merged upstream
- 0020-CUDA-add-mean-operation-14313: Merged upstream

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-27 16:57:05 -06:00
Michael Yang 73b642e6f3
add new gemma model (#11204)
* update patches

* cherry pick metal mean kernel

* cherry pick cuda mean kernel

* gemma3n
2025-06-25 21:47:09 -07:00
Daniel Hiltgen 1c6669e64c
Re-remove cuda v11 (#10694)
* Re-remove cuda v11

Revert the revert - drop v11 support, requiring drivers newer than Feb 2023

This reverts commit c6bcdc4223.

* Simplify layout

With only one version of the GPU libraries, we can simplify things somewhat. (Jetsons still require special handling)

* distinct sbsa variant for linux arm64

This avoids accidentally trying to load the sbsa CUDA libraries on
a Jetson system, which results in crashes.

* temporarily prevent rocm+cuda mixed loading
2025-06-23 14:07:00 -07:00
Jeffrey Morgan 6baf1e31e2
Revert "Revert "ggml: Export GPU UUIDs" (#11115)" (#11117)
Reverts PR #11115. The original change was mistakenly reverted instead of #10822.
2025-06-18 07:30:49 -07:00
Jeffrey Morgan ed567ef43b
Revert "ggml: Export GPU UUIDs" (#11115)
This reverts commit aaa7818000.
2025-06-18 05:45:00 -07:00
Jesse Gross aaa7818000 ggml: Export GPU UUIDs
This enables matching up devices and information reported by the backend
with system management libraries such as nvml to get accurate free
memory reporting.
2025-05-29 14:01:26 -07:00
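
This matching can be sketched in Go; the snippet below assumes the go-nvml bindings and is an illustration, not the actual integration:

    import (
        "fmt"

        "github.com/NVIDIA/go-nvml/pkg/nvml"
    )

    // freeMemoryByUUID reports free memory for the device whose UUID
    // matches one exported by the backend.
    func freeMemoryByUUID(uuid string) (uint64, error) {
        if ret := nvml.Init(); ret != nvml.SUCCESS {
            return 0, fmt.Errorf("nvml init failed: %v", ret)
        }
        defer nvml.Shutdown()

        dev, ret := nvml.DeviceGetHandleByUUID(uuid)
        if ret != nvml.SUCCESS {
            return 0, fmt.Errorf("no device with UUID %s: %v", uuid, ret)
        }
        mem, ret := dev.GetMemoryInfo()
        if ret != nvml.SUCCESS {
            return 0, fmt.Errorf("memory info failed: %v", ret)
        }
        return mem.Free, nil
    }
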
Parth Sareen 884d26093c
llama: add minimum memory for grammar (#10820) 2025-05-22 18:53:31 -07:00
Jesse Gross 6db8a3771c ggml: Report graph memory for failed allocations
GGML has a function to report the allocated size of a backend buffer.
However, this returns 0 if we tried to allocate a buffer and it failed.
For memory management purposes, it's important to know how much we were
trying to allocate. This extends the API to report attempted sizes for
all buffers and whether each allocation succeeded.
2025-05-22 14:38:09 -07:00
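
The reporting pattern can be illustrated with a short Go sketch; the names here (bufferAlloc, tryAlloc) are hypothetical stand-ins, not the actual GGML API:

    // Record the requested size before allocating so it can still be
    // reported if the allocation fails.
    type bufferAlloc struct {
        requested uint64 // size we attempted to allocate
        allocated uint64 // actual size; 0 if the allocation failed
    }

    func allocate(size uint64, tryAlloc func(uint64) bool) bufferAlloc {
        a := bufferAlloc{requested: size}
        if tryAlloc(size) { // stand-in for the backend allocator
            a.allocated = size
        }
        return a
    }
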
DarkCaster e6a800ca11
llama: fix incorrect initialization of C.struct_common_sampler_cparams.penalty_present (#10779) 2025-05-20 10:41:15 -07:00
Michael Yang 333e360422
model: handle multiple eos tokens (#10577)
* get eos_token_id from generation_config.json

* refactor

* include both ids and strings in trace

* comments

* remove special case for gemma3 special vocab (#10743)
2025-05-16 13:40:23 -07:00
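
Handling multiple eos tokens amounts to set membership rather than a single comparison; a minimal Go sketch (the ids shown are examples only, not taken from any specific model):

    // eosTokens can be built from generation_config.json, where
    // eos_token_id may hold either a single id or a list of ids.
    var eosTokens = map[int32]struct{}{
        1:   {}, // example id
        107: {}, // example id
    }

    func isEOS(id int32) bool {
        _, ok := eosTokens[id]
        return ok
    }
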
Bruce MacDonald 0aa8b371dd
model: add Qwen2.5-VL support (#10385) 2025-05-13 20:58:02 -07:00
Michael Yang 23125648b8
chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
Parth Sareen 8cc33f4c2b
llama: fix memory leak for grammar (#10696) 2025-05-13 15:39:27 -07:00
Jeffrey Morgan f46df4e5d2
llama: fix defrag patch to defragment when no slots are available (#10695) 2025-05-13 14:02:08 -07:00
Jeffrey Morgan 4b903f088a
llama: fix crash on snowflake embedding model (#10690) 2025-05-13 13:11:11 -07:00
Jeffrey Morgan 0cefd46f23
llama: update to commit de4c07f93 (#10655) 2025-05-12 12:17:26 -07:00
frob ecf14a220f
llama: allocate grammar buffer based on schema length (#10649) 2025-05-10 11:57:30 -07:00
Jeffrey Morgan fa9973cd7f
api: remove unused sampling parameters (#10581) 2025-05-08 08:31:08 -07:00
Daniel Hiltgen 424810450f
Move quantization to new backend (#10363)
* Move quantization logic to GGML via new backend

This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.

* Remove "add model quantizations"

This is no longer needed now that quantization is implemented in Go+GGML code directly.
2025-05-06 11:20:48 -07:00
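
The division of labor might look roughly like the Go sketch below; all names are hypothetical stand-ins, with ggmlQuantize standing in for the cgo call into GGML:

    type tensor struct {
        Name string
        Data []float32
    }

    // chooseType is the model-aware part: some tensors (for example the
    // output projection) are often kept at higher precision. Hypothetical.
    func chooseType(t tensor, target string) string {
        if t.Name == "output.weight" {
            return "Q6_K" // example only
        }
        return target
    }

    // ggmlQuantize stands in for the call into GGML's quantization code.
    func ggmlQuantize(t tensor, typ string) { /* cgo in reality */ }

    func quantizeModel(tensors []tensor, target string) {
        for _, t := range tensors {
            ggmlQuantize(t, chooseType(t, target))
        }
    }
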
Jeffrey Morgan 3b2d2c8326
api: remove unused or unsupported api options (#10574)
Some options listed in api/types.go are not supported in
newer models, or have been deprecated in the past. This is
the first of a series of PRs to clean up the API options.
2025-05-05 14:54:40 -07:00
Jeffrey Morgan 913905028b
all: fix cgo compiler warnings on windows (#10563) 2025-05-05 08:02:39 -07:00
Jesse Gross c2f5d6662b ollamarunner: Re-enable worst case graph preallocation.
Worst case graph preallocation was disabled by a27462b
"ollamarunner: Temporarily disable worst case graph preallocation"
since it caused crashes with large batches when not using the GPU.

This backports upstream llama.cpp commit f057808
"ggml: Don't assert fail when tensor data changes (#13222)", which
fixes the underlying bug and allows reverting the previous workaround.
2025-05-02 12:22:47 -07:00
Jeffrey Morgan 8dd12c873d
llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
Jeffrey Morgan e9e5f61c45
llama: update to commit 2016f07b (#10352) 2025-04-24 17:26:02 -07:00
Parth Sareen a53d744b01
llama: remove model loading for grammar (#10096) 2025-04-24 11:51:19 -07:00
Jeffrey Morgan dc264be6ff
ml: add missing cmake property and remove additional CMakeLists.txt (#10310) 2025-04-16 18:56:29 -07:00
Jeffrey Morgan 943464ccb8
llama: update to commit 71e90e88 (#10192) 2025-04-16 15:14:01 -07:00
Jesse Gross ccb7eb8135 ggml: Free ggml_backend_buffer_t when releasing buffer
When ggml_backend_buffer_free() is called, the device memory
is released, but not all backends consistently release the
ggml_backend_buffer_t structure itself in system RAM, causing a memory leak.

Bug #10040
2025-04-15 15:29:58 -07:00
Bruce MacDonald 6bd0a983cd model: support for mistral-small in the ollama runner
Mistral is a popular research lab producing open-source models. This updates
the forward pass of llama-architecture models to support both llama and
mistral models by accounting for the additional metadata present in mistral
models and finding the correct dimensions for the output projection.
2025-04-03 16:57:36 -07:00
Bruce MacDonald 66b2539238
runner: clear cache when shift is not possible (#9433)
Clear the KV cache when the shift operation is not supported by the model.
Adds a KvCacheCanShift() check to handle models that can't perform cache
shifts, falling back to a full cache clear while preserving the logical token
history to maintain expected behavior when the context window fills up.
2025-03-31 12:54:45 -07:00
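
A minimal Go sketch of this fallback, using hypothetical interfaces rather than the real runner types:

    // Hypothetical interfaces for illustration only.
    type model interface{ KvCacheCanShift() bool }

    type kvCache interface {
        Shift(discard int) // drop the oldest entries in place
        Clear()            // drop all cached state
    }

    // makeRoom frees space when the context window fills up: shift when
    // the model supports it, otherwise clear the cache entirely. The
    // logical token history is kept either way so the prompt can be
    // re-processed on the next decode.
    func makeRoom(m model, cache kvCache, discard int) {
        if m.KvCacheCanShift() {
            cache.Shift(discard)
            return
        }
        cache.Clear()
    }
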
saman-amd ead27aa9fe
Add gfx1200 & gfx1201 support on linux (#9878) 2025-03-27 07:35:19 -07:00
Patrick Devine ef378ad673
gemma3 quantization (#9776) 2025-03-14 17:41:07 -07:00
Michael Yang 9e4642e9b3 ollama debug tensor 2025-03-11 14:49:19 -07:00
Jeffrey Morgan e093db92c4
sample: temporarily use grammars for constrained generation in new engine (#9586) 2025-03-10 16:17:39 +01:00
Jeffrey Morgan 4289c74359
llama: fix kv loading on snowflake-arctic-embed models (#9536) 2025-03-07 09:25:34 -08:00
Michael Yang 05a01fdecb ml/backend/ggml: consolidate system info logging
- Output backend system info when initializing the backend. This ensures
  the information is always present without requiring an explicit call
- Convert to structured logging
- Enumerate devices rather than backends, since devices are ordered
- Track device indices grouped by device name
2025-03-04 15:14:31 -08:00
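
Go's log/slog supports this kind of structured, per-device output; a small sketch (the device fields are assumptions, not the actual types):

    import "log/slog"

    type deviceInfo struct {
        Name    string // assumed field
        Backend string // assumed field
    }

    // Devices are enumerated in order, so the index is stable.
    func logSystemInfo(devices []deviceInfo) {
        for i, d := range devices {
            slog.Info("system info", "index", i, "device", d.Name, "backend", d.Backend)
        }
    }
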