ollama/llama/patches
Michael Yang dcfb7a105c
next build (#8539)
* add build to .dockerignore

* test: only build one arch

* add build to .gitignore

* fix ccache path

* filter amdgpu targets

* only filter if autodetecting

* Don't clobber gpu list for default runner

This ensures the GPU-specific environment variables are set properly
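
A minimal Go sketch of the idea, assuming a hypothetical `visibleDevicesEnv` helper and CUDA-style device selection; it only shows that the default runner leaves the caller's device list untouched instead of overwriting it:

```go
package main

import (
	"fmt"
	"strings"
)

// visibleDevicesEnv appends a device-visibility variable only when an
// explicit GPU list was selected. For the default runner (no selection)
// the caller's environment is returned unchanged, so an existing
// CUDA_VISIBLE_DEVICES value is not clobbered. Hypothetical helper for
// illustration; not the project's actual API.
func visibleDevicesEnv(env []string, selected []string) []string {
	if len(selected) == 0 {
		return env
	}
	return append(env, fmt.Sprintf("CUDA_VISIBLE_DEVICES=%s", strings.Join(selected, ",")))
}

func main() {
	env := []string{"CUDA_VISIBLE_DEVICES=0,1"}
	fmt.Println(visibleDevicesEnv(env, nil))           // default runner: unchanged
	fmt.Println(visibleDevicesEnv(env, []string{"1"})) // explicit selection appended
}
```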

* explicitly set CXX compiler for HIP

* Update build_windows.ps1

This isn't complete, but it's close. Dependencies are missing, and it only builds the "default" preset.

* build: add ollama subdir

* add .git to .dockerignore

* docs: update development.md

* update build_darwin.sh

* remove unused scripts

* llm: add cwd and build/lib/ollama to library paths
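
A rough Go sketch of that lookup order, assuming a hypothetical `libraryPaths` helper; the actual search order in the llm package may differ:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// libraryPaths returns candidate directories for the runner's shared
// libraries: the executable's directory, the current working directory,
// and a local build tree (build/lib/ollama). Hypothetical helper for
// illustration; names and ordering are assumptions.
func libraryPaths() []string {
	var paths []string
	if exe, err := os.Executable(); err == nil {
		paths = append(paths, filepath.Dir(exe))
	}
	if cwd, err := os.Getwd(); err == nil {
		paths = append(paths, cwd, filepath.Join(cwd, "build", "lib", "ollama"))
	}
	return paths
}

func main() {
	fmt.Println(libraryPaths())
}
```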

* default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
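
A minimal sketch of the fallback, assuming it happens when the runner's environment is assembled on macOS; the placement and function name are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

// runnerEnv copies the current environment and, on macOS, defaults
// DYLD_LIBRARY_PATH to the value of LD_LIBRARY_PATH when it is unset,
// so dylib lookup follows the same directories. Hypothetical placement;
// the actual code may apply this elsewhere.
func runnerEnv() []string {
	env := os.Environ()
	if runtime.GOOS == "darwin" && os.Getenv("DYLD_LIBRARY_PATH") == "" {
		if ld := os.Getenv("LD_LIBRARY_PATH"); ld != "" {
			env = append(env, "DYLD_LIBRARY_PATH="+ld)
		}
	}
	return env
}

func main() {
	fmt.Println(len(runnerEnv()), "environment entries")
}
```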

* add additional cmake output vars for msvc

* interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12

* remove unnecessary filepath.Dir, cleanup

* add hardware-specific directory to path
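
A sketch of appending a hardware-specific subdirectory such as lib/ollama/cuda_v12 to the search path, assuming a hypothetical `withVariantDir` helper; how the variant is detected is out of scope here:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// withVariantDir appends libDir/variant (for example "cuda_v12" or
// "rocm") to the search path when that directory exists. Hypothetical
// helper for illustration only.
func withVariantDir(paths []string, libDir, variant string) []string {
	dir := filepath.Join(libDir, variant)
	if info, err := os.Stat(dir); err == nil && info.IsDir() {
		paths = append(paths, dir)
	}
	return paths
}

func main() {
	paths := withVariantDir(nil, filepath.Join("lib", "ollama"), "cuda_v12")
	fmt.Println(paths)
}
```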

* use absolute server path

* build: linux arm

* cmake install targets

* remove unused files

* ml: visit each library path once
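
A generic Go sketch of visiting each library path once by de-duplicating the list before walking it; the real code may normalize or resolve paths first:

```go
package main

import "fmt"

// dedupe keeps the first occurrence of each path so each directory is
// visited only once during library discovery. Illustrative only.
func dedupe(paths []string) []string {
	seen := make(map[string]bool, len(paths))
	var out []string
	for _, p := range paths {
		if !seen[p] {
			seen[p] = true
			out = append(out, p)
		}
	}
	return out
}

func main() {
	fmt.Println(dedupe([]string{"/usr/lib", "/opt/lib", "/usr/lib"}))
}
```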

* build: skip cpu variants on arm

* build: install cpu targets

* build: fix workflow

* shorter names

* fix rocblas install

* docs: clean up development.md

* consistent build dir removal in development.md

* silence -Wimplicit-function-declaration build warnings in ggml-cpu

* update readme

* update development readme

* llm: update library lookup logic now that there is one runner (#8587)

* tweak development.md

* update docs

* add windows cuda/rocm tests

---------

Co-authored-by: jmorganca <jmorganca@gmail.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
2025-01-29 15:03:38 -08:00
0001-cuda.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0002-pretokenizer.patch llama: update vendored code to commit 46e3556 (#8308) 2025-01-08 11:22:01 -08:00
0003-embeddings.patch llama: update vendored code to commit 46e3556 (#8308) 2025-01-08 11:22:01 -08:00
0004-clip-unicode.patch llama: update vendored code to commit 46e3556 (#8308) 2025-01-08 11:22:01 -08:00
0005-solar-pro.patch llama: update vendored code to commit 46e3556 (#8308) 2025-01-08 11:22:01 -08:00
0006-conditional-fattn.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0007-add-mllama-support.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0008-add-unpad-operator.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0009-fix-deepseek-deseret-regex.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0010-Maintain-ordering-for-rules-for-grammar.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0011-fix-missing-arg-in-static-assert-on-windows.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0012-llama-Ensure-KV-cache-is-fully-defragmented.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0013-re-enable-gpu-for-clip.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0014-sort-devices-by-score.patch next build (#8539) 2025-01-29 15:03:38 -08:00
0015-add-phony-target-ggml-cpu-for-all-cpu-variants.patch next build (#8539) 2025-01-29 15:03:38 -08:00