Commit Graph

3835 Commits

Author SHA1 Message Date
Antoine Viallon 449e5c07ae Sync vendored ggml to add Vulkan support 2025-02-04 11:51:17 +01:00
pufferffish 2d443b3dd6 Add vulkan build patch from @jmorganca 2025-02-03 14:46:59 +00:00
pufferffish 582d41e002 Merge github.com:ollama/ollama into vulkan 2025-02-03 14:44:30 +00:00
Davide Bertoni ad22ace439
docs: add missing json and shell code blocks in api.md (#8766) 2025-02-02 13:12:55 -08:00
Anıl Kaynar f4321a421c
readme: add MinimalNextOllamaChat to community integrations (#8767) 2025-02-02 12:56:10 -08:00
Michael Yang 475333d533 fix docker build-args
The env context is not accessible from job.*.strategy. Since the value is already in the
environment, just tell docker to use the environment variable[1]

[1]: https://docs.docker.com/reference/cli/docker/buildx/build/#build-arg
2025-01-31 14:56:02 -08:00
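A minimal sketch of the technique this commit message describes, assuming the value is exported into the job environment (the variable name below is illustrative, not taken from the actual workflow): when --build-arg is passed only a name, docker resolves the value from the local environment instead of requiring it inline.

```shell
# Illustrative only: export the value in the job environment...
export GOFLAGS="'-ldflags=-w -s'"

# ...then pass the build-arg by name alone; docker buildx reads it from $GOFLAGS.
docker buildx build --build-arg GOFLAGS .
```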
Michael Yang 39fd89308c build: set CFLAGS=-O3 specifically for cpu.go 2025-01-31 10:25:39 -08:00
Michael Yang 548a9f56a6 Revert "cgo: use O3"
This reverts commit bea1f1fac6.
2025-01-31 10:25:39 -08:00
Michael Yang 3f0cb36bdb build: set goflags in linux release 2025-01-30 13:07:32 -08:00
Michael Yang bea1f1fac6 cgo: use O3 2025-01-30 12:21:50 -08:00
Jeffrey Morgan 5d75d837ef
discover: fix default LibOllamaPath value (#8702) 2025-01-30 12:21:38 -08:00
pufferfish 3839e8f22d
Merge pull request #4 from tomaThomas/vulkan
Fix variable name
2025-01-30 14:40:39 +00:00
Parth Sareen 711648c9bb
docs: update api.md to note that streaming with tools is enabled (#8676) 2025-01-29 15:14:30 -08:00
Michael Yang dcfb7a105c
next build (#8539)
* add build to .dockerignore

* test: only build one arch

* add build to .gitignore

* fix ccache path

* filter amdgpu targets

* only filter if autodetecting

* Don't clobber gpu list for default runner

This ensures the GPU-specific environment variables are set properly

* explicitly set CXX compiler for HIP

* Update build_windows.ps1

This isn't complete, but is close.  Dependencies are missing, and it only builds the "default" preset.

* build: add ollama subdir

* add .git to .dockerignore

* docs: update development.md

* update build_darwin.sh

* remove unused scripts

* llm: add cwd and build/lib/ollama to library paths

* default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS

* add additional cmake output vars for msvc

* interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12

* remove unnecessary filepath.Dir, cleanup

* add hardware-specific directory to path

* use absolute server path

* build: linux arm

* cmake install targets

* remove unused files

* ml: visit each library path once

* build: skip cpu variants on arm

* build: install cpu targets

* build: fix workflow

* shorter names

* fix rocblas install

* docs: clean up development.md

* consistent build dir removal in development.md

* silence -Wimplicit-function-declaration build warnings in ggml-cpu

* update readme

* update development readme

* llm: update library lookup logic now that there is one runner (#8587)

* tweak development.md

* update docs

* add windows cuda/rocm tests

---------

Co-authored-by: jmorganca <jmorganca@gmail.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
2025-01-29 15:03:38 -08:00
Xiaofu Huang 2ef3c803a1
readme: add AI Toolkit for VSCode to community integrations (#8604) 2025-01-27 00:36:23 -08:00
tomaThomas 0d277d32db
Fix variable name 2025-01-25 11:23:25 +01:00
Matěj Štágl 453e4d090b
readme: add LlmTornado to community integrations (#8551) 2025-01-25 01:04:07 -08:00
Daniel Jalkut ca2f9843c8
docs: remove reference to the deleted examples folder (#8524) 2025-01-22 22:52:15 -08:00
frob 294b6f5a22
docs: remove tfs_z option from documentation (#8515) 2025-01-21 09:28:59 -08:00
EndoTheDev 7bb356c680
docs: update suspend header in gpu.md (#8487) 2025-01-19 18:45:35 -08:00
pufferfish f7e40b587f
Merge branch 'ollama:main' into vulkan 2025-01-19 19:57:36 +00:00
pufferfish 481ab07abe
Merge pull request #3 from yeongbba/vulkan
Making amd gpu work on arm architecture with vulkan
2025-01-19 06:52:27 +00:00
yeongbba 2bf59a512b add aarch64 lines in vulkanGlobs and capLinuxGlobs 2025-01-19 12:51:10 +09:00
yeongbba 9ac01e88dd Merge remote-tracking branch 'upstream/vulkan' into vulkan 2025-01-19 12:49:38 +09:00
yeongbba 6d7579b567 add x86_64 lines in VulkanGlobs and capLinuxGlobs 2025-01-19 12:41:08 +09:00
yeongbba 4b74cee096 making amdgpu work on arm architecture with vulkan 2025-01-19 01:30:34 +09:00
Jannik Maierhöfer 021817e59a
readme: add link to Langfuse (#8455) 2025-01-16 22:41:12 -08:00
Patrick Devine a420a453b4
fix default modelfile for create (#8452) 2025-01-16 01:14:04 -08:00
Jeffrey Morgan 42cf4db601
parser: fix parsing Modelfiles with multiple FROM commands (#8449) 2025-01-16 00:14:04 -08:00
Josh 93a8daf285
convert: import support for command-r models from safetensors (#6063)
---------

Co-authored-by: Patrick Devine <patrick@infrahq.com>
2025-01-15 16:31:22 -08:00
Gloryjaw a041b4df7c
docs: fix path to examples (#8438) 2025-01-15 11:49:12 -08:00
Patrick Devine 2539f2dbf9
Fix absolute path names + gguf detection (#8428) 2025-01-14 19:01:24 -08:00
Jeffrey Morgan 61676fb506
llama: move grammar tests to llama_test.go (#8411) 2025-01-14 12:55:45 -08:00
Bruce MacDonald f6f3713001
convert: qwen2 from safetensors (#8408)
Add native support for converting Qwen2 family models (including Qwen2.5)
from safetensors to gguf format so they can be run.
2025-01-14 10:34:37 -08:00
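A hedged usage sketch of the conversion path this commit enables, following Ollama's safetensors import flow; the model directory and model name below are hypothetical.

```shell
# Point a Modelfile at a local directory of Qwen2-family safetensors weights
# (hypothetical path), then let `ollama create` convert it to gguf on import.
cat > Modelfile <<'EOF'
FROM /path/to/Qwen2.5-7B-Instruct
EOF
ollama create qwen2.5-local -f Modelfile
```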
Steve Berdy a30f347201
readme: add LangChain for .NET to community integrations (#8352) 2025-01-14 09:37:35 -08:00
Jeffrey Morgan 74ea4fb604
remove .prettierrc.json (#8413) 2025-01-14 09:30:34 -08:00
Jeffrey Morgan 6982e9cc96
readme: remove link to missing page 2025-01-13 18:56:31 -08:00
Patrick Devine ab39872cb4
add new create api doc (#8388) 2025-01-13 17:30:24 -08:00
Parth Sareen 84a2314463
examples: remove codified examples (#8267) 2025-01-13 11:26:22 -08:00
Jeffrey Morgan 17fcdea698
readme: move discord link 2025-01-12 22:45:47 -08:00
pufferffish 9ad63a747b fix conflict 2025-01-12 01:00:41 +00:00
Patrick Devine 32bd37adf8
make the modelfile path relative for `ollama create` (#8380) 2025-01-10 16:14:08 -08:00
Michael Yang 9446c2c902
Merge pull request #8196 from ollama/mxyng/gods-v2
chore: upgrade to gods v2
2025-01-10 13:50:11 -08:00
Jeffrey Morgan 9aa141d023
readme: remove discord badge image for now 2025-01-09 22:02:18 -08:00
Patrick Devine 8bccae4f92
show a more descriptive error in the client if it is newer than the server (#8351) 2025-01-09 10:12:30 -08:00
isamu arimoto 6ae2adc1af
openai: accept additional headers to fix CORS errors (#8343) 2025-01-08 11:28:11 -08:00
Jeffrey Morgan 1deafd8254
llama: update vendored code to commit 46e3556 (#8308) 2025-01-08 11:22:01 -08:00
Michael 57f038ec7b
readme: add phi4 model (#8350) 2025-01-08 11:21:39 -08:00
frob cdf3a181dc
Add CUSTOM_CPU_FLAGS to Dockerfile. (#8284)
* Add CUSTOM_CPU_FLAGS.

* fix golangci-lint error.

---------

Co-authored-by: Richard Lyons <rick@frob.com.au>
2025-01-06 09:17:19 -08:00
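A hedged usage sketch for the new CUSTOM_CPU_FLAGS build argument; the flag value and image tag below are assumptions, not taken from the commit.

```shell
# Assumed example value; consult the Dockerfile for the expected flag format.
docker build --build-arg CUSTOM_CPU_FLAGS="avx avx2" -t ollama:custom-cpu .
```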
Ubaldo Porcheddu 3919f4ba3d
llama: fix runner api example url in README.md (#8307) 2025-01-04 15:45:16 -08:00