ollama/convert
Bruce MacDonald c1f9bcb4dd restructure
image processing

Update model.go

Update model.go

Update model.go

no projector

no projector

vision model scaffold

...

...

wip

...

rebase

fix patch merger

tidy

...

Update model_vision.go

server: do not attempt to parse offset file as gguf

This logic was causing issues for me when importing a gguf that had some padding at the end of the file. The valid gguf would be read, but then the server would try to parse the bytes at the trailing offset as a second gguf file, which does not seem right.
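
The padding problem described above can be detected with a cheap magic-byte check before attempting another decode. A minimal sketch in Go, assuming the caller already has the open file and the end offset of the gguf it just decoded; the package and helper names here are hypothetical and are not the actual ollama server code:

```go
package gguf // illustrative package name, not the real server package

import (
	"bytes"
	"errors"
	"io"
	"os"
)

// ggufMagic is the 4-byte magic ("GGUF") that begins every GGUF file.
var ggufMagic = []byte("GGUF")

// looksLikeGGUF reports whether the bytes starting at offset begin with the
// GGUF magic, i.e. whether they plausibly form a second GGUF rather than
// trailing zero padding. Hypothetical helper for illustration only.
func looksLikeGGUF(f *os.File, offset int64) (bool, error) {
	magic := make([]byte, 4)
	if _, err := f.ReadAt(magic, offset); err != nil {
		if errors.Is(err, io.EOF) {
			// Fewer than 4 bytes remain after the first model: padding, not a model.
			return false, nil
		}
		return false, err
	}
	return bytes.Equal(magic, ggufMagic), nil
}
```

A caller that has just finished decoding one gguf could consult such a check and skip the remaining bytes when it returns false, instead of attempting a second decode on what is only padding.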

Update process_image_test.go

apply norm

prompt processing

prompt processing

fix post tokenize

fix gguf padding + populate the split patch embeddings

...

...

another shot at patch embeddings

...

patch embedding

Update model_vision.go

split pixels
Committed 2025-05-12 13:49:41 -07:00
Name | Last commit | Last commit date
sentencepiece | chore(all): replace instances of interface with any (#10067) | 2025-04-02 09:44:27 -07:00
testdata | convert: import support for command-r models from safetensors (#6063) | 2025-01-15 16:31:22 -08:00
convert.go | restructure | 2025-05-12 13:49:41 -07:00
convert_bert.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_commandr.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_gemma.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_gemma2.go | next ollama runner (#7913) | 2025-02-13 16:31:21 -08:00
convert_gemma2_adapter.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_gemma3.go | fix: change default context size for gemma3 (#9744) | 2025-03-13 13:59:19 -07:00
convert_llama.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_llama4.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_llama_adapter.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_mistral.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_mixtral.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_phi3.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_qwen2.go | Move quantization to new backend (#10363) | 2025-05-06 11:20:48 -07:00
convert_qwen25vl.go | restructure | 2025-05-12 13:49:41 -07:00
convert_test.go | file close check and close. (#10554) | 2025-05-04 15:37:59 -07:00
reader.go | llama4 | 2025-04-25 16:59:20 -07:00
reader_safetensors.go | llama4 | 2025-04-25 16:59:20 -07:00
reader_torch.go | llama4 | 2025-04-25 16:59:20 -07:00
sentencepiece_model.proto | all: fix typos in documentation, code, and comments (#7021) | 2024-12-10 12:58:06 -08:00
tokenizer.go | convert: qwen2 from safetensors (#8408) | 2025-01-14 10:34:37 -08:00
tokenizer_spm.go | temporary work around for converting spm | 2025-03-11 14:49:18 -07:00
tokenizer_test.go | fix unmarshaling merges | 2024-12-04 09:21:56 -08:00