image processing
Update model.go
no projector
vision model scaffold
wip
rebase
fix patch merger
tidy
Update model_vision.go
server: do not attempt to parse offset file as gguf
This logic was causing issues for me when importing a gguf that had some padding at the end of the file: the valid gguf would be read, but the trailing offset would then be parsed as a separate gguf file, which does not seem right.
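A minimal sketch of the idea, using a hypothetical helper rather than the server's actual code: once the gguf has been decoded, the bytes remaining at the trailing offset are checked for being zero padding instead of being handed back to the gguf decoder.

```go
package sketch

import (
	"io"
	"os"
)

// hasOnlyPadding reports whether everything from offset to the end of the file
// is zero bytes, i.e. alignment padding rather than a second embedded gguf.
func hasOnlyPadding(f *os.File, offset int64) (bool, error) {
	if _, err := f.Seek(offset, io.SeekStart); err != nil {
		return false, err
	}

	rest, err := io.ReadAll(f)
	if err != nil {
		return false, err
	}

	for _, b := range rest {
		if b != 0 {
			return false, nil
		}
	}

	return true, nil
}
```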
Update process_image_test.go
apply norm
prompt processing
fix post tokenize
fix gguf padding + populate the split patch embeddings
another shot at patch embeddings
patch embedding
Update model_vision.go
split pixels
This commit refactors the Rotary Position Embedding (RoPE) implementation across the codebase to use a structured configuration approach instead of individual parameters.
Key changes:
- Add new RoPEConfig struct with fields for dimension, type, base frequency, and scaling
- Add RopeType enum to formalize different RoPE implementation variants
- Add YarnConfig struct and related configuration for YaRN (Yet Another RoPE extensioN) context extension
- Update RoPE method signature across all tensor interfaces and implementations
- Refactor all model implementations (llama, gemma2, gemma3, mllama) to use the new configuration structure
This change improves code organization, makes the RoPE configuration more explicit, and provides better support for different RoPE variants and context extension methods.
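As a rough illustration of the shape of the new configuration (field and constant names here are assumptions, not necessarily the ones in the commit):

```go
package rope

// RopeType formalizes the RoPE variant a model requests.
type RopeType int

const (
	RopeTypeStandard RopeType = iota // original rotary embedding
	RopeTypeNeoX                     // GPT-NeoX style rotation
)

// YarnConfig carries YaRN (Yet Another RoPE extensioN) context-extension settings.
type YarnConfig struct {
	OriginalContextLength uint32 // context length the model was trained with
	ExtrapolationFactor   float32
	AttentionFactor       float32
	BetaFast              float32
	BetaSlow              float32
}

// RoPEConfig groups the parameters that were previously passed to RoPE individually.
type RoPEConfig struct {
	Dim   uint32      // number of dimensions to rotate
	Type  RopeType    // which RoPE variant to apply
	Base  float32     // base frequency (theta)
	Scale float32     // frequency scaling factor
	Yarn  *YarnConfig // optional context-extension configuration
}
```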
Mistral is a popular research lab making open source models. This updates
the forward pass of llama architecture models to support both llama and
mistral models by accounting for additional metadata present in mistral
models, and finding the correct dimensions for the output projection.
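A sketch of the metadata handling under assumed names (the key and config API below are illustrative, not the real ones): prefer an explicit per-head dimension when the model's metadata provides one, and derive it otherwise, so downstream projections are sized from the dimensions the weights actually use.

```go
package sketch

// config is a stand-in for the model's key/value metadata; the key name used
// below is illustrative, not necessarily the one mistral models carry.
type config map[string]uint32

// uintOr reads an optional metadata key, falling back to a default when absent.
func (c config) uintOr(key string, fallback uint32) uint32 {
	if v, ok := c[key]; ok {
		return v
	}
	return fallback
}

// headDim prefers an explicit head-dimension key from the metadata and
// otherwise falls back to the usual embeddingLength / headCount derivation.
func headDim(c config, embeddingLength, headCount uint32) uint32 {
	return c.uintOr("attention.key_length", embeddingLength/headCount)
}
```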
Models may require that a set of inputs all be processed as part
of the same batch. For example, if an image has multiple patches
with fully connected attention between them, we should not split
the batch in the middle of an image.
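A sketch of that constraint under assumed types (not the scheduler's real ones): inputs that must stay together, such as the patches of one image, share a nonzero group ID, and batch boundaries are only placed where the group changes.

```go
package sketch

// input pairs a token with a group ID; patches of one image share a nonzero ID,
// while ordinary text tokens use 0.
type input struct {
	token   int32
	groupID int
}

// splitBatches cuts inputs into batches of at most maxBatch entries without
// splitting a run of inputs that share a nonzero group ID. A single group
// larger than maxBatch would still need special handling.
func splitBatches(inputs []input, maxBatch int) [][]input {
	var batches [][]input
	for start := 0; start < len(inputs); {
		end := start + maxBatch
		if end >= len(inputs) {
			end = len(inputs)
		} else {
			// Pull the cut point back until it no longer lands inside a group.
			for end > start+1 && inputs[end].groupID != 0 && inputs[end].groupID == inputs[end-1].groupID {
				end--
			}
		}
		batches = append(batches, inputs[start:end])
		start = end
	}
	return batches
}
```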
Fixes #9697
Softcap isn't in the whitepaper/implementation for the language model, so we should remove it. There is no discernible difference in output with it removed.
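For reference, the soft-capping being removed is the usual scaled-tanh squashing of logits; a minimal sketch of that operation:

```go
package sketch

import "math"

// softcap smoothly limits x to the range (-limit, limit) via limit * tanh(x/limit).
func softcap(x, limit float64) float64 {
	return limit * math.Tanh(x/limit)
}
```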