ollama/model/models/mllama
commit f53f4198c3 — Jesse Gross: ml: Abstract attention out of model definitions
There are two benefits to doing this:
 - Provides a library function that models can use, reducing the code
   needed for each model implementation
 - Creates a single place to drop in optimized implementations of
   attention based on the backend or other factors. One is provided for
   GGML.

On CUDA this improves token generation rate by about 3%. It does not
have a significant effect on Metal.

Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
2025-02-21 13:16:21 -08:00
File              Last commit                                           Date
imageproc.go      models: Move model into their own directory           2025-02-13 17:09:26 -08:00
imageproc_test.go models: Move model into their own directory           2025-02-13 17:09:26 -08:00
model.go          models: Prune unused outputs earlier in the forward pass  2025-02-20 14:49:47 -08:00
model_text.go     ml: Abstract attention out of model definitions       2025-02-21 13:16:21 -08:00
model_vision.go   models: Move model into their own directory           2025-02-13 17:09:26 -08:00
process_image.go  models: Move model into their own directory           2025-02-13 17:09:26 -08:00