* fix mllama convert: transform attn_gate and ffn_gate, and swap attention heads for vision models
* fix the mllama MLP gate, which was applied in the wrong place (see the gated-residual sketch below)
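For context, mllama's cross-attention layers scale each sublayer's output by a tanh-squashed learned scalar gate (attn_gate, ffn_gate) before the residual add, so the MLP-gate fix amounts to applying that scaling to the MLP output itself. A minimal sketch of the pattern, using plain float32 slices rather than the runtime's tensor type; the function name and shapes are illustrative, not the actual model.go code:

```go
package main

import (
	"fmt"
	"math"
)

// gatedResidual applies a tanh-squashed per-layer gate to a sublayer's
// output before the residual add: out = x + tanh(gate) * sublayerOut.
// The MLP-gate fix corresponds to scaling the MLP output here, rather
// than somewhere earlier in the layer.
func gatedResidual(x, sublayerOut []float32, gate float32) []float32 {
	g := float32(math.Tanh(float64(gate)))
	out := make([]float32, len(x))
	for i := range x {
		out[i] = x[i] + g*sublayerOut[i]
	}
	return out
}

func main() {
	hidden := []float32{1, 2, 3}
	mlpOut := []float32{0.5, -0.5, 1}
	// ffnGate stands in for a learned scalar loaded from the checkpoint.
	var ffnGate float32 = 0.1
	fmt.Println(gatedResidual(hidden, mlpOut, ffnGate))
}
```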
Files changed:

* model.go
* model_text.go
* model_vision.go
* process_image.go
* process_image_test.go
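The head swap mentioned in the first commit bullet is, in llama-family converters, typically the `reshape(nHeads, 2, headDim/2).swapaxes(1, 2)` row permutation applied to Q/K projection weights. A hedged sketch of that transform over a flat row-major weight matrix; the dimension names and layout are assumptions for illustration, not the exact mllama convert code:

```go
package convert

// permuteHeads reorders the rows of a Q/K projection weight from the
// checkpoint's interleaved layout to the grouped layout the runtime
// expects. Within each head, the row at offset a*(headDim/2)+b moves to
// offset b*2+a, i.e. reshape(nHeads, 2, headDim/2).swapaxes(1, 2).
// w is row-major with dim rows of cols elements each.
func permuteHeads(w []float32, nHeads, dim, cols int) []float32 {
	headDim := dim / nHeads
	out := make([]float32, len(w))
	for h := 0; h < nHeads; h++ {
		for a := 0; a < 2; a++ {
			for b := 0; b < headDim/2; b++ {
				src := h*headDim + a*(headDim/2) + b
				dst := h*headDim + b*2 + a
				copy(out[dst*cols:(dst+1)*cols], w[src*cols:(src+1)*cols])
			}
		}
	}
	return out
}
```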