ggml: Prevent kv cache quantization on gpt-oss

KV cache quantization depends on the flash attention kernel.
We currently cannot use flash attention with gpt-oss, as it requires
additional operations.

The model definition does not call flash attention, so inference works
regardless of the setting, but the cache will still pick up the
quantization type. This updates the flash attention setting earlier
in the loading flow so that all downstream settings are also set correctly.

Fixes: #11671
Authored by Jesse Gross on 2025-08-05 12:42:07 -07:00; committed by Ryan Schumacher
parent ed2e8a9022
commit ae8a041461
1 changed file with 4 additions and 0 deletions


@@ -761,6 +761,10 @@ func (f GGML) SupportsFlashAttention() bool {
 		return false
 	}
 
+	if f.KV().Architecture() == "gptoss" {
+		return false
+	}
+
 	// Check head counts match and are non-zero
 	headCountK := f.KV().EmbeddingHeadCountK()
 	headCountV := f.KV().EmbeddingHeadCountV()
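
A rough sketch of the loading-flow ordering the commit message describes: resolve flash attention support first, then let the KV cache type follow from it. The type and helper names below (Options, resolveKVCacheType, the f16 default) are illustrative assumptions, not the actual ollama API.

package main

import "fmt"

// Options mirrors the relevant runtime settings; the names are illustrative.
type Options struct {
	FlashAttention bool   // user requested flash attention
	KVCacheType    string // e.g. "q8_0" or "q4_0"; "" means the default f16 cache
}

// supportsFlashAttention stands in for GGML.SupportsFlashAttention after this
// change: gptoss is rejected because it needs additional operations that the
// flash attention path does not provide.
func supportsFlashAttention(arch string) bool {
	return arch != "gptoss"
}

// resolveKVCacheType applies the ordering this commit enforces: decide flash
// attention first, then derive the cache type from it, since a quantized
// KV cache depends on the flash attention kernel.
func resolveKVCacheType(arch string, opts Options) (flashAttention bool, cacheType string) {
	flashAttention = opts.FlashAttention && supportsFlashAttention(arch)
	if !flashAttention {
		return false, "" // fall back to the unquantized f16 default
	}
	return true, opts.KVCacheType
}

func main() {
	fa, ct := resolveKVCacheType("gptoss", Options{FlashAttention: true, KVCacheType: "q8_0"})
	fmt.Printf("flash attention: %v, kv cache type: %q\n", fa, ct)
	// Prints: flash attention: false, kv cache type: ""
}

With this ordering, a quantized cache type requested for gpt-oss is dropped before any downstream component can pick it up.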