use a strategy similar to llama.cpp's for deciding where tensors should be allocated; this will be improved later to be aware of usable memory before assigning the tensor

| Name | Last commit message |
|---|---|
| .. | |
| backend | |
| nn | |
| backend.go | |
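The commit message above describes a static placement scheme: tensors are assigned to devices up front, without first checking how much memory is actually free. A minimal sketch of that idea in Go is below; the `Device` type, `assignLayers` function, and the proportional-by-total-memory split are illustrative assumptions, not the actual `backend.go` implementation.

```go
package main

import "fmt"

// Device is a hypothetical description of a compute backend.
// Memory is the device's total capacity in bytes.
type Device struct {
	Name   string
	Memory uint64
}

// assignLayers splits numLayers layer weights across devices in proportion
// to each device's *total* memory. Like the strategy the commit describes,
// it does not consult free memory, which is the limitation slated to be
// fixed later.
func assignLayers(numLayers int, devices []Device) []string {
	var total uint64
	for _, d := range devices {
		total += d.Memory
	}
	placement := make([]string, numLayers)
	next := 0
	for i, d := range devices {
		// Proportional share; the last device absorbs rounding remainder
		// so every layer ends up assigned somewhere.
		n := int(uint64(numLayers) * d.Memory / total)
		if i == len(devices)-1 {
			n = numLayers - next
		}
		for j := 0; j < n && next < numLayers; j++ {
			placement[next] = d.Name
			next++
		}
	}
	return placement
}

func main() {
	devices := []Device{{"GPU0", 24 << 30}, {"GPU1", 8 << 30}}
	// A 24 GiB + 8 GiB pair splits 8 layers 6/2.
	fmt.Println(assignLayers(8, devices))
}
```

A memory-aware follow-up would query each backend for free bytes at load time and skip (or spill to CPU) any device whose share does not fit.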