Problem description & steps to reproduce

After the recent changes, llama.cpp is consistently crashing when using Vulkan with the default physical batch size (-ub 512). I've found that reducing the ...
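For context, a minimal command along these lines should exercise the setup described above; the binary name, model path, prompt, and layer-offload count are placeholders and assumptions, not taken from the original report:

```sh
# Hypothetical reproduction sketch (assumed details, not from the original report):
# model path, prompt, and -ngl value are placeholders. -ub 512 is llama.cpp's
# default physical batch size, so passing it explicitly only makes it visible.
./llama-cli -m ./models/model.gguf -ngl 99 -ub 512 -p "Hello" -n 64
```

This assumes a llama.cpp build with the Vulkan backend enabled, so that the GPU code path reported above is actually taken.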