thek0tyara / llama-cpp-turboquant
llama-cpp-turboquant / ggml at commit 723c71064d
Latest commit 723c71064d by Ruben Ortlam, 2026-02-26 19:11:04 +01:00: vulkan: fix fp16 Flash Attention on Windows AMD RDNA2 and below (#19921)
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)    2025-08-07 13:45:41 +02:00
include         ggml/gguf : prevent integer overflows (#19856)                               2026-02-24 20:17:11 +02:00
src             vulkan: fix fp16 Flash Attention on Windows AMD RDNA2 and below (#19921)     2026-02-26 19:11:04 +01:00
.gitignore      vulkan : cmake integration (#8119)                                           2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : bump version to 0.9.7 (ggml/1425)                                     2026-02-15 22:24:29 +02:00