llama-cpp-turboquant/tests
Jeff Bolz de5627910d
vulkan: Optimize argsort (#15354)
- Launch an appropriate number of invocations (the next larger power of two). 32 invocations is common, and the barrier is much cheaper there.
- Specialize for "needs bounds checking" vs. not.
- Make the code less branchy and [[unroll]] the loops. In the final code, I see no branches inside the main loop (only predicated stores) when needs_bounds_check is false.
- Always sort ascending, then apply the ascending vs. descending option when doing the final stores to memory.
- Copy the values into shared memory; this makes them slightly cheaper to access.
2025-08-17 10:41:45 +02:00
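The strategy in the commit message can be illustrated with a host-side sketch (this is an illustrative CPU reference, not the actual GLSL shader): pad the row to the next power of two with +inf sentinels so out-of-range slots sink to the end, run a bitonic sort network that always sorts ascending, and apply the ascending/descending option only at the final stores. The function name `argsort_ref` is hypothetical.

```cpp
#include <cstdint>
#include <limits>
#include <utility>
#include <vector>

// Hypothetical CPU reference for the shader's strategy: pad to the next
// power of two, bitonic-sort indices ascending by value, and apply the
// requested order only when writing the final result.
static std::vector<int32_t> argsort_ref(const std::vector<float> & v, bool ascending) {
    const uint32_t n = (uint32_t) v.size();

    // Next larger (or equal) power of two, mirroring the invocation count.
    uint32_t pad = 1;
    while (pad < n) pad *= 2;

    std::vector<uint32_t> idx(pad);
    for (uint32_t i = 0; i < pad; i++) idx[i] = i;

    // Out-of-range slots read as +inf so they sink to the end of an
    // ascending sort (the "needs bounds checking" case).
    auto key = [&](uint32_t i) {
        return i < n ? v[i] : std::numeric_limits<float>::infinity();
    };

    // Standard bitonic sort network, always ascending. On the GPU these
    // loops are [[unroll]]ed and the swap becomes a predicated store.
    for (uint32_t k = 2; k <= pad; k *= 2) {
        for (uint32_t j = k / 2; j > 0; j /= 2) {
            for (uint32_t i = 0; i < pad; i++) {
                const uint32_t ixj = i ^ j;
                if (ixj > i) {
                    const bool up = (i & k) == 0;
                    if ((key(idx[i]) > key(idx[ixj])) == up) {
                        std::swap(idx[i], idx[ixj]);
                    }
                }
            }
        }
    }

    // Final store: this is the only place the ascending vs. descending
    // option is applied.
    std::vector<int32_t> out(n);
    for (uint32_t i = 0; i < n; i++) {
        out[i] = (int32_t) idx[ascending ? i : n - 1 - i];
    }
    return out;
}
```

Because the network itself always sorts ascending, both orderings share one unrolled code path; only the index arithmetic at the output differs.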
.gitignore
CMakeLists.txt finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-arg-parser.cpp
test-autorelease.cpp
test-backend-ops.cpp vulkan: Optimize argsort (#15354) 2025-08-17 10:41:45 +02:00
test-barrier.cpp
test-c.c ggml : remove kompute backend (#14501) 2025-07-03 07:48:32 +03:00
test-chat-parser.cpp
test-chat-template.cpp chat : fix yandex chat template (#15116) 2025-08-06 13:26:49 +02:00
test-chat.cpp gpt-oss: implement harmony parsing (#15181) 2025-08-14 17:23:11 +03:00
test-double-float.cpp
test-gbnf-validator.cpp
test-gguf.cpp
test-grammar-integration.cpp
test-grammar-llguidance.cpp
test-grammar-parser.cpp
test-json-partial.cpp
test-json-schema-to-grammar.cpp
test-llama-grammar.cpp
test-log.cpp
test-lora-conversion-inference.sh
test-model-load-cancel.cpp
test-mtmd-c-api.c
test-opt.cpp test-opt: fix backend support check (#15317) 2025-08-15 11:23:17 +02:00
test-quantize-fns.cpp
test-quantize-perf.cpp
test-quantize-stats.cpp
test-regex-partial.cpp
test-rope.cpp
test-sampling.cpp
test-thread-safety.cpp tests : update for LLAMA_SET_ROWS=1 (#14961) 2025-07-30 15:12:02 +03:00
test-tokenizer-0.cpp
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp
test-tokenizer-1-spm.cpp
test-tokenizer-random.py
test-tokenizers-repo.sh