llama-cpp-turboquant/tests
2024-02-03 13:23:37 +02:00
.gitignore                    tests : gitignore test-c.o                                        2024-01-26 14:48:15 +02:00
CMakeLists.txt
get-model.cpp
get-model.h
test-autorelease.cpp
test-backend-ops.cpp          llava : add MobileVLM support (#5132)                             2024-01-31 15:10:15 +02:00
test-c.c                      Nomic Vulkan backend (#4456)                                      2024-01-29 15:50:50 -05:00
test-double-float.cpp
test-grad0.cpp
test-grammar-parser.cpp
test-llama-grammar.cpp        refactor : switch to emplace_back to avoid extra object (#5291)   2024-02-03 13:23:37 +02:00
test-model-load-cancel.cpp
test-opt.cpp
test-quantize-fns.cpp         SOTA 3-bit quants (#5196)                                         2024-01-30 15:14:12 +02:00
test-quantize-perf.cpp        SOTA 3-bit quants (#5196)                                         2024-01-30 15:14:12 +02:00
test-rope.cpp
test-sampling.cpp             Tests for min_p, sampling queue (#5147)                           2024-01-28 09:35:14 +01:00
test-tokenizer-0-falcon.cpp
test-tokenizer-0-falcon.py
test-tokenizer-0-llama.cpp
test-tokenizer-0-llama.py
test-tokenizer-1-bpe.cpp
test-tokenizer-1-llama.cpp