From 838fe44f4477adc9b049029cdaf2eecfc713090e Mon Sep 17 00:00:00 2001
From: Vitaly Chikunov
Date: Tue, 28 May 2024 22:19:13 +0300
Subject: [PATCH] spec: check: Skip test-eval-callback

As we don't have the tinyllamas[1] dataset:

  10/25 Test #25: test-eval-callback ...............***Failed    0.00 sec
  warning: not compiled with GPU offload support, --n-gpu-layers option will be ignored
  warning: see main README.md for information on enabling GPU BLAS support
  main: build = 0 (unknown)
  main: built with x86_64-alt-linux-gcc (GCC) 13.2.1 20240128 (ALT Sisyphus 13.2.1-alt3) for x86_64-alt-linux
  llama_load_model_from_hf: llama.cpp built without libcurl, downloading from Hugging Face not supported.
  llama_init_from_gpt_params: error: failed to load model 'stories260K.gguf'
  main : failed to init

Link: https://huggingface.co/karpathy/tinyllamas [1]
Signed-off-by: Vitaly Chikunov
---
 .gear/llama.cpp.spec | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.gear/llama.cpp.spec b/.gear/llama.cpp.spec
index c482aec1c..15287678a 100644
--- a/.gear/llama.cpp.spec
+++ b/.gear/llama.cpp.spec
@@ -113,7 +113,7 @@ LLAMA_ARGS="-m %_datadir/%name/ggml-model-f32.bin"
 EOF
 
 %check
-%cmake_build --target test
+%ctest -j1 -E test-eval-callback
 
 %files
 %define _customdocdir %_docdir/%name