spec: check: Skip test-eval-callback

As we don't have the tinyllamas[1] dataset.

  10/25 Test #25: test-eval-callback ...............***Failed    0.00 sec
  warning: not compiled with GPU offload support, --n-gpu-layers option will be ignored
  warning: see main README.md for information on enabling GPU BLAS support
  main: build = 0 (unknown)
  main: built with x86_64-alt-linux-gcc (GCC) 13.2.1 20240128 (ALT Sisyphus 13.2.1-alt3) for x86_64-alt-linux
  llama_load_model_from_hf: llama.cpp built without libcurl, downloading from Hugging Face not supported.
  llama_init_from_gpt_params: error: failed to load model 'stories260K.gguf'
  main : failed to init

Link: https://huggingface.co/karpathy/tinyllamas
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Author: Vitaly Chikunov <vt@altlinux.org>
Date:   2024-05-28 22:19:13 +03:00
Commit: 838fe44f44
Parent: f849ad091c

@@ -113,7 +113,7 @@ LLAMA_ARGS="-m %_datadir/%name/ggml-model-f32.bin"
EOF
%check
%cmake_build --target test
%ctest -j1 -E test-eval-callback
%files
%define _customdocdir %_docdir/%name
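The `%ctest -j1 -E test-eval-callback` line relies on ctest's `-E` (exclude) option, which skips every test whose name matches the given regex, while `-j1` forces serial execution. As a rough sketch of that selection logic (the test names here are hypothetical, not the package's real test list), the same filtering can be approximated with `grep -vE` over a list of names:

```shell
# Sketch: emulate ctest -E by filtering a list of test names through
# an exclusion regex. Names other than test-eval-callback are made up
# for illustration.
printf '%s\n' test-tokenizer test-eval-callback test-grammar \
  | grep -vE 'test-eval-callback'
```

In the real build, `ctest -N` can be used first to list the registered test names without running them, to confirm the exclusion pattern matches only the intended test.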