spec: check:Disable test-llama-archs

4/41 Test #22: test-llama-archs ..................***Failed    0.00 sec
  build: 8245 (d417bc43 [alt1]) with GNU 14.3.1 for Linux x86_64
  encountered runtime error: failed to create llama model
  llama_model_load_from_file_impl: no backends are loaded. hint: use ggml_backend_load() or ggml_backend_load_all() to load a backend before calling this function
  |    Model arch.|                        Device|Config|    NMSE|Status|
  |---------------|------------------------------|------|--------|------|

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
This commit is contained in:
Vitaly Chikunov 2026-03-09 07:56:01 +03:00
parent 2e14561ca4
commit ed4178bf90

View file

@ -205,7 +205,7 @@ export LD_LIBRARY_PATH=$PWD/%_cmake__builddir/bin PATH+=:$PWD/%_cmake__builddir/
llama-server --version
llama-server --version |& grep -Ex 'version: %version \(\S+ \[%release\]\)'
# test-eval-callback wants network.
%ctest -E 'test-download-model|test-eval-callback|test-state-restore-fragmented'
%ctest -E 'test-download-model|test-eval-callback|test-state-restore-fragmented|test-llama-archs'
llama-completion -m /usr/share/tinyllamas/stories260K.gguf -p "Hello" -s 42 -n 500 2>/dev/null
llama-completion -m /usr/share/tinyllamas/stories260K.gguf -p "Once upon a time" -s 55 -n 33 2>/dev/null |
grep 'Once upon a time, there was a boy named Tom. Tom had a big box of colors.'