spec: Enable libcurl support
libcurl allows downloading models from Hugging Face repositories. Fixes:

  $ llama-main --hf-repo ggml-org/tiny-llamas -m stories15M-q4_0.gguf -n 400 -p "Once opon a time"
  Log start
  main: build = 0 (unknown)
  main: built with x86_64-alt-linux-gcc (GCC) 13.2.1 20240128 (ALT Sisyphus 13.2.1-alt3) for x86_64-alt-linux
  main: seed = 1717384867
  llama_load_model_from_hf: llama.cpp built without libcurl, downloading from Hugging Face not supported.
  llama_init_from_gpt_params: error: failed to load model 'stories15M-q4_0.gguf'
  main: error: unable to load model

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
parent a371d7965e
commit 3105e0043d
1 changed file with 4 additions and 1 deletion
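The diff below passes -DLLAMA_CURL=ON to the %cmake macro. Outside the RPM build, a minimal sketch of the equivalent manual configure and build (assuming a llama.cpp source checkout with libcurl development headers installed):

  $ cmake -B build -DLLAMA_CURL=ON
  $ cmake --build build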
@@ -26,6 +26,7 @@ BuildRequires(pre): rpm-macros-cmake
 BuildRequires: cmake
 BuildRequires: ctest
 BuildRequires: gcc-c++
+BuildRequires: libcurl-devel
 
 %description
 Plain C/C++ implementation (of inference of LLaMA model) without
@@ -66,7 +67,9 @@ Overall this is all raw and EXPERIMENTAL, no warranty, no support.
 %setup
 
 %build
-%cmake
+%cmake \
+    -DLLAMA_CURL=ON \
+    %nil
 grep ^LLAMA %_cmake__builddir/CMakeCache.txt | sort | tee build-options.txt
 %cmake_build
 find -name '*.py' | xargs sed -i '1s|#!/usr/bin/env python3|#!%__python3|'
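The grep over %_cmake__builddir/CMakeCache.txt records the effective LLAMA_* build options into build-options.txt, so the new flag can be checked after configuration. A sketch of that check; the exact cache line format is an assumption (CMake stores boolean options as NAME:BOOL=VALUE):

  $ grep ^LLAMA_CURL build-options.txt
  LLAMA_CURL:BOOL=ON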