spec: Enable libcurl support

libcurl allows downloading models from Hugging Face repositories.
Fixes:

	$ llama-main --hf-repo ggml-org/tiny-llamas -m stories15M-q4_0.gguf -n 400 -p "Once opon a time"
	Log start
	main: build = 0 (unknown)
	main: built with x86_64-alt-linux-gcc (GCC) 13.2.1 20240128 (ALT Sisyphus 13.2.1-alt3) for x86_64-alt-linux
	main: seed  = 1717384867
	llama_load_model_from_hf: llama.cpp built without libcurl, downloading from Hugging Face not supported.
	llama_init_from_gpt_params: error: failed to load model 'stories15M-q4_0.gguf'
	main: error: unable to load model

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
commit 3105e0043d (parent a371d7965e)
Vitaly Chikunov, 2024-06-03 06:36:17 +03:00

@@ -26,6 +26,7 @@ BuildRequires(pre): rpm-macros-cmake
 BuildRequires: cmake
 BuildRequires: ctest
 BuildRequires: gcc-c++
+BuildRequires: libcurl-devel
 
 %description
 Plain C/C++ implementation (of inference of LLaMA model) without
@@ -66,7 +67,9 @@ Overall this is all raw and EXPERIMENTAL, no warranty, no support.
 %setup
 
 %build
-%cmake
+%cmake \
+	-DLLAMA_CURL=ON \
+	%nil
 grep ^LLAMA %_cmake__builddir/CMakeCache.txt | sort | tee build-options.txt
 %cmake_build
 find -name '*.py' | xargs sed -i '1s|#!/usr/bin/env python3|#!%__python3|'
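The `grep ^LLAMA` line above records all LLAMA_* CMake cache entries into build-options.txt, which is how the packager can confirm the new flag took effect. As a minimal sketch (the `has_llama_curl` helper and the build directory path are assumptions, not part of the spec), the same check can be wrapped in a small shell function:

```shell
# Sketch: check whether a llama.cpp build tree was configured with
# libcurl support. CMakeCache.txt records every configure-time option;
# an entry LLAMA_CURL:BOOL=ON means llama-main can fetch models via
# --hf-repo instead of failing with "built without libcurl".
has_llama_curl() {
    grep -q '^LLAMA_CURL:BOOL=ON' "$1/CMakeCache.txt"
}
```

Usage (build directory name assumed): `has_llama_curl "$builddir" && echo "curl support enabled"`.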