From 3105e0043db2e52c991f2decd8a63c6ab39fc141 Mon Sep 17 00:00:00 2001
From: Vitaly Chikunov
Date: Mon, 3 Jun 2024 06:36:17 +0300
Subject: [PATCH] spec: Enable libcurl support

libcurl allows downloading models from Hugging Face repositories.

Fixes:

  $ llama-main --hf-repo ggml-org/tiny-llamas -m stories15M-q4_0.gguf -n 400 -p "Once opon a time"
  Log start
  main: build = 0 (unknown)
  main: built with x86_64-alt-linux-gcc (GCC) 13.2.1 20240128 (ALT Sisyphus 13.2.1-alt3) for x86_64-alt-linux
  main: seed = 1717384867
  llama_load_model_from_hf: llama.cpp built without libcurl, downloading from Hugging Face not supported.
  llama_init_from_gpt_params: error: failed to load model 'stories15M-q4_0.gguf'
  main: error: unable to load model

Signed-off-by: Vitaly Chikunov
---
 .gear/llama.cpp.spec | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/.gear/llama.cpp.spec b/.gear/llama.cpp.spec
index 60faa0d46..5b432a343 100644
--- a/.gear/llama.cpp.spec
+++ b/.gear/llama.cpp.spec
@@ -26,6 +26,7 @@ BuildRequires(pre): rpm-macros-cmake
 BuildRequires: cmake
 BuildRequires: ctest
 BuildRequires: gcc-c++
+BuildRequires: libcurl-devel
 
 %description
 Plain C/C++ implementation (of inference of LLaMA model) without
@@ -66,7 +67,9 @@ Overall this is all raw and EXPERIMENTAL, no warranty, no support.
 %setup
 
 %build
-%cmake
+%cmake \
+    -DLLAMA_CURL=ON \
+    %nil
 grep ^LLAMA %_cmake__builddir/CMakeCache.txt | sort | tee build-options.txt
 %cmake_build
 find -name '*.py' | xargs sed -i '1s|#!/usr/bin/env python3|#!%__python3|'
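
A quick post-build sanity check, as a sketch rather than part of the patch: it assumes the build-options.txt file written by the grep line in %build above, and that LLAMA_CURL is a plain CMake option (so its cache entry would read LLAMA_CURL:BOOL=ON).

  $ grep '^LLAMA_CURL' build-options.txt
  LLAMA_CURL:BOOL=ON

With libcurl compiled in, re-running the llama-main command quoted in the commit message should download the model from the Hugging Face repository instead of aborting.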