llama-cpp-turboquant/examples
Daniel Bevenius 4ab99d8d47
clip : rename lerp function to avoid conflict (#6894)
This commit renames the lerp (linear interpolation) function in clip.cpp
to avoid a conflict with the lerp function in the <cmath> standard C++
library header when compiling with C++20.

The motivation for this change is to enable projects that use C++20 to
compile clip.cpp without having to resort to patching it. The lerp
function was added to <cmath> in C++20 (202002L), which is why this does
not cause any issue at the moment, as llama.cpp currently uses
C++11/C++17.

I realize that llama.cpp uses either C++11 (or C++17 in the case of
SYCL), but wanted to ask whether this would be an acceptable change just
the same.

Refs: https://en.cppreference.com/w/cpp/numeric/lerp

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-25 15:38:14 +03:00
baby-llama
batched llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
batched-bench
batched.swift llama : add option to render special/control tokens (#6807) 2024-04-21 18:36:45 +03:00
beam-search llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
benchmark
convert-llama2c-to-ggml
embedding BERT tokenizer fixes (#6498) 2024-04-09 13:44:08 -04:00
eval-callback
export-lora
finetune
gbnf-validator
gguf
gguf-split tests : minor bash stuff (#6902) 2024-04-25 14:27:20 +03:00
gritlm
imatrix
infill llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
jeopardy
llama-bench
llama.android llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
llama.swiftui llama : add option to render special/control tokens (#6807) 2024-04-21 18:36:45 +03:00
llava clip : rename lerp function to avoid conflict (#6894) 2024-04-25 15:38:14 +03:00
lookahead llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
lookup Server: fix seed for multiple slots (#6835) 2024-04-24 11:08:36 +02:00
main Server: fix seed for multiple slots (#6835) 2024-04-24 11:08:36 +02:00
main-cmake-pkg
parallel llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
passkey llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
perplexity
quantize tests : minor bash stuff (#6902) 2024-04-25 14:27:20 +03:00
quantize-stats
retrieval
save-load-state
server tests : minor bash stuff (#6902) 2024-04-25 14:27:20 +03:00
simple llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
speculative llama : support Llama 3 HF conversion (#6745) 2024-04-21 14:50:41 +03:00
sycl
tokenize
train-text-from-scratch train : add general name (#6752) 2024-04-19 10:16:45 +03:00
alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
gpt4all.sh
json-schema-pydantic-example.py
json_schema_to_grammar.py
llama.vim
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
Miku.sh
pydantic-models-to-grammar-examples.py
pydantic_models_to_grammar.py
reason-act.sh
regex-to-grammar.py
server-embd.py
server-llama2-13B.sh
ts-type-to-grammar.sh