* sampling : add support for backend sampling
This commit adds support for performing sampling operations on the
backend (e.g. GPU) as part of the model computation graph.
The motivation for this feature is to enable some or all of the sampling
to be performed directly on the backend, as part of the computation graph
being executed.
For example, the backend sampler chain might select/sample a token
directly in which case only the sampled token needs to be transferred
from device memory to host memory.
It is also possible for the backend samplers to perform filtering of
the logits, or to compute and filter the probability distribution, in
which case only the filtered logits or probabilities need to be
transferred back to system memory for further processing by CPU
samplers.
Currently, backend sampling works in a similar manner to pooling: it is
a function that is called by build_graph, and the sampler operations
become part of the model's computation graph.
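For illustration, a minimal sketch of what building such a chain could look like
using the existing llama.cpp sampler-chain API; the attach call is a hypothetical
placeholder, not the actual entry point added by this commit:
```cpp
#include "llama.h"

// Build a sampler chain whose samplers have backend (GPU) implementations in this PR.
static llama_sampler * make_backend_sampler_chain() {
    llama_sampler_chain_params sparams = llama_sampler_chain_default_params();
    llama_sampler * chain = llama_sampler_chain_init(sparams);

    llama_sampler_chain_add(chain, llama_sampler_init_top_k(40));
    llama_sampler_chain_add(chain, llama_sampler_init_temp(0.8f));

    // hypothetical: hand the chain to the context so that build_graph appends its
    // ops to the model graph, e.g. llama_set_backend_sampler(ctx, seq_id, chain);
    return chain;
}
```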
* llama-cli : add backend sampler configuration
* server : add backend sampling options/configuration
* webui : add backend sampling options
* ggml : add initial cumsum implementation for CUDA
* sampling : enable all backend sampler tests
This commit enables all existing backend sampler tests in
test-backend-sampler. Previously, some tests were disabled because
the required ggml operation implementations were missing.
* graph : do not include llama-model.h
* sampling : always expose sampled_ids
This commit precomputes and caches the full-vocab token id list in
llama_context's constructor, so llama_get_backend_sampled_token_ids_ith
always returns a valid pointer.
The motivation for this is that it enables both common/sampling.cpp
and src/llama-sampling.cpp to simplify their logic.
Not all backend samplers that process logits need to set the
sampled_tokens_id, as they may not change the order of the logits. For
example, the temperature sampler only scales the logits but does not
change their order. Similarly, the logit bias sampler only adds a bias
to specific token ids but does not change the order of the logits. In
these cases there will be no device-to-host copy of the sampled token
ids, and this is the use case where having this precomputed list is
useful.
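A minimal sketch of the precomputed fallback list described above (names are
illustrative, not the actual llama_context members):
```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Full-vocab identity mapping built once in the constructor, so
// llama_get_backend_sampled_token_ids_ith can always return a valid pointer
// even when no backend sampler reordered the logits.
std::vector<int32_t> make_identity_token_ids(int32_t n_vocab) {
    std::vector<int32_t> ids(n_vocab);
    std::iota(ids.begin(), ids.end(), 0); // ids[i] == i, i.e. logit order unchanged
    return ids;
}
```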
* sampling : ensure at most one output token per seq
This commit adds a check in the batch allocator to ensure that when
backend sampling is enabled, at most one output token is specified per
sequence.
* CUDA: Optimize argsort for gpu-based token sampling
Argsort is currently used for top-k. We optimize argsort in two ways:
1. Use `DeviceRadixSort` for single-row/sequence to parallelize it
across our SMs
2. Use `DeviceSegmentedSort` for multi-row/sequence as this is the
   correct entry point (the function chooses different execution paths;
   it contains `DeviceSegmentedRadixSort` as one of the paths and will
   choose the best one according to heuristics; see the sketch below).
https://nvidia.github.io/cccl/cub/api/structcub_1_1DeviceSegmentedSort.html#overview
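A hedged sketch of how these two CUB entry points are typically invoked
(illustrative only, not the actual ggml-cuda code):
```cuda
#include <cub/cub.cuh>

// Sort each row of `keys_in` descending together with its index values.
// `tmp` must first be sized by calling the same CUB function with tmp == nullptr
// (standard CUB two-phase pattern); `row_offsets` holds nrows + 1 entries i * ncols.
static void argsort_rows_desc(const float * keys_in, float * keys_out,
                              const int * idx_in, int * idx_out,
                              int ncols, int nrows, const int * row_offsets,
                              void * tmp, size_t & tmp_size, cudaStream_t stream) {
    if (nrows == 1) {
        // single row: let DeviceRadixSort spread the work across all SMs
        cub::DeviceRadixSort::SortPairsDescending(
            tmp, tmp_size, keys_in, keys_out, idx_in, idx_out, ncols, 0, 32, stream);
    } else {
        // multiple rows: DeviceSegmentedSort picks the best per-segment strategy
        cub::DeviceSegmentedSort::SortPairsDescending(
            tmp, tmp_size, keys_in, keys_out, idx_in, idx_out,
            ncols * nrows, nrows, row_offsets, row_offsets + 1, stream);
    }
}
```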
Some perf numbers for a RTX PRO 6000:
On the kernel level, tested with
`GGML_CUDA_DISABLE_GRAPHS=1 ./test-backend-ops -o ARGSORT perf`
Before:
```
ARGSORT(type=f32,ne=[65000,16,1,1],order=0): 4130 runs - 359.24 us/run
ARGSORT(type=f32,ne=[200000,1,1,1],order=0): 8192 runs - 861.34 us/run
ARGSORT(type=f32,ne=[200000,16,1,1],order=0): 1343 runs - 1020.01 us/run
```
After:
```
ARGSORT(type=f32,ne=[65000,16,1,1],order=0): 4130 runs - 312.41 us/run
ARGSORT(type=f32,ne=[200000,1,1,1],order=0): 16384 runs - 63.48 us/run
ARGSORT(type=f32,ne=[200000,16,1,1],order=0): 1343 runs - 874.36 us/run
```
---
On the model level, tested with
`llama-cli -m gpt-oss-20b-mxfp4.gguf -n 200 -p "What is
the Capital of Sweden?" -no-cnv -fa 1 --backend-sampling`
Before:
```
llama_perf_sampler_print: sampling time = 0.25 ms / 207 runs ( 0.00 ms per token, 824701.20 tokens per second)
llama_perf_context_print: load time = 18215.58 ms
llama_perf_context_print: prompt eval time = 28.20 ms / 7 tokens ( 4.03 ms per token, 248.19 tokens per second)
llama_perf_context_print: eval time = 714.79 ms / 199 runs ( 3.59 ms per token, 278.40 tokens per second)
llama_perf_context_print: total time = 857.62 ms / 206 tokens
```
After:
```
llama_perf_sampler_print: sampling time = 0.25 ms / 207 runs ( 0.00 ms per token, 828000.00 tokens per second)
llama_perf_context_print: load time = 18366.92 ms
llama_perf_context_print: prompt eval time = 35.92 ms / 7 tokens ( 5.13 ms per token, 194.87 tokens per second)
llama_perf_context_print: eval time = 532.79 ms / 199 runs ( 2.68 ms per token, 373.50 tokens per second)
llama_perf_context_print: total time = 683.65 ms / 206 tokens
```
* sampling : remove version from sampler chain
This commit removes the version field from the sampler chain and instead
uses the sampler pointer itself for change detection.
* sampling : always populate logits for sampled probs
This commit updates common/sampler.cpp set_logits and
src/llama-sampling.cpp llama_sampler_sample to always populate the
logits field when backend-sampled probabilities are available.
The motivation for this is that it ensures CPU samplers always have
access to the logit values, even when probabilities have been produced
by backend samplers.
* sampling : simplify backend sampling logic decode
This commit tries to simplify the backend sampling logic in
llama_context::decode.
* squash! sampling : simplify backend sampling logic decode
Fix condition to check if backend actually sampled tokens, not just that
backend samplers are available.
* common : fix regression caused by extra memory allocations during sampling
* squash! sampling : simplify backend sampling logic decode
The commit fixes a variable shadowing issue in the
`llama_context::decode` function which was introduced in a previous
refactoring.
* squash! common : fix regression caused by extra memory allocations during sampling
Apply the same changes to llama-sampling.cpp, llama_sampler_sample as
were applied in commit 38f408c25.
* sampling : introduce sampling_info struct
This commit introduces a sampling_info struct to encapsulate all
backend sampling related data within the llama_context class.
It also updates the backend sampler ggml data structure to use more
descriptive names for sampled tokens and candidates.
* sampling : return early if backend sampling is disabled
* sampling : use pinned memory for backend sampling buffers
* common, tools : refactor model loading to support backend samplers
This commit refactors the model loading process in common/common.cpp
to enable backend samplers to be configured prior to the llama_context
creation.
The motivation for this change is that setting/resetting the backend
samplers after the llama_context has been created would cause a resize
to occur in llama_context::output_reserve, which we want to avoid.
* sampling : add stride variable for clarity
* sampling: clarify candidate ids usage in comments
* sampling : fix copying both sampled tokens and logits/probs from backend
This commit fixes the issue where both sampled tokens and logits/probs
were not being copied correctly from the backend to the host when
multiple backend samplers were used.
A test for this scenario has also been added to ensure that both types
of data are copied correctly when different backend samplers are
employed.
* tests : cleanup test-backend-sampler.cpp
* common : remove build-info.cpp from commit [no ci]
This file was generated during the build process and should not have
been included in previous commits.
* sampling : cleanup and clarify output_reserve
* sampling : remove redundant checks for stride and size [no ci]
* sampling : add debug log when backend sampler selects token
This commit adds a debug log statement in llama_sampler_sample
to indicate when a backend sampler has selected a token for a given
index.
The modification helps in tracing the sampling process and understanding
the flow of control when backend samplers are used.
* examples : update batched to use backend sampling
This commit updates the batched example to demonstrate how to use
backend samplers.
* llama-cli : fix dangling reference to sampler config
* common : initialize backend samplers
* samplers : add missing cont
* sampling : add assertions for contiguous tensors in async copy functions
* examples : add info about hybrid sampling in batched [no ci]
* sampling : remove backend-dist option (wip)
This commit removes the `--backend-dist` option and instead uses the
configured --samplers chain to determine which samplers run on the
backend.
Backend sampling is still enabled with `--backend-sampling`, and the
sampler chain, either explicitly specified using `--samplers` or the
default, is automatically analyzed to determine which samplers can run
on the backend. The system finds the longest contiguous chain of
backend-supported samplers from the start of the sampler sequence (see
the sketch after the examples below).
For example:
* If the chain is `top-k -> temperature -> top-p`, and both `top-k` and
`temperature` are backend-supported but `top-p` is not, then `top-k`
and `temperature` will run on the backend, while `top-p` and
subsequent samplers run on the CPU.
* If all configured samplers are supported, the final distribution
sampling will also happen on the backend, transferring only the
sampled token IDs back to the host.
* If the sampler chain starts with an unsupported sampler (e.g.,
  `penalties`), all sampling runs on the CPU. Note that this is
  currently the case with the default sampler chain, so to use backend
  sampling a sampler chain must be specified explicitly. See below for
  an example.
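A sketch of the split-point selection described above (the enum and helper names
are hypothetical, not the actual common/ code):
```cpp
#include <cstddef>
#include <vector>

enum sampler_type { SAMPLER_TOP_K, SAMPLER_TEMP, SAMPLER_TOP_P, SAMPLER_PENALTIES };

static bool backend_supports(sampler_type t) {
    // assumption: only these have backend implementations at this point in the PR
    return t == SAMPLER_TOP_K || t == SAMPLER_TEMP;
}

// Walk the configured chain from the start and stop at the first sampler without a
// backend implementation; chain[0..split) runs on the backend, chain[split..) on the CPU.
static size_t backend_split_point(const std::vector<sampler_type> & chain) {
    size_t split = 0;
    while (split < chain.size() && backend_supports(chain[split])) {
        ++split;
    }
    return split;
}
```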
The following shows how llama-cli can be run with backend sampling:
```console
$ llama-cli -m models/Qwen2.5-VL-3B-Instruct-Q8_0.gguf \
--prompt 'What is the capital of Sweden?' \
-n 20 \
-no-cnv \
--verbose-prompt \
-ngl 40 \
--backend-sampling \
--samplers 'top_k;temperature'
```
In this case all sampling will happen on the backend since both
`top_k` and `temperature` are supported backend samplers.
To enable partial backend sampling (hybrid sampling), for example
running `top_k` and `temperature` on the backend and `top_p` on the CPU,
the following sampler chain could be specified:
```console
$ llama-cli -m models/Qwen2.5-VL-3B-Instruct-Q8_0.gguf \
--prompt 'What is the capital of Sweden?' \
-n 20 \
-no-cnv \
--verbose-prompt \
-ngl 40 \
--backend-sampling \
--samplers 'top_k;temperature;top_p'
```
If this looks good then I'll follow up with updates to the llama-cli and
llama-server documentation to reflect these changes.
* CUDA: Add top-k implementation
* sampling : add min-p backend sampler
* Use `FetchContent` over CPM as it's bundled with CMake
Thanks @ggerganov for the suggestion
* common : add get_active_samplers function to check enabled samplers
This commit adds a function to check whether a sampler is actually
enabled, meaning that it does not have values that disable its effect.
This is then used by the backend sampler initialization to avoid
considering samplers that are not enabled when determining the split
point between backend and CPU samplers (see the sketch below).
The motivation for this is that it allows the default sampler chain for
`--samplers` to be used: any sampler that is not enabled will no longer
cause the backend samplers to be skipped.
For example, before this change if the penalties sampler was included in
the samplers list but had default values that disable it, it would cause
the backend samplers to be skipped entirely.
This commit also contains some refactoring to remove some code
duplication.
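An illustration of what "not enabled" can mean here, based on the default values
documented in common_params_sampling; the actual get_active_samplers implementation
may use different criteria:
```cpp
#include <cstdint>

static bool top_k_is_active(int32_t top_k) { return top_k > 0; }    // <= 0: use full vocab
static bool top_p_is_active(float   top_p) { return top_p < 1.0f; } // 1.0: disabled
static bool min_p_is_active(float   min_p) { return min_p > 0.0f; } // 0.0: disabled
static bool temp_is_active (float   temp)  { return temp != 1.0f; } // 1.0: no-op scaling

// penalties are a no-op when last_n == 0 or when all penalty values are at their neutral defaults
static bool penalties_are_active(int32_t last_n, float repeat, float freq, float present) {
    return last_n != 0 && (repeat != 1.0f || freq != 0.0f || present != 0.0f);
}
```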
* cuda : fix editorconfig-checker warning
* sampling : use argmax for min-p sampling
* sampling : fix temperature check to allow zero temperature
This commit modifies the temperature sampling check to allow a
temperature value of zero. Previously, the check only allowed
positive temperature values, which excluded the valid case of
zero temperature.
The motivation for this is to enable a zero temperature setting which is
also currently causing the following test to fail:
```console
(venv) $ cd tools/server/tests
(venv) $ ./tests.sh unit/test_basic.py::test_load_split_model
```
* cuda : fix top-k compilation when CUB is unavailable
This commit adds a macro guard around argsort_f32_i32_cuda_cub usage
in the top-k fallback path, falling back to bitonic sort when
GGML_CUDA_USE_CUB is not defined.
The motivation for this is that some environments like AMD HIP
do not have CUB available, causing compilation failure.
Refs: https://github.com/ggml-org/llama.cpp/actions/runs/19728226426/job/56523606840#step:6:208
* sampling : add comments about backend sampler [no ci]
This commit adds a comment to llama_context's constructor explaining why
backend samplers are initialized early in the process.
* sampling : remove backend sampling chain from common_sampler
This commit removes the backend sampling chain from the common_sampler
structure and related functions.
The motivation for this change is that the backend samplers are not
currently set on the context, and if they were, they would cause a
graph reallocation to occur. Instead, the initialization is handled as
it currently is, by llama_context's constructor.
* Fix top-k comp & behavior for non-CUB path
Some changes made in 5ea3be265ba6f8916daf52e19e3fb8efe9a03637 were
incomplete. In the non-CUB case, bitonic sort and its limitation of
ncols < 1024 have to apply, similar to argsort.cu.
* sampling : support intermixed backend/cpu samplers
This commit updates the backend sampling implementation to support
intermixed usage of backend and CPU samplers within the same batch.
The initial implementation was developed as an all-or-nothing solution:
either perform backend sampling for the entire batch, or perform CPU
sampling for the entire batch.
The motivation for this change is to support batches with mixed
sequences. For example, we may have a backend sampler configured for
sequence 0, while sequence 1 in the same batch uses CPU sampling. This
was not supported in the initial implementation.
This issue manifested in llama-server with the webui: decoding with
backend samplers would work initially, but after changing to CPU
sampling, a slot (sequence) could still be using a backend sampler.
This meant that logits in output_reserve would not be allocated,
resulting in an error.
The solution in this commit inspects the batch to determine which
sampling modes are needed and allocates buffers accordingly. However,
there is a known inefficiency: when we have intermixed backend/CPU
samplers in the same batch, we currently copy all logits to the host,
even for sequences using backend samplers.
Added test_backend_cpu_mixed_batch to verify correct behavior with
mixed backend/CPU samplers in a single batch, including dynamic
sampler switching between decode calls.
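A sketch of the batch inspection idea described above (field and helper names are
hypothetical, not the actual llama_context code):
```cpp
// Decide per batch whether backend-sampled outputs, CPU logits, or both need buffers.
struct sampling_needs {
    bool backend = false; // at least one output token uses a backend sampler chain
    bool cpu     = false; // at least one output token needs host logits for CPU sampling
};

template <typename Batch, typename HasBackendSampler>
sampling_needs inspect_batch(const Batch & batch, HasBackendSampler has_backend_sampler) {
    sampling_needs needs;
    for (int i = 0; i < batch.n_tokens; ++i) {
        if (!batch.logits || !batch.logits[i]) {
            continue; // not an output token
        }
        if (has_backend_sampler(batch.seq_id[i][0])) {
            needs.backend = true;
        } else {
            needs.cpu = true;
        }
    }
    return needs;
}
```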
* squash! sampling : support intermixed backend/cpu samplers
Add a check that logits is not null, which can happen for embeddings.
* squash! sampling : support intermixed backend/cpu samplers
Fix llama-save-load-state, which currently fails, by handling the case
when batch.logits is nullptr (like when loading state) and allocating
space for all outputs as CPU logits.
* refactor : simplify and improve memory management
* Add initial version for top-p sampling
As we only support static graphs for the time being and we don't know
the size of the output of top-p, we have to do the same value-scaling
as for the min-p operator.
Further improvements can be applied to the unit test (i.e. checking for
equivalence between top_p on the backend and top_p on the CPU) and also
by constructing candidates and sorting those, as opposed to reversing
the sort of the logits (this would be arange + get_rows instead of
argsort + get_rows).
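For reference, a plain CPU illustration of the top-p rule that the static graph has
to express with fixed-size tensors (this is not the ggml implementation):
```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <numeric>
#include <vector>

// Sort probabilities descending, keep tokens until the cumulative probability
// reaches p (inclusive), and mask out the rest.
void top_p_mask(std::vector<float> & logits, float p) {
    const int n = (int) logits.size();

    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(), [&](int a, int b) { return logits[a] > logits[b]; });

    // softmax over the sorted logits
    std::vector<float> probs(n);
    float max_l = logits[idx[0]], sum = 0.0f;
    for (int i = 0; i < n; ++i) { probs[i] = std::exp(logits[idx[i]] - max_l); sum += probs[i]; }
    for (int i = 0; i < n; ++i) { probs[i] /= sum; }

    // keep the smallest prefix whose cumulative probability reaches p (inclusive)
    float cum = 0.0f;
    int keep = n;
    for (int i = 0; i < n; ++i) {
        cum += probs[i];
        if (cum >= p) { keep = i + 1; break; }
    }
    for (int i = keep; i < n; ++i) {
        logits[idx[i]] = -std::numeric_limits<float>::infinity(); // masked out
    }
}
```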
* sampling : use logits directly for min-p filtering
* sampling : simplify
* llama : simplify
* llama : cleanup + naming
* llama : call backend_init once
* llama : reserve graphs with samplers
* llama : naming
* cont : naming
* sampling : lower log level for output buffer reallocations [no ci]
This commit changes the logging level for output buffer reallocations
in the llama_context::output_reserve function from INFO to DEBUG.
The motivation for this is that it currently logs to info and when
enabling verbose logging for llama-cli this will get mixed with the
output, for example:
```console
What is the capital of Sweden?output_reserve: reallocating output buffer from size 0.58 MiB to 1.74 MiB
1. Stockholm
2\. Helsinki
Based are the options
1. Stockholm
Explanation: Stockholm is the capital of
...
```
* Fix backend_top_p_sampler
softmax(softmax) will return a uniform distribution, so we should
return the logits instead of the softmax.
* Factor out `ggml_sort` into its own function
* Make backend's top_p sampler inclusive
In addition to matching the algorithm proposed in the original
[paper](https://arxiv.org/abs/1904.09751), this resolves the edge case
where `max_p` > `top_p` for a single logit, where the mask would
otherwise be empty (and we would thus sample from the whole vocabulary
with equal likelihood).
* common : simplify sampler chain initialization
* sampling : do not create empty samplers
* sampling : fix top_p empty condition
* examples : remove outdated backend sampling section
This commit removes the outdated section about using backend samplers
from the README.md file in the examples/batched.
* sampling : fix backend temp sampler for zero temperature
This commit fixes the implementation of the temperature-based sampler
for the case when the temperature is set to zero. This now correctly
selects the most probable token by masking out all other tokens in the
logits.
* CUDA: Move cccl fetch to after cuda has been enabled in CMakeLists.txt
This will allow cccl to set build flags for the CUDA compiler, required
e.g. for MSVC compat, see also
https://github.com/NVIDIA/cccl/pull/6791
* CUDA: Use standard-compliant preprocessor for MSVC builds
Workarounds of https://github.com/NVIDIA/cccl/pull/6791 will not be
backported to CCCL 3.2, only the diagnostics/error messages will:
https://github.com/NVIDIA/cccl/pull/6827
* CUDA: Update CCCL's rc candidate
* squash! sampling : fix backend temp sampler for zero temperature
This modifies the parent commit to simply return the most probable token
instead of masking the logits.
* sampling : implement temp_ext_backend sampling
This commit implements the apply function for the extended temperature
sampling.
* sampling : minor cleanup
* sampling : stop short if backend sampler sampled a token
This commit modifies the graph building logic to immediately continue
when a token has already been sampled by the backend sampler.
It also updates the backend temperature sampling test to include top-k
and distribution samplers in the chain, to verify that they do not
produce any logits (they are not run).
* Revert "sampling : stop short if backend sampler sampled a token"
This reverts commit 87b2719eca55b30afff600fc7f61c6cce9452cbf.
* sampling : fix backend temp sampling to use logits masking
* sampling : simplify temp sampling
* sampling : remove redundant calls to ggml_build_forward_expand
* sampling : check backend support during init
* cont : keep backend sampling disabled for now
* sampling : fix outputs and device checks
* sampling : fix candidates logic
* Add perf-tests for CUMSUM
* Readd `cub::DeviceScan::InclusiveSum`-based CumSum
For single rows with large column counts, a for-loop over
`cub::DeviceScan::InclusiveSum` offered by CUB outperforms the
`cumsum_cub_kernel` where `cub::BlockScan` is used (see the sketch
below).
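An illustrative shape of this path (not the actual cumsum.cu code):
```cuda
#include <cub/cub.cuh>

// One InclusiveSum call per row; `tmp` is sized beforehand with the usual
// tmp == nullptr query call.
static void cumsum_rows_cub(const float * src, float * dst, int64_t ncols, int64_t nrows,
                            void * tmp, size_t tmp_size, cudaStream_t stream) {
    for (int64_t r = 0; r < nrows; ++r) {
        cub::DeviceScan::InclusiveSum(tmp, tmp_size,
                                      src + r * ncols, dst + r * ncols, (int) ncols, stream);
    }
}
```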
Numbers before this change:
```
Backend 1/3: CUDA0
Device description: NVIDIA RTX 6000 Ada Generation
Device memory: 48510 MB (48039 MB free)
CUMSUM(type=f32,ne=[128,128,4,4]): 311258 runs - 3.26 us/run - 2048 kB/run - 599.76 GB/s
CUMSUM(type=f32,ne=[2048,16,5,4]): 229390 runs - 4.40 us/run - 5120 kB/run - 1110.23 GB/s
CUMSUM(type=f32,ne=[20000,10,4,1]): 37583 runs - 29.63 us/run - 6250 kB/run - 201.18 GB/s
CUMSUM(type=f32,ne=[128,1,1,1]): 892819 runs - 1.12 us/run - 1 kB/run - 0.85 GB/s
CUMSUM(type=f32,ne=[1024,1,1,1]): 450505 runs - 2.25 us/run - 8 kB/run - 3.39 GB/s
CUMSUM(type=f32,ne=[4096,1,1,1]): 155629 runs - 6.61 us/run - 32 kB/run - 4.62 GB/s
CUMSUM(type=f32,ne=[8192,1,1,1]): 81910 runs - 12.60 us/run - 64 kB/run - 4.85 GB/s
CUMSUM(type=f32,ne=[16384,1,1,1]): 49146 runs - 23.99 us/run - 128 kB/run - 5.09 GB/s
CUMSUM(type=f32,ne=[32768,1,1,1]): 24573 runs - 47.10 us/run - 256 kB/run - 5.18 GB/s
CUMSUM(type=f32,ne=[65536,1,1,1]): 16382 runs - 93.57 us/run - 512 kB/run - 5.22 GB/s
CUMSUM(type=f32,ne=[131072,1,1,1]): 8191 runs - 184.79 us/run - 1024 kB/run - 5.29 GB/s
CUMSUM(type=f32,ne=[200000,1,1,1]): 8191 runs - 280.43 us/run - 1562 kB/run - 5.31 GB/s
CUMSUM(type=f32,ne=[2000000,1,1,1]): 2148 runs - 2771.23 us/run - 15625 kB/run - 5.38 GB/s
CUMSUM(type=f32,ne=[128,4,1,1]): 458696 runs - 2.21 us/run - 4 kB/run - 1.73 GB/s
CUMSUM(type=f32,ne=[1024,4,1,1]): 360404 runs - 2.82 us/run - 32 kB/run - 10.83 GB/s
CUMSUM(type=f32,ne=[4096,4,1,1]): 147438 runs - 7.12 us/run - 128 kB/run - 17.15 GB/s
CUMSUM(type=f32,ne=[8192,4,1,1]): 81910 runs - 12.90 us/run - 256 kB/run - 18.92 GB/s
CUMSUM(type=f32,ne=[16384,4,1,1]): 49146 runs - 24.32 us/run - 512 kB/run - 20.08 GB/s
CUMSUM(type=f32,ne=[32768,4,1,1]): 24573 runs - 47.28 us/run - 1024 kB/run - 20.66 GB/s
CUMSUM(type=f32,ne=[65536,4,1,1]): 16382 runs - 93.21 us/run - 2048 kB/run - 20.96 GB/s
CUMSUM(type=f32,ne=[131072,4,1,1]): 8191 runs - 185.04 us/run - 4096 kB/run - 21.11 GB/s
CUMSUM(type=f32,ne=[200000,4,1,1]): 5369 runs - 282.08 us/run - 6250 kB/run - 21.13 GB/s
CUMSUM(type=f32,ne=[2000000,4,1,1]): 537 runs - 2806.46 us/run - 62500 kB/run - 21.26 GB/s
CUMSUM(type=f32,ne=[128,8,1,1]): 458696 runs - 2.20 us/run - 8 kB/run - 3.47 GB/s
CUMSUM(type=f32,ne=[1024,8,1,1]): 360404 runs - 2.82 us/run - 64 kB/run - 21.66 GB/s
CUMSUM(type=f32,ne=[4096,8,1,1]): 147438 runs - 7.12 us/run - 256 kB/run - 34.28 GB/s
CUMSUM(type=f32,ne=[8192,8,1,1]): 81910 runs - 12.90 us/run - 512 kB/run - 37.84 GB/s
CUMSUM(type=f32,ne=[16384,8,1,1]): 49146 runs - 24.32 us/run - 1024 kB/run - 40.15 GB/s
CUMSUM(type=f32,ne=[32768,8,1,1]): 24573 runs - 47.28 us/run - 2048 kB/run - 41.31 GB/s
CUMSUM(type=f32,ne=[65536,8,1,1]): 16382 runs - 93.20 us/run - 4096 kB/run - 41.92 GB/s
CUMSUM(type=f32,ne=[131072,8,1,1]): 8194 runs - 185.05 us/run - 8192 kB/run - 42.22 GB/s
CUMSUM(type=f32,ne=[200000,8,1,1]): 5370 runs - 282.15 us/run - 12500 kB/run - 42.26 GB/s
CUMSUM(type=f32,ne=[2000000,8,1,1]): 269 runs - 4067.61 us/run - 125000 kB/run - 29.36 GB/s
CUMSUM(type=f32,ne=[128,16,1,1]): 303067 runs - 3.32 us/run - 16 kB/run - 4.60 GB/s
CUMSUM(type=f32,ne=[1024,16,1,1]): 303067 runs - 3.32 us/run - 128 kB/run - 36.76 GB/s
CUMSUM(type=f32,ne=[4096,16,1,1]): 147438 runs - 7.17 us/run - 512 kB/run - 68.13 GB/s
CUMSUM(type=f32,ne=[8192,16,1,1]): 81910 runs - 12.90 us/run - 1024 kB/run - 75.68 GB/s
CUMSUM(type=f32,ne=[16384,16,1,1]): 49146 runs - 24.33 us/run - 2048 kB/run - 80.28 GB/s
CUMSUM(type=f32,ne=[32768,16,1,1]): 24573 runs - 47.30 us/run - 4096 kB/run - 82.59 GB/s
CUMSUM(type=f32,ne=[65536,16,1,1]): 12291 runs - 93.24 us/run - 8192 kB/run - 83.80 GB/s
CUMSUM(type=f32,ne=[131072,16,1,1]): 6147 runs - 185.07 us/run - 16384 kB/run - 84.45 GB/s
CUMSUM(type=f32,ne=[200000,16,1,1]): 4029 runs - 282.40 us/run - 25000 kB/run - 84.46 GB/s
CUMSUM(type=f32,ne=[2000000,16,1,1]): 270 runs - 4118.40 us/run - 250000 kB/run - 58.11 GB/s
Backend CUDA0: OK
Backend 2/3: CUDA1
Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Device memory: 97250 MB (96677 MB free)
CUMSUM(type=f32,ne=[128,128,4,4]): 368595 runs - 2.73 us/run - 2048 kB/run - 715.83 GB/s
CUMSUM(type=f32,ne=[2048,16,5,4]): 216282 runs - 4.72 us/run - 5120 kB/run - 1035.32 GB/s
CUMSUM(type=f32,ne=[20000,10,4,1]): 32214 runs - 34.33 us/run - 6250 kB/run - 173.64 GB/s
CUMSUM(type=f32,ne=[128,1,1,1]): 810909 runs - 1.24 us/run - 1 kB/run - 0.77 GB/s
CUMSUM(type=f32,ne=[1024,1,1,1]): 401359 runs - 2.52 us/run - 8 kB/run - 3.03 GB/s
CUMSUM(type=f32,ne=[4096,1,1,1]): 139247 runs - 7.44 us/run - 32 kB/run - 4.10 GB/s
CUMSUM(type=f32,ne=[8192,1,1,1]): 73719 runs - 14.27 us/run - 64 kB/run - 4.28 GB/s
CUMSUM(type=f32,ne=[16384,1,1,1]): 40955 runs - 27.24 us/run - 128 kB/run - 4.48 GB/s
CUMSUM(type=f32,ne=[32768,1,1,1]): 24573 runs - 53.46 us/run - 256 kB/run - 4.57 GB/s
CUMSUM(type=f32,ne=[65536,1,1,1]): 16382 runs - 105.29 us/run - 512 kB/run - 4.64 GB/s
CUMSUM(type=f32,ne=[131072,1,1,1]): 8191 runs - 210.15 us/run - 1024 kB/run - 4.65 GB/s
CUMSUM(type=f32,ne=[200000,1,1,1]): 8191 runs - 318.22 us/run - 1562 kB/run - 4.68 GB/s
CUMSUM(type=f32,ne=[2000000,1,1,1]): 2148 runs - 3142.23 us/run - 15625 kB/run - 4.74 GB/s
CUMSUM(type=f32,ne=[128,4,1,1]): 303067 runs - 3.34 us/run - 4 kB/run - 1.14 GB/s
CUMSUM(type=f32,ne=[1024,4,1,1]): 253921 runs - 4.03 us/run - 32 kB/run - 7.58 GB/s
CUMSUM(type=f32,ne=[4096,4,1,1]): 122865 runs - 8.20 us/run - 128 kB/run - 14.89 GB/s
CUMSUM(type=f32,ne=[8192,4,1,1]): 73719 runs - 14.96 us/run - 256 kB/run - 16.32 GB/s
CUMSUM(type=f32,ne=[16384,4,1,1]): 40955 runs - 28.66 us/run - 512 kB/run - 17.04 GB/s
CUMSUM(type=f32,ne=[32768,4,1,1]): 24573 runs - 54.21 us/run - 1024 kB/run - 18.01 GB/s
CUMSUM(type=f32,ne=[65536,4,1,1]): 16382 runs - 106.49 us/run - 2048 kB/run - 18.34 GB/s
CUMSUM(type=f32,ne=[131072,4,1,1]): 8191 runs - 210.88 us/run - 4096 kB/run - 18.52 GB/s
CUMSUM(type=f32,ne=[200000,4,1,1]): 5369 runs - 321.77 us/run - 6250 kB/run - 18.53 GB/s
CUMSUM(type=f32,ne=[2000000,4,1,1]): 537 runs - 3191.79 us/run - 62500 kB/run - 18.69 GB/s
CUMSUM(type=f32,ne=[128,8,1,1]): 376786 runs - 2.67 us/run - 8 kB/run - 2.86 GB/s
CUMSUM(type=f32,ne=[1024,8,1,1]): 245730 runs - 4.10 us/run - 64 kB/run - 14.90 GB/s
CUMSUM(type=f32,ne=[4096,8,1,1]): 122865 runs - 8.20 us/run - 256 kB/run - 29.79 GB/s
CUMSUM(type=f32,ne=[8192,8,1,1]): 65528 runs - 16.38 us/run - 512 kB/run - 29.82 GB/s
CUMSUM(type=f32,ne=[16384,8,1,1]): 40955 runs - 28.69 us/run - 1024 kB/run - 34.04 GB/s
CUMSUM(type=f32,ne=[32768,8,1,1]): 24573 runs - 55.28 us/run - 2048 kB/run - 35.33 GB/s
CUMSUM(type=f32,ne=[65536,8,1,1]): 16382 runs - 108.50 us/run - 4096 kB/run - 36.00 GB/s
CUMSUM(type=f32,ne=[131072,8,1,1]): 8194 runs - 213.75 us/run - 8192 kB/run - 36.55 GB/s
CUMSUM(type=f32,ne=[200000,8,1,1]): 5370 runs - 326.31 us/run - 12500 kB/run - 36.54 GB/s
CUMSUM(type=f32,ne=[2000000,8,1,1]): 538 runs - 3252.68 us/run - 125000 kB/run - 36.72 GB/s
CUMSUM(type=f32,ne=[128,16,1,1]): 303067 runs - 3.32 us/run - 16 kB/run - 4.60 GB/s
CUMSUM(type=f32,ne=[1024,16,1,1]): 253921 runs - 4.06 us/run - 128 kB/run - 30.09 GB/s
CUMSUM(type=f32,ne=[4096,16,1,1]): 122865 runs - 8.20 us/run - 512 kB/run - 59.57 GB/s
CUMSUM(type=f32,ne=[8192,16,1,1]): 65528 runs - 16.38 us/run - 1024 kB/run - 59.63 GB/s
CUMSUM(type=f32,ne=[16384,16,1,1]): 40955 runs - 28.69 us/run - 2048 kB/run - 68.09 GB/s
CUMSUM(type=f32,ne=[32768,16,1,1]): 24573 runs - 55.28 us/run - 4096 kB/run - 70.67 GB/s
CUMSUM(type=f32,ne=[65536,16,1,1]): 12291 runs - 108.50 us/run - 8192 kB/run - 72.02 GB/s
CUMSUM(type=f32,ne=[131072,16,1,1]): 6147 runs - 213.60 us/run - 16384 kB/run - 73.17 GB/s
CUMSUM(type=f32,ne=[200000,16,1,1]): 4029 runs - 326.04 us/run - 25000 kB/run - 73.15 GB/s
CUMSUM(type=f32,ne=[2000000,16,1,1]): 270 runs - 5458.69 us/run - 250000 kB/run - 43.84 GB/s
```
----
Numbers after:
```
Backend 1/3: CUDA0
Device description: NVIDIA RTX 6000 Ada Generation
Device memory: 48510 MB (48039 MB free)
CUMSUM(type=f32,ne=[128,128,4,4]): 311258 runs - 3.25 us/run - 2048 kB/run - 601.62 GB/s
CUMSUM(type=f32,ne=[2048,16,5,4]): 229390 runs - 4.40 us/run - 5120 kB/run - 1110.14 GB/s
CUMSUM(type=f32,ne=[20000,10,4,1]): 37583 runs - 29.67 us/run - 6250 kB/run - 200.89 GB/s
CUMSUM(type=f32,ne=[128,1,1,1]): 892819 runs - 1.12 us/run - 1 kB/run - 0.85 GB/s
CUMSUM(type=f32,ne=[1024,1,1,1]): 458696 runs - 2.21 us/run - 8 kB/run - 3.45 GB/s
CUMSUM(type=f32,ne=[4096,1,1,1]): 376786 runs - 2.66 us/run - 32 kB/run - 11.46 GB/s
CUMSUM(type=f32,ne=[8192,1,1,1]): 393168 runs - 2.59 us/run - 64 kB/run - 23.57 GB/s
CUMSUM(type=f32,ne=[16384,1,1,1]): 393168 runs - 2.59 us/run - 128 kB/run - 47.15 GB/s
CUMSUM(type=f32,ne=[32768,1,1,1]): 376786 runs - 2.69 us/run - 256 kB/run - 90.69 GB/s
CUMSUM(type=f32,ne=[65536,1,1,1]): 327640 runs - 3.06 us/run - 512 kB/run - 159.65 GB/s
CUMSUM(type=f32,ne=[131072,1,1,1]): 311258 runs - 3.28 us/run - 1024 kB/run - 297.77 GB/s
CUMSUM(type=f32,ne=[200000,1,1,1]): 270303 runs - 3.74 us/run - 1562 kB/run - 398.14 GB/s
CUMSUM(type=f32,ne=[2000000,1,1,1]): 137472 runs - 7.35 us/run - 15625 kB/run - 2026.94 GB/s
CUMSUM(type=f32,ne=[128,4,1,1]): 876437 runs - 1.14 us/run - 4 kB/run - 3.33 GB/s
CUMSUM(type=f32,ne=[1024,4,1,1]): 442314 runs - 2.28 us/run - 32 kB/run - 13.39 GB/s
CUMSUM(type=f32,ne=[4096,4,1,1]): 155629 runs - 6.69 us/run - 128 kB/run - 18.24 GB/s
CUMSUM(type=f32,ne=[8192,4,1,1]): 81910 runs - 12.53 us/run - 256 kB/run - 19.49 GB/s
CUMSUM(type=f32,ne=[16384,4,1,1]): 49146 runs - 24.18 us/run - 512 kB/run - 20.20 GB/s
CUMSUM(type=f32,ne=[32768,4,1,1]): 65528 runs - 15.34 us/run - 1024 kB/run - 63.66 GB/s
CUMSUM(type=f32,ne=[65536,4,1,1]): 73719 runs - 14.76 us/run - 2048 kB/run - 132.35 GB/s
CUMSUM(type=f32,ne=[131072,4,1,1]): 65528 runs - 16.01 us/run - 4096 kB/run - 244.07 GB/s
CUMSUM(type=f32,ne=[200000,4,1,1]): 64428 runs - 16.51 us/run - 6250 kB/run - 360.97 GB/s
CUMSUM(type=f32,ne=[2000000,4,1,1]): 33831 runs - 29.59 us/run - 62500 kB/run - 2016.08 GB/s
CUMSUM(type=f32,ne=[128,8,1,1]): 868246 runs - 1.16 us/run - 8 kB/run - 6.59 GB/s
CUMSUM(type=f32,ne=[1024,8,1,1]): 442314 runs - 2.28 us/run - 64 kB/run - 26.76 GB/s
CUMSUM(type=f32,ne=[4096,8,1,1]): 155629 runs - 6.69 us/run - 256 kB/run - 36.48 GB/s
CUMSUM(type=f32,ne=[8192,8,1,1]): 81910 runs - 12.53 us/run - 512 kB/run - 38.97 GB/s
CUMSUM(type=f32,ne=[16384,8,1,1]): 49146 runs - 24.17 us/run - 1024 kB/run - 40.41 GB/s
CUMSUM(type=f32,ne=[32768,8,1,1]): 24573 runs - 47.53 us/run - 2048 kB/run - 41.10 GB/s
CUMSUM(type=f32,ne=[65536,8,1,1]): 16382 runs - 61.25 us/run - 4096 kB/run - 63.77 GB/s
CUMSUM(type=f32,ne=[131072,8,1,1]): 32776 runs - 31.79 us/run - 8192 kB/run - 245.82 GB/s
CUMSUM(type=f32,ne=[200000,8,1,1]): 32220 runs - 32.90 us/run - 12500 kB/run - 362.35 GB/s
CUMSUM(type=f32,ne=[2000000,8,1,1]): 6725 runs - 151.99 us/run - 125000 kB/run - 785.77 GB/s
CUMSUM(type=f32,ne=[128,16,1,1]): 851864 runs - 1.18 us/run - 16 kB/run - 12.97 GB/s
CUMSUM(type=f32,ne=[1024,16,1,1]): 442314 runs - 2.30 us/run - 128 kB/run - 53.13 GB/s
CUMSUM(type=f32,ne=[4096,16,1,1]): 155629 runs - 6.68 us/run - 512 kB/run - 73.13 GB/s
CUMSUM(type=f32,ne=[8192,16,1,1]): 81910 runs - 12.68 us/run - 1024 kB/run - 77.00 GB/s
CUMSUM(type=f32,ne=[16384,16,1,1]): 40955 runs - 24.56 us/run - 2048 kB/run - 79.53 GB/s
CUMSUM(type=f32,ne=[32768,16,1,1]): 24573 runs - 47.52 us/run - 4096 kB/run - 82.21 GB/s
CUMSUM(type=f32,ne=[65536,16,1,1]): 12291 runs - 93.44 us/run - 8192 kB/run - 83.62 GB/s
CUMSUM(type=f32,ne=[131072,16,1,1]): 16392 runs - 63.36 us/run - 16384 kB/run - 246.68 GB/s
CUMSUM(type=f32,ne=[200000,16,1,1]): 16116 runs - 65.25 us/run - 25000 kB/run - 365.53 GB/s
CUMSUM(type=f32,ne=[2000000,16,1,1]): 3375 runs - 304.46 us/run - 250000 kB/run - 785.98 GB/s
Backend CUDA0: OK
Backend 2/3: CUDA1
Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Device memory: 97250 MB (96677 MB free)
CUMSUM(type=f32,ne=[128,128,4,4]): 376786 runs - 2.69 us/run - 2048 kB/run - 727.04 GB/s
CUMSUM(type=f32,ne=[2048,16,5,4]): 216282 runs - 4.64 us/run - 5120 kB/run - 1053.30 GB/s
CUMSUM(type=f32,ne=[20000,10,4,1]): 32214 runs - 34.21 us/run - 6250 kB/run - 174.27 GB/s
CUMSUM(type=f32,ne=[128,1,1,1]): 819100 runs - 1.22 us/run - 1 kB/run - 0.78 GB/s
CUMSUM(type=f32,ne=[1024,1,1,1]): 409550 runs - 2.47 us/run - 8 kB/run - 3.09 GB/s
CUMSUM(type=f32,ne=[4096,1,1,1]): 303067 runs - 3.31 us/run - 32 kB/run - 9.21 GB/s
CUMSUM(type=f32,ne=[8192,1,1,1]): 237539 runs - 4.33 us/run - 64 kB/run - 14.08 GB/s
CUMSUM(type=f32,ne=[16384,1,1,1]): 237539 runs - 4.33 us/run - 128 kB/run - 28.17 GB/s
CUMSUM(type=f32,ne=[32768,1,1,1]): 188393 runs - 5.37 us/run - 256 kB/run - 45.47 GB/s
CUMSUM(type=f32,ne=[65536,1,1,1]): 188393 runs - 5.41 us/run - 512 kB/run - 90.20 GB/s
CUMSUM(type=f32,ne=[131072,1,1,1]): 188393 runs - 5.41 us/run - 1024 kB/run - 180.41 GB/s
CUMSUM(type=f32,ne=[200000,1,1,1]): 188393 runs - 5.41 us/run - 1562 kB/run - 275.27 GB/s
CUMSUM(type=f32,ne=[2000000,1,1,1]): 128880 runs - 7.76 us/run - 15625 kB/run - 1920.33 GB/s
CUMSUM(type=f32,ne=[128,4,1,1]): 802718 runs - 1.26 us/run - 4 kB/run - 3.03 GB/s
CUMSUM(type=f32,ne=[1024,4,1,1]): 401359 runs - 2.51 us/run - 32 kB/run - 12.18 GB/s
CUMSUM(type=f32,ne=[4096,4,1,1]): 139247 runs - 7.51 us/run - 128 kB/run - 16.26 GB/s
CUMSUM(type=f32,ne=[8192,4,1,1]): 73719 runs - 14.17 us/run - 256 kB/run - 17.23 GB/s
CUMSUM(type=f32,ne=[16384,4,1,1]): 40955 runs - 27.37 us/run - 512 kB/run - 17.84 GB/s
CUMSUM(type=f32,ne=[32768,4,1,1]): 40955 runs - 26.33 us/run - 1024 kB/run - 37.10 GB/s
CUMSUM(type=f32,ne=[65536,4,1,1]): 40955 runs - 26.19 us/run - 2048 kB/run - 74.59 GB/s
CUMSUM(type=f32,ne=[131072,4,1,1]): 40955 runs - 26.35 us/run - 4096 kB/run - 148.26 GB/s
CUMSUM(type=f32,ne=[200000,4,1,1]): 42952 runs - 24.18 us/run - 6250 kB/run - 246.51 GB/s
CUMSUM(type=f32,ne=[2000000,4,1,1]): 32757 runs - 31.01 us/run - 62500 kB/run - 1923.68 GB/s
CUMSUM(type=f32,ne=[128,8,1,1]): 786336 runs - 1.28 us/run - 8 kB/run - 5.95 GB/s
CUMSUM(type=f32,ne=[1024,8,1,1]): 393168 runs - 2.57 us/run - 64 kB/run - 23.73 GB/s
CUMSUM(type=f32,ne=[4096,8,1,1]): 131056 runs - 7.67 us/run - 256 kB/run - 31.82 GB/s
CUMSUM(type=f32,ne=[8192,8,1,1]): 73719 runs - 14.43 us/run - 512 kB/run - 33.84 GB/s
CUMSUM(type=f32,ne=[16384,8,1,1]): 40955 runs - 27.90 us/run - 1024 kB/run - 35.01 GB/s
CUMSUM(type=f32,ne=[32768,8,1,1]): 24573 runs - 54.63 us/run - 2048 kB/run - 35.75 GB/s
CUMSUM(type=f32,ne=[65536,8,1,1]): 16382 runs - 72.24 us/run - 4096 kB/run - 54.08 GB/s
CUMSUM(type=f32,ne=[131072,8,1,1]): 20485 runs - 52.66 us/run - 8192 kB/run - 148.37 GB/s
CUMSUM(type=f32,ne=[200000,8,1,1]): 21480 runs - 48.00 us/run - 12500 kB/run - 248.42 GB/s
CUMSUM(type=f32,ne=[2000000,8,1,1]): 16140 runs - 61.99 us/run - 125000 kB/run - 1926.51 GB/s
CUMSUM(type=f32,ne=[128,16,1,1]): 786336 runs - 1.28 us/run - 16 kB/run - 11.90 GB/s
CUMSUM(type=f32,ne=[1024,16,1,1]): 393168 runs - 2.57 us/run - 128 kB/run - 47.57 GB/s
CUMSUM(type=f32,ne=[4096,16,1,1]): 131056 runs - 7.65 us/run - 512 kB/run - 63.83 GB/s
CUMSUM(type=f32,ne=[8192,16,1,1]): 73719 runs - 14.42 us/run - 1024 kB/run - 67.74 GB/s
CUMSUM(type=f32,ne=[16384,16,1,1]): 40955 runs - 27.87 us/run - 2048 kB/run - 70.09 GB/s
CUMSUM(type=f32,ne=[32768,16,1,1]): 24573 runs - 54.54 us/run - 4096 kB/run - 71.63 GB/s
CUMSUM(type=f32,ne=[65536,16,1,1]): 12291 runs - 107.53 us/run - 8192 kB/run - 72.66 GB/s
CUMSUM(type=f32,ne=[131072,16,1,1]): 10245 runs - 105.10 us/run - 16384 kB/run - 148.70 GB/s
CUMSUM(type=f32,ne=[200000,16,1,1]): 10744 runs - 95.36 us/run - 25000 kB/run - 250.11 GB/s
CUMSUM(type=f32,ne=[2000000,16,1,1]): 5400 runs - 186.97 us/run - 250000 kB/run - 1279.90 GB/s
```
* sampling : expand support (wip)
* tests : fix memory leaks
* cont : fixes
* tests : check temp back to 0.0
* sampling : fix top-p
* sampling : handle n_probs case
* server : handle unsupported cases
* metal : print node names for debugging
* ggml : remove redundant src in ggml_cast
* ggml-alloc : fix reuse-parent logic for misaligned sizes
* Revert "ggml : remove redundant src in ggml_cast"
This reverts commit 62d1b0082dbad699fbeea85a096bc334e3c1c0e6.
* CUDA: Add Cooperative-Groups-based parallelization of ncols in softmax
The old implementation parallelizes rows across SMs, which does not fit
the needs of backend sampling (where ncols >> nrows and we thus want to
parallelize ncols across SMs).
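A minimal sketch of the cooperative-groups pattern for a single row (not the actual
softmax.cu kernel; it assumes a power-of-two block size of at most 256 and a
cooperative launch):
```cuda
#include <cooperative_groups.h>
#include <math.h>

namespace cg = cooperative_groups;

// All blocks cooperate on one row: each block processes a strided slice of the columns,
// per-block partials are exchanged through global memory (separate buffers for max and
// sum), and grid-wide syncs separate the phases. Launch via cudaLaunchCooperativeKernel
// on a device that supports cooperative launches; part_max/part_sum hold gridDim.x floats.
__global__ void softmax_single_row_coop(const float * x, float * y, int ncols,
                                        float * part_max, float * part_sum) {
    cg::grid_group grid = cg::this_grid();
    __shared__ float smem[256]; // assumes blockDim.x <= 256 and a power of two

    const int tid    = blockIdx.x * blockDim.x + threadIdx.x;
    const int stride = gridDim.x * blockDim.x;

    // phase 1: global max of the row
    float v = -INFINITY;
    for (int i = tid; i < ncols; i += stride) v = fmaxf(v, x[i]);
    smem[threadIdx.x] = v;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) smem[threadIdx.x] = fmaxf(smem[threadIdx.x], smem[threadIdx.x + s]);
        __syncthreads();
    }
    if (threadIdx.x == 0) part_max[blockIdx.x] = smem[0];
    grid.sync();

    float gmax = -INFINITY;
    for (int b = 0; b < (int) gridDim.x; ++b) gmax = fmaxf(gmax, part_max[b]);

    // phase 2: global sum of exponentials
    v = 0.0f;
    for (int i = tid; i < ncols; i += stride) v += expf(x[i] - gmax);
    smem[threadIdx.x] = v;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) smem[threadIdx.x] += smem[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) part_sum[blockIdx.x] = smem[0];
    grid.sync();

    float gsum = 0.0f;
    for (int b = 0; b < (int) gridDim.x; ++b) gsum += part_sum[b];

    // phase 3: normalized outputs
    for (int i = tid; i < ncols; i += stride) y[i] = expf(x[i] - gmax) / gsum;
}
```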
* Add TODOs to and adjust heuristics of row-wise soft_max in CUDA
Heuristics were selected based on the following numbers:
```
-- Before
Backend 1/2: CUDA0
Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Device memory: 97250 MB (96691 MB free)
SOFT_MAX(type=f32,ne=[4096,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 2236 runs - 450.34 us/run - 655360 kB/run - 1401.20 GB/s
SOFT_MAX(type=f32,ne=[12888,256,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 17748 runs - 56.80 us/run - 128880 kB/run - 2168.19 GB/s
SOFT_MAX(type=f32,ne=[77,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 57204 runs - 18.35 us/run - 12320 kB/run - 640.57 GB/s
SOFT_MAX(type=f32,ne=[1024,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 9840 runs - 102.46 us/run - 81920 kB/run - 763.45 GB/s
SOFT_MAX(type=f32,ne=[77,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98064 runs - 10.25 us/run - 6160 kB/run - 573.43 GB/s
SOFT_MAX(type=f32,ne=[256,256,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98310 runs - 10.25 us/run - 10240 kB/run - 953.20 GB/s
SOFT_MAX(type=f32,ne=[64,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 172011 runs - 5.99 us/run - 640 kB/run - 101.84 GB/s
SOFT_MAX(type=f32,ne=[77,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 172011 runs - 5.97 us/run - 770 kB/run - 123.02 GB/s
SOFT_MAX(type=f32,ne=[8192,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 172011 runs - 6.00 us/run - 64 kB/run - 10.16 GB/s
SOFT_MAX(type=f32,ne=[8192,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 163820 runs - 6.12 us/run - 256 kB/run - 39.91 GB/s
SOFT_MAX(type=f32,ne=[8192,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 147438 runs - 6.88 us/run - 1024 kB/run - 141.92 GB/s
SOFT_MAX(type=f32,ne=[16384,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.20 us/run - 128 kB/run - 14.89 GB/s
SOFT_MAX(type=f32,ne=[16384,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 114674 runs - 8.87 us/run - 512 kB/run - 55.06 GB/s
SOFT_MAX(type=f32,ne=[16384,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98292 runs - 10.24 us/run - 2048 kB/run - 190.82 GB/s
SOFT_MAX(type=f32,ne=[32768,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 21.37 us/run - 256 kB/run - 11.43 GB/s
SOFT_MAX(type=f32,ne=[32768,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 22.54 us/run - 1024 kB/run - 43.33 GB/s
SOFT_MAX(type=f32,ne=[32768,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 23.92 us/run - 4096 kB/run - 163.32 GB/s
SOFT_MAX(type=f32,ne=[65536,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 32764 runs - 38.94 us/run - 512 kB/run - 12.54 GB/s
SOFT_MAX(type=f32,ne=[65536,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 24573 runs - 41.94 us/run - 2048 kB/run - 46.57 GB/s
SOFT_MAX(type=f32,ne=[65536,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 24582 runs - 43.09 us/run - 8192 kB/run - 181.32 GB/s
SOFT_MAX(type=f32,ne=[131072,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 16382 runs - 74.56 us/run - 1024 kB/run - 13.10 GB/s
SOFT_MAX(type=f32,ne=[131072,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 16382 runs - 79.85 us/run - 4096 kB/run - 48.92 GB/s
SOFT_MAX(type=f32,ne=[131072,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 12294 runs - 82.41 us/run - 16384 kB/run - 189.64 GB/s
SOFT_MAX(type=f32,ne=[262144,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 8191 runs - 145.16 us/run - 2048 kB/run - 13.46 GB/s
SOFT_MAX(type=f32,ne=[262144,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 8194 runs - 155.46 us/run - 8192 kB/run - 50.26 GB/s
SOFT_MAX(type=f32,ne=[262144,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 7175 runs - 160.70 us/run - 32768 kB/run - 194.56 GB/s
SOFT_MAX(type=f32,ne=[524288,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 8191 runs - 285.81 us/run - 4096 kB/run - 13.67 GB/s
SOFT_MAX(type=f32,ne=[524288,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 4098 runs - 306.91 us/run - 16384 kB/run - 50.92 GB/s
SOFT_MAX(type=f32,ne=[524288,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 3591 runs - 317.06 us/run - 65536 kB/run - 197.32 GB/s
-- After
Backend 1/2: CUDA0
Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Device memory: 97250 MB (96691 MB free)
SOFT_MAX(type=f32,ne=[4096,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 2236 runs - 450.67 us/run - 655360 kB/run - 1400.15 GB/s
SOFT_MAX(type=f32,ne=[12888,256,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 17748 runs - 56.97 us/run - 128880 kB/run - 2161.50 GB/s
SOFT_MAX(type=f32,ne=[77,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 57204 runs - 18.35 us/run - 12320 kB/run - 640.36 GB/s
SOFT_MAX(type=f32,ne=[1024,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 9840 runs - 102.46 us/run - 81920 kB/run - 763.42 GB/s
SOFT_MAX(type=f32,ne=[77,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98064 runs - 10.25 us/run - 6160 kB/run - 573.43 GB/s
SOFT_MAX(type=f32,ne=[256,256,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98310 runs - 10.25 us/run - 10240 kB/run - 953.21 GB/s
SOFT_MAX(type=f32,ne=[64,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 147438 runs - 7.00 us/run - 640 kB/run - 87.26 GB/s
SOFT_MAX(type=f32,ne=[77,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 147438 runs - 6.99 us/run - 770 kB/run - 105.05 GB/s
SOFT_MAX(type=f32,ne=[8192,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 172011 runs - 6.02 us/run - 64 kB/run - 10.13 GB/s
SOFT_MAX(type=f32,ne=[8192,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 163820 runs - 6.12 us/run - 256 kB/run - 39.87 GB/s
SOFT_MAX(type=f32,ne=[8192,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 147438 runs - 6.91 us/run - 1024 kB/run - 141.40 GB/s
SOFT_MAX(type=f32,ne=[16384,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.20 us/run - 128 kB/run - 14.89 GB/s
SOFT_MAX(type=f32,ne=[16384,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 114674 runs - 8.79 us/run - 512 kB/run - 55.54 GB/s
SOFT_MAX(type=f32,ne=[16384,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98292 runs - 10.24 us/run - 2048 kB/run - 190.82 GB/s
SOFT_MAX(type=f32,ne=[32768,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 131056 runs - 8.11 us/run - 256 kB/run - 30.12 GB/s
SOFT_MAX(type=f32,ne=[32768,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 22.54 us/run - 1024 kB/run - 43.33 GB/s
SOFT_MAX(type=f32,ne=[32768,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 23.32 us/run - 4096 kB/run - 167.50 GB/s
SOFT_MAX(type=f32,ne=[65536,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.19 us/run - 512 kB/run - 59.63 GB/s
SOFT_MAX(type=f32,ne=[65536,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 40955 runs - 24.59 us/run - 2048 kB/run - 79.43 GB/s
SOFT_MAX(type=f32,ne=[65536,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 24582 runs - 43.21 us/run - 8192 kB/run - 180.84 GB/s
SOFT_MAX(type=f32,ne=[131072,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.19 us/run - 1024 kB/run - 119.25 GB/s
SOFT_MAX(type=f32,ne=[131072,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 40955 runs - 24.59 us/run - 4096 kB/run - 158.87 GB/s
SOFT_MAX(type=f32,ne=[131072,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 12294 runs - 82.37 us/run - 16384 kB/run - 189.74 GB/s
SOFT_MAX(type=f32,ne=[262144,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.20 us/run - 2048 kB/run - 238.28 GB/s
SOFT_MAX(type=f32,ne=[262144,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 36873 runs - 28.66 us/run - 8192 kB/run - 272.61 GB/s
SOFT_MAX(type=f32,ne=[262144,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 9225 runs - 108.51 us/run - 32768 kB/run - 288.13 GB/s
SOFT_MAX(type=f32,ne=[524288,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98292 runs - 10.24 us/run - 4096 kB/run - 381.65 GB/s
SOFT_MAX(type=f32,ne=[524288,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 32784 runs - 31.74 us/run - 16384 kB/run - 492.43 GB/s
SOFT_MAX(type=f32,ne=[524288,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 8721 runs - 121.20 us/run - 65536 kB/run - 516.19 GB/s
```
* Fix compiler warnings by casting `const` away
* llama : require backend samplers to be of type llama_sampler_chain
* sampling : use host buffer type for inputs
* Try fixing HIP build errors by adding corresponding #defines
Will likely have to disable for MUSA as I didn't find any docs online
* Fix launch logic when supports_cooperative_launch=false
* Disable cooperative groups for musa
Didn't find any doc online, so I don't even know if they support this
* server : reconnect the backend_sampling setting in the WebUI
* graph : make the compute graph constant with respect to active samplers
* batch : fix sequence id ownership
* graph : respect sampler order for graph reuse
* HIP/MUSA: fix build for backend sampling
* sampling : optimize logit_bias sampler
* cont : fix build
* sampling : generic ggml op support detection
* sampling : fix greedy
* tests : run backend sampler tests always on the CPU
* Apply suggestions from code review
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* webui : fix lint
* Fix data-race in `soft_max_f32_parallelize_cols_single_row`
By using `tmp_vals` to store both the max values and the exponential
accumulator there was a potential data race: the exponential accumulator
for a given CTA may have been written to `tmp_vals` before all other
CTAs had read the max value from it.
To avoid a third g.sync(), additional temporary storage was added.
Given the syncs in place after writing to gmem, it is now guaranteed
that the previous values for sums/max have been read by all CTAs.
* Apply automated code-formating to softmax.cu
* llama : clarify backend_accept/backend_set_input comments [no ci]
* llama : fix typo in comment [no ci]
* tests : use smart pointers for backend samplers
* tests : use smart pointers for model and context
* tests : remove vocab member from test_model_context
Also includes some minor cleanups related to nullptr checks.
* tests : extract batch info update to separate method
* tests : fix batch token position tracking in test_backend_sampler.cpp
* tests : add --device option support to backend sampler tests
This commit adds support for specifying a device to run the test on.
* common : disable backend sampling when grammar is involved
* Fix different RNG-states between backend-sampling and llama-sampling
By default, we perform a warm-up step where the ggml_cgraph is computed
once. For backend sampling, this graph contains the sampler, and thus
the RNG state of the backend's dist sampler is advanced once.
The solution is to reset the samplers after the warm-up has finished.
* Make backend dist sampler use same rnd's as dist sampler
We sample in double precision and cast to float to match the random
numbers of llama_sampler_dist, which uses double precision (sampling
from std::uniform_real_distribution<double> and
std::uniform_real_distribution<float> with the same RNG will produce
different sequences).
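A small standalone illustration of the mismatch and the fix (not the llama.cpp code):
```cpp
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng_a(42), rng_b(42);

    std::uniform_real_distribution<double> dist_d(0.0, 1.0);
    std::uniform_real_distribution<float>  dist_f(0.0f, 1.0f);

    // Same engine state, but the double and float distributions consume randomness
    // differently, so the resulting values generally differ. Sampling in double and
    // casting keeps the backend dist sampler aligned with the CPU dist sampler.
    const float from_double = (float) dist_d(rng_a);
    const float from_float  = dist_f(rng_b);

    std::printf("double->float: %.8f  float: %.8f\n", from_double, from_float);
    return 0;
}
```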
* Update CCCL version to v3.2.0-rc2
* Build with CCCL 3.2 for CUDA backends
Gives best perf for backend-sampling on CUDA. Flag can be removed once
CCCL 3.2 is bundled within CTK and that CTK version is used in llama.cpp
* tests : revert server test changes (no longer needed)
* ggml : include cub/cub.cuh instead of block_scan.cuh
This commit updates the include directive in cumsum.cu to use
cub/cub.cuh instead of cub/block/block_scan.cuh.
The motivation for this change is that without it compilation fails
with the following error:
```console
/llama.cpp/ggml/src/ggml-cuda/cumsum.cu(196): error: name followed by "::" must be a class or namespace name
cub::DeviceScan::InclusiveSum(nullptr,
^
/llama.cpp/ggml/src/ggml-cuda/cumsum.cu(207): error: name followed by "::" must be a class or namespace name
cub::DeviceScan::InclusiveSum((void *) tmp_alloc.get(), tmp_size, src, dst, ne, stream);
^
2 errors detected in the compilation of "/llama.cpp/ggml/src/ggml-cuda/cumsum.cu".
gmake[2]: *** [ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/build.make:317: ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/cumsum.cu.o] Error 2
```
Commit 83b3b1c271 ("cuda: optimize
cumsum cub path (#18362)") updated the include directive, replacing
device_scan.cuh, which caused this issue.
This commit uses cub/cub.cuh umbrella header which is consistent with
other files in the ggml-cuda directory like mean.cu, sum.cu, etc.
* arg : add shorthand for --backend-sampling
* ci : add server workflow with backend sampling
* sampling : fix reshapes
* server : remove printfs
* sampling : zero-initialize input buffers
* minor : add comments + some cleanup
* llama : assert at most one output token per sequence
* tests : add more top_k tests
* CUDA: Fix non-determinism of CUB-based Top-K
DeviceTopK::MaxPairs is an iterative algorithm, where `d_keys_out` is
written after every iteration. As a consequence, it must not overlap
with `d_keys_in`, or otherwise undefined behavior occurs (keys are no
longer unique in d_keys_in and may map to different values between
iterations)
* CUDA: Optimize index of top_k_cub
By using the fancy
[`counting_iterator`](https://nvidia.github.io/cccl/thrust/api/classthrust_1_1counting__iterator.html#classthrust_1_1counting__iterator)
exposed by CCCL, we can avoid materializing the index to GPU memory,
saving VRAM + 1 kernel invocation
* Apply code-formatting to top-k.cu
* CUDA: Remove obsolete temp_keys from CUB
Since we use cuda::discard_iterator to avoid writing out the keys, we
can directly pass in src instead of copying it to `temp_keys`
* minor : cleanup, TODOs, etc.
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Oliver Simons <osimons@nvidia.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
// Various helper functions and utilities
|
|
|
|
#pragma once
|
|
|
|
#include "ggml-opt.h"
|
|
#include "llama-cpp.h"
|
|
|
|
#include <set>
|
|
#include <sstream>
|
|
#include <string>
|
|
#include <string_view>
|
|
#include <vector>
|
|
#include <map>
|
|
|
|
#if defined(_WIN32) && !defined(_WIN32_WINNT)
|
|
#define _WIN32_WINNT 0x0A00
|
|
#endif
|
|
|
|
#ifdef _WIN32
|
|
#define DIRECTORY_SEPARATOR '\\'
|
|
#else
|
|
#define DIRECTORY_SEPARATOR '/'
|
|
#endif // _WIN32
|
|
|
|
#define die(msg) do { fputs("error: " msg "\n", stderr); exit(1); } while (0)
|
|
#define die_fmt(fmt, ...) do { fprintf(stderr, "error: " fmt "\n", __VA_ARGS__); exit(1); } while (0)
|
|
|
|
#define print_build_info() do { \
|
|
fprintf(stderr, "%s: build = %d (%s)\n", __func__, LLAMA_BUILD_NUMBER, LLAMA_COMMIT); \
|
|
fprintf(stderr, "%s: built with %s for %s\n", __func__, LLAMA_COMPILER, LLAMA_BUILD_TARGET); \
|
|
} while(0)
|
|
|
|
struct common_time_meas {
|
|
common_time_meas(int64_t & t_acc, bool disable = false);
|
|
~common_time_meas();
|
|
|
|
const int64_t t_start_us;
|
|
|
|
int64_t & t_acc;
|
|
};
|
|
|
|
struct common_adapter_lora_info {
|
|
std::string path;
|
|
float scale;
|
|
|
|
std::string task_name;
|
|
std::string prompt_prefix;
|
|
|
|
struct llama_adapter_lora * ptr;
|
|
};
|
|
|
|
using llama_tokens = std::vector<llama_token>;
|
|
|
|
// build info
|
|
extern int LLAMA_BUILD_NUMBER;
|
|
extern const char * LLAMA_COMMIT;
|
|
extern const char * LLAMA_COMPILER;
|
|
extern const char * LLAMA_BUILD_TARGET;
|
|
|
|
struct common_control_vector_load_info;
|
|
|
|
//
|
|
// CPU utils
|
|
//
|
|
|
|
struct cpu_params {
|
|
int n_threads = -1;
|
|
bool cpumask[GGML_MAX_N_THREADS] = {false}; // CPU affinity mask.
|
|
bool mask_valid = false; // Default: any CPU
|
|
enum ggml_sched_priority priority = GGML_SCHED_PRIO_NORMAL; // Scheduling prio : (0 - normal, 1 - medium, 2 - high, 3 - realtime)
|
|
bool strict_cpu = false; // Use strict CPU placement
|
|
uint32_t poll = 50; // Polling (busywait) level (0 - no polling, 100 - mostly polling)
|
|
};
|
|
|
|
int32_t cpu_get_num_physical_cores();
|
|
int32_t cpu_get_num_math();
|
|
|
|
//
|
|
// Common params
|
|
//
|
|
|
|
enum llama_example {
|
|
LLAMA_EXAMPLE_COMMON,
|
|
LLAMA_EXAMPLE_SPECULATIVE,
|
|
LLAMA_EXAMPLE_COMPLETION,
|
|
LLAMA_EXAMPLE_CLI,
|
|
LLAMA_EXAMPLE_EMBEDDING,
|
|
LLAMA_EXAMPLE_PERPLEXITY,
|
|
LLAMA_EXAMPLE_RETRIEVAL,
|
|
LLAMA_EXAMPLE_PASSKEY,
|
|
LLAMA_EXAMPLE_IMATRIX,
|
|
LLAMA_EXAMPLE_BENCH,
|
|
LLAMA_EXAMPLE_SERVER,
|
|
LLAMA_EXAMPLE_CVECTOR_GENERATOR,
|
|
LLAMA_EXAMPLE_EXPORT_LORA,
|
|
LLAMA_EXAMPLE_MTMD,
|
|
LLAMA_EXAMPLE_LOOKUP,
|
|
LLAMA_EXAMPLE_PARALLEL,
|
|
LLAMA_EXAMPLE_TTS,
|
|
LLAMA_EXAMPLE_DIFFUSION,
|
|
LLAMA_EXAMPLE_FINETUNE,
|
|
LLAMA_EXAMPLE_FIT_PARAMS,
|
|
|
|
LLAMA_EXAMPLE_COUNT,
|
|
};
|
|
|
|
enum common_sampler_type {
|
|
COMMON_SAMPLER_TYPE_NONE = 0,
|
|
COMMON_SAMPLER_TYPE_DRY = 1,
|
|
COMMON_SAMPLER_TYPE_TOP_K = 2,
|
|
COMMON_SAMPLER_TYPE_TOP_P = 3,
|
|
COMMON_SAMPLER_TYPE_MIN_P = 4,
|
|
//COMMON_SAMPLER_TYPE_TFS_Z = 5,
|
|
COMMON_SAMPLER_TYPE_TYPICAL_P = 6,
|
|
COMMON_SAMPLER_TYPE_TEMPERATURE = 7,
|
|
COMMON_SAMPLER_TYPE_XTC = 8,
|
|
COMMON_SAMPLER_TYPE_INFILL = 9,
|
|
COMMON_SAMPLER_TYPE_PENALTIES = 10,
|
|
COMMON_SAMPLER_TYPE_TOP_N_SIGMA = 11,
|
|
};
|
|
|
|
// dimensionality reduction methods, used by cvector-generator
|
|
enum dimre_method {
|
|
DIMRE_METHOD_PCA,
|
|
DIMRE_METHOD_MEAN,
|
|
};
|
|
|
|
enum common_conversation_mode {
|
|
COMMON_CONVERSATION_MODE_DISABLED = 0,
|
|
COMMON_CONVERSATION_MODE_ENABLED = 1,
|
|
COMMON_CONVERSATION_MODE_AUTO = 2,
|
|
};
|
|
|
|
enum common_grammar_trigger_type {
|
|
COMMON_GRAMMAR_TRIGGER_TYPE_TOKEN,
|
|
COMMON_GRAMMAR_TRIGGER_TYPE_WORD,
|
|
COMMON_GRAMMAR_TRIGGER_TYPE_PATTERN,
|
|
COMMON_GRAMMAR_TRIGGER_TYPE_PATTERN_FULL,
|
|
};
|
|
|
|
struct common_grammar_trigger {
|
|
common_grammar_trigger_type type;
|
|
std::string value;
|
|
llama_token token = LLAMA_TOKEN_NULL;
|
|
};

enum common_params_sampling_config : uint64_t {
    COMMON_PARAMS_SAMPLING_CONFIG_SAMPLERS        = 1 << 0,
    COMMON_PARAMS_SAMPLING_CONFIG_TOP_K           = 1 << 1,
    COMMON_PARAMS_SAMPLING_CONFIG_TOP_P           = 1 << 2,
    COMMON_PARAMS_SAMPLING_CONFIG_MIN_P           = 1 << 3,
    COMMON_PARAMS_SAMPLING_CONFIG_XTC_PROBABILITY = 1 << 4,
    COMMON_PARAMS_SAMPLING_CONFIG_XTC_THRESHOLD   = 1 << 5,
    COMMON_PARAMS_SAMPLING_CONFIG_TEMP            = 1 << 6,
    COMMON_PARAMS_SAMPLING_CONFIG_PENALTY_LAST_N  = 1 << 7,
    COMMON_PARAMS_SAMPLING_CONFIG_PENALTY_REPEAT  = 1 << 8,
    COMMON_PARAMS_SAMPLING_CONFIG_MIROSTAT        = 1 << 9,
    COMMON_PARAMS_SAMPLING_CONFIG_MIROSTAT_TAU    = 1 << 10,
    COMMON_PARAMS_SAMPLING_CONFIG_MIROSTAT_ETA    = 1 << 11,
};


// sampling parameters
struct common_params_sampling {
    uint32_t seed = LLAMA_DEFAULT_SEED; // the seed used to initialize llama_sampler

    int32_t n_prev   = 64;    // number of previous tokens to remember
    int32_t n_probs  = 0;     // if greater than 0, output the probabilities of top n_probs tokens
    int32_t min_keep = 0;     // 0 = disabled, otherwise samplers should return at least min_keep tokens
    int32_t top_k    = 40;    // <= 0 to use vocab size
    float   top_p    = 0.95f; // 1.0 = disabled
    float   min_p    = 0.05f; // 0.0 = disabled
    float   xtc_probability = 0.00f; // 0.0 = disabled
    float   xtc_threshold   = 0.10f; // > 0.5 disables XTC
    float   typ_p    = 1.00f; // typical_p, 1.0 = disabled
    float   temp     = 0.80f; // <= 0.0 to sample greedily, 0.0 to not output probabilities
    float   dynatemp_range    = 0.00f; // 0.0 = disabled
    float   dynatemp_exponent = 1.00f; // controls how entropy maps to temperature in dynamic temperature sampler
    int32_t penalty_last_n  = 64;    // last n tokens to penalize (0 = disable penalty, -1 = context size)
    float   penalty_repeat  = 1.00f; // 1.0 = disabled
    float   penalty_freq    = 0.00f; // 0.0 = disabled
    float   penalty_present = 0.00f; // 0.0 = disabled
    float   dry_multiplier  = 0.0f;  // 0.0 = disabled; DRY repetition penalty for tokens extending repetition:
    float   dry_base        = 1.75f; // 0.0 = disabled; multiplier * base ^ (length of sequence before token - allowed length)
    int32_t dry_allowed_length = 2;  // tokens extending repetitions beyond this receive penalty
    int32_t dry_penalty_last_n = -1; // how many tokens to scan for repetitions (0 = disable penalty, -1 = context size)
    int32_t mirostat     = 0;      // 0 = disabled, 1 = mirostat, 2 = mirostat 2.0
    float   top_n_sigma  = -1.00f; // -1.0 = disabled
    float   mirostat_tau = 5.00f;  // target entropy
    float   mirostat_eta = 0.10f;  // learning rate
    bool    ignore_eos   = false;
    bool    no_perf      = false;  // disable performance metrics
    bool    timing_per_token = false;

    uint64_t user_sampling_config = 0; // bitfield to track user-specified samplers

    std::vector<std::string> dry_sequence_breakers = {"\n", ":", "\"", "*"}; // default sequence breakers for DRY

    std::vector<enum common_sampler_type> samplers = {
        COMMON_SAMPLER_TYPE_PENALTIES,
        COMMON_SAMPLER_TYPE_DRY,
        COMMON_SAMPLER_TYPE_TOP_N_SIGMA,
        COMMON_SAMPLER_TYPE_TOP_K,
        COMMON_SAMPLER_TYPE_TYPICAL_P,
        COMMON_SAMPLER_TYPE_TOP_P,
        COMMON_SAMPLER_TYPE_MIN_P,
        COMMON_SAMPLER_TYPE_XTC,
        COMMON_SAMPLER_TYPE_TEMPERATURE,
    };

    std::string grammar; // optional BNF-like grammar to constrain sampling
    bool grammar_lazy = false;
    std::vector<common_grammar_trigger> grammar_triggers; // optional triggers (for lazy grammars)
    std::set<llama_token> preserved_tokens;

    std::vector<llama_logit_bias> logit_bias;     // logit biases to apply
    std::vector<llama_logit_bias> logit_bias_eog; // pre-calculated logit biases for EOG tokens

    bool backend_sampling = false;

    bool has_logit_bias() const {
        return !logit_bias.empty();
    }

    // print the parameters into a string
    std::string print() const;
};
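
// Illustrative usage sketch (not part of this header): how a caller might
// request backend (GPU) sampling and record which fields were set explicitly.
// The values below are arbitrary examples.
//
//   common_params_sampling sparams;
//   sparams.top_k            = 50;
//   sparams.temp             = 0.7f;
//   sparams.backend_sampling = true; // build the sampler chain into the model graph where supported
//   sparams.user_sampling_config |= COMMON_PARAMS_SAMPLING_CONFIG_TOP_K
//                                 | COMMON_PARAMS_SAMPLING_CONFIG_TEMP;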

struct common_params_model {
    std::string path        = ""; // model local path // NOLINT
    std::string url         = ""; // model url to download // NOLINT
    std::string hf_repo     = ""; // HF repo // NOLINT
    std::string hf_file     = ""; // HF file // NOLINT
    std::string docker_repo = ""; // Docker repo // NOLINT
    std::string name        = ""; // in format <user>/<model>[:<tag>] (tag is optional) // NOLINT
};

struct common_params_speculative {
    std::vector<ggml_backend_dev_t> devices; // devices to use for offloading

    int32_t n_ctx        = 0;     // draft context size
    int32_t n_max        = 16;    // maximum number of tokens to draft during speculative decoding
    int32_t n_min        = 0;     // minimum number of draft tokens to use for speculative decoding
    int32_t n_gpu_layers = -1;    // number of layers to store in VRAM for the draft model (-1 - use default)
    float   p_split      = 0.1f;  // speculative decoding split probability
    float   p_min        = 0.75f; // minimum speculative decoding probability (greedy)
    std::vector<std::pair<std::string, std::string>> replacements; // main to speculative model replacements
    std::vector<llama_model_tensor_buft_override> tensor_buft_overrides;

    ggml_type cache_type_k = GGML_TYPE_F16; // KV cache data type for the K
    ggml_type cache_type_v = GGML_TYPE_F16; // KV cache data type for the V

    struct cpu_params cpuparams;
    struct cpu_params cpuparams_batch;

    struct common_params_model model;
};

struct common_params_vocoder {
    struct common_params_model model;

    std::string speaker_file = ""; // speaker file path // NOLINT

    bool use_guide_tokens = false; // enable guide tokens to improve TTS accuracy // NOLINT
};

struct common_params_diffusion {
    int32_t steps       = 128;
    bool    visual_mode = false;

    float   eps          = 0; // epsilon for timesteps
    int32_t block_length = 0; // block length for generation

    int32_t algorithm = 4;    // default algorithm: low-confidence
    float   alg_temp  = 0.0f; // algorithm temperature

    float cfg_scale        = 0;     // classifier-free guidance scale
    bool  add_gumbel_noise = false; // add gumbel noise to the logits if temp > 0.0
};

// reasoning API response format (not to be confused with the chat template's reasoning format)
enum common_reasoning_format {
    COMMON_REASONING_FORMAT_NONE,
    COMMON_REASONING_FORMAT_AUTO,            // Same as deepseek, using `message.reasoning_content`
    COMMON_REASONING_FORMAT_DEEPSEEK_LEGACY, // Extract thinking tag contents and return as `message.reasoning_content`, or leave inline in <think> tags in stream mode
    COMMON_REASONING_FORMAT_DEEPSEEK,        // Extract thinking tag contents and return as `message.reasoning_content`, including in streaming deltas.
    // do not extend this enum unless you absolutely have to
    // in most cases, use COMMON_REASONING_FORMAT_AUTO
    // see: https://github.com/ggml-org/llama.cpp/pull/15408
};


struct lr_opt {
    float    lr0          = 1e-5; // learning rate at first epoch
    float    lr_min       = -1;
    float    decay_epochs = -1;   // if > 0, the learning rate starts at lr0 and decays to lr_min after this many epochs
    float    scale_epoch  = 0;
    float    wd           = 0;
    unsigned epochs       = 2;

    unsigned epoch; // set by the optimizer's outer (epochs) loop
    // learning rate decay - constant LR per epoch only for now
    float get_lr(float e) const;
    float get_lr() const { return get_lr(epoch); }
    // must be called after argument parsing and before get_lr
    void init();
};
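
// Illustrative sketch (not part of this header), assuming the fields above are
// filled in from command-line arguments:
//
//   lr_opt lr;
//   lr.lr0          = 1e-4f;
//   lr.decay_epochs = 10;
//   lr.epochs       = 20;
//   lr.init();                  // must run after argument parsing
//   lr.epoch = 0;               // normally advanced by the training loop
//   float cur_lr = lr.get_lr(); // learning rate for the current epoch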

struct ggml_opt_optimizer_params common_opt_lr_pars(void * userdata);

struct common_params {
    int32_t n_predict   = -1;   // max. number of new tokens to predict, -1 == no limit
    int32_t n_ctx       = 0;    // context size, 0 == context the model was trained with
    int32_t n_batch     = 2048; // logical batch size for prompt processing (must be >=32 to use BLAS)
    int32_t n_ubatch    = 512;  // physical batch size for prompt processing (must be >=32 to use BLAS)
    int32_t n_keep      = 0;    // number of tokens to keep from initial prompt
    int32_t n_chunks    = -1;   // max number of chunks to process (-1 = unlimited)
    int32_t n_parallel  = 1;    // number of parallel sequences to decode
    int32_t n_sequences = 1;    // number of sequences to decode
    int32_t grp_attn_n  = 1;    // group-attention factor
    int32_t grp_attn_w  = 512;  // group-attention width
    int32_t n_print     = -1;   // print token count every n tokens (-1 = disabled)
    float   rope_freq_base   = 0.0f;  // RoPE base frequency
    float   rope_freq_scale  = 0.0f;  // RoPE frequency scaling factor
    float   yarn_ext_factor  = -1.0f; // YaRN extrapolation mix factor
    float   yarn_attn_factor = -1.0f; // YaRN magnitude scaling factor
    float   yarn_beta_fast   = -1.0f; // YaRN low correction dim
    float   yarn_beta_slow   = -1.0f; // YaRN high correction dim
    int32_t yarn_orig_ctx    = 0;     // YaRN original context length

    // offload params
    std::vector<ggml_backend_dev_t> devices; // devices to use for offloading

    int32_t n_gpu_layers = -1; // number of layers to store in VRAM, -1 is auto, <= -2 is all
    int32_t main_gpu     = 0;  // the GPU that is used for scratch and small tensors
    float   tensor_split[128] = {0};  // how split tensors should be distributed across GPUs
    bool    fit_params = true;        // whether to fit unset model/context parameters to free device memory
    size_t  fit_params_target = 1024 * 1024 * 1024; // margin per device in bytes for fitting parameters to free memory
    int32_t fit_params_min_ctx = 4096;              // minimum context size to set when trying to reduce memory use

    enum llama_split_mode split_mode = LLAMA_SPLIT_MODE_LAYER; // how to split the model across GPUs

    struct cpu_params cpuparams;
    struct cpu_params cpuparams_batch;

    ggml_backend_sched_eval_callback cb_eval = nullptr;
    void * cb_eval_user_data = nullptr;

    ggml_numa_strategy numa = GGML_NUMA_STRATEGY_DISABLED;

    enum llama_rope_scaling_type rope_scaling_type = LLAMA_ROPE_SCALING_TYPE_UNSPECIFIED;
    enum llama_pooling_type      pooling_type      = LLAMA_POOLING_TYPE_UNSPECIFIED;  // pooling type for embeddings
    enum llama_attention_type    attention_type    = LLAMA_ATTENTION_TYPE_UNSPECIFIED; // attention type for embeddings
    enum llama_flash_attn_type   flash_attn_type   = LLAMA_FLASH_ATTN_TYPE_AUTO;      // whether to use Flash Attention

    struct common_params_sampling    sampling;
    struct common_params_speculative speculative;
    struct common_params_vocoder     vocoder;
    struct common_params_diffusion   diffusion;

    struct common_params_model model;

    std::string model_alias          = ""; // model alias // NOLINT
    std::string hf_token             = ""; // HF token // NOLINT
    std::string prompt               = ""; // NOLINT
    std::string system_prompt        = ""; // NOLINT
    std::string prompt_file          = ""; // store the external prompt file name // NOLINT
    std::string path_prompt_cache    = ""; // path to file for saving/loading prompt eval state // NOLINT
    std::string input_prefix         = ""; // string to prefix user inputs with // NOLINT
    std::string input_suffix         = ""; // string to suffix user inputs with // NOLINT
    std::string lookup_cache_static  = ""; // path of static ngram cache file for lookup decoding // NOLINT
    std::string lookup_cache_dynamic = ""; // path of dynamic ngram cache file for lookup decoding // NOLINT
    std::string logits_file          = ""; // file for saving *all* logits // NOLINT

    std::vector<std::string> in_files;   // all input files
    std::vector<std::string> antiprompt; // strings upon which more user input is prompted (a.k.a. reverse prompts)
    std::vector<llama_model_kv_override> kv_overrides;
    std::vector<llama_model_tensor_buft_override> tensor_buft_overrides;

    bool lora_init_without_apply = false; // only load lora to memory, but do not apply it to ctx (user can manually apply lora later using llama_adapter_lora_apply)
    std::vector<common_adapter_lora_info> lora_adapters; // lora adapter path with user defined scale

    std::vector<common_control_vector_load_info> control_vectors; // control vector with user defined scale

    int32_t verbosity                  = 3;  // LOG_LEVEL_INFO
    int32_t control_vector_layer_start = -1; // layer range for control vector
    int32_t control_vector_layer_end   = -1; // layer range for control vector
    bool    offline                    = false;

    int32_t ppl_stride      = 0; // stride for perplexity calculations. If left at 0, the pre-existing approach will be used.
    int32_t ppl_output_type = 0; // = 0 -> ppl output is as usual, = 1 -> ppl output is num_tokens, ppl, one per line
                                 // (which is more convenient to use for plotting)
                                 //
    bool   hellaswag       = false; // compute HellaSwag score over random tasks from datafile supplied in prompt
    size_t hellaswag_tasks = 400;   // number of tasks to use when computing the HellaSwag score

    bool   winogrande       = false; // compute Winogrande score over random tasks from datafile supplied in prompt
    size_t winogrande_tasks = 0;     // number of tasks to use when computing the Winogrande score. If 0, all tasks will be computed

    bool   multiple_choice       = false; // compute TruthfulQA score over random tasks from datafile supplied in prompt
    size_t multiple_choice_tasks = 0;     // number of tasks to use when computing the TruthfulQA score. If 0, all tasks will be computed

    bool kl_divergence = false; // compute KL divergence

    bool usage             = false; // print usage
    bool completion        = false; // print source-able completion script
    bool use_color         = false; // use color to distinguish generations and inputs
    bool special           = false; // enable special token output
    bool interactive       = false; // interactive mode
    bool interactive_first = false; // wait for user input immediately
    bool prompt_cache_all  = false; // save user input and generations to prompt cache
    bool prompt_cache_ro   = false; // open the prompt cache read-only and do not update it

    bool escape          = true;  // escape "\n", "\r", "\t", "\'", "\"", and "\\"
    bool multiline_input = false; // reverse the usage of `\`
    bool simple_io       = false; // improves compatibility with subprocesses and limited consoles
    bool cont_batching   = true;  // insert new sequences for decoding on-the-fly
    bool no_perf         = false; // disable performance metrics
    bool show_timings    = true;  // show timing information on CLI
    bool ctx_shift       = false; // context shift on infinite text generation
    bool swa_full        = false; // use full-size SWA cache (https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
    bool kv_unified      = false; // enable unified KV cache

    bool input_prefix_bos = false; // prefix BOS to user inputs, preceding input_prefix
    bool use_mmap         = true;  // use mmap for faster loads
    bool use_mlock        = false; // use mlock to keep model in memory
    bool verbose_prompt   = false; // print prompt tokens before generation
    bool display_prompt   = true;  // print prompt before generation
    bool no_kv_offload    = false; // disable KV offloading
    bool warmup           = true;  // warmup run
    bool check_tensors    = false; // validate tensor data
    bool no_op_offload    = false; // globally disable offload of host tensor operations to device
    bool no_extra_bufts   = false; // disable extra buffer types (used for weight repacking)
    bool no_host          = false; // bypass host buffer allowing extra buffers to be used

    bool single_turn = false; // single turn chat conversation

    ggml_type cache_type_k = GGML_TYPE_F16; // KV cache data type for the K
    ggml_type cache_type_v = GGML_TYPE_F16; // KV cache data type for the V

    common_conversation_mode conversation_mode = COMMON_CONVERSATION_MODE_AUTO;

    // multimodal models (see tools/mtmd)
    struct common_params_model mmproj;
    bool mmproj_use_gpu = true;     // use GPU for multimodal model
    bool no_mmproj      = false;    // explicitly disable multimodal model
    std::vector<std::string> image; // path to image file(s)
    int image_min_tokens = -1;
    int image_max_tokens = -1;

    // finetune
    struct lr_opt lr;
    enum ggml_opt_optimizer_type optimizer = GGML_OPT_OPTIMIZER_TYPE_ADAMW;
    float val_split = 0.05f; // fraction of the data used for the validation set

    // embedding
    bool embedding = false;      // get only sentence embedding
    int32_t embd_normalize = 2;  // normalisation for embeddings (-1=none, 0=max absolute int16, 1=taxicab, 2=euclidean, >2=p-norm)
    std::string embd_out = "";   // empty = default, "array" = [[],[]...], "json" = openai style, "json+" = same "json" + cosine similarity matrix
    std::string embd_sep = "\n"; // separator of embeddings
    std::string cls_sep  = "\t"; // separator of classification sequences

    // server params
    int32_t port              = 8080;         // server listens on this network port
    int32_t timeout_read      = 600;          // http read timeout in seconds
    int32_t timeout_write     = timeout_read; // http write timeout in seconds
    int32_t n_threads_http    = -1;           // number of threads to process HTTP requests (TODO: support threadpool)
    int32_t n_cache_reuse     = 0;            // min chunk size to reuse from the cache via KV shifting
    int32_t n_ctx_checkpoints = 8;            // max number of context checkpoints per slot
    int32_t cache_ram_mib     = 8192;         // -1 = no limit, 0 = disable, 1 = 1 MiB, etc.

    std::string hostname      = "127.0.0.1";
    std::string public_path   = "";   // NOLINT
    std::string api_prefix    = "";   // NOLINT
    std::string chat_template = "";   // NOLINT
    bool use_jinja            = true; // NOLINT
    bool enable_chat_template = true;
    common_reasoning_format reasoning_format = COMMON_REASONING_FORMAT_DEEPSEEK;
    int  reasoning_budget   = -1;
    bool prefill_assistant  = true; // if true, any trailing assistant message will be prefilled into the response
    int  sleep_idle_seconds = -1;   // if > 0, the server will sleep after this many seconds of idle time

    std::vector<std::string> api_keys;

    std::string ssl_file_key  = ""; // NOLINT
    std::string ssl_file_cert = ""; // NOLINT

    std::map<std::string, std::string> default_template_kwargs;

    // webui configs
    bool webui = true;
    std::string webui_config_json;

    // "advanced" endpoints are disabled by default for better security
    bool endpoint_slots   = true;
    bool endpoint_props   = false; // only controls POST requests, not GET
    bool endpoint_metrics = false;

    // router server configs
    std::string models_dir    = ""; // directory containing models for the router server
    std::string models_preset = ""; // directory containing model presets for the router server
    int  models_max      = 4;       // maximum number of models to load simultaneously
    bool models_autoload = true;    // automatically load models when requested via the router server

    bool log_json = false;

    std::string slot_save_path;
    std::string media_path; // path to directory for loading media files

    float slot_prompt_similarity = 0.1f;

    // batched-bench params
    bool is_pp_shared   = false;
    bool is_tg_separate = false;

    std::vector<int32_t> n_pp;
    std::vector<int32_t> n_tg;
    std::vector<int32_t> n_pl;

    // retrieval params
    std::vector<std::string> context_files; // context files to embed

    int32_t chunk_size = 64; // chunk size for context embedding

    std::string chunk_separator = "\n"; // chunk separator for context embedding

    // passkey params
    int32_t n_junk = 250; // number of times to repeat the junk text
    int32_t i_pos  = -1;  // position of the passkey in the junk text

    // imatrix params
    int32_t n_out_freq  = 10; // output the imatrix every n_out_freq iterations
    int32_t n_save_freq = 0;  // save the imatrix every n_save_freq iterations
    int32_t i_chunk     = 0;  // start processing from this chunk
    int8_t  imat_dat    = 0;  // whether the legacy imatrix.dat format should be output (gguf <= 0 < dat)

    bool process_output  = false; // collect data for the output tensor
    bool compute_ppl     = true;  // whether to compute perplexity
    bool show_statistics = false; // show imatrix statistics per tensor
    bool parse_special   = false; // whether to parse special tokens during imatrix tokenization

    // cvector-generator params
    int n_pca_batch      = 100;
    int n_pca_iterations = 1000;
    dimre_method cvector_dimre_method = DIMRE_METHOD_PCA;
    std::string cvector_positive_file = "tools/cvector-generator/positive.txt";
    std::string cvector_negative_file = "tools/cvector-generator/negative.txt";

    bool spm_infill = false; // suffix/prefix/middle pattern for infill

    // batched-bench params
    bool batched_bench_output_jsonl = false;

    // common params
    std::string out_file; // output filename for all example programs
    // optional callback for model loading progress and cancellation:
    // called with a progress value between 0.0 and 1.0.
    // return false from the callback to abort model loading, or true to continue
    llama_progress_callback load_progress_callback = NULL;
    void * load_progress_callback_user_data        = NULL;

    bool has_speculative() const {
        return !speculative.model.path.empty() || !speculative.model.hf_repo.empty();
    }
};
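
// Illustrative sketch (not part of this header): wiring a loading-progress
// callback into common_params. The callback signature follows the
// llama_progress_callback typedef (progress in [0.0, 1.0], return false to abort).
//
//   static bool on_load_progress(float progress, void * user_data) {
//       (void) user_data;
//       fprintf(stderr, "\rloading: %3d%%", (int) (100.0f * progress));
//       return true; // keep loading
//   }
//
//   common_params params;
//   params.load_progress_callback           = on_load_progress;
//   params.load_progress_callback_user_data = nullptr;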

// call once at the start of a program if it uses libcommon
// initializes the logging system and prints info about the build
void common_init();

std::string common_params_get_system_info(const common_params & params);

bool parse_cpu_range(const std::string & range, bool (&boolmask)[GGML_MAX_N_THREADS]);
bool parse_cpu_mask (const std::string & mask,  bool (&boolmask)[GGML_MAX_N_THREADS]);
void postprocess_cpu_params(cpu_params & cpuparams, const cpu_params * role_model = nullptr);
bool set_process_priority(enum ggml_sched_priority prio);
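
// Illustrative sketch (not part of this header): parsing a CPU affinity mask
// and filling in defaults for any unset fields. The "0xff" mask is an arbitrary
// example value.
//
//   cpu_params cpuparams;
//   if (parse_cpu_mask("0xff", cpuparams.cpumask)) { // pin to the first 8 logical CPUs
//       cpuparams.mask_valid = true;
//   }
//   postprocess_cpu_params(cpuparams);               // resolve n_threads = -1, etc.
//   set_process_priority(cpuparams.priority);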

//
// String utils
//

#ifdef __GNUC__
#    if defined(__MINGW32__) && !defined(__clang__)
#        define LLAMA_COMMON_ATTRIBUTE_FORMAT(...) __attribute__((format(gnu_printf, __VA_ARGS__)))
#    else
#        define LLAMA_COMMON_ATTRIBUTE_FORMAT(...) __attribute__((format(printf, __VA_ARGS__)))
#    endif
#else
#    define LLAMA_COMMON_ATTRIBUTE_FORMAT(...)
#endif

LLAMA_COMMON_ATTRIBUTE_FORMAT(1, 2)
std::string string_format(const char * fmt, ...);

std::string string_strip(const std::string & str);
std::string string_get_sortable_timestamp();

std::string string_join(const std::vector<std::string> & values, const std::string & separator);
std::vector<std::string> string_split(const std::string & str, const std::string & delimiter);
std::string string_repeat(const std::string & str, size_t n);

void string_replace_all(std::string & s, const std::string & search, const std::string & replace);

std::string regex_escape(const std::string & s);

template<class T>
static std::vector<T> string_split(const std::string & str, char delim) {
    static_assert(!std::is_same<T, std::string>::value, "Please use the specialized version for std::string");
    std::vector<T> values;
    std::istringstream str_stream(str);
    std::string token;
    while (std::getline(str_stream, token, delim)) {
        T value;
        std::istringstream token_stream(token);
        token_stream >> value;
        values.push_back(value);
    }
    return values;
}

template<>
std::vector<std::string> string_split<std::string>(const std::string & input, char separator)
{
    std::vector<std::string> parts;
    size_t begin_pos = 0;
    size_t separator_pos = input.find(separator);
    while (separator_pos != std::string::npos) {
        std::string part = input.substr(begin_pos, separator_pos - begin_pos);
        parts.emplace_back(part);
        begin_pos = separator_pos + 1;
        separator_pos = input.find(separator, begin_pos);
    }
    parts.emplace_back(input.substr(begin_pos, separator_pos - begin_pos));
    return parts;
}
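
// Illustrative sketch (not part of this header):
//
//   std::vector<int>         layers = string_split<int>("1,2,3", ',');        // {1, 2, 3}
//   std::vector<std::string> parts  = string_split<std::string>("a,,b", ','); // {"a", "", "b"}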

static bool string_starts_with(const std::string & str,
                               const std::string & prefix) {  // While we wait for C++20's std::string::starts_with...
    return str.rfind(prefix, 0) == 0;
}

// While we wait for C++20's std::string::ends_with...
bool string_ends_with(const std::string_view & str, const std::string_view & suffix);
bool string_remove_suffix(std::string & str, const std::string_view & suffix);
size_t string_find_partial_stop(const std::string_view & str, const std::string_view & stop);

bool string_parse_kv_override(const char * data, std::vector<llama_model_kv_override> & overrides);
void string_process_escapes(std::string & input);

std::string string_from(bool value);
std::string string_from(const std::vector<int> & values);
std::string string_from(const struct llama_context * ctx, const std::vector<llama_token> & tokens);
std::string string_from(const struct llama_context * ctx, const struct llama_batch & batch);

//
// Filesystem utils
//

bool fs_validate_filename(const std::string & filename, bool allow_subdirs = false);
bool fs_create_directory_with_parents(const std::string & path);
bool fs_is_directory(const std::string & path);

std::string fs_get_cache_directory();
std::string fs_get_cache_file(const std::string & filename);

struct common_file_info {
    std::string path;
    std::string name;
    size_t size = 0; // in bytes
    bool is_dir = false;
};
std::vector<common_file_info> fs_list(const std::string & path, bool include_directories);

//
// TTY utils
//

// Auto-detect if colors can be enabled based on terminal and environment
bool tty_can_use_colors();

//
// Model utils
//

struct common_sampler;

// note: defines the lifetimes of the model, context, samplers, etc.
struct common_init_result {
    common_init_result(common_params & params);
    ~common_init_result();

    llama_model * model();
    llama_context * context();

    common_sampler * sampler(llama_seq_id seq_id);
    void reset_samplers();

    std::vector<llama_adapter_lora_ptr> & lora();

    void free_context();

private:
    struct impl;
    std::unique_ptr<impl> pimpl;
};

using common_init_result_ptr = std::unique_ptr<common_init_result>;

common_init_result_ptr common_init_from_params(common_params & params);
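
// Illustrative sketch (not part of this header); the model path is a placeholder:
//
//   common_params params;
//   params.model.path = "model.gguf";
//   common_init_result_ptr init = common_init_from_params(params);
//   if (!init || init->model() == nullptr || init->context() == nullptr) {
//       // handle load failure
//   }
//   llama_context * ctx = init->context(); // owned by `init`, do not free manually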

struct llama_model_params     common_model_params_to_llama  (      common_params & params);
struct llama_context_params   common_context_params_to_llama(const common_params & params);
struct ggml_threadpool_params ggml_threadpool_params_from_cpu_params(const cpu_params & params);

// clear LoRA adapters from context, then apply new list of adapters
void common_set_adapter_lora(struct llama_context * ctx, std::vector<common_adapter_lora_info> & lora);

std::string get_model_endpoint();

//
// Batch utils
//

void common_batch_clear(struct llama_batch & batch);

void common_batch_add(
        struct llama_batch & batch,
        llama_token id,
        llama_pos pos,
        const std::vector<llama_seq_id> & seq_ids,
        bool logits);
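
// Illustrative sketch (not part of this header): appending a prompt to a batch,
// requesting logits only for the last token of sequence 0.
//
//   std::vector<llama_token> prompt_tokens = /* ... */;
//   llama_batch batch = llama_batch_init(512, 0, 1);
//   common_batch_clear(batch);
//   for (size_t i = 0; i < prompt_tokens.size(); ++i) {
//       const bool is_last = (i == prompt_tokens.size() - 1);
//       common_batch_add(batch, prompt_tokens[i], (llama_pos) i, { 0 }, is_last);
//   }
//   // ... llama_decode(ctx, batch); ...
//   llama_batch_free(batch);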

//
// Token utils
//

// longest common prefix
size_t common_lcp(const llama_tokens & a, const llama_tokens & b);

// longest common subsequence
size_t common_lcs(const llama_tokens & a, const llama_tokens & b);

//
// Vocab utils
//

// tokenizes a string into a vector of tokens
// should work similar to Python's `tokenizer.encode`
std::vector<llama_token> common_tokenize(
        const struct llama_context * ctx,
        const std::string & text,
        bool add_special,
        bool parse_special = false);

std::vector<llama_token> common_tokenize(
        const struct llama_vocab * vocab,
        const std::string & text,
        bool add_special,
        bool parse_special = false);

// converts a token into a piece, optionally renders special/control tokens
// should work similar to Python's `tokenizer.id_to_piece`
std::string common_token_to_piece(
        const struct llama_context * ctx,
        llama_token token,
        bool special = true);

std::string common_token_to_piece(
        const struct llama_vocab * vocab,
        llama_token token,
        bool special = true);

// detokenizes a vector of tokens into a string
// should work similar to Python's `tokenizer.decode`
// optionally renders special/control tokens
std::string common_detokenize(
        const struct llama_context * ctx,
        const std::vector<llama_token> & tokens,
        bool special = true);

std::string common_detokenize(
        const struct llama_vocab * vocab,
        const std::vector<llama_token> & tokens,
        bool special = true);
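
// Illustrative round trip (not part of this header):
//
//   std::vector<llama_token> toks = common_tokenize(ctx, "Hello world", /*add_special=*/true);
//   for (llama_token t : toks) {
//       printf("%d -> '%s'\n", t, common_token_to_piece(ctx, t).c_str());
//   }
//   std::string text = common_detokenize(ctx, toks); // special tokens rendered by default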

//
// Embedding utils
//

// TODO: replace embd_norm with an enum
void common_embd_normalize(const float * inp, float * out, int n, int embd_norm);

float common_embd_similarity_cos(const float * embd1, const float * embd2, int n);
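
// Illustrative sketch (not part of this header): normalize two raw embeddings
// and compare them; embd_norm = 2 selects euclidean (L2) normalization, matching
// common_params::embd_normalize above.
//
//   // a_raw, b_raw: pointers to two raw embeddings of length n_embd
//   std::vector<float> a_norm(n_embd), b_norm(n_embd);
//   common_embd_normalize(a_raw, a_norm.data(), n_embd, /*embd_norm=*/2);
//   common_embd_normalize(b_raw, b_norm.data(), n_embd, /*embd_norm=*/2);
//   float sim = common_embd_similarity_cos(a_norm.data(), b_norm.data(), n_embd);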

//
// Control vector utils
//

struct common_control_vector_data {
    int n_embd;

    // stores data for layers [1, n_layer] where n_layer = data.size() / n_embd
    std::vector<float> data;
};

struct common_control_vector_load_info {
    float strength;

    std::string fname;
};

// Load control vectors, scale each by strength, and add them together.
// On error, returns {-1, empty}
common_control_vector_data common_control_vector_load(const std::vector<common_control_vector_load_info> & load_infos);
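
// Illustrative sketch (not part of this header); file names are placeholders:
//
//   std::vector<common_control_vector_load_info> infos = {
//       { /*strength=*/ 0.8f, /*fname=*/ "happy.gguf"  },
//       { /*strength=*/-0.4f, /*fname=*/ "formal.gguf" },
//   };
//   common_control_vector_data cvec = common_control_vector_load(infos);
//   if (cvec.n_embd == -1) {
//       // loading failed
//   }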

//
// Split utils
//

namespace {

const char * const LLM_KV_SPLIT_NO            = "split.no";
const char * const LLM_KV_SPLIT_COUNT         = "split.count";
const char * const LLM_KV_SPLIT_TENSORS_COUNT = "split.tensors.count";

}

//
// MoE utils
//

const char * const LLM_FFN_EXPS_REGEX = "\\.ffn_(up|down|gate)_(ch|)exps";

static std::string llm_ffn_exps_block_regex(int idx) {
    return string_format("blk\\.%d%s", idx, LLM_FFN_EXPS_REGEX);
}

static llama_model_tensor_buft_override llm_ffn_exps_cpu_override() {
    return { LLM_FFN_EXPS_REGEX, ggml_backend_cpu_buffer_type() };
}
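
// Illustrative sketch (not part of this header): keeping MoE expert weights in
// host memory, either for every layer or for one specific block index.
//
//   common_params params;
//   params.tensor_buft_overrides.push_back(llm_ffn_exps_cpu_override()); // all expert tensors on CPU
//
//   // or, per block (the pattern string must outlive the override):
//   static std::string blk0 = llm_ffn_exps_block_regex(0);
//   params.tensor_buft_overrides.push_back({ blk0.c_str(), ggml_backend_cpu_buffer_type() });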

//
// training utils
//

ggml_opt_dataset_t common_opt_dataset_init(struct llama_context * ctx, const std::vector<llama_token> & tokens, int64_t stride);

// "adamw" or "sgd" (case insensitive)
enum ggml_opt_optimizer_type common_opt_get_optimizer(const char *);