* sampling : add support for backend sampling
This commit adds support for performing sampling operations on the
backend (e.g. GPU) as part of the model computation graph.
The motivation for this feature is to allow some or all of the sampling
to be performed directly on the backend as part of the computation
graph being executed.
For example, the backend sampler chain might select/sample a token
directly, in which case only the sampled token needs to be transferred
from device memory to host memory.
It is also possible for the backend samplers to perform filtering of
the logits, or to compute and filter the probability distribution, in
which case only the filtered logits or probabilities need to be
transferred back to system memory for further processing by CPU
samplers.
Currently, backend sampling works in a similar manner to pooling: it is
a function that is called by build_graph, and the sampler operations
become part of the model's computation graph.
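As a rough, hedged sketch of the idea (not the actual build_graph code; the op sequence and names are illustrative assumptions), the sampler chain simply appends ggml operations after the output head so that they execute on the backend device:
```cpp
// illustrative only: sampler ops appended to the model graph
// logits: [n_vocab, n_outputs] tensor produced by the output head
struct ggml_tensor * ids   = ggml_top_k   (ctx, logits, 40); // i32 indices of the 40 largest logits per output
struct ggml_tensor * probs = ggml_soft_max(ctx, logits);     // probability distribution over the vocab
ggml_build_forward_expand(gf, ids);
ggml_build_forward_expand(gf, probs);
// depending on the configured chain, either a sampled token id or the filtered
// logits/probabilities is what ultimately gets copied from device to host memory
```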
* llama-cli : add backend sampler configuration
* server : add backend sampling options/configuration
* webui : add backend sampling options
* ggml : add initial cumsum implementation for CUDA
* sampling : enable all backend sampler tests
This commit enables all existing backend sampler tests in
test-backend-sampler. Previously, some tests were disabled because the
required ggml operation implementations were missing.
* graph : do not include llama-model.h
* sampling : always expose sampled_ids
This commit precomputes and caches the full-vocab token id list in
llama_context's constructor, so llama_get_backend_sampled_token_ids_ith
always returns a valid pointer.
The motivation for this is that it enables both common/sampling.cpp
and src/llama-sampling.cpp to simplify their logic.
Not all backend samplers that process logits need to set the
sampled_tokens_id, as they may not change the order of the logits. For
example, the temperature sampler only scales the logits but does not
change their order. Similarly, the logit bias sampler only adds a bias
to specific token ids but does not change the order of the logits. In
these cases there will not be a device-to-host copy of the sampled
token ids, and this is the use case where having this precomputed
list is useful.
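A minimal sketch of the precomputed identity list, assuming a helper along these lines (names are illustrative, not the actual llama_context members):
```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// built once in the constructor; valid whenever the backend samplers keep the logit order
static std::vector<int32_t> make_full_vocab_ids(int32_t n_vocab) {
    std::vector<int32_t> ids(n_vocab);
    std::iota(ids.begin(), ids.end(), 0); // token ids 0, 1, ..., n_vocab-1
    return ids;
}
```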
* sampling : ensure at most one output token per seq
This commit adds a check in the batch allocator to ensure that when
backend sampling is enabled, at most one output token is specified per
sequence.
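A hedged sketch of such a check over a llama_batch (illustrative; the actual check lives in the batch allocator):
```cpp
#include <unordered_map>
#include "llama.h"

// returns false if any sequence requests more than one output token
static bool at_most_one_output_per_seq(const llama_batch & batch) {
    std::unordered_map<llama_seq_id, int> n_out;
    for (int32_t i = 0; i < batch.n_tokens; ++i) {
        if (!batch.logits || !batch.logits[i]) {
            continue; // not an output token
        }
        for (int32_t j = 0; j < batch.n_seq_id[i]; ++j) {
            if (++n_out[batch.seq_id[i][j]] > 1) {
                return false; // second output token for the same sequence
            }
        }
    }
    return true;
}
```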
* CUDA: Optimize argsort for gpu-based token sampling
Argsort is currently used for top-k. We optimize argsort in two ways:
1. Use `DeviceRadixSort` for the single-row/sequence case to parallelize it
across our SMs.
2. Use `DeviceSegmentedSort` for the multi-row/sequence case, as this is the
correct entrypoint (the function chooses between different execution paths;
it contains `DeviceSegmentedRadixSort` as one of the paths and will
choose the best one according to heuristics).
https://nvidia.github.io/cccl/cub/api/structcub_1_1DeviceSegmentedSort.html#overview
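For reference, a hedged sketch of the single-row path (host-side CUB calls only; buffer handling simplified, the real code uses the ggml CUDA pool rather than cudaMallocAsync):
```cpp
#include <cub/cub.cuh>

// argsort one row of logits descending: sort (logit, index) pairs by logit
static void argsort_row_cub(const float * d_keys_in, float * d_keys_out,
                            const int * d_idx_in, int * d_idx_out,
                            int n, cudaStream_t stream) {
    void * d_tmp = nullptr;
    size_t tmp_bytes = 0;
    // first call only queries the required temporary storage size
    cub::DeviceRadixSort::SortPairsDescending(d_tmp, tmp_bytes, d_keys_in, d_keys_out, d_idx_in, d_idx_out, n, 0, 32, stream);
    cudaMallocAsync(&d_tmp, tmp_bytes, stream);
    cub::DeviceRadixSort::SortPairsDescending(d_tmp, tmp_bytes, d_keys_in, d_keys_out, d_idx_in, d_idx_out, n, 0, 32, stream);
    cudaFreeAsync(d_tmp, stream);
}
```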
Some perf numbers for a RTX PRO 6000:
On the kernel level, tested with
`GGML_CUDA_DISABLE_GRAPHS=1 ./test-backend-ops -o ARGSORT perf`
Before:
```
ARGSORT(type=f32,ne=[65000,16,1,1],order=0): 4130 runs - 359.24 us/run
ARGSORT(type=f32,ne=[200000,1,1,1],order=0): 8192 runs - 861.34 us/run
ARGSORT(type=f32,ne=[200000,16,1,1],order=0): 1343 runs - 1020.01 us/run
```
After:
```
ARGSORT(type=f32,ne=[65000,16,1,1],order=0): 4130 runs - 312.41 us/run
ARGSORT(type=f32,ne=[200000,1,1,1],order=0): 16384 runs - 63.48 us/run
ARGSORT(type=f32,ne=[200000,16,1,1],order=0): 1343 runs - 874.36 us/run
```
---
On the model level, tested with
`llama-cli -m gpt-oss-20b-mxfp4.gguf -n 200 -p "What is
the Capital of Sweden?" -no-cnv -fa 1 --backend-sampling`
Before:
```
llama_perf_sampler_print: sampling time = 0.25 ms / 207 runs ( 0.00 ms per token, 824701.20 tokens per second)
llama_perf_context_print: load time = 18215.58 ms
llama_perf_context_print: prompt eval time = 28.20 ms / 7 tokens ( 4.03 ms per token, 248.19 tokens per second)
llama_perf_context_print: eval time = 714.79 ms / 199 runs ( 3.59 ms per token, 278.40 tokens per second)
llama_perf_context_print: total time = 857.62 ms / 206 tokens
```
After:
```
llama_perf_sampler_print: sampling time = 0.25 ms / 207 runs ( 0.00 ms per token, 828000.00 tokens per second)
llama_perf_context_print: load time = 18366.92 ms
llama_perf_context_print: prompt eval time = 35.92 ms / 7 tokens ( 5.13 ms per token, 194.87 tokens per second)
llama_perf_context_print: eval time = 532.79 ms / 199 runs ( 2.68 ms per token, 373.50 tokens per second)
llama_perf_context_print: total time = 683.65 ms / 206 tokens
```
* sampling : remove version from sampler chain
This commit removes the version field from the sampler chain and instead
uses the sampler pointer itself for change detection.
* sampling : always populate logits for sampled probs
This commit updates common/sampler.cpp set_logits and
src/llama-sampling.cpp llama_sampler_sample to always populate the
logits field when backend sampled probabilities are available.
The motivation for this is that it ensures the CPU samplers always have
access to the logit values even when probabilities have been produced by
backend samplers.
* sampling : simplify backend sampling logic decode
This commit tries to simplify the backend sampling logic in
llama_context::decode.
* squash! sampling : simplify backend sampling logic decode
Fix condition to check if backend actually sampled tokens, not just that
backend samplers are available.
* common : fix regression caused by extra memory allocations during sampling
* squash! sampling : simplify backend sampling logic decode
The commit fixes a variable shadowing issue in the
`llama_context::decode` function which was introduced in a previous
refactoring.
* squash! common : fix regression caused by extra memory allocations during sampling
Apply the same changes to llama-sampling.cpp, llama_sampler_sample as
were applied in commit 38f408c25.
* sampling : introduce sampling_info struct
This commit introduces a sampling_info struct to encapsulate all
backend sampling related data within the llama_context class.
It also updates the backend sampler ggml data structure to use more
descriptive names for sampled tokens and candidates.
* sampling : return early if backend sampling is disabled
* sampling : use pinned memory for backend sampling buffers
* common, tools : refactor model loading to support backend samplers
This commit refactors the model loading process in common/common.cpp
to enable backend samplers to be configured prior to the llama_context
creation.
The motivation for this change is that setting/resetting the backend
samplers after the llama_context has been created would cause a resize
to occur in llama_context::output_reserve, which we want to avoid.
* sampling : add stride variable for clarity
* sampling: clarify candidate ids usage in comments
* sampling : fix copying both sampled tokens and logits/probs from backend
This commit fixes the issue where both sampled tokens and logits/probs
were not being copied correctly from the backend to the host when
multiple backend samplers were used.
A test for this scenario has also been added to ensure that both types
of data are copied correctly when different backend samplers are
employed.
* tests : cleanup test-backend-sampler.cpp
* common : remove build-info.cpp from commit [no ci]
This file was generated during the build process and should not have
been included in previous commits.
* sampling : cleanup and clarify output_reserve
* sampling : remove redundant checks for stride and size [no ci]
* sampling : add debug log when backend sampler selects token
This commit adds a debug log statement in the llama_sampler_sample
to indicate when a backend sampler has selected a token for a given
index.
The modification helps in tracing the sampling process and understanding
the flow of control when backend samplers are used.
* examples : update batched to use backend sampling
This commit updates the batched example to demonstrate how to use
backend samplers.
* llama-cli : fix dangling reference to sampler config
* common : initialize backend samplers
* samplers : add missing cont
* sampling : add assertions for contiguous tensors in async copy functions
* examples : add info about hybrid sampling in batched [no ci]
* sampling : remove backend-dist option (wip)
This commit removes the `--backend-dist` option and instead uses the
configured --samplers chain to determine which samplers run on the
backend.
Backend sampling is still enabled with `--backend-sampling`, and the
sampler chain, either explicitly specified using `--samplers` or the
default, is automatically analyzed to determine which samplers can run
on the backend. The system finds the longest contiguous chain of
backend-supported samplers from the start of the sampler sequence.
For example:
* If the chain is `top-k -> temperature -> top-p`, and both `top-k` and
`temperature` are backend-supported but `top-p` is not, then `top-k`
and `temperature` will run on the backend, while `top-p` and
subsequent samplers run on the CPU.
* If all configured samplers are supported, the final distribution
sampling will also happen on the backend, transferring only the
sampled token IDs back to the host.
* If the sampler chain starts with an unsupported sampler (e.g.,
`penalties`), all sampling runs on the CPU. Note that this is
currently the case with the default sampler chain, so to use backend
sampling a sampler chain must be specified explicitly. See below for an example.
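A minimal sketch of the split heuristic (helper names are illustrative, not the actual common/sampling.cpp API):
```cpp
// find the longest prefix of the configured chain whose samplers are all backend-supported
size_t n_backend = 0;
for (const auto & type : sampler_types) {       // sampler_types: the configured chain
    if (!sampler_supported_on_backend(type)) {  // hypothetical capability check
        break;                                  // the first unsupported sampler ends the prefix
    }
    n_backend++;
}
// samplers [0, n_backend) are built into the compute graph; the remaining ones run on the CPU
```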
The following shows how llama-cli can be run with backend sampling:
```console
$ llama-cli -m models/Qwen2.5-VL-3B-Instruct-Q8_0.gguf \
--prompt 'What is the capital of Sweden?' \
-n 20 \
-no-cnv \
--verbose-prompt \
-ngl 40 \
--backend-sampling \
--samplers 'top_k;temperature'
```
In this case all sampling will happen on the backend since both
`top_k` and `temperature` are supported backend samplers.
To enable partial backend sampling (hybrid sampling), for example
running `top_k` and `temperature` on the backend and `top_p` on the CPU,
the following sampler chain could be specified:
```console
$ llama-cli -m models/Qwen2.5-VL-3B-Instruct-Q8_0.gguf \
--prompt 'What is the capital of Sweden?' \
-n 20 \
-no-cnv \
--verbose-prompt \
-ngl 40 \
--backend-sampling \
--samplers 'top_k;temperature;top_p'
```
If this looks good then I'll follow up with updates to the llama-cli and
llama-server documentation to reflect these changes.
* CUDA: Add top-k implementation
* sampling : add min-p backend sampler
* Use `FetchContent` over CPM as it's bundled with CMake
Thanks @ggerganov for the suggestion
* common : add get_active_samplers function to check enabled samplers
This commit adds a function to check whether a sampler is actually
enabled, meaning that it does not have values that disable its effect.
This is then used by the backend sampler initialization to avoid
considering samplers that are not enabled when determining the split
point between backend and CPU samplers.
The motivation for this is that it allows the default sampler chain for
`--samplers` to be used: any sampler that is not enabled will not cause
the backend samplers to be skipped.
For example, before this change, if the penalties sampler was included
in the samplers list but had default values that disable it, it would
cause the backend samplers to be skipped entirely.
This commit also contains some refactoring to remove some code
duplication.
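A hedged sketch of the "active" test (illustrative checks against common_params_sampling defaults; the actual logic lives in common/sampling.cpp):
```cpp
// a sampler whose parameters are at their no-op defaults is ignored when picking the split point
const bool penalties_active = params.penalty_repeat  != 1.0f ||
                              params.penalty_freq    != 0.0f ||
                              params.penalty_present != 0.0f;
const bool top_k_active = params.top_k > 0;
const bool top_p_active = params.top_p < 1.0f;
// e.g. a default "penalties" entry at the front of the chain no longer forces all sampling onto the CPU
```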
* cuda : fix editorconfig-checker warning
* sampling : use argmax for min-p sampling
* sampling : fix temperature check to allow zero temperature
This commit modifies the temperature sampling check to allow a
temperature value of zero. Previously, the check only allowed
positive temperature values, which excluded the valid case of
zero temperature.
The motivation for this is to enable a zero temperature setting which is
also currently causing the following test to fail:
```console
(venv) $ cd tools/server/tests
(venv) $ ./tests.sh unit/test_basic.py::test_load_split_model
```
* cuda : fix top-k compilation when CUB is unavailable
This commit adds a macro guard around argsort_f32_i32_cuda_cub usage
in the top-k fallback path, falling back to bitonic sort when
GGML_CUDA_USE_CUB is not defined.
The motivation for this is that some environments like AMD HIP
do not have CUB available, causing compilation failure.
Refs: https://github.com/ggml-org/llama.cpp/actions/runs/19728226426/job/56523606840#step:6:208
* sampling : add comments about backend sampler [no ci]
This commit adds a comment to llama_context's constructor explaining why
backend samplers are initialized early in the process.
* sampling : remove backend sampling chain from common_sampler
This commit removes the backend sampling chain from the common_sampler
structure and related functions.
The motivation for this change is that the backend samplers are not
currently set on the context, and if they were they would cause a graph
reallocation to occur. Instead, the initialization is handled, as it
currently is, by llama_context's constructor.
* Fix top-k compilation & behavior for non-CUB path
Some changes made in 5ea3be265ba6f8916daf52e19e3fb8efe9a03637 were
incomplete. In the non-CUB case, bitonic sort and its limitation of
ncols < 1024 have to apply, similar to argsort.cu.
* sampling : support intermixed backend/cpu samplers
This commit updates the backend sampling implementation to support
intermixed usage of backend and CPU samplers within the same batch.
The initial implementation was developed as an all-or-nothing solution:
either perform backend sampling for the entire batch, or perform CPU
sampling for the entire batch.
The motivation for this change is to support batches with mixed
sequences. For example, we may have a backend sampler configured for
sequence 0, while sequence 1 in the same batch uses CPU sampling. This
was not supported in the initial implementation.
This issue manifested in llama-server with the webui: decoding with
backend samplers would work initially, but after changing to CPU
sampling, a slot (sequence) could still be using a backend sampler.
This meant that logits in output_reserve would not be allocated,
resulting in an error.
The solution in this commit inspects the batch to determine which
sampling modes are needed and allocates buffers accordingly. However,
there is a known inefficiency: when we have intermixed backend/CPU
samplers in the same batch, we currently copy all logits to the host,
even for sequences using backend samplers.
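A rough sketch of that inspection (helper names are hypothetical; the real logic lives in llama_context::decode):
```cpp
bool need_backend = false; // some outputs come from sequences with a backend sampler
bool need_cpu     = false; // some outputs need full logits on the host
for (uint32_t i = 0; i < n_outputs; ++i) {
    if (seq_has_backend_sampler(seq_id_of_output(i))) { // hypothetical helpers
        need_backend = true;
    } else {
        need_cpu = true;
    }
}
// buffers are allocated for whichever modes are present; if both are present,
// all logits are currently copied to the host (the inefficiency noted above)
```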
Added test_backend_cpu_mixed_batch to verify correct behavior with
mixed backend/CPU samplers in a single batch, including dynamic
sampler switching between decode calls.
* squash! sampling : support intermixed backend/cpu samplers
Add a check that logits is not null, which can happen for embeddings.
* squash! sampling : support intermixed backend/cpu samplers
Fix llama-save-load-state, which currently fails, by handling the case
when batch.logits is nullptr (e.g. when loading state) and allocating
space for all outputs as CPU logits.
* refactor : simplify and improve memory management
* Add initial version for top-p sampling
As we only support static graphs for the time being and we don't know
the size of the output of top-p, we have to do value-scaling, same as
for the min-p operator.
Further improvements can be applied to the unit test (i.e. check for
equivalence of top_p happening on the backend with top_p happening on
the CPU) and also by constructing candidates and sorting those, as
opposed to reversing the sort of the logits (this would be arange +
get_rows instead of argsort + get_rows).
* sampling : use logits directly for min-p filtering
* sampling : simplify
* llama : simplify
* llama : cleanup + naming
* llama : call backend_init once
* llama : reserve graphs with samplers
* llama : naming
* cont : naming
* sampling : lower log level for output buffer reallocations [no ci]
This commit changes the logging level for output buffer reallocations
in the llama_context::output_reserve function from INFO to DEBUG.
The motivation for this is that it currently logs to info and when
enabling verbose logging for llama-cli this will get mixed with the
output, for example:
```console
What is the capital of Sweden?output_reserve: reallocating output buffer from size 0.58 MiB to 1.74 MiB
1. Stockholm
2\. Helsinki
Based are the options
1. Stockholm
Explanation: Stockholm is the capital of
...
```
* Fix backend_top_p_sampler
softmax(softmax) will return a uniform distribution, so we should not
return the softmax but the logits instead.
* Factor out `ggml_sort` into its own function
* Make backend's top_p sampler inclusive
In addition to matching the algorithm proposed in the original
[paper](https://arxiv.org/abs/1904.09751), this resolves the edge case
where `max_p > top_p` for a single logit, where the mask would
otherwise be empty (and we would thus sample from the whole vocabulary
with equal likelihood).
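A minimal host-side sketch of the inclusive cut-off (over probabilities sorted in descending order; illustrative, not the backend kernel itself):
```cpp
#include <vector>

static size_t top_p_keep(const std::vector<float> & probs_sorted_desc, float top_p) {
    float cum = 0.0f;
    for (size_t i = 0; i < probs_sorted_desc.size(); ++i) {
        cum += probs_sorted_desc[i];
        if (cum >= top_p) {
            return i + 1; // inclusive: the candidate that crosses the threshold is kept,
                          // so a single candidate with p > top_p still yields a non-empty set
        }
    }
    return probs_sorted_desc.size();
}
```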
* common : simplify sampler chain initialization
* sampling : do not create empty samplers
* sampling : fix top_p empty condition
* examples : remove outdated backend sampling section
This commit removes the outdated section about using backend samplers
from the README.md file in examples/batched.
* sampling : fix backend temp sampler for zero temperature
This commit fixes the implementation of the temperature-based sampler
for the case when the temperature is set to zero. This now correctly
selects the most probable token by masking out all other tokens in the
logits.
* CUDA: Move cccl fetch to after cuda has been enabled in CMakeLists.txt
This will allow cccl to set build flags for the CUDA compiler, required
e.g. for MSVC compat, see also
https://github.com/NVIDIA/cccl/pull/6791
* CUDA: Use standard-compliant preprocessor for MSVC builds
Workarounds of https://github.com/NVIDIA/cccl/pull/6791 will not be
backported to CCCL 3.2, only the diagnostics/error messages will:
https://github.com/NVIDIA/cccl/pull/6827
* CUDA: Update CCCL's rc candidate
* squash! sampling : fix backend temp sampler for zero temperature
This modifies the parent commit to simply return the most probable
token instead of masking the logits.
* sampling : implement temp_ext_backend sampling
This commit implements the apply function for the extended temperature
sampling.
* sampling : minor cleanup
* sampling : stop short if backend sampler sampled a token
This commit modifies the graph building logic to immediately continue
when a token has already been sampled by the backend sampler.
It also updates the test for backend temperature sampling to include
top-k and distribution samplers in the chain to verify that they do not
produce any logits (they are not run).
* Revert "sampling : stop short if backend sampler sampled a token"
This reverts commit 87b2719eca55b30afff600fc7f61c6cce9452cbf.
* sampling : fix backend temp sampling to use logits masking
* sampling : simplify temp sampling
* sampling : remove redundant calls to ggml_build_forward_expand
* sampling : check backend support during init
* cont : keep backend sampling disabled for now
* sampling : fix outputs and device checks
* sampling : fix candidates logic
* Add perf-tests for CUMSUM
* Re-add `cub::DeviceScan::InclusiveSum`-based CumSum
For single rows and large columns, doing a for-loop over the
`cub::DeviceScan::InclusiveSum` function offered by CUB outperforms the
`cumsum_cub_kernel` where `cub::BlockScan` is used.
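A hedged sketch of the per-row loop (host-side CUB calls; temporary storage handling simplified, the real code would use the ggml CUDA pool):
```cpp
#include <cub/cub.cuh>

static void cumsum_rows_cub(const float * src, float * dst, int64_t ne0, int64_t nrows, cudaStream_t stream) {
    size_t tmp_bytes = 0;
    cub::DeviceScan::InclusiveSum(nullptr, tmp_bytes, src, dst, (int) ne0, stream); // query temp storage size
    void * d_tmp = nullptr;
    cudaMallocAsync(&d_tmp, tmp_bytes, stream);
    for (int64_t r = 0; r < nrows; ++r) {
        cub::DeviceScan::InclusiveSum(d_tmp, tmp_bytes, src + r*ne0, dst + r*ne0, (int) ne0, stream);
    }
    cudaFreeAsync(d_tmp, stream);
}
```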
Numbers before this change
Backend 1/3: CUDA0
Device description: NVIDIA RTX 6000 Ada Generation
Device memory: 48510 MB (48039 MB free)
CUMSUM(type=f32,ne=[128,128,4,4]): 311258 runs - 3.26 us/run - 2048 kB/run - 599.76 GB/s
CUMSUM(type=f32,ne=[2048,16,5,4]): 229390 runs - 4.40 us/run - 5120 kB/run - 1110.23 GB/s
CUMSUM(type=f32,ne=[20000,10,4,1]): 37583 runs - 29.63 us/run - 6250 kB/run - 201.18 GB/s
CUMSUM(type=f32,ne=[128,1,1,1]): 892819 runs - 1.12 us/run - 1 kB/run - 0.85 GB/s
CUMSUM(type=f32,ne=[1024,1,1,1]): 450505 runs - 2.25 us/run - 8 kB/run - 3.39 GB/s
CUMSUM(type=f32,ne=[4096,1,1,1]): 155629 runs - 6.61 us/run - 32 kB/run - 4.62 GB/s
CUMSUM(type=f32,ne=[8192,1,1,1]): 81910 runs - 12.60 us/run - 64 kB/run - 4.85 GB/s
CUMSUM(type=f32,ne=[16384,1,1,1]): 49146 runs - 23.99 us/run - 128 kB/run - 5.09 GB/s
CUMSUM(type=f32,ne=[32768,1,1,1]): 24573 runs - 47.10 us/run - 256 kB/run - 5.18 GB/s
CUMSUM(type=f32,ne=[65536,1,1,1]): 16382 runs - 93.57 us/run - 512 kB/run - 5.22 GB/s
CUMSUM(type=f32,ne=[131072,1,1,1]): 8191 runs - 184.79 us/run - 1024 kB/run - 5.29 GB/s
CUMSUM(type=f32,ne=[200000,1,1,1]): 8191 runs - 280.43 us/run - 1562 kB/run - 5.31 GB/s
CUMSUM(type=f32,ne=[2000000,1,1,1]): 2148 runs - 2771.23 us/run - 15625 kB/run - 5.38 GB/s
CUMSUM(type=f32,ne=[128,4,1,1]): 458696 runs - 2.21 us/run - 4 kB/run - 1.73 GB/s
CUMSUM(type=f32,ne=[1024,4,1,1]): 360404 runs - 2.82 us/run - 32 kB/run - 10.83 GB/s
CUMSUM(type=f32,ne=[4096,4,1,1]): 147438 runs - 7.12 us/run - 128 kB/run - 17.15 GB/s
CUMSUM(type=f32,ne=[8192,4,1,1]): 81910 runs - 12.90 us/run - 256 kB/run - 18.92 GB/s
CUMSUM(type=f32,ne=[16384,4,1,1]): 49146 runs - 24.32 us/run - 512 kB/run - 20.08 GB/s
CUMSUM(type=f32,ne=[32768,4,1,1]): 24573 runs - 47.28 us/run - 1024 kB/run - 20.66 GB/s
CUMSUM(type=f32,ne=[65536,4,1,1]): 16382 runs - 93.21 us/run - 2048 kB/run - 20.96 GB/s
CUMSUM(type=f32,ne=[131072,4,1,1]): 8191 runs - 185.04 us/run - 4096 kB/run - 21.11 GB/s
CUMSUM(type=f32,ne=[200000,4,1,1]): 5369 runs - 282.08 us/run - 6250 kB/run - 21.13 GB/s
CUMSUM(type=f32,ne=[2000000,4,1,1]): 537 runs - 2806.46 us/run - 62500 kB/run - 21.26 GB/s
CUMSUM(type=f32,ne=[128,8,1,1]): 458696 runs - 2.20 us/run - 8 kB/run - 3.47 GB/s
CUMSUM(type=f32,ne=[1024,8,1,1]): 360404 runs - 2.82 us/run - 64 kB/run - 21.66 GB/s
CUMSUM(type=f32,ne=[4096,8,1,1]): 147438 runs - 7.12 us/run - 256 kB/run - 34.28 GB/s
CUMSUM(type=f32,ne=[8192,8,1,1]): 81910 runs - 12.90 us/run - 512 kB/run - 37.84 GB/s
CUMSUM(type=f32,ne=[16384,8,1,1]): 49146 runs - 24.32 us/run - 1024 kB/run - 40.15 GB/s
CUMSUM(type=f32,ne=[32768,8,1,1]): 24573 runs - 47.28 us/run - 2048 kB/run - 41.31 GB/s
CUMSUM(type=f32,ne=[65536,8,1,1]): 16382 runs - 93.20 us/run - 4096 kB/run - 41.92 GB/s
CUMSUM(type=f32,ne=[131072,8,1,1]): 8194 runs - 185.05 us/run - 8192 kB/run - 42.22 GB/s
CUMSUM(type=f32,ne=[200000,8,1,1]): 5370 runs - 282.15 us/run - 12500 kB/run - 42.26 GB/s
CUMSUM(type=f32,ne=[2000000,8,1,1]): 269 runs - 4067.61 us/run - 125000 kB/run - 29.36 GB/s
CUMSUM(type=f32,ne=[128,16,1,1]): 303067 runs - 3.32 us/run - 16 kB/run - 4.60 GB/s
CUMSUM(type=f32,ne=[1024,16,1,1]): 303067 runs - 3.32 us/run - 128 kB/run - 36.76 GB/s
CUMSUM(type=f32,ne=[4096,16,1,1]): 147438 runs - 7.17 us/run - 512 kB/run - 68.13 GB/s
CUMSUM(type=f32,ne=[8192,16,1,1]): 81910 runs - 12.90 us/run - 1024 kB/run - 75.68 GB/s
CUMSUM(type=f32,ne=[16384,16,1,1]): 49146 runs - 24.33 us/run - 2048 kB/run - 80.28 GB/s
CUMSUM(type=f32,ne=[32768,16,1,1]): 24573 runs - 47.30 us/run - 4096 kB/run - 82.59 GB/s
CUMSUM(type=f32,ne=[65536,16,1,1]): 12291 runs - 93.24 us/run - 8192 kB/run - 83.80 GB/s
CUMSUM(type=f32,ne=[131072,16,1,1]): 6147 runs - 185.07 us/run - 16384 kB/run - 84.45 GB/s
CUMSUM(type=f32,ne=[200000,16,1,1]): 4029 runs - 282.40 us/run - 25000 kB/run - 84.46 GB/s
CUMSUM(type=f32,ne=[2000000,16,1,1]): 270 runs - 4118.40 us/run - 250000 kB/run - 58.11 GB/s
Backend CUDA0: OK
Backend 2/3: CUDA1
Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Device memory: 97250 MB (96677 MB free)
CUMSUM(type=f32,ne=[128,128,4,4]): 368595 runs - 2.73 us/run - 2048 kB/run - 715.83 GB/s
CUMSUM(type=f32,ne=[2048,16,5,4]): 216282 runs - 4.72 us/run - 5120 kB/run - 1035.32 GB/s
CUMSUM(type=f32,ne=[20000,10,4,1]): 32214 runs - 34.33 us/run - 6250 kB/run - 173.64 GB/s
CUMSUM(type=f32,ne=[128,1,1,1]): 810909 runs - 1.24 us/run - 1 kB/run - 0.77 GB/s
CUMSUM(type=f32,ne=[1024,1,1,1]): 401359 runs - 2.52 us/run - 8 kB/run - 3.03 GB/s
CUMSUM(type=f32,ne=[4096,1,1,1]): 139247 runs - 7.44 us/run - 32 kB/run - 4.10 GB/s
CUMSUM(type=f32,ne=[8192,1,1,1]): 73719 runs - 14.27 us/run - 64 kB/run - 4.28 GB/s
CUMSUM(type=f32,ne=[16384,1,1,1]): 40955 runs - 27.24 us/run - 128 kB/run - 4.48 GB/s
CUMSUM(type=f32,ne=[32768,1,1,1]): 24573 runs - 53.46 us/run - 256 kB/run - 4.57 GB/s
CUMSUM(type=f32,ne=[65536,1,1,1]): 16382 runs - 105.29 us/run - 512 kB/run - 4.64 GB/s
CUMSUM(type=f32,ne=[131072,1,1,1]): 8191 runs - 210.15 us/run - 1024 kB/run - 4.65 GB/s
CUMSUM(type=f32,ne=[200000,1,1,1]): 8191 runs - 318.22 us/run - 1562 kB/run - 4.68 GB/s
CUMSUM(type=f32,ne=[2000000,1,1,1]): 2148 runs - 3142.23 us/run - 15625 kB/run - 4.74 GB/s
CUMSUM(type=f32,ne=[128,4,1,1]): 303067 runs - 3.34 us/run - 4 kB/run - 1.14 GB/s
CUMSUM(type=f32,ne=[1024,4,1,1]): 253921 runs - 4.03 us/run - 32 kB/run - 7.58 GB/s
CUMSUM(type=f32,ne=[4096,4,1,1]): 122865 runs - 8.20 us/run - 128 kB/run - 14.89 GB/s
CUMSUM(type=f32,ne=[8192,4,1,1]): 73719 runs - 14.96 us/run - 256 kB/run - 16.32 GB/s
CUMSUM(type=f32,ne=[16384,4,1,1]): 40955 runs - 28.66 us/run - 512 kB/run - 17.04 GB/s
CUMSUM(type=f32,ne=[32768,4,1,1]): 24573 runs - 54.21 us/run - 1024 kB/run - 18.01 GB/s
CUMSUM(type=f32,ne=[65536,4,1,1]): 16382 runs - 106.49 us/run - 2048 kB/run - 18.34 GB/s
CUMSUM(type=f32,ne=[131072,4,1,1]): 8191 runs - 210.88 us/run - 4096 kB/run - 18.52 GB/s
CUMSUM(type=f32,ne=[200000,4,1,1]): 5369 runs - 321.77 us/run - 6250 kB/run - 18.53 GB/s
CUMSUM(type=f32,ne=[2000000,4,1,1]): 537 runs - 3191.79 us/run - 62500 kB/run - 18.69 GB/s
CUMSUM(type=f32,ne=[128,8,1,1]): 376786 runs - 2.67 us/run - 8 kB/run - 2.86 GB/s
CUMSUM(type=f32,ne=[1024,8,1,1]): 245730 runs - 4.10 us/run - 64 kB/run - 14.90 GB/s
CUMSUM(type=f32,ne=[4096,8,1,1]): 122865 runs - 8.20 us/run - 256 kB/run - 29.79 GB/s
CUMSUM(type=f32,ne=[8192,8,1,1]): 65528 runs - 16.38 us/run - 512 kB/run - 29.82 GB/s
CUMSUM(type=f32,ne=[16384,8,1,1]): 40955 runs - 28.69 us/run - 1024 kB/run - 34.04 GB/s
CUMSUM(type=f32,ne=[32768,8,1,1]): 24573 runs - 55.28 us/run - 2048 kB/run - 35.33 GB/s
CUMSUM(type=f32,ne=[65536,8,1,1]): 16382 runs - 108.50 us/run - 4096 kB/run - 36.00 GB/s
CUMSUM(type=f32,ne=[131072,8,1,1]): 8194 runs - 213.75 us/run - 8192 kB/run - 36.55 GB/s
CUMSUM(type=f32,ne=[200000,8,1,1]): 5370 runs - 326.31 us/run - 12500 kB/run - 36.54 GB/s
CUMSUM(type=f32,ne=[2000000,8,1,1]): 538 runs - 3252.68 us/run - 125000 kB/run - 36.72 GB/s
CUMSUM(type=f32,ne=[128,16,1,1]): 303067 runs - 3.32 us/run - 16 kB/run - 4.60 GB/s
CUMSUM(type=f32,ne=[1024,16,1,1]): 253921 runs - 4.06 us/run - 128 kB/run - 30.09 GB/s
CUMSUM(type=f32,ne=[4096,16,1,1]): 122865 runs - 8.20 us/run - 512 kB/run - 59.57 GB/s
CUMSUM(type=f32,ne=[8192,16,1,1]): 65528 runs - 16.38 us/run - 1024 kB/run - 59.63 GB/s
CUMSUM(type=f32,ne=[16384,16,1,1]): 40955 runs - 28.69 us/run - 2048 kB/run - 68.09 GB/s
CUMSUM(type=f32,ne=[32768,16,1,1]): 24573 runs - 55.28 us/run - 4096 kB/run - 70.67 GB/s
CUMSUM(type=f32,ne=[65536,16,1,1]): 12291 runs - 108.50 us/run - 8192 kB/run - 72.02 GB/s
CUMSUM(type=f32,ne=[131072,16,1,1]): 6147 runs - 213.60 us/run - 16384 kB/run - 73.17 GB/s
CUMSUM(type=f32,ne=[200000,16,1,1]): 4029 runs - 326.04 us/run - 25000 kB/run - 73.15 GB/s
CUMSUM(type=f32,ne=[2000000,16,1,1]): 270 runs - 5458.69 us/run - 250000 kB/run - 43.84 GB/s
----
Numbers after:
Backend 1/3: CUDA0
Device description: NVIDIA RTX 6000 Ada Generation
Device memory: 48510 MB (48039 MB free)
CUMSUM(type=f32,ne=[128,128,4,4]): 311258 runs - 3.25 us/run - 2048 kB/run - 601.62 GB/s
CUMSUM(type=f32,ne=[2048,16,5,4]): 229390 runs - 4.40 us/run - 5120 kB/run - 1110.14 GB/s
CUMSUM(type=f32,ne=[20000,10,4,1]): 37583 runs - 29.67 us/run - 6250 kB/run - 200.89 GB/s
CUMSUM(type=f32,ne=[128,1,1,1]): 892819 runs - 1.12 us/run - 1 kB/run - 0.85 GB/s
CUMSUM(type=f32,ne=[1024,1,1,1]): 458696 runs - 2.21 us/run - 8 kB/run - 3.45 GB/s
CUMSUM(type=f32,ne=[4096,1,1,1]): 376786 runs - 2.66 us/run - 32 kB/run - 11.46 GB/s
CUMSUM(type=f32,ne=[8192,1,1,1]): 393168 runs - 2.59 us/run - 64 kB/run - 23.57 GB/s
CUMSUM(type=f32,ne=[16384,1,1,1]): 393168 runs - 2.59 us/run - 128 kB/run - 47.15 GB/s
CUMSUM(type=f32,ne=[32768,1,1,1]): 376786 runs - 2.69 us/run - 256 kB/run - 90.69 GB/s
CUMSUM(type=f32,ne=[65536,1,1,1]): 327640 runs - 3.06 us/run - 512 kB/run - 159.65 GB/s
CUMSUM(type=f32,ne=[131072,1,1,1]): 311258 runs - 3.28 us/run - 1024 kB/run - 297.77 GB/s
CUMSUM(type=f32,ne=[200000,1,1,1]): 270303 runs - 3.74 us/run - 1562 kB/run - 398.14 GB/s
CUMSUM(type=f32,ne=[2000000,1,1,1]): 137472 runs - 7.35 us/run - 15625 kB/run - 2026.94 GB/s
CUMSUM(type=f32,ne=[128,4,1,1]): 876437 runs - 1.14 us/run - 4 kB/run - 3.33 GB/s
CUMSUM(type=f32,ne=[1024,4,1,1]): 442314 runs - 2.28 us/run - 32 kB/run - 13.39 GB/s
CUMSUM(type=f32,ne=[4096,4,1,1]): 155629 runs - 6.69 us/run - 128 kB/run - 18.24 GB/s
CUMSUM(type=f32,ne=[8192,4,1,1]): 81910 runs - 12.53 us/run - 256 kB/run - 19.49 GB/s
CUMSUM(type=f32,ne=[16384,4,1,1]): 49146 runs - 24.18 us/run - 512 kB/run - 20.20 GB/s
CUMSUM(type=f32,ne=[32768,4,1,1]): 65528 runs - 15.34 us/run - 1024 kB/run - 63.66 GB/s
CUMSUM(type=f32,ne=[65536,4,1,1]): 73719 runs - 14.76 us/run - 2048 kB/run - 132.35 GB/s
CUMSUM(type=f32,ne=[131072,4,1,1]): 65528 runs - 16.01 us/run - 4096 kB/run - 244.07 GB/s
CUMSUM(type=f32,ne=[200000,4,1,1]): 64428 runs - 16.51 us/run - 6250 kB/run - 360.97 GB/s
CUMSUM(type=f32,ne=[2000000,4,1,1]): 33831 runs - 29.59 us/run - 62500 kB/run - 2016.08 GB/s
CUMSUM(type=f32,ne=[128,8,1,1]): 868246 runs - 1.16 us/run - 8 kB/run - 6.59 GB/s
CUMSUM(type=f32,ne=[1024,8,1,1]): 442314 runs - 2.28 us/run - 64 kB/run - 26.76 GB/s
CUMSUM(type=f32,ne=[4096,8,1,1]): 155629 runs - 6.69 us/run - 256 kB/run - 36.48 GB/s
CUMSUM(type=f32,ne=[8192,8,1,1]): 81910 runs - 12.53 us/run - 512 kB/run - 38.97 GB/s
CUMSUM(type=f32,ne=[16384,8,1,1]): 49146 runs - 24.17 us/run - 1024 kB/run - 40.41 GB/s
CUMSUM(type=f32,ne=[32768,8,1,1]): 24573 runs - 47.53 us/run - 2048 kB/run - 41.10 GB/s
CUMSUM(type=f32,ne=[65536,8,1,1]): 16382 runs - 61.25 us/run - 4096 kB/run - 63.77 GB/s
CUMSUM(type=f32,ne=[131072,8,1,1]): 32776 runs - 31.79 us/run - 8192 kB/run - 245.82 GB/s
CUMSUM(type=f32,ne=[200000,8,1,1]): 32220 runs - 32.90 us/run - 12500 kB/run - 362.35 GB/s
CUMSUM(type=f32,ne=[2000000,8,1,1]): 6725 runs - 151.99 us/run - 125000 kB/run - 785.77 GB/s
CUMSUM(type=f32,ne=[128,16,1,1]): 851864 runs - 1.18 us/run - 16 kB/run - 12.97 GB/s
CUMSUM(type=f32,ne=[1024,16,1,1]): 442314 runs - 2.30 us/run - 128 kB/run - 53.13 GB/s
CUMSUM(type=f32,ne=[4096,16,1,1]): 155629 runs - 6.68 us/run - 512 kB/run - 73.13 GB/s
CUMSUM(type=f32,ne=[8192,16,1,1]): 81910 runs - 12.68 us/run - 1024 kB/run - 77.00 GB/s
CUMSUM(type=f32,ne=[16384,16,1,1]): 40955 runs - 24.56 us/run - 2048 kB/run - 79.53 GB/s
CUMSUM(type=f32,ne=[32768,16,1,1]): 24573 runs - 47.52 us/run - 4096 kB/run - 82.21 GB/s
CUMSUM(type=f32,ne=[65536,16,1,1]): 12291 runs - 93.44 us/run - 8192 kB/run - 83.62 GB/s
CUMSUM(type=f32,ne=[131072,16,1,1]): 16392 runs - 63.36 us/run - 16384 kB/run - 246.68 GB/s
CUMSUM(type=f32,ne=[200000,16,1,1]): 16116 runs - 65.25 us/run - 25000 kB/run - 365.53 GB/s
CUMSUM(type=f32,ne=[2000000,16,1,1]): 3375 runs - 304.46 us/run - 250000 kB/run - 785.98 GB/s
Backend CUDA0: OK
Backend 2/3: CUDA1
Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Device memory: 97250 MB (96677 MB free)
CUMSUM(type=f32,ne=[128,128,4,4]): 376786 runs - 2.69 us/run - 2048 kB/run - 727.04 GB/s
CUMSUM(type=f32,ne=[2048,16,5,4]): 216282 runs - 4.64 us/run - 5120 kB/run - 1053.30 GB/s
CUMSUM(type=f32,ne=[20000,10,4,1]): 32214 runs - 34.21 us/run - 6250 kB/run - 174.27 GB/s
CUMSUM(type=f32,ne=[128,1,1,1]): 819100 runs - 1.22 us/run - 1 kB/run - 0.78 GB/s
CUMSUM(type=f32,ne=[1024,1,1,1]): 409550 runs - 2.47 us/run - 8 kB/run - 3.09 GB/s
CUMSUM(type=f32,ne=[4096,1,1,1]): 303067 runs - 3.31 us/run - 32 kB/run - 9.21 GB/s
CUMSUM(type=f32,ne=[8192,1,1,1]): 237539 runs - 4.33 us/run - 64 kB/run - 14.08 GB/s
CUMSUM(type=f32,ne=[16384,1,1,1]): 237539 runs - 4.33 us/run - 128 kB/run - 28.17 GB/s
CUMSUM(type=f32,ne=[32768,1,1,1]): 188393 runs - 5.37 us/run - 256 kB/run - 45.47 GB/s
CUMSUM(type=f32,ne=[65536,1,1,1]): 188393 runs - 5.41 us/run - 512 kB/run - 90.20 GB/s
CUMSUM(type=f32,ne=[131072,1,1,1]): 188393 runs - 5.41 us/run - 1024 kB/run - 180.41 GB/s
CUMSUM(type=f32,ne=[200000,1,1,1]): 188393 runs - 5.41 us/run - 1562 kB/run - 275.27 GB/s
CUMSUM(type=f32,ne=[2000000,1,1,1]): 128880 runs - 7.76 us/run - 15625 kB/run - 1920.33 GB/s
CUMSUM(type=f32,ne=[128,4,1,1]): 802718 runs - 1.26 us/run - 4 kB/run - 3.03 GB/s
CUMSUM(type=f32,ne=[1024,4,1,1]): 401359 runs - 2.51 us/run - 32 kB/run - 12.18 GB/s
CUMSUM(type=f32,ne=[4096,4,1,1]): 139247 runs - 7.51 us/run - 128 kB/run - 16.26 GB/s
CUMSUM(type=f32,ne=[8192,4,1,1]): 73719 runs - 14.17 us/run - 256 kB/run - 17.23 GB/s
CUMSUM(type=f32,ne=[16384,4,1,1]): 40955 runs - 27.37 us/run - 512 kB/run - 17.84 GB/s
CUMSUM(type=f32,ne=[32768,4,1,1]): 40955 runs - 26.33 us/run - 1024 kB/run - 37.10 GB/s
CUMSUM(type=f32,ne=[65536,4,1,1]): 40955 runs - 26.19 us/run - 2048 kB/run - 74.59 GB/s
CUMSUM(type=f32,ne=[131072,4,1,1]): 40955 runs - 26.35 us/run - 4096 kB/run - 148.26 GB/s
CUMSUM(type=f32,ne=[200000,4,1,1]): 42952 runs - 24.18 us/run - 6250 kB/run - 246.51 GB/s
CUMSUM(type=f32,ne=[2000000,4,1,1]): 32757 runs - 31.01 us/run - 62500 kB/run - 1923.68 GB/s
CUMSUM(type=f32,ne=[128,8,1,1]): 786336 runs - 1.28 us/run - 8 kB/run - 5.95 GB/s
CUMSUM(type=f32,ne=[1024,8,1,1]): 393168 runs - 2.57 us/run - 64 kB/run - 23.73 GB/s
CUMSUM(type=f32,ne=[4096,8,1,1]): 131056 runs - 7.67 us/run - 256 kB/run - 31.82 GB/s
CUMSUM(type=f32,ne=[8192,8,1,1]): 73719 runs - 14.43 us/run - 512 kB/run - 33.84 GB/s
CUMSUM(type=f32,ne=[16384,8,1,1]): 40955 runs - 27.90 us/run - 1024 kB/run - 35.01 GB/s
CUMSUM(type=f32,ne=[32768,8,1,1]): 24573 runs - 54.63 us/run - 2048 kB/run - 35.75 GB/s
CUMSUM(type=f32,ne=[65536,8,1,1]): 16382 runs - 72.24 us/run - 4096 kB/run - 54.08 GB/s
CUMSUM(type=f32,ne=[131072,8,1,1]): 20485 runs - 52.66 us/run - 8192 kB/run - 148.37 GB/s
CUMSUM(type=f32,ne=[200000,8,1,1]): 21480 runs - 48.00 us/run - 12500 kB/run - 248.42 GB/s
CUMSUM(type=f32,ne=[2000000,8,1,1]): 16140 runs - 61.99 us/run - 125000 kB/run - 1926.51 GB/s
CUMSUM(type=f32,ne=[128,16,1,1]): 786336 runs - 1.28 us/run - 16 kB/run - 11.90 GB/s
CUMSUM(type=f32,ne=[1024,16,1,1]): 393168 runs - 2.57 us/run - 128 kB/run - 47.57 GB/s
CUMSUM(type=f32,ne=[4096,16,1,1]): 131056 runs - 7.65 us/run - 512 kB/run - 63.83 GB/s
CUMSUM(type=f32,ne=[8192,16,1,1]): 73719 runs - 14.42 us/run - 1024 kB/run - 67.74 GB/s
CUMSUM(type=f32,ne=[16384,16,1,1]): 40955 runs - 27.87 us/run - 2048 kB/run - 70.09 GB/s
CUMSUM(type=f32,ne=[32768,16,1,1]): 24573 runs - 54.54 us/run - 4096 kB/run - 71.63 GB/s
CUMSUM(type=f32,ne=[65536,16,1,1]): 12291 runs - 107.53 us/run - 8192 kB/run - 72.66 GB/s
CUMSUM(type=f32,ne=[131072,16,1,1]): 10245 runs - 105.10 us/run - 16384 kB/run - 148.70 GB/s
CUMSUM(type=f32,ne=[200000,16,1,1]): 10744 runs - 95.36 us/run - 25000 kB/run - 250.11 GB/s
CUMSUM(type=f32,ne=[2000000,16,1,1]): 5400 runs - 186.97 us/run - 250000 kB/run - 1279.90 GB/s
* sampling : expand support (wip)
* tests : fix memory leaks
* cont : fixes
* tests : check temp back to 0.0
* sampling : fix top-p
* sampling : handle n_probs case
* server : handle unsupported cases
* metal : print node names for debugging
* ggml : remove redundant src in ggml_cast
* ggml-alloc : fix reuse-parent logic for misaligned sizes
* Revert "ggml : remove redundant src in ggml_cast"
This reverts commit 62d1b0082dbad699fbeea85a096bc334e3c1c0e6.
* CUDA: Add Cooperative-Groups-based parallelization of ncols in softmax
The old implementation parallelizes rows across SMs, which does not fit
the needs of backend sampling (where we have ncols >> nrows and thus want
to parallelize ncols across SMs).
* Add TODOs to and adjust heuristics of row-wise soft_max in CUDA
Heuristics were selected based on the following numbers:
```
-- Before
Backend 1/2: CUDA0
Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Device memory: 97250 MB (96691 MB free)
SOFT_MAX(type=f32,ne=[4096,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 2236 runs - 450.34 us/run - 655360 kB/run - 1401.20 GB/s
SOFT_MAX(type=f32,ne=[12888,256,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 17748 runs - 56.80 us/run - 128880 kB/run - 2168.19 GB/s
SOFT_MAX(type=f32,ne=[77,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 57204 runs - 18.35 us/run - 12320 kB/run - 640.57 GB/s
SOFT_MAX(type=f32,ne=[1024,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 9840 runs - 102.46 us/run - 81920 kB/run - 763.45 GB/s
SOFT_MAX(type=f32,ne=[77,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98064 runs - 10.25 us/run - 6160 kB/run - 573.43 GB/s
SOFT_MAX(type=f32,ne=[256,256,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98310 runs - 10.25 us/run - 10240 kB/run - 953.20 GB/s
SOFT_MAX(type=f32,ne=[64,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 172011 runs - 5.99 us/run - 640 kB/run - 101.84 GB/s
SOFT_MAX(type=f32,ne=[77,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 172011 runs - 5.97 us/run - 770 kB/run - 123.02 GB/s
SOFT_MAX(type=f32,ne=[8192,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 172011 runs - 6.00 us/run - 64 kB/run - 10.16 GB/s
SOFT_MAX(type=f32,ne=[8192,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 163820 runs - 6.12 us/run - 256 kB/run - 39.91 GB/s
SOFT_MAX(type=f32,ne=[8192,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 147438 runs - 6.88 us/run - 1024 kB/run - 141.92 GB/s
SOFT_MAX(type=f32,ne=[16384,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.20 us/run - 128 kB/run - 14.89 GB/s
SOFT_MAX(type=f32,ne=[16384,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 114674 runs - 8.87 us/run - 512 kB/run - 55.06 GB/s
SOFT_MAX(type=f32,ne=[16384,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98292 runs - 10.24 us/run - 2048 kB/run - 190.82 GB/s
SOFT_MAX(type=f32,ne=[32768,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 21.37 us/run - 256 kB/run - 11.43 GB/s
SOFT_MAX(type=f32,ne=[32768,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 22.54 us/run - 1024 kB/run - 43.33 GB/s
SOFT_MAX(type=f32,ne=[32768,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 23.92 us/run - 4096 kB/run - 163.32 GB/s
SOFT_MAX(type=f32,ne=[65536,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 32764 runs - 38.94 us/run - 512 kB/run - 12.54 GB/s
SOFT_MAX(type=f32,ne=[65536,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 24573 runs - 41.94 us/run - 2048 kB/run - 46.57 GB/s
SOFT_MAX(type=f32,ne=[65536,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 24582 runs - 43.09 us/run - 8192 kB/run - 181.32 GB/s
SOFT_MAX(type=f32,ne=[131072,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 16382 runs - 74.56 us/run - 1024 kB/run - 13.10 GB/s
SOFT_MAX(type=f32,ne=[131072,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 16382 runs - 79.85 us/run - 4096 kB/run - 48.92 GB/s
SOFT_MAX(type=f32,ne=[131072,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 12294 runs - 82.41 us/run - 16384 kB/run - 189.64 GB/s
SOFT_MAX(type=f32,ne=[262144,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 8191 runs - 145.16 us/run - 2048 kB/run - 13.46 GB/s
SOFT_MAX(type=f32,ne=[262144,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 8194 runs - 155.46 us/run - 8192 kB/run - 50.26 GB/s
SOFT_MAX(type=f32,ne=[262144,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 7175 runs - 160.70 us/run - 32768 kB/run - 194.56 GB/s
SOFT_MAX(type=f32,ne=[524288,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 8191 runs - 285.81 us/run - 4096 kB/run - 13.67 GB/s
SOFT_MAX(type=f32,ne=[524288,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 4098 runs - 306.91 us/run - 16384 kB/run - 50.92 GB/s
SOFT_MAX(type=f32,ne=[524288,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 3591 runs - 317.06 us/run - 65536 kB/run - 197.32 GB/s
-- After
Backend 1/2: CUDA0
Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Device memory: 97250 MB (96691 MB free)
SOFT_MAX(type=f32,ne=[4096,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 2236 runs - 450.67 us/run - 655360 kB/run - 1400.15 GB/s
SOFT_MAX(type=f32,ne=[12888,256,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 17748 runs - 56.97 us/run - 128880 kB/run - 2161.50 GB/s
SOFT_MAX(type=f32,ne=[77,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 57204 runs - 18.35 us/run - 12320 kB/run - 640.36 GB/s
SOFT_MAX(type=f32,ne=[1024,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 9840 runs - 102.46 us/run - 81920 kB/run - 763.42 GB/s
SOFT_MAX(type=f32,ne=[77,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98064 runs - 10.25 us/run - 6160 kB/run - 573.43 GB/s
SOFT_MAX(type=f32,ne=[256,256,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98310 runs - 10.25 us/run - 10240 kB/run - 953.21 GB/s
SOFT_MAX(type=f32,ne=[64,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 147438 runs - 7.00 us/run - 640 kB/run - 87.26 GB/s
SOFT_MAX(type=f32,ne=[77,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 147438 runs - 6.99 us/run - 770 kB/run - 105.05 GB/s
SOFT_MAX(type=f32,ne=[8192,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 172011 runs - 6.02 us/run - 64 kB/run - 10.13 GB/s
SOFT_MAX(type=f32,ne=[8192,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 163820 runs - 6.12 us/run - 256 kB/run - 39.87 GB/s
SOFT_MAX(type=f32,ne=[8192,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 147438 runs - 6.91 us/run - 1024 kB/run - 141.40 GB/s
SOFT_MAX(type=f32,ne=[16384,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.20 us/run - 128 kB/run - 14.89 GB/s
SOFT_MAX(type=f32,ne=[16384,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 114674 runs - 8.79 us/run - 512 kB/run - 55.54 GB/s
SOFT_MAX(type=f32,ne=[16384,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98292 runs - 10.24 us/run - 2048 kB/run - 190.82 GB/s
SOFT_MAX(type=f32,ne=[32768,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 131056 runs - 8.11 us/run - 256 kB/run - 30.12 GB/s
SOFT_MAX(type=f32,ne=[32768,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 22.54 us/run - 1024 kB/run - 43.33 GB/s
SOFT_MAX(type=f32,ne=[32768,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 49146 runs - 23.32 us/run - 4096 kB/run - 167.50 GB/s
SOFT_MAX(type=f32,ne=[65536,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.19 us/run - 512 kB/run - 59.63 GB/s
SOFT_MAX(type=f32,ne=[65536,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 40955 runs - 24.59 us/run - 2048 kB/run - 79.43 GB/s
SOFT_MAX(type=f32,ne=[65536,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 24582 runs - 43.21 us/run - 8192 kB/run - 180.84 GB/s
SOFT_MAX(type=f32,ne=[131072,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.19 us/run - 1024 kB/run - 119.25 GB/s
SOFT_MAX(type=f32,ne=[131072,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 40955 runs - 24.59 us/run - 4096 kB/run - 158.87 GB/s
SOFT_MAX(type=f32,ne=[131072,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 12294 runs - 82.37 us/run - 16384 kB/run - 189.74 GB/s
SOFT_MAX(type=f32,ne=[262144,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 122865 runs - 8.20 us/run - 2048 kB/run - 238.28 GB/s
SOFT_MAX(type=f32,ne=[262144,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 36873 runs - 28.66 us/run - 8192 kB/run - 272.61 GB/s
SOFT_MAX(type=f32,ne=[262144,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 9225 runs - 108.51 us/run - 32768 kB/run - 288.13 GB/s
SOFT_MAX(type=f32,ne=[524288,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 98292 runs - 10.24 us/run - 4096 kB/run - 381.65 GB/s
SOFT_MAX(type=f32,ne=[524288,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 32784 runs - 31.74 us/run - 16384 kB/run - 492.43 GB/s
SOFT_MAX(type=f32,ne=[524288,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0): 8721 runs - 121.20 us/run - 65536 kB/run - 516.19 GB/s
```
* Fix compiler warnings by casting `const` away
* llama : require backend samplers to be of type llama_sampler_chain
* sampling : use host buffer type for inputs
* Try fixing HIP build errors by adding corresponding #defines
Will likely have to disable for MUSA as I didn't find any docs online
* Fix launch logic when supports_cooperative_launch=false
* Disable cooperative groups for musa
Didn't find any doc online, so I don't even know if they support this
* server : reconnect the backend_sampling setting in the WebUI
* graph : make the compute graph constant with respect to active samplers
* batch : fix sequence id ownership
* graph : respect sampler order for graph reuse
* HIP/MUSA: fix build for backend sampling
* sampling : optimize logit_bias sampler
* cont : fix build
* sampling : generic ggml op support detection
* sampling : fix greedy
* tests : run backend sampler tests always on the CPU
* Apply suggestions from code review
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* webui : fix lint
* Fix data-race in `soft_max_f32_parallelize_cols_single_row`
By using `tmp_vals` to store both the max values and the exponential
accumulator there was a potential data race, where the exponential
accumulator for a given CTA may have been written to `tmp_vals` before
all other CTAs had read the max value from it.
To avoid a third g.sync(), an additional temporary data storage was
added. Given that there are syncs in place after writing to gmem, it is
now guaranteed that the previous values for sums/max were read by all CTAs.
* Apply automated code-formatting to softmax.cu
* llama : clarify backend_accept/backend_set_input comments [no ci]
* llama : fix typo in comment [no ci]
* tests : use smart pointers for backend samplers
* tests : use smart pointers for model and context
* tests : remove vocab member from test_model_context
Also includes some minor cleanups related to nullptr checks.
* tests : extract batch info update to separate method
* tests : fix batch token position tracking in test_backend_sampler.cpp
* tests : add --device option support to backend sampler tests
This commit adds support for specifying a device to run the test on.
* common : disable backend sampling when grammar is involved
* Fix different RNG-states between backend-sampling and llama-sampling
By default, we perform a warm-up step where the ggml_cgraph is computed
once. For backend sampling, this graph contains the sampler, and thus
the RNG state of the backend's dist sampler is advanced once.
The solution is to reset the samplers after the warm-up has finished.
* Make backend dist sampler use same rnd's as dist sampler
We sample in double precision and cast to float to match the random
numbers of llama_sampler_dist, which uses double precision (sampling
from std::uniform_real_distribution<double> and
std::uniform_real_distribution<float> with the same rng will produce
different sequences).
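A minimal sketch of the matching draw (illustrative):
```cpp
#include <random>

static float draw_uniform(std::mt19937 & rng) {
    std::uniform_real_distribution<double> dis(0.0, 1.0);
    return (float) dis(rng); // a <float> distribution over the same engine would yield a different sequence
}
```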
* Update CCCL version to v3.2.0-rc2
* Build with CCCL 3.2 for CUDA backends
Gives best perf for backend-sampling on CUDA. Flag can be removed once
CCCL 3.2 is bundled within CTK and that CTK version is used in llama.cpp
* tests : revert server test changes (no longer needed)
* ggml : include cub/cub.cuh instead of block_scan.cuh
This commit updates the include directive in cumsum.cu to use
cub/cub.cuh instead of cub/block/block_scan.cuh.
The motivation of this change is that without it compilation fails
with the following error:
```console
/llama.cpp/ggml/src/ggml-cuda/cumsum.cu(196): error: name followed by "::" must be a class or namespace name
cub::DeviceScan::InclusiveSum(nullptr,
^
/llama.cpp/ggml/src/ggml-cuda/cumsum.cu(207): error: name followed by "::" must be a class or namespace name
cub::DeviceScan::InclusiveSum((void *) tmp_alloc.get(), tmp_size, src, dst, ne, stream);
^
2 errors detected in the compilation of "/llama.cpp/ggml/src/ggml-cuda/cumsum.cu".
gmake[2]: *** [ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/build.make:317: ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/cumsum.cu.o] Error 2
```
Commit 83b3b1c271 ("cuda: optimize cumsum cub path (#18362)") updated
the include directive, replacing device_scan.cuh, which caused this
issue.
This commit uses the cub/cub.cuh umbrella header, which is consistent
with other files in the ggml-cuda directory like mean.cu, sum.cu, etc.
* arg : add shorthand for --backend-sampling
* ci : add server workflow with backend sampling
* sampling : fix reshapes
* server : remove printfs
* sampling : zero-initialize input buffers
* minor : add comments + some cleanup
* llama : assert at most one output token per sequence
* tests : add more top_k tests
* CUDA: Fix non-determinism of CUB-based Top-K
DeviceTopK::MaxPairs is an iterative algorithm, where `d_keys_out` is
written after every iteration. As a consequence, it must not overlap
with `d_keys_in`, otherwise undefined behavior occurs (keys are no
longer unique in `d_keys_in` and may map to different values between
iterations).
* CUDA: Optimize index of top_k_cub
By using the fancy
[`counting_iterator`](https://nvidia.github.io/cccl/thrust/api/classthrust_1_1counting__iterator.html#classthrust_1_1counting__iterator)
exposed by CCCL, we can avoid materializing the index to GPU memory,
saving VRAM + 1 kernel invocation
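A tiny sketch of the idea (illustrative):
```cpp
#include <thrust/iterator/counting_iterator.h>

// an implicit 0, 1, 2, ... index stream: no VRAM allocation and no fill kernel needed
thrust::counting_iterator<int> idx_first(0);
// idx_first is passed wherever the pair-sort/top-k call expects the "values in" sequence
```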
* Apply code-formatting to top-k.cu
* CUDA: Remove obsolete temp_keys from CUB
Since we use cuda::discard_iterator to avoid writing out the keys, we
can directly pass in src instead of copying it to `temp_keys`
* minor : cleanup, TODOs, etc.
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Oliver Simons <osimons@nvidia.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
#ifndef LLAMA_H
#define LLAMA_H

#include "ggml.h"
#include "ggml-cpu.h"
#include "ggml-backend.h"
#include "ggml-opt.h"

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#ifdef LLAMA_SHARED
# if defined(_WIN32) && !defined(__MINGW32__)
#  ifdef LLAMA_BUILD
#   define LLAMA_API __declspec(dllexport)
#  else
#   define LLAMA_API __declspec(dllimport)
#  endif
# else
#  define LLAMA_API __attribute__ ((visibility ("default")))
# endif
#else
# define LLAMA_API
#endif

#ifdef __GNUC__
# define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
#elif defined(_MSC_VER)
# define DEPRECATED(func, hint) __declspec(deprecated(hint)) func
#else
# define DEPRECATED(func, hint) func
#endif

#define LLAMA_DEFAULT_SEED 0xFFFFFFFF

#define LLAMA_TOKEN_NULL -1

#define LLAMA_FILE_MAGIC_GGLA 0x67676c61u // 'ggla'
#define LLAMA_FILE_MAGIC_GGSN 0x6767736eu // 'ggsn'
#define LLAMA_FILE_MAGIC_GGSQ 0x67677371u // 'ggsq'

#define LLAMA_SESSION_MAGIC LLAMA_FILE_MAGIC_GGSN
#define LLAMA_SESSION_VERSION 9

#define LLAMA_STATE_SEQ_MAGIC LLAMA_FILE_MAGIC_GGSQ
#define LLAMA_STATE_SEQ_VERSION 2

#ifdef __cplusplus
extern "C" {
#endif

//
// C interface
//
// TODO: show sample usage
//

struct llama_vocab;
struct llama_model;
struct llama_context;
struct llama_sampler;

typedef struct llama_memory_i * llama_memory_t;

typedef int32_t llama_pos;
typedef int32_t llama_token;
typedef int32_t llama_seq_id;

enum llama_vocab_type {
    LLAMA_VOCAB_TYPE_NONE = 0, // For models without vocab
    LLAMA_VOCAB_TYPE_SPM = 1, // LLaMA tokenizer based on byte-level BPE with byte fallback
    LLAMA_VOCAB_TYPE_BPE = 2, // GPT-2 tokenizer based on byte-level BPE
    LLAMA_VOCAB_TYPE_WPM = 3, // BERT tokenizer based on WordPiece
    LLAMA_VOCAB_TYPE_UGM = 4, // T5 tokenizer based on Unigram
    LLAMA_VOCAB_TYPE_RWKV = 5, // RWKV tokenizer based on greedy tokenization
    LLAMA_VOCAB_TYPE_PLAMO2 = 6, // PLaMo-2 tokenizer based on Aho-Corasick with dynamic programming
};

enum llama_rope_type {
    LLAMA_ROPE_TYPE_NONE = -1,
    LLAMA_ROPE_TYPE_NORM = 0,
    LLAMA_ROPE_TYPE_NEOX = GGML_ROPE_TYPE_NEOX,
    LLAMA_ROPE_TYPE_MROPE = GGML_ROPE_TYPE_MROPE,
    LLAMA_ROPE_TYPE_IMROPE = GGML_ROPE_TYPE_IMROPE,
    LLAMA_ROPE_TYPE_VISION = GGML_ROPE_TYPE_VISION,
};

enum llama_token_type { //TODO: remove, required until per token attributes are available from GGUF file
    LLAMA_TOKEN_TYPE_UNDEFINED = 0,
    LLAMA_TOKEN_TYPE_NORMAL = 1,
    LLAMA_TOKEN_TYPE_UNKNOWN = 2,
    LLAMA_TOKEN_TYPE_CONTROL = 3,
    LLAMA_TOKEN_TYPE_USER_DEFINED = 4,
    LLAMA_TOKEN_TYPE_UNUSED = 5,
    LLAMA_TOKEN_TYPE_BYTE = 6,
};

enum llama_token_attr {
    LLAMA_TOKEN_ATTR_UNDEFINED = 0,
    LLAMA_TOKEN_ATTR_UNKNOWN = 1 << 0,
    LLAMA_TOKEN_ATTR_UNUSED = 1 << 1,
    LLAMA_TOKEN_ATTR_NORMAL = 1 << 2,
    LLAMA_TOKEN_ATTR_CONTROL = 1 << 3, // SPECIAL?
    LLAMA_TOKEN_ATTR_USER_DEFINED = 1 << 4,
    LLAMA_TOKEN_ATTR_BYTE = 1 << 5,
    LLAMA_TOKEN_ATTR_NORMALIZED = 1 << 6,
    LLAMA_TOKEN_ATTR_LSTRIP = 1 << 7,
    LLAMA_TOKEN_ATTR_RSTRIP = 1 << 8,
    LLAMA_TOKEN_ATTR_SINGLE_WORD = 1 << 9,
};

// model file types
enum llama_ftype {
    LLAMA_FTYPE_ALL_F32 = 0,
    LLAMA_FTYPE_MOSTLY_F16 = 1, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_0 = 2, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_1 = 3, // except 1d tensors
    // LLAMA_FTYPE_MOSTLY_Q4_1_SOME_F16 = 4, // tok_embeddings.weight and output.weight are F16
    // LLAMA_FTYPE_MOSTLY_Q4_2 = 5, // support has been removed
    // LLAMA_FTYPE_MOSTLY_Q4_3 = 6, // support has been removed
    LLAMA_FTYPE_MOSTLY_Q8_0 = 7, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q5_0 = 8, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q5_1 = 9, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q2_K = 10, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q3_K_S = 11, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q3_K_M = 12, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q3_K_L = 13, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_K_S = 14, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_K_M = 15, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q5_K_S = 16, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q5_K_M = 17, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q6_K = 18, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ2_XXS = 19, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ2_XS = 20, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q2_K_S = 21, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ3_XS = 22, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ3_XXS = 23, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ1_S = 24, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ4_NL = 25, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ3_S = 26, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ3_M = 27, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ2_S = 28, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ2_M = 29, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ4_XS = 30, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_IQ1_M = 31, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_BF16 = 32, // except 1d tensors
    //LLAMA_FTYPE_MOSTLY_Q4_0_4_4 = 33, // removed from gguf files, use Q4_0 and runtime repack
    //LLAMA_FTYPE_MOSTLY_Q4_0_4_8 = 34, // removed from gguf files, use Q4_0 and runtime repack
    //LLAMA_FTYPE_MOSTLY_Q4_0_8_8 = 35, // removed from gguf files, use Q4_0 and runtime repack
    LLAMA_FTYPE_MOSTLY_TQ1_0 = 36, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_TQ2_0 = 37, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_MXFP4_MOE = 38, // except 1d tensors

    LLAMA_FTYPE_GUESSED = 1024, // not specified in the model file
};

enum llama_rope_scaling_type {
    LLAMA_ROPE_SCALING_TYPE_UNSPECIFIED = -1,
    LLAMA_ROPE_SCALING_TYPE_NONE = 0,
    LLAMA_ROPE_SCALING_TYPE_LINEAR = 1,
    LLAMA_ROPE_SCALING_TYPE_YARN = 2,
    LLAMA_ROPE_SCALING_TYPE_LONGROPE = 3,
    LLAMA_ROPE_SCALING_TYPE_MAX_VALUE = LLAMA_ROPE_SCALING_TYPE_LONGROPE,
};

enum llama_pooling_type {
    LLAMA_POOLING_TYPE_UNSPECIFIED = -1,
    LLAMA_POOLING_TYPE_NONE = 0,
    LLAMA_POOLING_TYPE_MEAN = 1,
    LLAMA_POOLING_TYPE_CLS = 2,
    LLAMA_POOLING_TYPE_LAST = 3,
    LLAMA_POOLING_TYPE_RANK = 4, // used by reranking models to attach the classification head to the graph
};

enum llama_attention_type {
    LLAMA_ATTENTION_TYPE_UNSPECIFIED = -1,
    LLAMA_ATTENTION_TYPE_CAUSAL = 0,
    LLAMA_ATTENTION_TYPE_NON_CAUSAL = 1,
};

enum llama_flash_attn_type {
    LLAMA_FLASH_ATTN_TYPE_AUTO = -1,
    LLAMA_FLASH_ATTN_TYPE_DISABLED = 0,
    LLAMA_FLASH_ATTN_TYPE_ENABLED = 1,
};

LLAMA_API const char * llama_flash_attn_type_name(enum llama_flash_attn_type flash_attn_type);

enum llama_split_mode {
    LLAMA_SPLIT_MODE_NONE = 0, // single GPU
    LLAMA_SPLIT_MODE_LAYER = 1, // split layers and KV across GPUs
    LLAMA_SPLIT_MODE_ROW = 2, // split layers and KV across GPUs, use tensor parallelism if supported
};

// TODO: simplify (https://github.com/ggml-org/llama.cpp/pull/9294#pullrequestreview-2286561979)
typedef struct llama_token_data {
    llama_token id; // token id
    float logit; // log-odds of the token
    float p; // probability of the token
} llama_token_data;

typedef struct llama_token_data_array {
    // TODO: consider SoA
    // NOTE: this pointer can be modified by the samplers
    llama_token_data * data;
    size_t size;
    int64_t selected; // this is the index in the data array (i.e. not the token id)
    bool sorted; // note: do not assume the data is sorted - always check this flag
} llama_token_data_array;

typedef bool (*llama_progress_callback)(float progress, void * user_data);

// Input data for llama_encode/llama_decode
// A llama_batch object can contain input about one or many sequences
// The provided arrays (i.e. token, embd, pos, etc.) must have size of n_tokens
//
// - token : the token ids of the input (used when embd is NULL)
// - embd : token embeddings (i.e. float vector of size n_embd) (used when token is NULL)
|
||
// - pos : the positions of the respective token in the sequence
|
||
// (if set to NULL, the token position will be tracked automatically by llama_encode/llama_decode)
|
||
// - seq_id : the sequence to which the respective token belongs
|
||
// (if set to NULL, the sequence ID will be assumed to be 0)
|
||
// - logits : if zero, the logits (and/or the embeddings) for the respective token will not be output
|
||
// (if set to NULL:
|
||
// - if embeddings: all tokens are output
|
||
// - if not: only the last token is output
|
||
// )
|
||
//
|
||
typedef struct llama_batch {
|
||
int32_t n_tokens;
|
||
|
||
llama_token * token;
|
||
float * embd;
|
||
llama_pos * pos;
|
||
int32_t * n_seq_id;
|
||
llama_seq_id ** seq_id;
|
||
int8_t * logits; // TODO: rename this to "output"
|
||
} llama_batch;
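
// Illustrative sketch (not part of the API): manually filling a batch for a short
// prompt on sequence 0, requesting output only for the last token. The names
// `prompt_tokens` and `n_prompt` are assumptions about the calling code.
//
//     llama_batch batch = llama_batch_init(n_prompt, 0, 1);
//     batch.n_tokens = n_prompt;
//     for (int32_t i = 0; i < n_prompt; ++i) {
//         batch.token   [i]    = prompt_tokens[i];
//         batch.pos     [i]    = i;
//         batch.n_seq_id[i]    = 1;
//         batch.seq_id  [i][0] = 0;
//         batch.logits  [i]    = (i == n_prompt - 1); // output only the last token
//     }
//     // ... llama_decode(ctx, batch); ...
//     llama_batch_free(batch);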

enum llama_model_kv_override_type {
    LLAMA_KV_OVERRIDE_TYPE_INT,
    LLAMA_KV_OVERRIDE_TYPE_FLOAT,
    LLAMA_KV_OVERRIDE_TYPE_BOOL,
    LLAMA_KV_OVERRIDE_TYPE_STR,
};

enum llama_model_meta_key {
    LLAMA_MODEL_META_KEY_SAMPLING_SEQUENCE,
    LLAMA_MODEL_META_KEY_SAMPLING_TOP_K,
    LLAMA_MODEL_META_KEY_SAMPLING_TOP_P,
    LLAMA_MODEL_META_KEY_SAMPLING_MIN_P,
    LLAMA_MODEL_META_KEY_SAMPLING_XTC_PROBABILITY,
    LLAMA_MODEL_META_KEY_SAMPLING_XTC_THRESHOLD,
    LLAMA_MODEL_META_KEY_SAMPLING_TEMP,
    LLAMA_MODEL_META_KEY_SAMPLING_PENALTY_LAST_N,
    LLAMA_MODEL_META_KEY_SAMPLING_PENALTY_REPEAT,
    LLAMA_MODEL_META_KEY_SAMPLING_MIROSTAT,
    LLAMA_MODEL_META_KEY_SAMPLING_MIROSTAT_TAU,
    LLAMA_MODEL_META_KEY_SAMPLING_MIROSTAT_ETA,
};

struct llama_model_kv_override {
    enum llama_model_kv_override_type tag;

    char key[128];

    union {
        int64_t val_i64;
        double  val_f64;
        bool    val_bool;
        char    val_str[128];
    };
};

struct llama_model_tensor_buft_override {
    const char * pattern;
    ggml_backend_buffer_type_t buft;
};

struct llama_model_params {
    // NULL-terminated list of devices to use for offloading (if NULL, all available devices are used)
    ggml_backend_dev_t * devices;

    // NULL-terminated list of buffer types to use for tensors that match a pattern
    const struct llama_model_tensor_buft_override * tensor_buft_overrides;

    int32_t n_gpu_layers; // number of layers to store in VRAM, a negative value means all layers
    enum llama_split_mode split_mode; // how to split the model across multiple GPUs

    // the GPU that is used for the entire model when split_mode is LLAMA_SPLIT_MODE_NONE
    int32_t main_gpu;

    // proportion of the model (layers or rows) to offload to each GPU, size: llama_max_devices()
    const float * tensor_split;

    // Called with a progress value between 0.0 and 1.0. Pass NULL to disable.
    // If the provided progress_callback returns true, model loading continues.
    // If it returns false, model loading is immediately aborted.
    llama_progress_callback progress_callback;

    // context pointer passed to the progress callback
    void * progress_callback_user_data;

    // override key-value pairs of the model meta data
    const struct llama_model_kv_override * kv_overrides;

    // Keep the booleans together to avoid misalignment during copy-by-value.
    bool vocab_only;      // only load the vocabulary, no weights
    bool use_mmap;        // use mmap if possible
    bool use_mlock;       // force system to keep model in RAM
    bool check_tensors;   // validate model tensor data
    bool use_extra_bufts; // use extra buffer types (used for weight repacking)
    bool no_host;         // bypass host buffer allowing extra buffers to be used
    bool no_alloc;        // only load metadata and simulate memory allocations
};

struct llama_sampler_seq_config {
    llama_seq_id seq_id;
    struct llama_sampler * sampler;
};

// NOTE: changing the default values of parameters marked as [EXPERIMENTAL] may cause crashes or incorrect results in certain configurations
// https://github.com/ggml-org/llama.cpp/pull/7544
struct llama_context_params {
    uint32_t n_ctx;           // text context, 0 = from model
    uint32_t n_batch;         // logical maximum batch size that can be submitted to llama_decode
    uint32_t n_ubatch;        // physical maximum batch size
    uint32_t n_seq_max;       // max number of sequences (i.e. distinct states for recurrent models)
    int32_t  n_threads;       // number of threads to use for generation
    int32_t  n_threads_batch; // number of threads to use for batch processing

    enum llama_rope_scaling_type rope_scaling_type; // RoPE scaling type, from `enum llama_rope_scaling_type`
    enum llama_pooling_type      pooling_type;      // whether to pool (sum) embedding results by sequence id
    enum llama_attention_type    attention_type;    // attention type to use for embeddings
    enum llama_flash_attn_type   flash_attn_type;   // when to enable Flash Attention

    // ref: https://github.com/ggml-org/llama.cpp/pull/2054
    float    rope_freq_base;   // RoPE base frequency, 0 = from model
    float    rope_freq_scale;  // RoPE frequency scaling factor, 0 = from model
    float    yarn_ext_factor;  // YaRN extrapolation mix factor, negative = from model
    float    yarn_attn_factor; // YaRN magnitude scaling factor
    float    yarn_beta_fast;   // YaRN low correction dim
    float    yarn_beta_slow;   // YaRN high correction dim
    uint32_t yarn_orig_ctx;    // YaRN original context size
    float    defrag_thold;     // [DEPRECATED] defragment the KV cache if holes/size > thold, <= 0 disabled (default)

    ggml_backend_sched_eval_callback cb_eval;
    void * cb_eval_user_data;

    enum ggml_type type_k; // data type for K cache [EXPERIMENTAL]
    enum ggml_type type_v; // data type for V cache [EXPERIMENTAL]

    // Abort callback
    // if it returns true, execution of llama_decode() will be aborted
    // currently works only with CPU execution
    ggml_abort_callback abort_callback;
    void *              abort_callback_data;

    // Keep the booleans together and at the end of the struct to avoid misalignment during copy-by-value.
    bool embeddings;  // if true, extract embeddings (together with logits)
    bool offload_kqv; // offload the KQV ops (including the KV cache) to GPU
    bool no_perf;     // measure performance timings
    bool op_offload;  // offload host tensor operations to device
    bool swa_full;    // use full-size SWA cache (https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
                      // NOTE: setting to false when n_seq_max > 1 can cause bad performance in some cases
                      //       ref: https://github.com/ggml-org/llama.cpp/pull/13845#issuecomment-2924800573
    bool kv_unified;  // use a unified buffer across the input sequences when computing the attention
                      // try to disable when n_seq_max > 1 for improved performance when the sequences do not share a large prefix
                      //       ref: https://github.com/ggml-org/llama.cpp/pull/14363

    // [EXPERIMENTAL]
    // backend sampler chain configuration (make sure the caller keeps the sampler chains alive)
    // note: the samplers must be sampler chains (i.e. use llama_sampler_chain_init)
    struct llama_sampler_seq_config * samplers;
    size_t n_samplers;
};
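
// Illustrative sketch (not part of the API): attaching a backend sampler chain to
// sequence 0 at context creation. The chain must stay alive for the lifetime of the
// context; whether a given sampler actually runs on the backend depends on backend
// support for the required ops.
//
//     llama_sampler * chain = llama_sampler_chain_init(llama_sampler_chain_default_params());
//     llama_sampler_chain_add(chain, llama_sampler_init_top_k(40));
//     llama_sampler_chain_add(chain, llama_sampler_init_temp (0.8f));
//     llama_sampler_chain_add(chain, llama_sampler_init_dist (1234));
//
//     struct llama_sampler_seq_config cfg = { /*seq_id =*/ 0, /*sampler =*/ chain };
//
//     struct llama_context_params cparams = llama_context_default_params();
//     cparams.samplers   = &cfg;
//     cparams.n_samplers = 1;
//     struct llama_context * ctx = llama_init_from_model(model, cparams);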

// model quantization parameters
typedef struct llama_model_quantize_params {
    int32_t nthread;                     // number of threads to use for quantizing, if <=0 will use std::thread::hardware_concurrency()
    enum llama_ftype ftype;              // quantize to this llama_ftype
    enum ggml_type output_tensor_type;   // output tensor type
    enum ggml_type token_embedding_type; // token embeddings tensor type
    bool allow_requantize;               // allow quantizing non-f32/f16 tensors
    bool quantize_output_tensor;         // quantize output.weight
    bool only_copy;                      // only copy tensors - ftype, allow_requantize and quantize_output_tensor are ignored
    bool pure;                           // quantize all tensors to the default type
    bool keep_split;                     // quantize to the same number of shards
    void * imatrix;                      // pointer to importance matrix data
    void * kv_overrides;                 // pointer to vector containing overrides
    void * tensor_types;                 // pointer to vector containing tensor types
    void * prune_layers;                 // pointer to vector containing layer indices to prune
} llama_model_quantize_params;

typedef struct llama_logit_bias {
    llama_token token;
    float bias;
} llama_logit_bias;

typedef struct llama_sampler_chain_params {
    bool no_perf; // whether to measure performance timings
} llama_sampler_chain_params;

// used in chat template
typedef struct llama_chat_message {
    const char * role;
    const char * content;
} llama_chat_message;

// lora adapter
struct llama_adapter_lora;

// Helpers for getting default parameters
// TODO: update API to start accepting pointers to params structs (https://github.com/ggml-org/llama.cpp/discussions/9172)
LLAMA_API struct llama_model_params          llama_model_default_params(void);
LLAMA_API struct llama_context_params        llama_context_default_params(void);
LLAMA_API struct llama_sampler_chain_params  llama_sampler_chain_default_params(void);
LLAMA_API struct llama_model_quantize_params llama_model_quantize_default_params(void);

// Initialize the llama + ggml backend
// If numa is true, use NUMA optimizations
// Call once at the start of the program
LLAMA_API void llama_backend_init(void);

// Call once at the end of the program - currently only used for MPI
LLAMA_API void llama_backend_free(void);

// optional:
LLAMA_API void llama_numa_init(enum ggml_numa_strategy numa);

// Optional: an auto threadpool gets created in ggml if not passed explicitly
LLAMA_API void llama_attach_threadpool(
        struct llama_context * ctx,
        ggml_threadpool_t threadpool,
        ggml_threadpool_t threadpool_batch);

LLAMA_API void llama_detach_threadpool(struct llama_context * ctx);

DEPRECATED(LLAMA_API struct llama_model * llama_load_model_from_file(
        const char * path_model,
        struct llama_model_params params),
        "use llama_model_load_from_file instead");

// Load the model from a file
// If the file is split into multiple parts, the file name must follow this pattern: <name>-%05d-of-%05d.gguf
// If the split file name does not follow this pattern, use llama_model_load_from_splits
LLAMA_API struct llama_model * llama_model_load_from_file(
        const char * path_model,
        struct llama_model_params params);

// Load the model from multiple splits (support custom naming scheme)
// The paths must be in the correct order
LLAMA_API struct llama_model * llama_model_load_from_splits(
        const char ** paths,
        size_t n_paths,
        struct llama_model_params params);

LLAMA_API void llama_model_save_to_file(
        const struct llama_model * model,
        const char * path_model);

DEPRECATED(LLAMA_API void llama_free_model(struct llama_model * model),
        "use llama_model_free instead");

LLAMA_API void llama_model_free(struct llama_model * model);

LLAMA_API struct llama_context * llama_init_from_model(
        struct llama_model * model,
        struct llama_context_params params);

DEPRECATED(LLAMA_API struct llama_context * llama_new_context_with_model(
        struct llama_model * model,
        struct llama_context_params params),
        "use llama_init_from_model instead");

// Frees all allocated memory
LLAMA_API void llama_free(struct llama_context * ctx);

enum llama_params_fit_status {
    LLAMA_PARAMS_FIT_STATUS_SUCCESS = 0, // found allocations that are projected to fit
    LLAMA_PARAMS_FIT_STATUS_FAILURE = 1, // could not find allocations that are projected to fit
    LLAMA_PARAMS_FIT_STATUS_ERROR   = 2, // a hard error occurred, e.g. because no model could be found at the specified path
};

// fits mparams and cparams to free device memory (assumes system memory is unlimited)
// - returns true if the parameters could be successfully modified to fit device memory
// - this function is NOT thread safe because it modifies the global llama logger state
// - only parameters that have the same value as in llama_default_model_params are modified
LLAMA_API enum llama_params_fit_status llama_params_fit(
        const char * path_model,
        struct llama_model_params * mparams,
        struct llama_context_params * cparams,
        float * tensor_split,                                             // writable buffer for tensor split, needs at least llama_max_devices elements
        struct llama_model_tensor_buft_override * tensor_buft_overrides, // writable buffer for overrides, needs at least llama_max_tensor_buft_overrides elements
        size_t margin,                                                    // margin of memory to leave per device in bytes
        uint32_t n_ctx_min,                                               // minimum context size to set when trying to reduce memory use
        enum ggml_log_level log_level);                                   // minimum log level to print during fitting, lower levels go to debug log

LLAMA_API int64_t llama_time_us(void);

LLAMA_API size_t llama_max_devices(void);
LLAMA_API size_t llama_max_parallel_sequences(void);
LLAMA_API size_t llama_max_tensor_buft_overrides(void);

LLAMA_API bool llama_supports_mmap       (void);
LLAMA_API bool llama_supports_mlock      (void);
LLAMA_API bool llama_supports_gpu_offload(void);
LLAMA_API bool llama_supports_rpc        (void);

// NOTE: After creating a llama_context, it is recommended to query the actual values using these functions
//       In some cases the requested values via llama_context_params may differ from the actual values used by the context
//       ref: https://github.com/ggml-org/llama.cpp/pull/17046#discussion_r2503085732
LLAMA_API uint32_t llama_n_ctx     (const struct llama_context * ctx);
LLAMA_API uint32_t llama_n_ctx_seq (const struct llama_context * ctx);
LLAMA_API uint32_t llama_n_batch   (const struct llama_context * ctx);
LLAMA_API uint32_t llama_n_ubatch  (const struct llama_context * ctx);
LLAMA_API uint32_t llama_n_seq_max (const struct llama_context * ctx);

DEPRECATED(LLAMA_API int32_t llama_n_ctx_train(const struct llama_model * model), "use llama_model_n_ctx_train instead");
DEPRECATED(LLAMA_API int32_t llama_n_embd     (const struct llama_model * model), "use llama_model_n_embd instead");
DEPRECATED(LLAMA_API int32_t llama_n_layer    (const struct llama_model * model), "use llama_model_n_layer instead");
DEPRECATED(LLAMA_API int32_t llama_n_head     (const struct llama_model * model), "use llama_model_n_head instead");

DEPRECATED(LLAMA_API int32_t llama_n_vocab    (const struct llama_vocab * vocab), "use llama_vocab_n_tokens instead");

LLAMA_API const struct llama_model * llama_get_model   (const struct llama_context * ctx);
LLAMA_API           llama_memory_t   llama_get_memory  (const struct llama_context * ctx);
LLAMA_API  enum llama_pooling_type   llama_pooling_type(const struct llama_context * ctx); // TODO: rename to llama_get_pooling_type

LLAMA_API const struct llama_vocab * llama_model_get_vocab(const struct llama_model * model);
LLAMA_API enum llama_rope_type       llama_model_rope_type(const struct llama_model * model);

LLAMA_API int32_t llama_model_n_ctx_train(const struct llama_model * model);
LLAMA_API int32_t llama_model_n_embd     (const struct llama_model * model);
LLAMA_API int32_t llama_model_n_embd_inp (const struct llama_model * model);
LLAMA_API int32_t llama_model_n_layer    (const struct llama_model * model);
LLAMA_API int32_t llama_model_n_head    (const struct llama_model * model);
LLAMA_API int32_t llama_model_n_head_kv (const struct llama_model * model);
LLAMA_API int32_t llama_model_n_swa     (const struct llama_model * model);

// Get the model's RoPE frequency scaling factor
LLAMA_API float llama_model_rope_freq_scale_train(const struct llama_model * model);

// Returns the number of classifier outputs (only valid for classifier models)
// Undefined behavior for non-classifier models
LLAMA_API uint32_t llama_model_n_cls_out(const struct llama_model * model);

// Returns label of classifier output by index (<n_cls_out). Returns nullptr if no label provided
LLAMA_API const char * llama_model_cls_label(const struct llama_model * model, uint32_t i);

LLAMA_API enum llama_vocab_type llama_vocab_type(const struct llama_vocab * vocab);

LLAMA_API int32_t llama_vocab_n_tokens(const struct llama_vocab * vocab);

// Functions to access the model's GGUF metadata scalar values
//  - The functions return the length of the string on success, or -1 on failure
//  - The output string is always null-terminated and cleared on failure
//  - When retrieving a string, an extra byte must be allocated to account for the null terminator
//  - GGUF array values are not supported by these functions

// Get metadata value as a string by key name
LLAMA_API int32_t llama_model_meta_val_str(const struct llama_model * model, const char * key, char * buf, size_t buf_size);
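
// Illustrative sketch (not part of the API): reading a metadata string. The key name
// "general.architecture" is an assumption about typical GGUF metadata and may not be
// present in every model.
//
//     char arch[128];
//     if (llama_model_meta_val_str(model, "general.architecture", arch, sizeof(arch)) >= 0) {
//         printf("arch: %s\n", arch);
//     }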

// Get the number of metadata key/value pairs
LLAMA_API int32_t llama_model_meta_count(const struct llama_model * model);

// Get sampling metadata key name. Returns nullptr if the key is invalid
LLAMA_API const char * llama_model_meta_key_str(enum llama_model_meta_key key);

// Get metadata key name by index
LLAMA_API int32_t llama_model_meta_key_by_index(const struct llama_model * model, int32_t i, char * buf, size_t buf_size);

// Get metadata value as a string by index
LLAMA_API int32_t llama_model_meta_val_str_by_index(const struct llama_model * model, int32_t i, char * buf, size_t buf_size);

// Get a string describing the model type
LLAMA_API int32_t llama_model_desc(const struct llama_model * model, char * buf, size_t buf_size);

// Returns the total size of all the tensors in the model in bytes
LLAMA_API uint64_t llama_model_size(const struct llama_model * model);

// Get the default chat template. Returns nullptr if not available
// If name is NULL, returns the default chat template
LLAMA_API const char * llama_model_chat_template(const struct llama_model * model, const char * name);

// Returns the total number of parameters in the model
LLAMA_API uint64_t llama_model_n_params(const struct llama_model * model);

// Returns true if the model contains an encoder that requires llama_encode() call
LLAMA_API bool llama_model_has_encoder(const struct llama_model * model);

// Returns true if the model contains a decoder that requires llama_decode() call
LLAMA_API bool llama_model_has_decoder(const struct llama_model * model);

// For encoder-decoder models, this function returns id of the token that must be provided
// to the decoder to start generating output sequence. For other models, it returns -1.
LLAMA_API llama_token llama_model_decoder_start_token(const struct llama_model * model);

// Returns true if the model is recurrent (like Mamba, RWKV, etc.)
LLAMA_API bool llama_model_is_recurrent(const struct llama_model * model);

// Returns true if the model is hybrid (like Jamba, Granite, etc.)
LLAMA_API bool llama_model_is_hybrid(const struct llama_model * model);

// Returns true if the model is diffusion-based (like LLaDA, Dream, etc.)
LLAMA_API bool llama_model_is_diffusion(const struct llama_model * model);

// Returns 0 on success
LLAMA_API uint32_t llama_model_quantize(
        const char * fname_inp,
        const char * fname_out,
        const llama_model_quantize_params * params);

//
// Adapters
//

// Load a LoRA adapter from file
// The adapter is valid as long as the associated model is not freed
// All adapters must be loaded before context creation
LLAMA_API struct llama_adapter_lora * llama_adapter_lora_init(
        struct llama_model * model,
        const char * path_lora);

// Functions to access the adapter's GGUF metadata scalar values
//  - The functions return the length of the string on success, or -1 on failure
//  - The output string is always null-terminated and cleared on failure
//  - When retrieving a string, an extra byte must be allocated to account for the null terminator
//  - GGUF array values are not supported by these functions

// Get metadata value as a string by key name
LLAMA_API int32_t llama_adapter_meta_val_str(const struct llama_adapter_lora * adapter, const char * key, char * buf, size_t buf_size);

// Get the number of metadata key/value pairs
LLAMA_API int32_t llama_adapter_meta_count(const struct llama_adapter_lora * adapter);

// Get metadata key name by index
LLAMA_API int32_t llama_adapter_meta_key_by_index(const struct llama_adapter_lora * adapter, int32_t i, char * buf, size_t buf_size);

// Get metadata value as a string by index
LLAMA_API int32_t llama_adapter_meta_val_str_by_index(const struct llama_adapter_lora * adapter, int32_t i, char * buf, size_t buf_size);

// Manually free a LoRA adapter
// NOTE: loaded adapters will be freed when the associated model is deleted
LLAMA_API void llama_adapter_lora_free(struct llama_adapter_lora * adapter);

// Get the invocation tokens if the current lora is an alora
LLAMA_API uint64_t            llama_adapter_get_alora_n_invocation_tokens(const struct llama_adapter_lora * adapter);
LLAMA_API const llama_token * llama_adapter_get_alora_invocation_tokens  (const struct llama_adapter_lora * adapter);

// The following functions operate on a llama_context, hence the naming: llama_verb_...

// Add a loaded LoRA adapter to given context
// This will not modify the model's weights
LLAMA_API int32_t llama_set_adapter_lora(
        struct llama_context * ctx,
        struct llama_adapter_lora * adapter,
        float scale);

// Remove a specific LoRA adapter from given context
// Return -1 if the adapter is not present in the context
LLAMA_API int32_t llama_rm_adapter_lora(
        struct llama_context * ctx,
        struct llama_adapter_lora * adapter);

// Remove all LoRA adapters from given context
LLAMA_API void llama_clear_adapter_lora(struct llama_context * ctx);

// Apply a loaded control vector to a llama_context, or if data is NULL, clear
// the currently loaded vector.
// n_embd should be the size of a single layer's control, and data should point
// to an n_embd x n_layers buffer starting from layer 1.
// il_start and il_end are the layer range the vector should apply to (both inclusive)
// See llama_control_vector_load in common to load a control vector.
LLAMA_API int32_t llama_apply_adapter_cvec(
        struct llama_context * ctx,
        const float * data,
        size_t len,
        int32_t n_embd,
        int32_t il_start,
        int32_t il_end);

//
// Memory
//

// Clear the memory contents
// If data == true, the data buffers will also be cleared together with the metadata
LLAMA_API void llama_memory_clear(
        llama_memory_t mem,
        bool data);

// Removes all tokens that belong to the specified sequence and have positions in [p0, p1)
// Returns false if a partial sequence cannot be removed. Removing a whole sequence never fails
// seq_id < 0 : match any sequence
// p0 < 0     : [0,  p1]
// p1 < 0     : [p0, inf)
LLAMA_API bool llama_memory_seq_rm(
        llama_memory_t mem,
        llama_seq_id seq_id,
        llama_pos p0,
        llama_pos p1);

// Copy all tokens that belong to the specified sequence to another sequence
// p0 < 0 : [0,  p1]
// p1 < 0 : [p0, inf)
LLAMA_API void llama_memory_seq_cp(
        llama_memory_t mem,
        llama_seq_id seq_id_src,
        llama_seq_id seq_id_dst,
        llama_pos p0,
        llama_pos p1);

// Removes all tokens that do not belong to the specified sequence
LLAMA_API void llama_memory_seq_keep(
        llama_memory_t mem,
        llama_seq_id seq_id);

// Adds relative position "delta" to all tokens that belong to the specified sequence and have positions in [p0, p1)
// p0 < 0 : [0,  p1]
// p1 < 0 : [p0, inf)
LLAMA_API void llama_memory_seq_add(
        llama_memory_t mem,
        llama_seq_id seq_id,
        llama_pos p0,
        llama_pos p1,
        llama_pos delta);
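
// Illustrative sketch (not part of the API): a simple "context shift" that discards
// the oldest n_discard positions of sequence 0 and shifts the remaining tokens back.
// Assumes the memory supports shifting (see llama_memory_can_shift() below) and that
// `n_discard` is defined by the caller.
//
//     llama_memory_t mem = llama_get_memory(ctx);
//     llama_memory_seq_rm (mem, 0, 0,         n_discard);
//     llama_memory_seq_add(mem, 0, n_discard, -1, -n_discard);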

// Integer division of the positions by factor of `d > 1`
// p0 < 0 : [0,  p1]
// p1 < 0 : [p0, inf)
LLAMA_API void llama_memory_seq_div(
        llama_memory_t mem,
        llama_seq_id seq_id,
        llama_pos p0,
        llama_pos p1,
        int d);

// Returns the smallest position present in the memory for the specified sequence
// This is typically non-zero only for SWA caches
// Note that all positions in the range [pos_min, pos_max] are guaranteed to be present in the memory
// Return -1 if the sequence is empty
LLAMA_API llama_pos llama_memory_seq_pos_min(
        llama_memory_t mem,
        llama_seq_id seq_id);

// Returns the largest position present in the memory for the specified sequence
// Note that all positions in the range [pos_min, pos_max] are guaranteed to be present in the memory
// Return -1 if the sequence is empty
LLAMA_API llama_pos llama_memory_seq_pos_max(
        llama_memory_t mem,
        llama_seq_id seq_id);

// Check if the memory supports shifting
LLAMA_API bool llama_memory_can_shift(llama_memory_t mem);

//
// State / sessions
//

// Returns the *actual* size in bytes of the state
// (logits, embedding and memory)
// Only use when saving the state, not when restoring it, otherwise the size may be too small.
LLAMA_API size_t llama_state_get_size(struct llama_context * ctx);
LLAMA_API DEPRECATED(size_t llama_get_state_size(struct llama_context * ctx),
        "use llama_state_get_size instead");

// Copies the state to the specified destination address.
// Destination needs to have allocated enough memory.
// Returns the number of bytes copied
LLAMA_API size_t llama_state_get_data(
        struct llama_context * ctx,
        uint8_t * dst,
        size_t size);
LLAMA_API DEPRECATED(size_t llama_copy_state_data(
        struct llama_context * ctx,
        uint8_t * dst),
        "use llama_state_get_data instead");

// Set the state reading from the specified address
// Returns the number of bytes read
LLAMA_API size_t llama_state_set_data(
        struct llama_context * ctx,
        const uint8_t * src,
        size_t size);
LLAMA_API DEPRECATED(size_t llama_set_state_data(
        struct llama_context * ctx,
        const uint8_t * src),
        "use llama_state_set_data instead");

// Save/load session file
LLAMA_API bool llama_state_load_file(
        struct llama_context * ctx,
        const char * path_session,
        llama_token * tokens_out,
        size_t n_token_capacity,
        size_t * n_token_count_out);
LLAMA_API DEPRECATED(bool llama_load_session_file(
        struct llama_context * ctx,
        const char * path_session,
        llama_token * tokens_out,
        size_t n_token_capacity,
        size_t * n_token_count_out),
        "use llama_state_load_file instead");

LLAMA_API bool llama_state_save_file(
        struct llama_context * ctx,
        const char * path_session,
        const llama_token * tokens,
        size_t n_token_count);
LLAMA_API DEPRECATED(bool llama_save_session_file(
        struct llama_context * ctx,
        const char * path_session,
        const llama_token * tokens,
        size_t n_token_count),
        "use llama_state_save_file instead");

// Get the exact size needed to copy the state of a single sequence
LLAMA_API size_t llama_state_seq_get_size(
        struct llama_context * ctx,
        llama_seq_id seq_id);

// Copy the state of a single sequence into the specified buffer
LLAMA_API size_t llama_state_seq_get_data(
        struct llama_context * ctx,
        uint8_t * dst,
        size_t size,
        llama_seq_id seq_id);

// Copy the sequence data (originally copied with `llama_state_seq_get_data`) into the specified sequence
// Returns:
//  - Positive: Ok
//  - Zero: Failed to load
LLAMA_API size_t llama_state_seq_set_data(
        struct llama_context * ctx,
        const uint8_t * src,
        size_t size,
        llama_seq_id dest_seq_id);

LLAMA_API size_t llama_state_seq_save_file(
        struct llama_context * ctx,
        const char * filepath,
        llama_seq_id seq_id,
        const llama_token * tokens,
        size_t n_token_count);

LLAMA_API size_t llama_state_seq_load_file(
        struct llama_context * ctx,
        const char * filepath,
        llama_seq_id dest_seq_id,
        llama_token * tokens_out,
        size_t n_token_capacity,
        size_t * n_token_count_out);

// for backwards-compat
#define LLAMA_STATE_SEQ_FLAGS_SWA_ONLY 1

// work only with partial states, such as SWA KV cache or recurrent cache (e.g. Mamba)
#define LLAMA_STATE_SEQ_FLAGS_PARTIAL_ONLY 1

typedef uint32_t llama_state_seq_flags;

LLAMA_API size_t llama_state_seq_get_size_ext(
        struct llama_context * ctx,
        llama_seq_id seq_id,
        llama_state_seq_flags flags);

LLAMA_API size_t llama_state_seq_get_data_ext(
        struct llama_context * ctx,
        uint8_t * dst,
        size_t size,
        llama_seq_id seq_id,
        llama_state_seq_flags flags);

LLAMA_API size_t llama_state_seq_set_data_ext(
        struct llama_context * ctx,
        const uint8_t * src,
        size_t size,
        llama_seq_id dest_seq_id,
        llama_state_seq_flags flags);
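
// Illustrative sketch (not part of the API): taking a full snapshot of the context
// state into a heap buffer and restoring it later (error handling omitted).
//
//     const size_t n_state = llama_state_get_size(ctx);
//     uint8_t * state = (uint8_t *) malloc(n_state);
//     const size_t n_written = llama_state_get_data(ctx, state, n_state);
//     // ... later ...
//     llama_state_set_data(ctx, state, n_written);
//     free(state);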

//
// Decoding
//

// Return batch for single sequence of tokens
// The sequence ID will be fixed to 0
// The position of the tokens will be tracked automatically by llama_decode
//
// NOTE: this is a helper function to facilitate transition to the new batch API - avoid using it
//
LLAMA_API struct llama_batch llama_batch_get_one(
        llama_token * tokens,
        int32_t n_tokens);

// Allocates a batch of tokens on the heap that can hold a maximum of n_tokens
// Each token can be assigned up to n_seq_max sequence ids
// The batch has to be freed with llama_batch_free()
// If embd != 0, llama_batch.embd will be allocated with size of n_tokens * embd * sizeof(float)
// Otherwise, llama_batch.token will be allocated to store n_tokens llama_token
// The rest of the llama_batch members are allocated with size n_tokens
// All members are left uninitialized
LLAMA_API struct llama_batch llama_batch_init(
        int32_t n_tokens,
        int32_t embd,
        int32_t n_seq_max);

// Frees a batch of tokens allocated with llama_batch_init()
LLAMA_API void llama_batch_free(struct llama_batch batch);

// Process a batch of tokens.
// In contrast to llama_decode() - this call does not use KV cache.
// For encoder-decoder contexts, processes the batch using the encoder.
// Can store the encoder output internally for later use by the decoder's cross-attention layers.
//   0 - success
// < 0 - error. the memory state is restored to the state before this call
LLAMA_API int32_t llama_encode(
        struct llama_context * ctx,
        struct llama_batch batch);

// Process a batch of tokens.
// Requires the context to have a memory.
// For encoder-decoder contexts, processes the batch using the decoder.
// Positive return values do not mean a fatal error, but rather a warning.
// Upon fatal-error or abort, the ubatches that have been processed will remain in the memory state of the context
//   To handle this correctly, query the memory state using llama_memory_seq_pos_min() and llama_memory_seq_pos_max()
// Upon other return values, the memory state is restored to the state before this call
//    0 - success
//    1 - could not find a KV slot for the batch (try reducing the size of the batch or increase the context)
//    2 - aborted (processed ubatches will remain in the context's memory)
//   -1 - invalid input batch
// < -1 - fatal error (processed ubatches will remain in the context's memory)
LLAMA_API int32_t llama_decode(
        struct llama_context * ctx,
        struct llama_batch batch);

// Set the number of threads used for decoding
// n_threads is the number of threads used for generation (single token)
// n_threads_batch is the number of threads used for prompt and batch processing (multiple tokens)
LLAMA_API void llama_set_n_threads(struct llama_context * ctx, int32_t n_threads, int32_t n_threads_batch);

// Get the number of threads used for generation of a single token.
LLAMA_API int32_t llama_n_threads(struct llama_context * ctx);

// Get the number of threads used for prompt and batch processing (multiple tokens).
LLAMA_API int32_t llama_n_threads_batch(struct llama_context * ctx);

// Set whether the context outputs embeddings or not
// TODO: rename to avoid confusion with llama_get_embeddings()
LLAMA_API void llama_set_embeddings(struct llama_context * ctx, bool embeddings);

// Set whether to use causal attention or not
// If set to true, the model will only attend to the past tokens
LLAMA_API void llama_set_causal_attn(struct llama_context * ctx, bool causal_attn);

// Set whether the model is in warmup mode or not
// If true, all model tensors are activated during llama_decode() to load and cache their weights.
LLAMA_API void llama_set_warmup(struct llama_context * ctx, bool warmup);

// Set abort callback
LLAMA_API void llama_set_abort_callback(struct llama_context * ctx, ggml_abort_callback abort_callback, void * abort_callback_data);

// Wait until all computations are finished
// This is automatically done when using one of the functions below to obtain the computation results
// and it is not necessary to call it explicitly in most cases
LLAMA_API void llama_synchronize(struct llama_context * ctx);

// Token logits obtained from the last call to llama_decode()
// The logits for which llama_batch.logits[i] != 0 are stored contiguously
// in the order they have appeared in the batch.
// Rows: number of tokens for which llama_batch.logits[i] != 0
// Cols: n_vocab
// TODO: deprecate in favor of llama_get_logits_ith() (ref: https://github.com/ggml-org/llama.cpp/pull/14853#issuecomment-3113143522)
LLAMA_API float * llama_get_logits(struct llama_context * ctx);

// Logits for the ith token. For positive indices, equivalent to:
// llama_get_logits(ctx) + ctx->output_ids[i]*n_vocab
// Negative indices can be used to access logits in reverse order, -1 is the last logit.
// returns NULL for invalid ids.
LLAMA_API float * llama_get_logits_ith(struct llama_context * ctx, int32_t i);
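
// Illustrative sketch (not part of the API): manually picking the most likely token
// from the logits of the last output of the batch (equivalent in spirit to a "greedy"
// sampler).
//
//     const float * logits  = llama_get_logits_ith(ctx, -1);
//     const int32_t n_vocab = llama_vocab_n_tokens(llama_model_get_vocab(llama_get_model(ctx)));
//     llama_token best = 0;
//     for (int32_t t = 1; t < n_vocab; ++t) {
//         if (logits[t] > logits[best]) {
//             best = t;
//         }
//     }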

// Get all output token embeddings.
// when pooling_type == LLAMA_POOLING_TYPE_NONE or when using a generative model,
// the embeddings for which llama_batch.logits[i] != 0 are stored contiguously
// in the order they have appeared in the batch.
// shape: [n_outputs*n_embd]
// Otherwise, returns NULL.
// TODO: deprecate in favor of llama_get_embeddings_ith() (ref: https://github.com/ggml-org/llama.cpp/pull/14853#issuecomment-3113143522)
LLAMA_API float * llama_get_embeddings(struct llama_context * ctx);

// Get the embeddings for the ith token. For positive indices, equivalent to:
// llama_get_embeddings(ctx) + ctx->output_ids[i]*n_embd
// Negative indices can be used to access embeddings in reverse order, -1 is the last embedding.
// shape: [n_embd] (1-dimensional)
// returns NULL for invalid ids.
LLAMA_API float * llama_get_embeddings_ith(struct llama_context * ctx, int32_t i);

// Get the embeddings for a sequence id
// Returns NULL if pooling_type is LLAMA_POOLING_TYPE_NONE
// when pooling_type == LLAMA_POOLING_TYPE_RANK, returns float[n_cls_out] with the rank(s) of the sequence
// otherwise: float[n_embd] (1-dimensional)
LLAMA_API float * llama_get_embeddings_seq(struct llama_context * ctx, llama_seq_id seq_id);

//
// backend sampling API [EXPERIMENTAL]
// note: use only if the llama_context was created with at least one llama_sampler_seq_config
//

// Get the backend sampled token for the ith token.
// Returns LLAMA_TOKEN_NULL if no token was sampled.
LLAMA_API llama_token llama_get_sampled_token_ith(struct llama_context * ctx, int32_t i);

// Get the backend sampled probabilities for the ith token
// The index matches llama_get_sampled_token_ith().
// Returns NULL if no probabilities were generated.
LLAMA_API float *  llama_get_sampled_probs_ith      (struct llama_context * ctx, int32_t i);
LLAMA_API uint32_t llama_get_sampled_probs_count_ith(struct llama_context * ctx, int32_t i);

// Get the backend sampled logits for the ith token
// Returns NULL if no logits were sampled.
LLAMA_API float *  llama_get_sampled_logits_ith      (struct llama_context * ctx, int32_t i);
LLAMA_API uint32_t llama_get_sampled_logits_count_ith(struct llama_context * ctx, int32_t i);

// Get the backend sampled candidates (token ids) for the ith token
// These are needed to map probability/logit indices to vocab token ids.
// Returns NULL if no candidates were sampled.
LLAMA_API llama_token * llama_get_sampled_candidates_ith      (struct llama_context * ctx, int32_t i);
LLAMA_API uint32_t      llama_get_sampled_candidates_count_ith(struct llama_context * ctx, int32_t i);
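
// Illustrative sketch (not part of the API): consuming backend sampling results after
// llama_decode() for some output index i (assumed to follow the same indexing as the
// other *_ith getters). If the backend chain selected a token, only that token needs
// to be read; otherwise the (possibly filtered) probabilities and candidate ids can be
// handed over to CPU-side samplers.
//
//     const llama_token id = llama_get_sampled_token_ith(ctx, i);
//     if (id != LLAMA_TOKEN_NULL) {
//         // the token was selected directly on the backend
//     } else {
//         const float       * probs = llama_get_sampled_probs_ith      (ctx, i);
//         const llama_token * cands = llama_get_sampled_candidates_ith (ctx, i);
//         const uint32_t      n     = llama_get_sampled_probs_count_ith(ctx, i);
//         // probs[j] is the probability of token cands[j], for j in [0, n)
//     }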

//
// Vocab
//

LLAMA_API const char * llama_vocab_get_text(const struct llama_vocab * vocab, llama_token token);

LLAMA_API float llama_vocab_get_score(const struct llama_vocab * vocab, llama_token token);

LLAMA_API enum llama_token_attr llama_vocab_get_attr(const struct llama_vocab * vocab, llama_token token);

// Check if the token is supposed to end generation (end-of-generation, eg. EOS, EOT, etc.)
LLAMA_API bool llama_vocab_is_eog(const struct llama_vocab * vocab, llama_token token);

// Identify if the token id is a control token or a renderable token
LLAMA_API bool llama_vocab_is_control(const struct llama_vocab * vocab, llama_token token);

// Special tokens
LLAMA_API llama_token llama_vocab_bos (const struct llama_vocab * vocab); // beginning-of-sentence
LLAMA_API llama_token llama_vocab_eos (const struct llama_vocab * vocab); // end-of-sentence
LLAMA_API llama_token llama_vocab_eot (const struct llama_vocab * vocab); // end-of-turn
LLAMA_API llama_token llama_vocab_sep (const struct llama_vocab * vocab); // sentence separator
LLAMA_API llama_token llama_vocab_nl  (const struct llama_vocab * vocab); // next-line
LLAMA_API llama_token llama_vocab_pad (const struct llama_vocab * vocab); // padding
LLAMA_API llama_token llama_vocab_mask(const struct llama_vocab * vocab); // mask

LLAMA_API bool llama_vocab_get_add_bos(const struct llama_vocab * vocab);
LLAMA_API bool llama_vocab_get_add_eos(const struct llama_vocab * vocab);
LLAMA_API bool llama_vocab_get_add_sep(const struct llama_vocab * vocab);

LLAMA_API llama_token llama_vocab_fim_pre(const struct llama_vocab * vocab);
LLAMA_API llama_token llama_vocab_fim_suf(const struct llama_vocab * vocab);
LLAMA_API llama_token llama_vocab_fim_mid(const struct llama_vocab * vocab);
LLAMA_API llama_token llama_vocab_fim_pad(const struct llama_vocab * vocab);
LLAMA_API llama_token llama_vocab_fim_rep(const struct llama_vocab * vocab);
LLAMA_API llama_token llama_vocab_fim_sep(const struct llama_vocab * vocab);

DEPRECATED(LLAMA_API const char * llama_token_get_text(const struct llama_vocab * vocab, llama_token token), "use llama_vocab_get_text instead");
DEPRECATED(LLAMA_API float llama_token_get_score(const struct llama_vocab * vocab, llama_token token), "use llama_vocab_get_score instead");
DEPRECATED(LLAMA_API enum llama_token_attr llama_token_get_attr(const struct llama_vocab * vocab, llama_token token), "use llama_vocab_get_attr instead");
DEPRECATED(LLAMA_API bool llama_token_is_eog(const struct llama_vocab * vocab, llama_token token), "use llama_vocab_is_eog instead");
DEPRECATED(LLAMA_API bool llama_token_is_control(const struct llama_vocab * vocab, llama_token token), "use llama_vocab_is_control instead");
DEPRECATED(LLAMA_API llama_token llama_token_bos(const struct llama_vocab * vocab), "use llama_vocab_bos instead");
DEPRECATED(LLAMA_API llama_token llama_token_eos(const struct llama_vocab * vocab), "use llama_vocab_eos instead");
DEPRECATED(LLAMA_API llama_token llama_token_eot(const struct llama_vocab * vocab), "use llama_vocab_eot instead");
DEPRECATED(LLAMA_API llama_token llama_token_cls(const struct llama_vocab * vocab), "use llama_vocab_cls instead");
DEPRECATED(LLAMA_API llama_token llama_token_sep(const struct llama_vocab * vocab), "use llama_vocab_sep instead");
DEPRECATED(LLAMA_API llama_token llama_token_nl (const struct llama_vocab * vocab), "use llama_vocab_nl instead");
DEPRECATED(LLAMA_API llama_token llama_token_pad(const struct llama_vocab * vocab), "use llama_vocab_pad instead");
DEPRECATED(LLAMA_API bool llama_add_bos_token(const struct llama_vocab * vocab), "use llama_vocab_get_add_bos instead");
DEPRECATED(LLAMA_API bool llama_add_eos_token(const struct llama_vocab * vocab), "use llama_vocab_get_add_eos instead");
DEPRECATED(LLAMA_API llama_token llama_token_fim_pre(const struct llama_vocab * vocab), "use llama_vocab_fim_pre instead");
DEPRECATED(LLAMA_API llama_token llama_token_fim_suf(const struct llama_vocab * vocab), "use llama_vocab_fim_suf instead");
DEPRECATED(LLAMA_API llama_token llama_token_fim_mid(const struct llama_vocab * vocab), "use llama_vocab_fim_mid instead");
DEPRECATED(LLAMA_API llama_token llama_token_fim_pad(const struct llama_vocab * vocab), "use llama_vocab_fim_pad instead");
DEPRECATED(LLAMA_API llama_token llama_token_fim_rep(const struct llama_vocab * vocab), "use llama_vocab_fim_rep instead");
DEPRECATED(LLAMA_API llama_token llama_token_fim_sep(const struct llama_vocab * vocab), "use llama_vocab_fim_sep instead");

// CLS is equivalent to BOS
DEPRECATED(LLAMA_API llama_token llama_vocab_cls(const struct llama_vocab * vocab), // classification
        "use llama_vocab_bos instead");

//
// Tokenization
//
// The API is thread-safe.
//

/// @details Convert the provided text into tokens.
/// @param tokens The tokens pointer must be large enough to hold the resulting tokens.
/// @return Returns the number of tokens on success, no more than n_tokens_max
/// @return Returns a negative number on failure - the number of tokens that would have been returned
/// @return Returns INT32_MIN on overflow (e.g., tokenization result size exceeds int32_t limit)
/// @param add_special Allow adding BOS and EOS tokens if the model is configured to do so.
/// @param parse_special Allow tokenizing special and/or control tokens which otherwise are not exposed and treated
///                      as plaintext. Does not insert a leading space.
LLAMA_API int32_t llama_tokenize(
        const struct llama_vocab * vocab,
        const char * text,
        int32_t text_len,
        llama_token * tokens,
        int32_t n_tokens_max,
        bool add_special,
        bool parse_special);
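
// Illustrative sketch (not part of the API): tokenizing with the common retry pattern -
// start from a rough size guess and, if the buffer is too small, resize to the required
// count reported via the negative return value (error handling omitted).
//
//     int32_t cap = text_len + 2; // rough initial guess
//     llama_token * toks = (llama_token *) malloc(cap * sizeof(llama_token));
//     int32_t n = llama_tokenize(vocab, text, text_len, toks, cap, /*add_special=*/ true, /*parse_special=*/ true);
//     if (n < 0) {
//         cap  = -n; // -n is the required number of tokens
//         toks = (llama_token *) realloc(toks, cap * sizeof(llama_token));
//         n    = llama_tokenize(vocab, text, text_len, toks, cap, true, true);
//     }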

// Token Id -> Piece.
// Uses the vocabulary in the provided context.
// Does not write null terminator to the buffer.
// User can skip up to 'lstrip' leading spaces before copying (useful when encoding/decoding multiple tokens with 'add_space_prefix')
// @param special If true, special tokens are rendered in the output.
LLAMA_API int32_t llama_token_to_piece(
        const struct llama_vocab * vocab,
        llama_token token,
        char * buf,
        int32_t length,
        int32_t lstrip,
        bool special);

/// @details Convert the provided tokens into text (inverse of llama_tokenize()).
/// @param text The char pointer must be large enough to hold the resulting text.
/// @return Returns the number of chars/bytes on success, no more than text_len_max.
/// @return Returns a negative number on failure - the number of chars/bytes that would have been returned.
/// @param remove_special Allow removing BOS and EOS tokens if the model is configured to do so.
/// @param unparse_special If true, special tokens are rendered in the output.
LLAMA_API int32_t llama_detokenize(
        const struct llama_vocab * vocab,
        const llama_token * tokens,
        int32_t n_tokens,
        char * text,
        int32_t text_len_max,
        bool remove_special,
        bool unparse_special);

//
// Chat templates
//

/// Apply chat template. Inspired by hf apply_chat_template() on python.
/// Both "model" and "custom_template" are optional, but at least one is required. "custom_template" has higher precedence than "model"
/// NOTE: This function does not use a jinja parser. It only supports a pre-defined list of templates. See more: https://github.com/ggml-org/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template
/// @param tmpl A Jinja template to use for this chat. If this is nullptr, the model's default chat template will be used instead.
/// @param chat Pointer to a list of multiple llama_chat_message
/// @param n_msg Number of llama_chat_message in this chat
/// @param add_ass Whether to end the prompt with the token(s) that indicate the start of an assistant message.
/// @param buf A buffer to hold the output formatted prompt. The recommended alloc size is 2 * (total number of characters of all messages)
/// @param length The size of the allocated buffer
/// @return The total number of bytes of the formatted prompt. If it is larger than the size of the buffer, you may need to re-alloc it and then re-apply the template.
LLAMA_API int32_t llama_chat_apply_template(
        const char * tmpl,
        const struct llama_chat_message * chat,
        size_t n_msg,
        bool add_ass,
        char * buf,
        int32_t length);
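
// Illustrative sketch (not part of the API): applying the model's default chat template
// with the re-alloc-and-retry pattern described above. The names `msgs`, `n_msg` and
// `alloc_size` are assumptions about the calling code; std::vector/std::string are used
// for brevity.
//
//     const char * tmpl = llama_model_chat_template(model, /*name =*/ NULL);
//     std::vector<char> buf(alloc_size);
//     int32_t n = llama_chat_apply_template(tmpl, msgs, n_msg, /*add_ass=*/ true, buf.data(), (int32_t) buf.size());
//     if (n > (int32_t) buf.size()) {
//         buf.resize(n);
//         n = llama_chat_apply_template(tmpl, msgs, n_msg, true, buf.data(), (int32_t) buf.size());
//     }
//     std::string prompt(buf.data(), n);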

// Get list of built-in chat templates
LLAMA_API int32_t llama_chat_builtin_templates(const char ** output, size_t len);

//
// Sampling API
//
// Sample usage:
//
//    // prepare the sampling chain at the start
//    auto sparams = llama_sampler_chain_default_params();
//
//    llama_sampler * smpl = llama_sampler_chain_init(sparams);
//
//    llama_sampler_chain_add(smpl, llama_sampler_init_top_k(50));
//    llama_sampler_chain_add(smpl, llama_sampler_init_top_p(0.9, 1));
//    llama_sampler_chain_add(smpl, llama_sampler_init_temp (0.8));
//
//    // typically, the chain should end with a sampler such as "greedy", "dist" or "mirostat"
//    // this sampler will be responsible for selecting the actual token
//    llama_sampler_chain_add(smpl, llama_sampler_init_dist(seed));
//
//    ...
//
//    // decoding loop:
//    while (...) {
//        ...
//
//        llama_decode(ctx, batch);
//
//        // sample from the logits of the last token in the batch
//        const llama_token id = llama_sampler_sample(smpl, ctx, -1);
//
//        ...
//    }
//
//    llama_sampler_free(smpl);
//

typedef void * llama_sampler_context_t;

struct llama_sampler_data {
    struct ggml_tensor * logits;
    struct ggml_tensor * probs;
    struct ggml_tensor * sampled;
    struct ggml_tensor * candidates;
};

// user code can implement the interface below in order to create custom llama_sampler
struct llama_sampler_i {
    const char *           (*name)  (const struct llama_sampler * smpl);                                 // can be NULL
    void                   (*accept)(      struct llama_sampler * smpl, llama_token token);              // can be NULL
    void                   (*apply) (      struct llama_sampler * smpl, llama_token_data_array * cur_p); // required
    void                   (*reset) (      struct llama_sampler * smpl);                                 // can be NULL
    struct llama_sampler * (*clone) (const struct llama_sampler * smpl);                                 // can be NULL if ctx is NULL
    void                   (*free)  (      struct llama_sampler * smpl);                                 // can be NULL if ctx is NULL

    // [EXPERIMENTAL]
    // backend sampling interface:

    // return true if the backend supports all ops needed by the sampler
    // note: call once per sampler
    bool (*backend_init)(struct llama_sampler * smpl, ggml_backend_buffer_type_t buft);

    // call after .backend_apply()
    void (*backend_accept)(
            struct llama_sampler * smpl,
            struct ggml_context * ctx,
            struct ggml_cgraph * gf,
            struct ggml_tensor * selected_token);

    // call after .backend_init()
    void (*backend_apply)(
            struct llama_sampler * smpl,
            struct ggml_context * ctx,
            struct ggml_cgraph * gf,
            struct llama_sampler_data * data);

    // called before graph execution to set inputs for the current ubatch
    void (*backend_set_input)(struct llama_sampler * smpl);
};

struct llama_sampler {
    struct llama_sampler_i * iface;

    llama_sampler_context_t ctx;
};
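
// Illustrative sketch (not part of the API): a minimal custom CPU-side sampler that
// only implements the required .apply callback - here masking a hypothetical
// `forbidden_token` - leaving all optional callbacks, including the backend hooks, NULL.
// Assumes <math.h> for INFINITY.
//
//     static const char * my_name(const struct llama_sampler * smpl) { return "my-filter"; }
//
//     static void my_apply(struct llama_sampler * smpl, llama_token_data_array * cur_p) {
//         for (size_t j = 0; j < cur_p->size; ++j) {
//             if (cur_p->data[j].id == forbidden_token) {
//                 cur_p->data[j].logit = -INFINITY;
//             }
//         }
//     }
//
//     static struct llama_sampler_i my_iface = {
//         /*.name   =*/ my_name,
//         /*.accept =*/ NULL,
//         /*.apply  =*/ my_apply,
//         // remaining callbacks (reset, clone, free, backend_*) are zero-initialized
//     };
//
//     struct llama_sampler * smpl = llama_sampler_init(&my_iface, /*ctx =*/ NULL);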
|
||
|
||
// [EXPERIMENTAL]
|
||
// attach a sampler to the context
|
||
// note: prefer initializing the context with llama_context_params.samplers when possible
|
||
// note: changing the samplers of a context can cause graph reallocations and degraded performance
|
||
LLAMA_API bool llama_set_sampler(struct llama_context * ctx, llama_seq_id seq_id, struct llama_sampler * smpl);
|
||
|
||
// mirror of llama_sampler_i:
|
||
LLAMA_API struct llama_sampler * llama_sampler_init ( struct llama_sampler_i * iface, llama_sampler_context_t ctx);
|
||
LLAMA_API const char * llama_sampler_name (const struct llama_sampler * smpl);
|
||
LLAMA_API void llama_sampler_accept( struct llama_sampler * smpl, llama_token token);
|
||
LLAMA_API void llama_sampler_apply ( struct llama_sampler * smpl, llama_token_data_array * cur_p);
|
||
LLAMA_API void llama_sampler_reset ( struct llama_sampler * smpl);
|
||
LLAMA_API struct llama_sampler * llama_sampler_clone (const struct llama_sampler * smpl);
|
||
// important: do not free if the sampler has been added to a llama_sampler_chain (via llama_sampler_chain_add)
|
||
LLAMA_API void llama_sampler_free ( struct llama_sampler * smpl);
|
||
|
||
// llama_sampler_chain
|
||
// a type of llama_sampler that can chain multiple samplers one after another
|
||
|
||
LLAMA_API struct llama_sampler * llama_sampler_chain_init(struct llama_sampler_chain_params params);
|
||
|
||
// important: takes ownership of the sampler object and will free it when llama_sampler_free is called
|
||
LLAMA_API void llama_sampler_chain_add( struct llama_sampler * chain, struct llama_sampler * smpl);
|
||
|
||
// return NULL if:
|
||
// - the sampler is NULL
|
||
// - the sampler is not a llama_sampler_chain
|
||
// - the index is out of bounds, unless i == -1
|
||
// - if i == -1, returns the chain itself (can be used to check if the sampler is a chain)
|
||
LLAMA_API struct llama_sampler * llama_sampler_chain_get( struct llama_sampler * chain, int32_t i);
|
||
|
||
// the total number of samplers in the chain
|
||
LLAMA_API int llama_sampler_chain_n (const struct llama_sampler * chain);
|
||
|
||
// after removing a sampler, the chain will no longer own it, and it will not be freed when the chain is freed
|
||
LLAMA_API struct llama_sampler * llama_sampler_chain_remove( struct llama_sampler * chain, int32_t i);
|
||
|
||
    // available samplers:

    LLAMA_API struct llama_sampler * llama_sampler_init_greedy(void);
    LLAMA_API struct llama_sampler * llama_sampler_init_dist  (uint32_t seed);

    /// @details Top-K sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
    /// Setting k <= 0 makes this a noop
    LLAMA_API struct llama_sampler * llama_sampler_init_top_k (int32_t k);

    /// @details Nucleus sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
    LLAMA_API struct llama_sampler * llama_sampler_init_top_p (float p, size_t min_keep);

    /// @details Minimum P sampling as described in https://github.com/ggml-org/llama.cpp/pull/3841
    LLAMA_API struct llama_sampler * llama_sampler_init_min_p (float p, size_t min_keep);

    /// @details Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
    LLAMA_API struct llama_sampler * llama_sampler_init_typical (float p, size_t min_keep);

    /// @details Updates the logits l_i' = l_i/t. When t <= 0.0f, the maximum logit is kept at its original value, the rest are set to -inf
    LLAMA_API struct llama_sampler * llama_sampler_init_temp (float t);

    /// @details Dynamic temperature implementation (a.k.a. entropy) described in the paper https://arxiv.org/abs/2309.02772.
    LLAMA_API struct llama_sampler * llama_sampler_init_temp_ext (float t, float delta, float exponent);

    /// @details XTC sampler as described in https://github.com/oobabooga/text-generation-webui/pull/6335
    LLAMA_API struct llama_sampler * llama_sampler_init_xtc (float p, float t, size_t min_keep, uint32_t seed);

    /// @details Top n sigma sampling as described in academic paper "Top-nσ: Not All Logits Are You Need" https://arxiv.org/pdf/2411.07641
    LLAMA_API struct llama_sampler * llama_sampler_init_top_n_sigma(float n);

    /// @details Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
    /// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
    /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
    /// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.
    /// @param m The number of tokens considered in the estimation of `s_hat`. This is an arbitrary value that is used to calculate `s_hat`, which in turn helps to calculate the value of `k`. In the paper, they use `m = 100`, but you can experiment with different values to see how it affects the performance of the algorithm.
    /// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
    LLAMA_API struct llama_sampler * llama_sampler_init_mirostat(
            int32_t  n_vocab,
            uint32_t seed,
            float    tau,
            float    eta,
            int32_t  m);

    /// @details Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
    /// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
    /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
    /// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.
    /// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
    LLAMA_API struct llama_sampler * llama_sampler_init_mirostat_v2(
            uint32_t seed,
            float    tau,
            float    eta);
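The Mirostat samplers are selection samplers, so they normally sit at the end of a chain; the tau/eta values below are common starting points rather than requirements, and `chain` is the one from the earlier sketch:

```cpp
// illustrative: temperature scaling followed by Mirostat 2.0 as the final selection step
llama_sampler_chain_add(chain, llama_sampler_init_temp(0.8f));
llama_sampler_chain_add(chain, llama_sampler_init_mirostat_v2(LLAMA_DEFAULT_SEED, /*tau=*/ 5.0f, /*eta=*/ 0.1f));
```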
    /// @details Initializes a GBNF grammar, see grammars/README.md for details.
    /// @param vocab The vocabulary that this grammar will be used with.
    /// @param grammar_str The production rules for the grammar, encoded as a string. Returns an empty grammar if empty. Returns NULL if parsing of grammar_str fails.
    /// @param grammar_root The name of the start symbol for the grammar.
    LLAMA_API struct llama_sampler * llama_sampler_init_grammar(
            const struct llama_vocab * vocab,
            const char * grammar_str,
            const char * grammar_root);
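A grammar sampler usually goes at the front of a chain so that disallowed tokens are masked before any selection happens. The tiny GBNF grammar below is only an illustration (see grammars/README.md for the syntax); `model` and `chain` are assumed from the earlier sketches:

```cpp
// illustrative: constrain generation to either "yes" or "no"
static const char * k_grammar = "root ::= \"yes\" | \"no\"";

const llama_vocab * vocab = llama_model_get_vocab(model); // model: an existing llama_model
llama_sampler * grammar = llama_sampler_init_grammar(vocab, k_grammar, "root");

if (grammar != NULL) {
    llama_sampler_chain_add(chain, grammar); // add before the truncation/selection samplers
} else {
    // parsing of the grammar string failed
}
```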
    DEPRECATED(LLAMA_API struct llama_sampler * llama_sampler_init_grammar_lazy(
            const struct llama_vocab * vocab,
            const char * grammar_str,
            const char * grammar_root,
            const char ** trigger_words,
            size_t num_trigger_words,
            const llama_token * trigger_tokens,
            size_t num_trigger_tokens),
        "use llama_sampler_init_grammar_lazy_patterns instead");

    /// @details Lazy grammar sampler, introduced in https://github.com/ggml-org/llama.cpp/pull/9639
    /// @param trigger_patterns A list of patterns that will trigger the grammar sampler. Patterns are matched from the start of the generation output, and the grammar sampler is fed content starting from the pattern's first match group.
    /// @param trigger_tokens A list of tokens that will trigger the grammar sampler. The grammar sampler is fed content starting from, and including, the trigger token.
    LLAMA_API struct llama_sampler * llama_sampler_init_grammar_lazy_patterns(
            const struct llama_vocab * vocab,
            const char * grammar_str,
            const char * grammar_root,
            const char ** trigger_patterns,
            size_t num_trigger_patterns,
            const llama_token * trigger_tokens,
            size_t num_trigger_tokens);

    /// NOTE: Avoid using on the full vocabulary as searching for repeated tokens can become slow. For example, apply top-k or top-p sampling first.
    LLAMA_API struct llama_sampler * llama_sampler_init_penalties(
            int32_t penalty_last_n,   // last n tokens to penalize (0 = disable penalty, -1 = context size)
            float   penalty_repeat,   // 1.0 = disabled
            float   penalty_freq,     // 0.0 = disabled
            float   penalty_present); // 0.0 = disabled

    /// @details DRY sampler, designed by p-e-w, as described in: https://github.com/oobabooga/text-generation-webui/pull/5677, porting the Koboldcpp implementation authored by pi6am: https://github.com/LostRuins/koboldcpp/pull/982
    LLAMA_API struct llama_sampler * llama_sampler_init_dry(
            const struct llama_vocab * vocab,
            int32_t n_ctx_train,
            float   dry_multiplier,
            float   dry_base,
            int32_t dry_allowed_length,
            int32_t dry_penalty_last_n,
            const char ** seq_breakers,
            size_t num_breakers);
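Both the repetition penalties and DRY are usually placed before the truncation samplers in a chain. The values below mirror commonly used settings and are illustrative only; `model`, `vocab` and `chain` are assumed from the earlier sketches:

```cpp
// illustrative: light repeat penalty over the last 64 tokens; frequency/presence penalties disabled
llama_sampler_chain_add(chain, llama_sampler_init_penalties(
        /*penalty_last_n=*/ 64, /*penalty_repeat=*/ 1.1f, /*penalty_freq=*/ 0.0f, /*penalty_present=*/ 0.0f));

// illustrative DRY configuration with the usual sequence breakers
static const char * seq_breakers[] = { "\n", ":", "\"", "*" };
llama_sampler_chain_add(chain, llama_sampler_init_dry(
        vocab,
        llama_model_n_ctx_train(model),
        /*dry_multiplier=*/     0.8f,
        /*dry_base=*/           1.75f,
        /*dry_allowed_length=*/ 2,
        /*dry_penalty_last_n=*/ -1,
        seq_breakers,
        sizeof(seq_breakers)/sizeof(seq_breakers[0])));
```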
    LLAMA_API struct llama_sampler * llama_sampler_init_logit_bias(
            int32_t n_vocab,
            int32_t n_logit_bias,
            const llama_logit_bias * logit_bias);
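A bias table is just an array of (token, bias) pairs; the token ids below are placeholders and would normally come from tokenizing the strings you want to promote or suppress:

```cpp
// illustrative: forbid one token and slightly boost another (token ids are placeholders)
const llama_logit_bias biases[] = {
    { /*token=*/ 123, /*bias=*/ -1e9f }, // effectively never sample token 123
    { /*token=*/ 456, /*bias=*/ +1.5f }, // make token 456 somewhat more likely
};

llama_sampler_chain_add(chain, llama_sampler_init_logit_bias(
        llama_vocab_n_tokens(vocab),
        sizeof(biases)/sizeof(biases[0]),
        biases));
```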
    // this sampler is meant to be used for fill-in-the-middle infilling
    // it's supposed to be used after top_k + top_p sampling
    //
    // 1. if the sum of the EOG probs times the number of candidates is higher than the sum of the other probs -> pick EOG
    // 2. combine probs of tokens that have the same prefix
    //
    // example:
    //
    // - before:
    //   "hel":   0.5
    //   "hell":  0.2
    //   "hello": 0.1
    //   "dummy": 0.1
    //
    // - after:
    //   "hel":   0.8
    //   "dummy": 0.1
    //
    // 3. discard non-EOG tokens with low prob
    // 4. if no tokens are left -> pick EOT
    //
    LLAMA_API struct llama_sampler * llama_sampler_init_infill(const struct llama_vocab * vocab);

    // Returns the seed used by the sampler if applicable, LLAMA_DEFAULT_SEED otherwise
    LLAMA_API uint32_t llama_sampler_get_seed(const struct llama_sampler * smpl);

    /// @details Sample and accept a token from the idx-th output of the last evaluation
    //
    // Shorthand for:
    //    const auto * logits = llama_get_logits_ith(ctx, idx);
    //    llama_token_data_array cur_p = { ... init from logits ... };
    //    llama_sampler_apply(smpl, &cur_p);
    //    auto token = cur_p.data[cur_p.selected].id;
    //    llama_sampler_accept(smpl, token);
    //    return token;
    // Returns the sampled token
    LLAMA_API llama_token llama_sampler_sample(struct llama_sampler * smpl, struct llama_context * ctx, int32_t idx);
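Putting the pieces together, a minimal generation loop driven by llama_sampler_sample could look like the sketch below; `ctx`, `vocab` and `chain` are assumed to exist, `tokenize_prompt` is a hypothetical helper (e.g. wrapping llama_tokenize), and batching/error handling is simplified:

```cpp
#include <cstdio>
#include <vector>

#include "llama.h"

// hypothetical helper: returns the tokenized prompt
std::vector<llama_token> tokenize_prompt(const llama_vocab * vocab);

void generate(llama_context * ctx, const llama_vocab * vocab, llama_sampler * chain) {
    std::vector<llama_token> prompt = tokenize_prompt(vocab);
    llama_token tok = 0; // must outlive the batch that points at it

    llama_batch batch = llama_batch_get_one(prompt.data(), (int32_t) prompt.size());

    for (int i = 0; i < 128; ++i) {
        if (llama_decode(ctx, batch) != 0) {
            break; // decode failed
        }

        // sample from the last output; this also calls accept on the chain
        tok = llama_sampler_sample(chain, ctx, /*idx=*/ -1);
        if (llama_vocab_is_eog(vocab, tok)) {
            break;
        }

        char piece[128];
        const int n = llama_token_to_piece(vocab, tok, piece, (int32_t) sizeof(piece), 0, true);
        if (n > 0) {
            fwrite(piece, 1, n, stdout);
        }

        // feed the sampled token back in as the next single-token batch
        batch = llama_batch_get_one(&tok, 1);
    }
}
```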
    // TODO: extend in the future
    //LLAMA_API void llama_decode_with_sampler(struct llama_context * ctx, struct llama_sampler * smpl, struct llama_batch batch, ...);

    //
    // Model split
    //

    /// @details Build a split GGUF final path for this chunk.
    /// llama_split_path(split_path, sizeof(split_path), "/models/ggml-model-q4_0", 2, 4) => split_path = "/models/ggml-model-q4_0-00002-of-00004.gguf"
    //  Returns the split_path length.
    LLAMA_API int llama_split_path(char * split_path, size_t maxlen, const char * path_prefix, int split_no, int split_count);

    /// @details Extract the path prefix from the split_path if and only if the split_no and split_count match.
    /// llama_split_prefix(split_prefix, 64, "/models/ggml-model-q4_0-00002-of-00004.gguf", 2, 4) => split_prefix = "/models/ggml-model-q4_0"
    //  Returns the split_prefix length.
    LLAMA_API int llama_split_prefix(char * split_prefix, size_t maxlen, const char * split_path, int split_no, int split_count);
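The two helpers are inverses of each other, as the examples in the comments above show; a quick round trip:

```cpp
char split_path[256];
llama_split_path(split_path, sizeof(split_path), "/models/ggml-model-q4_0", 2, 4);
// split_path == "/models/ggml-model-q4_0-00002-of-00004.gguf"

char split_prefix[256];
if (llama_split_prefix(split_prefix, sizeof(split_prefix), split_path, 2, 4) > 0) {
    // split_prefix == "/models/ggml-model-q4_0"
}
```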
    // Print system information
    LLAMA_API const char * llama_print_system_info(void);

    // Set callback for all future logging events.
    // If this is not called, or NULL is supplied, everything is output on stderr.
    // The logger state is global so these functions are NOT thread safe.
    LLAMA_API void llama_log_get(ggml_log_callback * log_callback, void ** user_data);
    LLAMA_API void llama_log_set(ggml_log_callback log_callback, void * user_data);
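A custom callback only has to match ggml_log_callback; for example, a sketch that forwards warnings and errors to stderr and drops everything else (continuation messages are ignored here for brevity):

```cpp
#include <cstdio>

#include "llama.h"

// keep only warnings and errors
static void my_log(ggml_log_level level, const char * text, void * /*user_data*/) {
    if (level == GGML_LOG_LEVEL_WARN || level == GGML_LOG_LEVEL_ERROR) {
        fputs(text, stderr);
    }
}

int main() {
    llama_log_set(my_log, /*user_data=*/ nullptr);
    // ... load the model, create the context, etc. ...
    return 0;
}
```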
    //
    // Performance utils
    //
    // NOTE: Used by llama.cpp examples/tools, avoid using in third-party apps. Instead, do your own performance measurements.
    //

    struct llama_perf_context_data {
        // ms == milliseconds
        double t_start_ms;  // absolute start time
        double t_load_ms;   // time needed for loading the model
        double t_p_eval_ms; // time needed for processing the prompt
        double t_eval_ms;   // time needed for generating tokens

        int32_t n_p_eval;   // number of prompt tokens
        int32_t n_eval;     // number of generated tokens
        int32_t n_reused;   // number of times a ggml compute graph has been reused
    };

    struct llama_perf_sampler_data {
        double t_sample_ms; // time needed for sampling in ms

        int32_t n_sample;   // number of sampled tokens
    };

    LLAMA_API struct llama_perf_context_data llama_perf_context      (const struct llama_context * ctx);
    LLAMA_API void                           llama_perf_context_print(const struct llama_context * ctx);
    LLAMA_API void                           llama_perf_context_reset(      struct llama_context * ctx);

    // NOTE: the following work only with samplers constructed via llama_sampler_chain_init
    LLAMA_API struct llama_perf_sampler_data llama_perf_sampler      (const struct llama_sampler * chain);
    LLAMA_API void                           llama_perf_sampler_print(const struct llama_sampler * chain);
    LLAMA_API void                           llama_perf_sampler_reset(      struct llama_sampler * chain);
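These are typically called once at the end of a run, e.g.:

```cpp
// print and then reset the sampling and context timings at the end of a run
llama_perf_sampler_print(chain); // chain must come from llama_sampler_chain_init
llama_perf_context_print(ctx);

llama_perf_sampler_reset(chain);
llama_perf_context_reset(ctx);
```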
    // print a breakdown of per-device memory use via LLAMA_LOG:
    LLAMA_API void llama_memory_breakdown_print(const struct llama_context * ctx);

    //
    // training
    //

    // function that returns whether or not a given tensor contains trainable parameters
    typedef bool (*llama_opt_param_filter)(const struct ggml_tensor * tensor, void * userdata);

    // always returns true
    LLAMA_API bool llama_opt_param_filter_all(const struct ggml_tensor * tensor, void * userdata);

    struct llama_opt_params {
        uint32_t n_ctx_train; // assumed context size post training, use context size specified in llama_context if 0

        llama_opt_param_filter param_filter;    // callback for determining which tensors contain trainable parameters
        void *                 param_filter_ud; // userdata for determining which tensors contain trainable parameters

        ggml_opt_get_optimizer_params get_opt_pars;    // callback for calculating optimizer parameters
        void *                        get_opt_pars_ud; // userdata for calculating optimizer parameters

        enum ggml_opt_optimizer_type optimizer_type;
    };

    LLAMA_API void llama_opt_init(struct llama_context * lctx, struct llama_model * model, struct llama_opt_params lopt_params);

    LLAMA_API void llama_opt_epoch(
            struct llama_context  * lctx,
            ggml_opt_dataset_t      dataset,
            ggml_opt_result_t       result_train,
            ggml_opt_result_t       result_eval,
            int64_t                 idata_split,
            ggml_opt_epoch_callback callback_train,
            ggml_opt_epoch_callback callback_eval);
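A minimal configuration could lean on the ggml-opt defaults, as in the sketch below; the names taken from ggml-opt (ggml_opt_get_default_optimizer_params, GGML_OPT_OPTIMIZER_TYPE_ADAMW) are assumptions about that header rather than part of this API, and the dataset construction and epoch loop are omitted:

```cpp
// sketch: train all parameters with default AdamW settings (ggml-opt names are assumptions)
struct llama_opt_params opt_params = {
    /*.n_ctx_train     =*/ 0,                          // 0: use the context size of lctx
    /*.param_filter    =*/ llama_opt_param_filter_all, // consider every tensor trainable
    /*.param_filter_ud =*/ nullptr,
    /*.get_opt_pars    =*/ ggml_opt_get_default_optimizer_params, // assumed ggml-opt helper
    /*.get_opt_pars_ud =*/ nullptr,
    /*.optimizer_type  =*/ GGML_OPT_OPTIMIZER_TYPE_ADAMW,         // assumed ggml-opt enum value
};

llama_opt_init(lctx, model, opt_params);
// then build a ggml_opt_dataset_t from tokenized data and call llama_opt_epoch once per epoch
```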
#ifdef __cplusplus
}
#endif

#endif // LLAMA_H