Commit graph

76 commits

Author SHA1 Message Date
d03abe4285 3 2026-04-09 02:00:36 +03:00
1af95aba36 2 2026-04-09 01:59:40 +03:00
85753dae2c 1 2026-04-09 01:58:51 +03:00
2ad996bfb8 fuck it 2026-04-08 23:03:52 +03:00
0591e57dfd fuck it 2026-04-08 23:02:56 +03:00
Vitaly Chikunov
01f8650dd9 1:8681-alt1
- Update to b8681 (2026-04-06).
2026-04-06 21:23:51 +00:00
Vitaly Chikunov
7c28f3abf0 gear: Change WebUI npm target
Since 4a00bbfed ("server: (webui) no more gzip compression (#21073)").

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2026-04-06 21:23:50 +00:00
Vitaly Chikunov
f3ada6d562 1:8470-alt1
- Update to b8470 (2026-03-22).
2026-03-22 18:53:17 +03:00
Vitaly Chikunov
c912b31529 spec: Rm export-graph-ops test
Link: https://github.com/ggml-org/llama.cpp/pull/19896
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2026-03-22 18:48:10 +03:00
Vitaly Chikunov
79a2c10800 spec: Switch to .gear/npm-build hook 2026-03-11 07:42:38 +03:00
Vitaly Chikunov
ed4178bf90 spec: check: Disable test-llama-archs
4/41 Test #22: test-llama-archs ..................***Failed    0.00 sec
  build: 8245 (d417bc43 [alt1]) with GNU 14.3.1 for Linux x86_64
  encountered runtime error: failed to create llama model
  llama_model_load_from_file_impl: no backends are loaded. hint: use ggml_backend_load() or ggml_backend_load_all() to load a backend before calling this function
  |    Model arch.|                        Device|Config|    NMSE|Status|
  |---------------|------------------------------|------|--------|------|

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2026-03-11 07:42:38 +03:00
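The disable might be expressed as a ctest exclusion in the spec's %check section; a minimal sketch, where the build directory name and exact invocation are assumptions:

```spec
%check
# Skip test-llama-archs: it needs a loadable ggml backend,
# which is not available in the build environment.
ctest --test-dir build -E '^test-llama-archs$' --output-on-failure
```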
Vitaly Chikunov
2e14561ca4 1:8192-alt1
- Update to b8192 (2026-03-03).
2026-03-04 01:34:26 +03:00
Vitaly Chikunov
e99737bd8e 1:8018-alt1
- Update to b8018 (2026-02-12).
2026-02-13 08:15:20 +03:00
Vitaly Chikunov
eda67378e9 .gear/generate: Beep 2026-02-13 08:15:20 +03:00
Vitaly Chikunov
01167d638e 1:7819-alt1
- Update to b7819 (2026-01-23).
- Responses API support (partial).
2026-01-24 05:09:49 +03:00
Vitaly Chikunov
42d6c582aa .gear/generate: Add proxy and checklist support
Safe-chain: Unhandled promise rejection: Error: Error parsing malware database: Invalid response body while trying to fetch https://malware-list.aikido.dev/malware_predictions.json: read ETIMEDOUT

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2026-01-24 04:50:31 +03:00
Vitaly Chikunov
61d42ce962 1:7388-alt1
- Update to b7388 (2025-12-13).
- llama-cli: New CLI experience (with the old moved to llama-completion).
- llama-server: Live model switching.
2025-12-14 06:24:31 +03:00
Vitaly Chikunov
e14c061238 spec: Switch from llama-cli to llama-completion and llama-server
llama-cli is now a different showcase program. So, switch to the older
llama-completion for testing and to llama-server for versioning and
man-page generation.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-12-14 06:21:28 +03:00
Vitaly Chikunov
7ce4f4bf3b spec: Add the parent project site as non-Url
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-12-14 05:21:43 +03:00
Vitaly Chikunov
da6a1218df spec: Add pre-release hook to check desired package consistency
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-12-14 05:21:43 +03:00
Vitaly Chikunov
3847cec1a5 spec: Generate man-page for llama-server
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-12-14 05:21:43 +03:00
Brendan O'Dea
55c4625b75 ALT: Add helper to fix llama --help suitable for help2man
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-12-14 05:21:43 +03:00
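Such a helper would typically be plugged into help2man, which builds a man page from --help and --version output; a hedged sketch, where the wrapper name llama-help-fixed is hypothetical:

```shell
# help2man parses --help/--version output into a man page;
# the wrapper normalizes llama's --help formatting first.
help2man --no-info --name 'LLM inference server' \
  ./llama-help-fixed > llama-server.1
```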
Vitaly Chikunov
5b93ed1649 spec: Use safe-chain to run npm ci
Safe Chain provides extra checks before installing new packages.
Also, ignore lifecycle scripts.

Link: https://github.com/AikidoSec/safe-chain
Link: https://github.com/bodadotsh/npm-security-best-practices
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-11-25 01:25:18 +03:00
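The described npm invocation might look like this in the build hook; the safe-chain package name is an assumption, while --ignore-scripts is the standard npm flag for skipping lifecycle scripts:

```shell
# Run npm ci through Safe Chain's malware checks (package
# name assumed) and skip pre/post-install lifecycle scripts.
npx @aikidosec/safe-chain npm ci --ignore-scripts
```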
Vitaly Chikunov
bbb4f595fe 1:7127-alt1
- Update to b7127 (2025-11-21).
- spec: Remove llama.cpp-convert package.
- model: detect GigaChat3-10-A1.8B as deepseek lite.
2025-11-22 00:32:38 +03:00
Vitaly Chikunov
6ccc09a99d spec: Add Conflicts with libwhisper-cpp-devel
whisper-cpp packages libggml.so of a different version.

  $ apt-file libggml.so
  libllama-devel: /usr/lib64/libggml.so
  libllama: /usr/lib64/libggml.so.0.0.6869
  libwhisper-cpp1: /usr/lib64/libggml.so.1
  libwhisper-cpp1: /usr/lib64/libggml.so.1.7.6
  libwhisper-cpp-devel: /usr/lib64/libggml.so

It also bundles libggml with an incorrect SOVERSION. The SOVERSION is
set to whisper.cpp's version, while (lib)ggml has its own versioning
(currently 0.9.4). Correcting the version would not do much to resolve
the potential conflict, though.

There seem to be two theoretically possible solutions:
  1. Namespace llama.cpp's libggml.so into /usr/lib/llama/, similar to
     backends.
  2. Consider libggml to be a convenience library and link it statically
     into libllama and llama- binaries.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-11-22 00:32:38 +03:00
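The interim workaround is a plain Conflicts tag on the affected subpackage; a minimal sketch based on the apt-file listing above:

```spec
# Both -devel packages ship /usr/lib64/libggml.so.
%package -n libllama-devel
Conflicts: libwhisper-cpp-devel
```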
Vitaly Chikunov
a202c98ff8 spec: Do not add %version to SOVERSION of libggml{,-base}.so
Upstream started to install with internal ggml (so)versioning.

  warning: Installed (but unpackaged) file(s) found:
      /usr/lib64/libggml-base.so.0.9.4
      /usr/lib64/libggml.so.0.9.4

Also, modifying `ggml/cmake/ggml-config.cmake.in` does not seem useful.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-11-22 00:32:38 +03:00
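One way to package the newly upstream-versioned libraries is to glob the soversion in %files rather than stamping %version into it; a sketch, assuming ALT's unbracketed macro style:

```spec
%files -n libllama
# Follow upstream's internal ggml soversion (currently 0.9.4)
# instead of appending %version to the SOVERSION.
%_libdir/libggml.so.*
%_libdir/libggml-base.so.*
```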
Vitaly Chikunov
3816a3a5d2 spec: Fix install of (llama-)rpc-server
Upstream started to install rpc-server, but without a prefix.

  warning: Installed (but unpackaged) file(s) found:
      /usr/bin/rpc-server

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-11-22 00:32:38 +03:00
Vitaly Chikunov
97a7090638 spec: Remove llama.cpp-convert package
No point in providing it and requiring installation of its dependencies from pip.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-11-22 00:32:38 +03:00
Vitaly Chikunov
8ce3d9dc6b 1:6869-alt1
- Update to b6869 (2025-10-28).
2025-10-29 02:03:28 +03:00
Vitaly Chikunov
62eecdf684 .gear/generate: Raise audit level and log build
`--audit-level=critical` only affects the exit code, not which vulnerabilities get fixed.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-10-29 02:02:13 +03:00
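For context: npm's --audit-level only sets the severity threshold at which `npm audit` exits nonzero; `npm audit fix` still applies every fix it can. A sketch of the raised level plus build logging (the log file name is hypothetical):

```shell
# Fail only on critical advisories; lower severities are
# reported but do not affect the exit status.
npm audit --audit-level=critical
# npm audit fix ignores --audit-level when choosing what to fix.
npm audit fix 2>&1 | tee npm-build.log
```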
Vitaly Chikunov
22d4780785 1:6397-alt1
- Update to b6397 (2025-09-06).
- Python-based model conversion scripts are sub-packaged. Note that they are
  not supported and are provided as-is.
2025-09-06 09:10:41 +03:00
Vitaly Chikunov
7bd1b78853 spec: Split model converter with other python scripts into a separate package
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-09-06 09:07:01 +03:00
Vitaly Chikunov
7bba0c56dd spec: Add .gear/generate to re-generate React for llama-server
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-09-06 07:44:29 +03:00
Vitaly Chikunov
8ef7a5cd9e spec: Reduce debuginfo level
"Level 1 produces minimal information, enough for making backtraces in
  parts of the program that you don't plan to debug. This includes
  descriptions of functions and external variables, and line number
  tables, but no information about local variables." -- gcc(1)

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-08-19 06:19:32 +03:00
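The quoted gcc level is -g1; how it might be injected into the build flags (the %add_optflags macro is an assumption about ALT's rpm-build macros):

```spec
# -g1: function descriptions and line tables only, enough for
# backtraces; no local-variable info, so debuginfo shrinks.
%add_optflags -g1
```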
Vitaly Chikunov
50d28023b7 1:6121-alt1
- Update to b6121 (2025-08-08).
2025-08-09 03:34:42 +03:00
Vitaly Chikunov
e6fba35f45 spec: check: More tightly check CUDA build artifacts versions
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-08-09 03:34:42 +03:00
Vitaly Chikunov
97e8cf1efa ALT: gear-submodule-update 2025-08-09 02:55:14 +03:00
Vitaly Chikunov
ce184d9ae8 1:5753-alt1
- Update to b5753 (2025-06-24).
- Install an experimental rpc backend and server. The rpc code is a
  proof-of-concept, fragile, and insecure.
2025-06-25 11:04:16 +03:00
Vitaly Chikunov
26f10c647c spec: Install rpc-server and rpc backend
No point in making a separate rpc backend package, since this is a
virtual thing.

Link: https://github.com/ggml-org/llama.cpp/tree/master/tools/rpc
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-06-25 11:02:14 +03:00
Vitaly Chikunov
54456bb584 spec: Add llama-mtmd-cli to the completions
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-06-25 11:02:14 +03:00
Vitaly Chikunov
66330c5eaa 1:5332-alt1
- Update to b5332 (2025-05-09), with vision support in llama-server.
- Enable Vulkan backend (for GPU) in llama.cpp-vulkan package.
2025-05-10 02:13:53 +03:00
Vitaly Chikunov
306d38a1dc 1:5318-alt1
- Update to b5318 (2025-05-08).
- Enable Vulkan backend (for GPU) in llama.cpp-vulkan package.
2025-05-09 05:59:22 +03:00
Vitaly Chikunov
0bb9b9a949 spec: Enable Vulkan support
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-05-09 05:59:22 +03:00
Vitaly Chikunov
759f343545 spec: Do not package lib{llava,mtmd}_shared.so libs
Their purpose is unclear and seems experimental.

  warning: Installed (but unpackaged) file(s) found:
      /usr/lib64/libllava_shared.so
      /usr/lib64/libmtmd_shared.so

They aren't required by llama-llava-clip-quantize-cli or llama-mtmd-cli,
and there are no headers for them.

Ref: 381efbf48 ("llava : expose as a shared library for downstream projects (#3613)")
Link: https://github.com/ggml-org/llama.cpp/pull/3613
Ref: 8b9cc7cdd ("llava : introduce libmtmd (#12849)")
Link: https://github.com/ggml-org/llama.cpp/pull/12849
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-05-09 05:59:22 +03:00
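The exclusion might be a plain deletion at the end of %install; a sketch in ALT's macro style:

```spec
# Drop experimental shared libs that have no headers and no
# in-tree consumers.
rm %buildroot%_libdir/lib{llava,mtmd}_shared.so
```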
Vitaly Chikunov
9167cfc475 spec: Support printing commit id in version output
This may be useful for bug reporting.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-05-09 05:59:22 +03:00
Vitaly Chikunov
b2f17b3cba spec: Disable part of test-arg-parser requiring Internet access
Link: https://github.com/ggml-org/llama.cpp/issues/13371
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-05-09 04:21:37 +03:00
Vitaly Chikunov
fadfd18973 1:4855-alt1
- Update to b4855 (2025-03-07).
- Enable CUDA backend (for NVIDIA GPU) in llama.cpp-cuda package.
- Disable BLAS backend (issues/12282).
- Install bash-completions.
2025-03-10 03:40:29 +03:00
Vitaly Chikunov
4edbfcac94 .gear/llama.service: Make config optional
Also, remove the full path from the executable name.
And yes, it's not installed yet.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
2025-03-08 16:45:15 +03:00
Vitaly Chikunov
3b1e713459 ALT: gear-submodule-update 2025-03-07 20:19:24 +00:00
Vitaly Chikunov
1c5972bf3e ALT: Import GPG key for GitHub (web-flow commit signing) <noreply@github.com>
Link: https://github.com/web-flow.gpg
2025-03-07 20:19:22 +00:00