Commit graph

6 commits

fd6208a376 Fix tooltips and error icons after config parsing
- Call updateTooltips() after clearAllTooltips() to restore tooltip attributes
- Add error icons to all input fields (context-length, kv-heads, head-size,
  num-heads, num-layers, parallel, full-attention, hf-model)
- Update showConfigErrors() to handle hf-model field correctly
2026-04-12 02:19:03 +03:00
ca578b93de Fix HuggingFace URL to use /resolve/main/ instead of /blob/main/ 2026-04-12 02:11:19 +03:00
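The distinction behind this fix: Hugging Face `/blob/main/` URLs return the HTML file-viewer page, while `/resolve/main/` returns the raw file contents. A minimal sketch of the corrected URL builder (the helper name is an assumption, not the project's actual identifier):

```javascript
// Build the raw-content URL for a model's config.json.
// "/resolve/main/" serves the raw file; "/blob/main/" would serve an HTML page.
function rawConfigUrl(model) {
  return `https://huggingface.co/${model}/resolve/main/config.json`;
}
```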
fb2d114d38 Add HuggingFace model auto-fetch feature
- Added model identifier input field with Fetch button
- Added loading indicator for fetch operation
- Added async handleHfFetch function to fetch config from HuggingFace
- Fetches config from https://huggingface.co/{model}/blob/main/config.json
- Replaces current config when fetch is successful
- Shows 'Model not found' error for 404 responses
- Sets focus on model input after successful fetch
- Added translations for new fields
2026-04-12 01:52:34 +03:00
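The fetch flow described above can be sketched as follows. This is a hedged reconstruction, not the project's actual `handleHfFetch`: the function name, the injectable `fetchImpl` parameter (added here so the sketch is testable without network access), and the use of the `/resolve/main/` path from the later URL fix are all assumptions.

```javascript
// Fetch a model's config.json from Hugging Face.
// Rejects with "Model not found" on 404, mirroring the commit's error handling.
async function fetchHfConfig(model, fetchImpl = fetch) {
  const url = `https://huggingface.co/${model}/resolve/main/config.json`;
  const res = await fetchImpl(url);
  if (res.status === 404) throw new Error("Model not found");
  if (!res.ok) throw new Error(`Fetch failed with status ${res.status}`);
  return res.json();
}
```

In the UI this would be awaited inside the fetch handler, with the loading indicator shown before the call and the form repopulated from the resolved config afterward.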
2a1522fa70 Fix ConfigParser to handle nested text_config structure
- Check both config level and text_config level for fields
- num_key_value_heads now found in text_config
- num_attention_heads now found in text_config
- num_hidden_layers now found in text_config
- Warning checks also check both levels
2026-04-12 01:24:24 +03:00
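The two-level lookup above exists because newer Hugging Face configs for multimodal models nest the language-model fields under a `text_config` object instead of the top level. A minimal sketch, with a hypothetical helper name (top level takes precedence here; the commit does not state which level wins, so that ordering is an assumption):

```javascript
// Look up a config field at the top level first, then under text_config.
function getConfigField(config, key) {
  if (config[key] !== undefined) return config[key];
  if (config.text_config && config.text_config[key] !== undefined) {
    return config.text_config[key];
  }
  return undefined;
}
```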
b8352ebd17 Add config.json support with auto-populate and reset features
- Added ConfigParser module for parsing Hugging Face config files
- Added model name display above form
- Added file upload for config.json (accepts only .json files)
- Added reset button to clear all fields
- Added error indicators (!) with language-aware messages for missing fields
- Auto-populates fields: num_hidden_layers, num_key_value_heads, head_dim,
  num_attention_heads, max_position_embeddings, full_attention_interval
- Sets defaults for optional fields: parallel=1, model_size=0
- Auto-calculates after successful config upload
- Default quantization set to f16
2026-04-12 01:12:53 +03:00
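The auto-populate step above can be sketched as one pure function: read the listed fields (checking both config levels, per the later `text_config` fix) and apply the stated defaults. Function and property names are assumptions, not the project's actual ConfigParser API.

```javascript
// Map a parsed config.json to calculator form values, with defaults from the
// commit message: parallel=1, model_size=0, quantization=f16.
function parseConfig(config) {
  const get = (key) =>
    config[key] !== undefined
      ? config[key]
      : config.text_config
        ? config.text_config[key]
        : undefined;
  return {
    numLayers: get("num_hidden_layers"),
    numHeads: get("num_attention_heads"),
    kvHeads: get("num_key_value_heads"),
    headDim: get("head_dim"),
    contextLength: get("max_position_embeddings"),
    fullAttentionInterval: get("full_attention_interval"),
    parallel: 1,          // default per the commit message
    modelSize: 0,         // default per the commit message
    quantization: "f16",  // default quantization per the commit message
  };
}
```

Fields that come back `undefined` would be the ones flagged with the (!) error indicators in the form.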
c471e1d0a9 Create LLM context memory calculator (author: romenskiy2012) with:
- Accurate memory calculation using ggml quantization formulas
- Support for f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1 quantizations
- Asymmetric context support (separate K/V cache quantization)
- Full attention interval support
- Parallel sequences multiplier
- Bilingual interface (Russian/English)
- Retro-style design with tooltips

Signed-off-by: Arseniy Romenskiy <romenskiy@altlinux.org>
Co-authored-by: Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled <qwen@example.com>
2026-04-12 00:05:56 +03:00
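The core calculation above rests on ggml's per-type storage costs: quantized types store fixed-size blocks (e.g. q8_0 packs 32 values into 34 bytes: 32 int8 quants plus one f16 scale), so bytes-per-element is the block byte size divided by 32. A hedged sketch of the KV-cache formula under those block sizes; the function shape is an assumption, and it deliberately ignores the full-attention-interval refinement (which would size sliding-window layers differently) and any runtime padding or overhead:

```javascript
// Bytes per element for each supported ggml quantization type,
// derived from ggml block layouts (block size / 32 elements).
const BYTES_PER_ELEMENT = {
  f32: 4, f16: 2, bf16: 2,
  q8_0: 34 / 32,                  // 32 int8 quants + f16 scale
  q4_0: 18 / 32, q4_1: 20 / 32,   // 4-bit quants + scale (+ min for q4_1)
  iq4_nl: 18 / 32,
  q5_0: 22 / 32, q5_1: 24 / 32,   // 4-bit quants + high-bit word + scale (+ min)
};

// KV cache size: one K and one V tensor of context * kvHeads * headSize
// elements per layer, scaled by parallel sequences; K and V may use
// different quantizations (the "asymmetric context" feature).
function kvCacheBytes({ layers, context, kvHeads, headSize, parallel, kQuant, vQuant }) {
  const elemsPerCache = layers * context * parallel * kvHeads * headSize;
  return elemsPerCache * (BYTES_PER_ELEMENT[kQuant] + BYTES_PER_ELEMENT[vQuant]);
}
```

For a typical 32-layer model with 8 KV heads of size 128 at 4096 context, an f16/f16 cache comes out to 512 MiB by this formula.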