Commit graph

5 commits

5821734eb8 Fix error icon styling and tooltip functionality
- Removed error icon from parallel field (optional field with default value)
- Added black background to error icons (#000000)
- Added hover effect to error icons (icon turns blue on hover)
- Added proper tooltip popup for error icons (same as ? tooltip)
- Removed inline styles from HTML, moved to CSS
2026-04-12 02:29:55 +03:00
fd6208a376 Fix tooltips and error icons after config parsing
- Call updateTooltips() after clearAllTooltips() to restore tooltip attributes
- Add error icons to all input fields (context-length, kv-heads, head-size,
  num-heads, num-layers, parallel, full-attention, hf-model)
- Update showConfigErrors() to handle hf-model field correctly
2026-04-12 02:19:03 +03:00
fb2d114d38 Add HuggingFace model auto-fetch feature
- Added model identifier input field with Fetch button
- Added loading indicator for fetch operation
- Added async handleHfFetch function to fetch config from HuggingFace
- Fetches config from https://huggingface.co/{model}/blob/main/config.json
- Replaces current config when fetch is successful
- Shows 'Model not found' error for 404 responses
- Sets focus on model input after successful fetch
- Added translations for new fields
2026-04-12 01:52:34 +03:00
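The fetch flow in commit fb2d114d38 can be sketched as below. Function names (`buildConfigUrl`, `handleHfFetch`) are illustrative, not necessarily the repository's identifiers; note that the commit message cites the `/blob/main/` path, which serves the HTML file viewer on huggingface.co, while the raw JSON file is typically served from the `/resolve/main/` path used here.

```javascript
// Build the raw-file URL for a Hugging Face model's config.json.
// (The commit text says /blob/main/; /resolve/main/ returns the raw file.)
function buildConfigUrl(model) {
  return `https://huggingface.co/${model}/resolve/main/config.json`;
}

// Fetch and parse the config; a 404 maps to the "Model not found" error
// shown in the UI, per the commit message.
async function handleHfFetch(model) {
  const response = await fetch(buildConfigUrl(model));
  if (response.status === 404) {
    throw new Error("Model not found");
  }
  if (!response.ok) {
    throw new Error(`Fetch failed: HTTP ${response.status}`);
  }
  return response.json(); // parsed config replaces the current form state
}
```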
b8352ebd17 Add config.json support with auto-populate and reset features
- Added ConfigParser module for parsing Hugging Face config files
- Added model name display above form
- Added file upload for config.json (accepts only .json files)
- Added reset button to clear all fields
- Added error indicators (!) with language-aware messages for missing fields
- Auto-populates fields: num_hidden_layers, num_key_value_heads, head_dim,
  num_attention_heads, max_position_embeddings, full_attention_interval
- Sets defaults for optional fields: parallel=1, model_size=0
- Auto-calculates after successful config upload
- Default quantization set to f16
2026-04-12 01:12:53 +03:00
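The auto-populate and error-indicator behavior in commit b8352ebd17 suggests a mapping like the following. This is a minimal sketch: the helper name `parseHfConfig` and the exact form-field keys are assumptions based on the field names mentioned elsewhere in this log.

```javascript
// Map Hugging Face config.json keys to the calculator's form fields.
// Keys absent from the config are collected in `missing`, which would
// drive the (!) error indicators described in the commit message.
function parseHfConfig(config) {
  const missing = [];
  const require = (key) => {
    if (config[key] === undefined) missing.push(key);
    return config[key];
  };
  const fields = {
    "num-layers":     require("num_hidden_layers"),
    "kv-heads":       require("num_key_value_heads"),
    "head-size":      require("head_dim"),
    "num-heads":      require("num_attention_heads"),
    "context-length": require("max_position_embeddings"),
    "full-attention": config.full_attention_interval, // absent in most configs
    // Defaults for fields that config.json does not carry:
    "parallel": 1,
    "model-size": 0,
  };
  return { fields, missing };
}
```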
romenskiy2012 c471e1d0a9 Create LLM context memory calculator with:
- Accurate memory calculation using ggml quantization formulas
- Support for f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1 quantizations
- Asymmetric context support (separate K/V cache quantization)
- Full attention interval support
- Parallel sequences multiplier
- Bilingual interface (Russian/English)
- Retro-style design with tooltips

Signed-off-by: Arseniy Romenskiy <romenskiy@altlinux.org>
Co-authored-by: Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled <qwen@example.com>
2026-04-12 00:05:56 +03:00
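The core formula from the initial commit can be sketched as follows, assuming the KV cache is the quantity being computed. The bytes-per-block values follow the ggml block layouts (blocks of 32 elements plus per-block scale/min fields); the function name is illustrative, and the full-attention-interval term is omitted from this sketch.

```javascript
// Bytes per 32-element block for each supported ggml quantization type.
const GGML_BYTES_PER_BLOCK_32 = {
  f32: 128, f16: 64, bf16: 64, // unquantized: 4 / 2 / 2 bytes per element
  q8_0: 34,                    // 32 x int8 + fp16 scale
  q4_0: 18, iq4_nl: 18,        // 32 x 4-bit + fp16 scale
  q4_1: 20,                    // as q4_0, plus an fp16 min
  q5_0: 22,                    // as q4_0, plus 4 bytes of high bits
  q5_1: 24,                    // as q5_0, plus an fp16 min
};

function bytesPerElement(quant) {
  return GGML_BYTES_PER_BLOCK_32[quant] / 32;
}

// Separate K and V quantizations give the asymmetric-context support
// mentioned above; `parallel` multiplies the whole cache.
function kvCacheBytes({ context, layers, kvHeads, headDim, kQuant, vQuant, parallel = 1 }) {
  const elemsPerLayerPerCache = context * kvHeads * headDim;
  const perLayer = elemsPerLayerPerCache * (bytesPerElement(kQuant) + bytesPerElement(vQuant));
  return perLayer * layers * parallel;
}
```

For example, a 32-layer model with 8 KV heads and head dim 128 at 4096 context, with both K and V in f16, needs 4096 × 32 × 8 × 128 × 4 = 536,870,912 bytes (512 MiB).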