Releases: leafspark/AutoGGUF
v1.6.0
AutoGGUF v1.6.0
Changelog:
- Resolve licensing issue by switching to PySide6
- Add GPU monitoring for NVIDIA GPUs
Full Changelog: v1.5.1...v1.6.0
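The release notes don't describe how the NVIDIA monitoring is wired up; a minimal sketch of one way to poll utilization and VRAM uses NVML via the `pynvml` package (the function names and formatting here are illustrative, not AutoGGUF's internals):

```python
def format_gpu_stats(util_percent, mem_used_mb, mem_total_mb):
    """Pure helper: render a one-line GPU status string."""
    return f"GPU {util_percent}% | VRAM {mem_used_mb}/{mem_total_mb} MiB"

def poll_nvidia_gpu(index=0):
    """Query utilization and memory for one NVIDIA GPU via NVML.

    Requires the pynvml package and an NVIDIA driver; returns None when
    NVML is unavailable so a UI can degrade gracefully on other hardware.
    """
    try:
        import pynvml  # lazy import: non-NVIDIA setups never touch NVML
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        line = format_gpu_stats(util.gpu, mem.used // 2**20, mem.total // 2**20)
        pynvml.nvmlShutdown()
        return line
    except Exception:
        return None
```

A GUI would typically call `poll_nvidia_gpu` from a timer and push the returned string into a status label.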
v1.5.1
AutoGGUF v1.5.1
Changelog:
- Support loading *.gguf file types
- Update FAILED_TO_LOAD_PRESET localization key
- Remove Save Preset context menu action
Full Changelog: v1.5.0...v1.5.1
v1.5.0
AutoGGUF v1.5.0
Changelog:
- Refactor localizations for reuse in the HF conversion area
- Organize localizations
- Add sha256 and PGP signatures (same as commit ones)
- Add HuggingFace to GGUF conversion support
- Fix scaling on low resolution screens, interface now scrolls
Full Changelog: v1.4.3...v1.5.0
v1.5.0 (prerel2)
AutoGGUF v1.5.0 prerelease 2
Changelog:
- Refactor localizations for reuse in the HF conversion area
Full Changelog: v1.5.0-beta...v1.5.0-beta2
v1.5.0 (prerel)
AutoGGUF v1.5.0 prerelease
Changelog:
- Organize localizations
- Add sha256 and PGP signatures (same as commit ones)
- Add HuggingFace to GGUF conversion support
Full Changelog: v1.4.3...v1.5.0-beta
v1.4.3
AutoGGUF v1.4.3
Changelog:
- Updated src file in release to be Black formatted
- Added model sharding management support
- Allow multiple quantization types to be selected and started simultaneously
- Updated preset saving and loading to handle multiple quantization types
- Modified the quantize_model function to process all selected types
- Use ERROR and IN_PROGRESS constants from localizations in QuantizationThread
Full Changelog: v1.4.2...v1.4.3
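Running several quantization types from one selection amounts to queuing one llama.cpp quantize invocation per type; a hedged sketch of that idea (the output naming and function are illustrative, not AutoGGUF's actual code):

```python
from pathlib import Path

def build_quantize_commands(quantize_bin, input_gguf, output_dir, quant_types):
    """Build one llama.cpp quantize command per selected quantization type.

    quant_types is e.g. ["Q4_K_M", "Q8_0"]; each command can then be run
    (and printed/logged) by a worker thread.
    """
    stem = Path(input_gguf).stem
    commands = []
    for qtype in quant_types:
        out = str(Path(output_dir) / f"{stem}-{qtype}.gguf")
        commands.append([quantize_bin, str(input_gguf), out, qtype])
    return commands
```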
v1.4.2
AutoGGUF v1.4.2
Changelog:
- Resolved a bug where the Base Model text was shown even when the GGML type was selected
- Improved alignment
- Minor repository changes
Full Changelog: v1.4.1...v1.4.2
v1.4.1
AutoGGUF v1.4.1
Changelog:
- Added Dynamic KV Overrides, read more in the wiki: AutoGGUF/wiki/Dynamic-KV-Overrides
- Quantization commands are now printed and logged
Full Changelog: v1.4.0...v1.4.1
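KV overrides in llama.cpp use a "key=type:value" syntax (as passed to its `--override-kv` flag); a small sketch of parsing that format, separate from whatever dynamic substitution AutoGGUF layers on top (see the wiki page above for the actual feature):

```python
def parse_kv_override(spec):
    """Parse "key=type:value" (llama.cpp --override-kv syntax) into a tuple.

    llama.cpp supports int, float, bool and str value types; this sketch
    does not model AutoGGUF's dynamic (run-time-filled) values.
    """
    key, _, typed = spec.partition("=")
    vtype, _, raw = typed.partition(":")
    casts = {"int": int, "float": float,
             "bool": lambda s: s.lower() == "true", "str": str}
    if not key or vtype not in casts or raw == "":
        raise ValueError(f"bad KV override: {spec!r}")
    return key, vtype, casts[vtype](raw)
```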
v1.4.0
AutoGGUF v1.4.0
Changelog:
- LoRA Conversion:
  - New section for converting HuggingFace PEFT LoRA adapters to GGML/GGUF
  - Output type selection (GGML or GGUF)
  - Base model selection for GGUF output
  - LoRA adapter list with individual scaling factors
  - Export LoRA section for merging adapters into the base model
- UI Improvements:
  - Updated task names in the task list
  - IMatrix generation check
  - Larger window size
  - Added exe favicon
- Localization:
  - French and Simplified Chinese support for LoRA and "Refresh Models" strings
- Code and Build:
  - Code organization improvements
  - Added build script
  - Added .gitignore file
- Misc:
  - Currently includes the src folder with conversion tools
  - Conversion scripts may be downloaded from the llama.cpp GitHub repository in the future
  - No console window popup
Pull Requests
- docs: update README.md by @eltociear in #2
New Contributors
- @eltociear made their first contribution in #2
Full Changelog: v1.3.1...v1.4.0
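Per-adapter scaling in the Export LoRA section maps naturally onto llama.cpp's export-lora tool, which accepts `--lora` for unscaled adapters and `--lora-scaled` with a factor; a hedged sketch of assembling such a command (the function and binary path are illustrative, and AutoGGUF's wiring may differ):

```python
def build_export_lora_command(base_model, output_path, adapters):
    """Assemble a llama.cpp export-lora command merging adapters into a base model.

    adapters maps adapter path -> scaling factor; a factor of 1.0 uses the
    plain --lora flag, anything else --lora-scaled.
    """
    cmd = ["export-lora", "-m", base_model, "-o", output_path]
    for path, scale in adapters.items():
        if scale == 1.0:
            cmd += ["--lora", path]
        else:
            cmd += ["--lora-scaled", path, str(scale)]
    return cmd
```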
v1.4.0 (prerel2)
AutoGGUF v1.4.0 prerelease 2
Changelog:
- Code organization
- Add build script
- Add favicon
- French and Simplified Chinese support for LoRA and "Refresh Models" strings
- .gitignore file
- Larger window size
Full Changelog: v1.4.0-beta...v1.4.0-beta2