
Releases: leafspark/AutoGGUF

v1.6.0

08 Aug 20:40
7e61f6b

AutoGGUF v1.6.0

Changelog:

  • Resolve licensing issue by switching to PySide6
  • Add GPU monitoring for NVIDIA GPUs
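
The GPU monitoring feature above could plausibly be built on NVIDIA's NVML bindings (the `pynvml` package). The sketch below is an assumption about the approach, not AutoGGUF's actual code, and it degrades gracefully when no NVIDIA driver or library is present:

```python
def poll_gpu_stats(index=0):
    """Return utilization/VRAM stats for one NVIDIA GPU, or None if unavailable.

    Hypothetical sketch using pynvml (nvidia-ml-py); AutoGGUF's real
    implementation may differ.
    """
    try:
        import pynvml
    except ImportError:
        return None  # pynvml (nvidia-ml-py) not installed
    try:
        pynvml.nvmlInit()
    except pynvml.NVMLError:
        return None  # no NVIDIA driver/NVML library on this machine
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        return {
            "gpu_percent": util.gpu,                    # core utilization, 0-100
            "vram_used_mb": mem.used // (1024 * 1024),
            "vram_total_mb": mem.total // (1024 * 1024),
        }
    except pynvml.NVMLError:
        return None  # e.g. invalid device index
    finally:
        pynvml.nvmlShutdown()
```

A monitoring widget would typically call this on a timer (e.g. once per second) and render the returned dictionary, hiding the panel when the function returns None.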

Full Changelog: v1.5.1...v1.6.0

v1.5.1

08 Aug 17:38
f5b3b43

AutoGGUF v1.5.1

Changelog:

  • Support loading *.gguf file types
  • Update FAILED_TO_LOAD_PRESET localization key
  • Remove Save Preset context menu action

Full Changelog: v1.5.0...v1.5.1

v1.5.0

06 Aug 23:16
cb51a22

AutoGGUF v1.5.0

Changelog:

  • Refactor localizations for reuse in the HF conversion area
  • Organize localizations
  • Add sha256 and PGP signatures (same as commit ones)
  • Add HuggingFace to GGUF conversion support
  • Fix scaling on low-resolution screens; the interface now scrolls
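
The SHA-256 signatures mentioned above can be checked locally with nothing beyond the Python standard library. A minimal verifier (file paths and expected hashes here are illustrative, not real release values):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream-hash a file so large GGUF artifacts need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_release_file(path, expected_hex):
    """Compare a local file's digest against the published hex string."""
    return sha256_of_file(path) == expected_hex.strip().lower()
```

PGP signature checks are a separate step (e.g. `gpg --verify` against the signing key); this sketch covers only the SHA-256 side.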

Full Changelog: v1.4.3...v1.5.0

v1.5.0 (prerel2)

06 Aug 02:50
6070aac
Pre-release

AutoGGUF v1.5.0 prerelease 2

Changelog:

  • Refactor localizations for reuse in the HF conversion area

Full Changelog: v1.5.0-beta...v1.5.0-beta2

v1.5.0 (prerel)

05 Aug 20:58
4ced884
Pre-release

AutoGGUF v1.5.0 prerelease

Changelog:

  • Organize localizations
  • Add sha256 and PGP signatures (same as commit ones)
  • Add HuggingFace to GGUF conversion support

Full Changelog: v1.4.3...v1.5.0-beta

v1.4.3

05 Aug 19:17
aaacba4

AutoGGUF v1.4.3

Changelog:

  • Update the src file in the release to be Black-formatted
  • Add model sharding management support
  • Allow multiple quantization types to be selected and started simultaneously
  • Update preset saving and loading to handle multiple quantization types
  • Modify the quantize_model function to process all selected types
  • Use the ERROR and IN_PROGRESS constants from localizations in QuantizationThread
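
A rough sketch of what processing all selected quantization types might look like. `run_quant` is a hypothetical stand-in for the call into llama.cpp's quantize tool, not AutoGGUF's real API; the point is the control flow, where one failed type does not abort the rest:

```python
def quantize_model(input_gguf, selected_types, run_quant):
    """Run one quantization pass per selected type, collecting per-type results.

    Illustrative only: `run_quant(src, dst, qtype)` stands in for invoking
    llama.cpp's quantize binary and is supplied by the caller.
    """
    results = {}
    for qtype in selected_types:  # e.g. ["Q4_K_M", "Q5_K_M", "Q8_0"]
        output_path = input_gguf.replace(".gguf", f"-{qtype}.gguf")
        try:
            run_quant(input_gguf, output_path, qtype)
            results[qtype] = ("done", output_path)
        except Exception as exc:  # a failure in one type must not stop the others
            results[qtype] = ("error", str(exc))
    return results
```

In the real application each pass would run on a worker thread (the QuantizationThread mentioned above) so the UI can show per-task progress and errors.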

Full Changelog: v1.4.2...v1.4.3

v1.4.2

05 Aug 02:35

AutoGGUF v1.4.2

Changelog:

  • Resolve a bug where the Base Model text was shown even when the GGML type was selected
  • Improve alignment
  • Minor repository changes

Full Changelog: v1.4.1...v1.4.2

v1.4.1

05 Aug 01:08

AutoGGUF v1.4.1

Changelog:

Full Changelog: v1.4.0...v1.4.1

v1.4.0

04 Aug 23:08

AutoGGUF v1.4.0

Changelog:

  1. LoRA Conversion:

    • New section for converting HuggingFace PEFT LoRA adapters to GGML/GGUF
    • Output type selection (GGML or GGUF)
    • Base model selection for GGUF output
    • LoRA adapter list with individual scaling factors
    • Export LoRA section for merging adapters into base model
  2. UI Improvements:

    • Updated task names in task list
    • IMatrix generation check
    • Larger window size
    • Added exe favicon
  3. Localization:

    • French and Simplified Chinese support for LoRA and "Refresh Models" strings
  4. Code and Build:

    • Code organization improvements
    • Added build script
    • .gitignore file
  5. Misc:

    • Currently ships the src folder with the conversion tools
    • May fetch the conversion scripts from the llama.cpp GitHub repository in a future release
    • No console window popup
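
The per-adapter scaling factors above apply to the standard low-rank merge, W' = W + scale * (B x A). A dependency-free sketch of that arithmetic (pure-Python matrices, illustrative only; real tools operate on full model tensors):

```python
def matmul(B, A):
    """Multiply B (n x r) by A (r x m) using plain nested lists."""
    rows, inner, cols = len(B), len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(W, B, A, scale=1.0):
    """Merge a low-rank LoRA update into a base weight matrix.

    W' = W + scale * (B @ A), where the rank of the update is len(A).
    """
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]
```

This is why a merged model needs no adapter files at inference time: the scaled update is folded directly into the base weights.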

Full Changelog: v1.3.1...v1.4.0

v1.4.0 (prerel2)

04 Aug 22:32
Pre-release

AutoGGUF v1.4.0 prerelease 2

Changelog:

  • Code organization
  • Add build script
  • Add favicon
  • French and Simplified Chinese support for LoRA and "Refresh Models" strings
  • .gitignore file
  • Larger window size

Full Changelog: v1.4.0-beta...v1.4.0-beta2