
Restore intended k-quants quantization mixes for MoE models #4872

Merged — 3 commits, Jan 11, 2024

Commits on Jan 11, 2024

  1. a38378d
  2. 6e60a5c — Update Q2_K_S values in the quantize tool

     Still using LLaMA-v1 PPL values in the quant description today does not
     make much sense. But let's leave this update for another PR.

     Kawrakow committed Jan 11, 2024
  3. 31fb4d8