refactor: GGUF + GGML Loaders with ModelKind (#356)
* chore: Communicate actual difference

  These methods are very verbose, but really only differ by two params that differentiate LoRA from X-LoRA.

* refactor: Introduce `ModelConfig` + `from_gguf` proxy

  `ModelConfig` groups the common properties used across the `from_gguf()` methods. This better communicates the differences across implementations of `from_gguf()`. The quantized X-LoRA models' `from_gguf()` now have a props-to-params forwarder as a workaround to minimize breakage elsewhere.

* refactor: Add `from_ggml` proxy

  Very similar to `from_gguf`, except only the `quantized_llama.rs` X-LoRA model supports this. No `Device` params, and slightly different `File` params from the GGUF type.

* chore: DRY `ggml.rs` + `gguf.rs` common adapter config logic

  All this extra boilerplate can finally be shifted into the model config `Adapter` struct, self-contained in a single method. This required adjusting ownership a little to satisfy the compiler. The original `from_gguf()` and `from_ggml()` methods are unaffected; they still receive the expected params by reference.

* refactor(breaking): Leverage traits for `from_gguf()` / `from_ggml()`

  This introduces a slight breaking change: using these `from_gguf()` / `from_ggml()` methods now requires importing the trait into scope. The methods drop the `pub` prefix as they inherit `pub` from the trait requirement itself. The purpose of the trait is to avoid each model duplicating the structs-to-params mapping helper method; that logic can instead be centralized.

* chore: DRY - Dedupe prop mapping methods

  These no longer need to be maintained as copies within the supported model modules. They now leverage the common shared traits and take an annotated type parameter to handle. The syntax for usage is presently a little more verbose than desired.

* chore: Contextual commit - Alternative prop mapping approaches

  For reference, these alternatives could be considered.
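The grouped-config-plus-trait pattern above can be sketched as follows. This is a minimal illustration with assumed names (`ModelConfig`, `FromGguf`, `QuantizedLlama` are placeholders, not the actual mistral.rs definitions): a struct groups the parameters shared by every `from_gguf()` implementation, and a trait centralizes the mapping so each model no longer duplicates it.

```rust
/// Hypothetical grouping of the common properties shared across
/// `from_gguf()` implementations (field names are illustrative).
pub struct ModelConfig {
    pub quantized: bool,
    pub gqa: usize,
}

/// Centralized trait: implementors receive the grouped config
/// instead of a long, mostly-duplicated parameter list.
pub trait FromGguf: Sized {
    fn from_gguf(config: &ModelConfig) -> Result<Self, String>;
}

/// Stand-in for a quantized model type.
pub struct QuantizedLlama {
    pub gqa: usize,
}

impl FromGguf for QuantizedLlama {
    fn from_gguf(config: &ModelConfig) -> Result<Self, String> {
        // Validation that would otherwise be copy-pasted per model.
        if !config.quantized {
            return Err("GGUF models must be quantized".to_string());
        }
        Ok(Self { gqa: config.gqa })
    }
}
```

As the breaking-change note says, callers must bring the trait into scope (`use ...::FromGguf;`) before `QuantizedLlama::from_gguf(&cfg)` resolves.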
* refactor: Add equivalent support for quant models without adapters

  - Impl traits for the non-adapter quant models.
  - Since adapter variants only append parameters, and that is now a distinct struct, `model_config` can be defined earlier and a helper `with_adapter()` can convert it to the adapter type variant.

* chore: Fix typo

* refactor: Collapse Lora + XLora arms

  With a rebase to adopt the new methods for `ModelKind`, the shared logic can be hoisted out of the match arms. X-LoRA-specific variables were moved into `Adapter::try_new()` (`model_config.rs`), as they can share the same `paths` parameter by adding a separate bool to toggle X-LoRA usage.

  By hoisting the `model_config` variable out of the match arm, the type would change when calling `with_adapter()`. To prevent that, the separate `Adapter*` tuple structs have been dropped in favor of `ModelParams<Q>`, which uses a generic `Q` for the quantization type (trait marker) and a separate optional adapter that can be updated.

  `MapParamsToModel<T>` is also no longer compatible as an approach, since the trait bound is ambiguous: there is no distinct adapter type (eg: `for AdapterGGUF`) to impl upon, and unique method names would be required to avoid conflicts on the same type.

  - `try_into_model()` + `try_into_model_with_adapter()` for the separate `Q` types (GGUF/GGML) with/without adapter.
  - Due to the new struct, there is a slight change to destructuring. The `impl` also bundles both methods now for each `Q` variant. Order was adjusted to basic followed by adapter methods for each `Q` variant, instead of both basic methods, then both adapter variations afterwards.
  - Likewise, the `ggml.rs` and `gguf.rs` methods now rely on a `TryFrom` trait impl instead of the `MapParamsToModel<T>` trait.

* refactor: Wrap `ModelParams` into enum for distinct adapter type

  This approach introduces another generic parameter, `MaybeAdapter`, to emulate an `Option<Adapter>` that can be used as a type to `impl` upon.
  To continue the unified type usage with an adapter variant in the `ggml.rs` / `gguf.rs` pipelines, this must now leverage an enum for the two variants.

  - Slightly more complexity is added as a result.
  - The adapter `try_into_model()` methods no longer need to check for `Some(Adapter)` to unwrap, since that is now guaranteed by the type.
  - However, similar logic has bubbled up to the `TryFrom` for all impls due to the enum wrapper, so this approach may not be much better beyond broader consistency. Likewise with the `with_adapter()` method.

  To minimize boilerplate when unwrapping the enum in the `TryFrom` methods, `Variantly` has been introduced for its `expect_variant()` method. As all four types are distinct, the `_with_adapter()` method can also be named `try_into_model()`, due to separate impls for the new generic param `MaybeAdapter`.

* chore: Minor improvements

  Since the type constraint for the `try_into_model()` methods is bound as the return type, it can be inferred without any hint in the `TryFrom`; there is no need to annotate with `Self`.

  Use `derive_more` for terser construction of the `Config` struct for `ModelParams` variants.

* refactor: Use `buildstructor` for builder API

  This is an alternative approach to building the config. Construction of the config from the param inputs is now handled at the end, not dependent upon separate `new()` + optional `with_adapter()` calls on a mutable variable.

  Unfortunately, the `buildstructor` and `typed-builder` APIs don't allow for easy flexibility of builder methods in different scopes (_due to moves_). `derive-builder` can do this, but not with the more complex types, due to the lack of a `Copy` / `Clone`. Thus the `None` option is required as input regardless of whether an adapter is needed.

* chore: Wrap `model_config` assignment into expression

  This better communicates that the block is only relevant to assigning this value, while the two `is_lora` / `is_xlora` variables are hoisted above due to later usage as metadata inputs.
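The enum-wrapping idea above can be sketched roughly as below. All names are illustrative, not the actual `model_config.rs` definitions: the params enum makes the with-adapter case a distinct variant (emulating `Option<Adapter>` at the type level), `with_adapter()` converts between variants, and a hand-written `expect_adapted()` stands in for the `expect_variant()` method that `Variantly` derives.

```rust
/// Illustrative adapter payload (real struct carries ordering, paths, etc.).
pub struct Adapter {
    pub is_xlora: bool,
}

/// Illustrative grouped config (stand-in for the real `Config`).
pub struct Config {
    pub gqa: usize,
}

/// Params wrapped in an enum so the adapter case is a distinct variant.
pub enum ModelParams {
    /// Plain quantized model, no adapter attached.
    Quantized(Config),
    /// Quantized model with a LoRA / X-LoRA adapter attached.
    Adapted(Config, Adapter),
}

impl ModelParams {
    /// Attach an adapter, converting whichever variant is currently held.
    pub fn with_adapter(self, adapter: Adapter) -> Self {
        match self {
            ModelParams::Quantized(c) | ModelParams::Adapted(c, _) => {
                ModelParams::Adapted(c, adapter)
            }
        }
    }

    /// Hand-rolled stand-in for `Variantly`'s `expect_variant()`: unwrap
    /// the adapted variant, panicking if the wrong variant is held.
    pub fn expect_adapted(self) -> (Config, Adapter) {
        match self {
            ModelParams::Adapted(c, a) => (c, a),
            ModelParams::Quantized(_) => panic!("expected the adapted variant"),
        }
    }
}
```

As the commit notes, the unwrap inside conversion impls is now guaranteed to succeed for the adapted variant, at the cost of the enum plumbing bubbling up into `TryFrom` and `with_adapter()`.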
* fix: Drop `is_lora` from `GeneralMetadata`

  `pipeline/gguf.rs` + `pipeline/ggml.rs` now ensure that `activate_adapters()` works for X-LoRA too. This is assumed to be a bugfix, since the two adapter kinds share the `XLoraLlama` model, code everywhere else checks `is_xlora`, and no other usage of `is_lora` seems to exist.

  - To avoid further ambiguity, the condition is better communicated as `has_adapter`.
  - It is unclear if all usage of `is_xlora` is specific to X-LoRA or also intended to be applicable to LoRA, since the `XLora*` models do impl `is_xlora() -> true` (except Gemma, which is a potential bug).

  `pipeline/normal.rs` handled its own `is_xlora` bool differently than the `gguf.rs` / `ggml.rs` loaders.

  - It relied upon `model.is_xlora() && !is_lora`, but we already assume X-LoRA via prior matching on `ModelKind`, which now provides this information via its own `is_x_lora()` method.
  - Only `xlora_models/gemma.rs` would behave differently with this change, but Gemma might have been meant to return `true`?

* chore: Match on adapter

  Matches are only for the `Quantized` or `AdapterQuantized` variants, with no difference in handling by the `AdapterKind` variant used.

  Additionally restores the `GGUF X-LoRA` bail formatted string. For consistency, the non-adapter branch also appends `for GGUF`, and the architecture in the lora branch now comes before the `ModelKind`.

* chore: Support params via tuple `into()` + add note of possible bug

* breaking: Replace `ModelKind` with new version

  A better approach, for the most part, at encoding the kind info.

* lint(clippy): Appease the lint gods

  `model_config.rs` GGUF and GGML structs are prefixed with `Params`. Two exceptions were added, as the concerns don't seem to warrant change:

  - `#[allow(clippy::borrowed_box)]`
  - `#[allow(clippy::large_enum_variant)]`

* chore: Convert from `CRLF` to `LF`

  This file has no other change in the commit beyond line-ending conversion. It had mistakenly used CRLF since creation.
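A minimal sketch of the kind-encoding idea behind the new `ModelKind`, with assumed variant and method names (the actual enum in the crate carries more detail): quantization and adapter information live in one enum, so scattered `is_lora` / `is_xlora` booleans become queries like `is_x_lora()` and `has_adapter()` on the kind itself.

```rust
/// Which adapter flavour is attached (illustrative).
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum AdapterKind {
    Lora,
    XLora,
}

/// Hypothetical `ModelKind` encoding quantization + adapter state together.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ModelKind {
    Normal,
    Quantized,
    Adapter(AdapterKind),
    AdapterQuantized(AdapterKind),
}

impl ModelKind {
    /// True only for X-LoRA, quantized or not.
    pub fn is_x_lora(&self) -> bool {
        matches!(
            self,
            ModelKind::Adapter(AdapterKind::XLora)
                | ModelKind::AdapterQuantized(AdapterKind::XLora)
        )
    }

    /// True for any adapter kind; the clearer replacement for `is_lora`.
    pub fn has_adapter(&self) -> bool {
        matches!(self, ModelKind::Adapter(_) | ModelKind::AdapterQuantized(_))
    }
}
```

With this shape, the loader match arms described above only need to distinguish `Quantized` from `AdapterQuantized(_)`, since handling does not differ by `AdapterKind`.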
* lint(rustfmt): Appease the lint gods

* fix: Restore `is_lora` condition

  `GeneralMetadata` now stores the `ModelKind` for this type of information. The `activate_adapters()` error message was revised. The `mod.rs` version includes a contextual comment about X-LoRA not being equivalent.

* fix: Gemma X-LoRA model `is_xlora()` should return `true`

  Most likely redundant now that `GeneralMetadata` has a `ModelKind` to query, but fixing here until all queries are replaced. Additionally updates the `model_config.rs` note to clarify it is not a bug.
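The restored condition can be sketched as below. This is a hedged, illustrative reading of the final state, with assumed names and a simplified `ModelKind`: the metadata stores the kind, and `activate_adapters()` is gated on plain LoRA, rejecting X-LoRA (which, per the restored condition, is not equivalent) with a revised error message.

```rust
/// Simplified stand-in for the real `ModelKind` (illustrative variants).
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ModelKind {
    Quantized,
    QuantizedLora,
    QuantizedXLora,
}

/// Metadata now carries the kind instead of a bare `is_lora` bool.
pub struct GeneralMetadata {
    pub kind: ModelKind,
}

impl GeneralMetadata {
    /// Runtime adapter activation is only meaningful for plain LoRA here;
    /// X-LoRA is rejected, mirroring the restored `is_lora` condition.
    pub fn activate_adapters(&self) -> Result<(), String> {
        match self.kind {
            ModelKind::QuantizedLora => Ok(()),
            _ => Err("adapter activation is only supported for LoRA models".to_string()),
        }
    }
}
```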