
Add support for adapter loading in mllama #669

Merged
merged 2 commits into main from support-mllama on Nov 12, 2024

Conversation

ajtejankar
Contributor

This PR adds support for adapter loading in VLMs, specifically mllama models. This is more complex than we initially anticipated, because a VLM is, under the hood, a composite model with multiple trunks, which requires separating the LoRA weights and loading them independently for each trunk. Further, we currently don't have LoRA adapters for certain layers in one of the trunks (the cross-attention layers), which requires careful handling. Finally, the linear layer classes currently used don't support adapter weights (currently investigating).
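
A minimal sketch of the kind of per-trunk split described above, assuming adapter keys are prefixed with hypothetical trunk names like language_model. and vision_model. and that cross-attention modules carry no LoRA weights. None of these names are taken from the actual PR; they only illustrate the idea of routing weights to the right trunk.

```python
import torch

def split_lora_weights_by_trunk(lora_weights):
    """Group a flat LoRA state dict into per-trunk dicts keyed by module prefix."""
    trunks = {"language_model": {}, "vision_model": {}}
    leftover = {}
    for name, weight in lora_weights.items():
        if "cross_attn" in name:
            # Cross-attention layers have no LoRA weights in this scenario,
            # so skip them instead of failing on an unexpected key.
            continue
        for prefix, trunk_weights in trunks.items():
            if name.startswith(prefix + "."):
                # Strip the trunk prefix so each trunk sees local module names.
                trunk_weights[name[len(prefix) + 1:]] = weight
                break
        else:
            leftover[name] = weight
    return trunks, leftover

# Example: two fake LoRA tensors, one per trunk, plus a cross-attention entry
# that should be ignored.
fake_adapter = {
    "language_model.layers.0.self_attn.q_proj.lora_A.weight": torch.randn(8, 4096),
    "vision_model.encoder.layers.0.mlp.fc1.lora_A.weight": torch.randn(8, 1280),
    "language_model.layers.3.cross_attn.q_proj.lora_A.weight": torch.randn(8, 4096),
}
trunks, leftover = split_lora_weights_by_trunk(fake_adapter)
```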

@ajtejankar
Contributor Author

The PR is ready from my side. Loading a randomized adapter results in garbage output, while the model answers correctly without it. Gradually increasing the noise in the randomized adapter also gradually degrades the quality of the model's output, again indicating that the adapters are being loaded correctly.
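
A minimal sketch of this kind of sanity check: build a LoRA adapter from pure noise and ramp its scale, so that correct loading shows up as a gradual degradation of the model's answers. The shapes, rank, and scale values here are illustrative assumptions, not the exact setup used for this PR.

```python
import torch

def make_noise_lora(target_shape, rank=8, scale=1.0, device="cpu"):
    """Create random LoRA A/B factors; the injected delta B @ A scales linearly with `scale`."""
    out_features, in_features = target_shape
    lora_a = torch.randn(rank, in_features, device=device)
    lora_b = torch.randn(out_features, rank, device=device) * scale
    return lora_a, lora_b

# With scale=0.0 the adapter is a no-op and outputs should match the base model;
# ramping the scale toward 1.0 should progressively degrade the output if the
# adapter weights are actually being applied to the right layers.
for scale in (0.0, 0.01, 0.1, 1.0):
    lora_a, lora_b = make_noise_lora((4096, 4096), rank=8, scale=scale)
```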

Contributor

@tgaddair tgaddair left a comment

LGTM!

@ajtejankar ajtejankar merged commit fdda45a into main Nov 12, 2024
1 check passed
@ajtejankar ajtejankar deleted the support-mllama branch November 12, 2024 20:23