
Different results from the paper #25

Open
royrs opened this issue Dec 30, 2024 · 3 comments
royrs commented Dec 30, 2024

I downloaded all of the Seth checkpoints and ran the example in samples (using the inference.sh script), which gave me the following result:

final_output_orig.mp4

As you can see, the generated face is significantly different from the reference, with a change in identity.

I also tried using a different image of Seth as the body and face reference:
seth

Doing so gave even worse results:

final_output.mp4
  • His body does not resemble the reference.
  • The body keeps changing throughout the video (e.g., the color and texture of the shirt and tie keep changing).
  • Near the end, there is a frame where he suddenly has glasses.

Is there anything specific I should be aware of when running this in order to reproduce the results from the paper?
From my understanding, the reference image should influence the generated appearance. Am I correct, or does it serve a different purpose?

bone-11 (Collaborator) commented Dec 30, 2024

The pre-trained model generates these results; you should use the inference weights fine-tuned on the Seth data in this link. I hope I haven't mixed up the links during the multiple updates.
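A minimal sanity check before launching inference, to make sure the fine-tuned weights (not the generic pre-trained ones) are what the run will pick up. The `checkpoints/seth_finetuned` path and `CKPT_DIR` variable here are assumptions for illustration, not the repo's actual layout:

```shell
# Hypothetical layout: confirm the fine-tuned checkpoint directory exists
# before running, so the script cannot silently use the wrong weights.
CKPT_DIR="checkpoints/seth_finetuned"   # assumed path; adjust to your download
if [ -d "$CKPT_DIR" ]; then
    echo "using fine-tuned weights from: $CKPT_DIR"
else
    echo "missing $CKPT_DIR; download the fine-tuned weights first" >&2
fi
```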

royrs (Author) commented Dec 30, 2024

Thank you for the quick response.
Indeed, I had the weights mixed up; using the correct ones works for the example.
However, I tried running it with several body and head reference images (as shown in the previous message), and I always get the exact same video; even the background doesn't change at all (I delete the entire output directory between runs to make sure the previous run's results aren't accidentally reused).

final_output.mp4
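For reference, this is the clean-run step I perform between attempts, so that no stale frames or videos can be picked up by the next run. The `outputs` path is an assumption; substitute whatever your inference config actually writes to:

```shell
# Remove the previous run's output directory entirely, then recreate it empty.
OUT_DIR="outputs"        # assumed path; match your inference config
rm -rf "$OUT_DIR"
mkdir -p "$OUT_DIR"
```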

bone-11 (Collaborator) commented Jan 3, 2025

Make-Your-Anchor is a personalized solution, so a new appearance requires collecting data for that specific individual and fine-tuning on it. You can refer to the preprocessing and fine-tuning procedures.
