I tried downloading all of the seth checkpoints and running the example in samples (using the inference.sh script), which gave me the following result:
final_output_orig.mp4
As you can see, the face is significantly different from what it should be, with a change in identity.
I also tried using a different image of Seth as the body and face reference:
Doing so gave even worse results:
final_output.mp4
His body does not resemble the reference.
The body keeps changing throughout the video (e.g., the color and texture of the shirt and tie keep changing).
Near the end, there is a frame where he suddenly has glasses.
Is there anything specific I should be aware of when running this in order to reproduce the results from the paper?
From my understanding, the reference image should influence the generated appearance. Am I correct, or does it serve a different purpose?
The pre-trained model produces these results; you should instead use the inference weights fine-tuned on the Seth data from this link. I hope I haven't mixed up the links during the multiple updates.
Thank you for the quick response.
Indeed, I had the weights mixed up; using the correct ones works for the example.
However, when I run it with several different body and head reference images (as shown in the previous message), I always get the exact same video; even the background doesn't change at all. (I delete the entire output directory between runs to make sure it doesn't accidentally reuse the previous run's results.)
Make-Your-Anchor is a personalized solution, so new appearances require collecting specific individual data for fine-tuning. You can reference the preprocess and fine-tuning procedures.