This script is made for the project https://github.com/AUTOMATIC1111/stable-diffusion-webui.
It uses img2img to generate pictures one after another and joins them into a video.
Example outputs (video attachments):
- 08115-93506836-deep.space.with.planets.mp4
- video.mp4
- 08007-2052885894-human.faces.with.pale.skin.very.beautiful.fine.features.and.detailed.mp4
Copy video.py into the stable-diffusion-webui/scripts folder.
- create a picture or upload one (it will be used as the first frame)
- write a prompt
- write an end prompt
- select the video settings
- run
1.) End Prompt Trigger lets you define at how many percent (0-100) of the video the end prompt is added to the original prompt (see the first sketch after this list).
2.) Zoom Rotate: when zoom is activated, you can also rotate the image per rendered frame. Values from -3.6° to +3.6° are accepted (a sanity limit, otherwise you get dark corners). A transform sketch follows this list.
3.) TranslateXY shifts the image in the X and Y directions; check the boxes if you want to go in the opposite direction. If tiling is enabled, the opposite edge is copied onto the end of the scrolling direction; if not, the vacated strip is filled with colour-palette-preserving noise, which works but is kind of hacky. NumPy expert anyone? (It would be good to keep the colour palette intact.) See the last sketch after this list.
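
For illustration, here is a minimal Python sketch of how an End Prompt Trigger percentage could decide when the end prompt gets appended to the base prompt. The names `build_prompt` and `trigger_percent` are made up for this example and are not the script's actual code.

```python
def build_prompt(base_prompt: str, end_prompt: str, frame: int,
                 total_frames: int, trigger_percent: float) -> str:
    """Return the prompt for a given frame.

    Once the animation has progressed past `trigger_percent` (0-100) of the
    total frame count, the end prompt is appended to the base prompt.
    """
    progress = 100.0 * frame / max(total_frames - 1, 1)
    if progress >= trigger_percent:
        return f"{base_prompt}, {end_prompt}"
    return base_prompt

# With 100 frames and a trigger of 50%, frames 0-49 use only the base prompt
# and frames 50-99 additionally carry the end prompt.
for frame in (0, 49, 50, 99):
    print(frame, build_prompt("deep space with planets", "black hole", frame, 100, 50))
```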
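Next, a rough sketch of a per-frame zoom-and-rotate step using Pillow. `zoom_rotate`, `zoom_factor`, and `rotate_deg` are illustrative assumptions, not the script's actual API.

```python
from PIL import Image

def zoom_rotate(frame: Image.Image, zoom_factor: float = 1.02,
                rotate_deg: float = 1.0) -> Image.Image:
    """Zoom into the centre and rotate slightly before the next img2img pass."""
    w, h = frame.size
    # expand=False keeps the canvas size constant; the corners exposed by the
    # rotation are filled with black, which is why the per-frame angle is kept
    # small (roughly -3.6 to +3.6 degrees) and a zoom crop follows.
    rotated = frame.rotate(rotate_deg, resample=Image.BICUBIC)
    # Zoom by cropping a slightly smaller centre region and scaling it back up.
    crop_w, crop_h = int(w / zoom_factor), int(h / zoom_factor)
    left, top = (w - crop_w) // 2, (h - crop_h) // 2
    cropped = rotated.crop((left, top, left + crop_w, top + crop_h))
    return cropped.resize((w, h), resample=Image.LANCZOS)
```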
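Finally, a rough NumPy sketch of the two TranslateXY behaviours described above: tiled wrap-around via `np.roll`, and an edge-fill-plus-noise fallback when tiling is off. The function and parameter names are illustrative guesses, not the script's implementation.

```python
import numpy as np

def translate(frame: np.ndarray, dx: int, dy: int, tiled: bool,
              noise_strength: float = 4.0) -> np.ndarray:
    """Shift an HxWx3 uint8 image by (dx, dy) pixels."""
    shifted = np.roll(frame, shift=(dy, dx), axis=(0, 1))
    if tiled:
        # Tiled: content pushed off one edge simply re-enters on the opposite edge.
        return shifted

    out = shifted.astype(np.float32)
    rng = np.random.default_rng()

    def fill(region: np.ndarray, edge: np.ndarray) -> None:
        # Replace the wrapped-in strip with the nearest valid edge pixels plus
        # mild noise, keeping the existing colour palette instead of black bars.
        region[...] = edge + rng.normal(0.0, noise_strength, region.shape)

    if dx > 0:
        fill(out[:, :dx], out[:, dx:dx + 1])
    elif dx < 0:
        fill(out[:, dx:], out[:, dx - 1:dx])
    if dy > 0:
        fill(out[:dy, :], out[dy:dy + 1, :])
    elif dy < 0:
        fill(out[dy:, :], out[dy - 1:dy, :])
    return np.clip(out, 0, 255).astype(np.uint8)
```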