
Generator subprocess #96
Merged 20 commits into carson-katri:main on Sep 25, 2022

Conversation

@NullSenseStudio
Collaborator

A replacement for #81 that keeps the generator within a subprocess, so all platforms should be able to release its RAM. This should fully fix #42, and it also seems to have accidentally fixed the console spam in #35.

It also allows canceling in the middle of generating without needing to restart Blender.

It still needs work to get progress and show steps working, so it's not quite ready to merge. I'd like you to give it a test before I bother with the rest.
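
A minimal sketch of the general idea, assuming a hypothetical generator_worker.py script and a line-based JSON protocol (the actual generator_process.py in this PR is more involved):

```python
import json
import subprocess
import sys

class GeneratorProcess:
    def __init__(self):
        # All model weights live inside this child process, so killing it
        # hands the RAM/VRAM straight back to the OS on every platform.
        self.process = subprocess.Popen(
            [sys.executable, "generator_worker.py"],  # hypothetical worker script
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
        )

    def generate(self, prompt, steps=25):
        # One JSON request per line in, one JSON reply per line out.
        request = json.dumps({"prompt": prompt, "steps": steps}) + "\n"
        self.process.stdin.write(request.encode())
        self.process.stdin.flush()
        return json.loads(self.process.stdout.readline())

    def cancel(self):
        # Cancelling only kills the child; Blender itself keeps running.
        self.process.kill()
        self.process.wait()
```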

@carson-katri
Owner

This looks pretty good, I’ll have to give it a try. I like that it reuses the same process. Model loading time was something I was concerned about with an implementation like this.

@NullSenseStudio
Collaborator Author

The arguments are much cleaner now. Have you had time to test yet?

@carson-katri
Owner

Not just yet. I started working on removing the need to install dependencies, but I'll give it a try soon.

@carson-katri
Owner

Have you tested inpainting and init images yet?

@NullSenseStudio
Collaborator Author

Just tested, they both seem to work like normal.

@carson-katri
Owner

carson-katri commented Sep 24, 2022

Looks like you need to update panel.py to use the new CFG Scale property name.

@carson-katri
Owner

I'm not seeing the progress bar update when generating. But other than that, it's working great. And when I killed the process, all the memory was freed up correctly!

@NullSenseStudio
Collaborator Author

I was waiting for you to give it a try to make sure it works on Macs before implementing the rest. It's been difficult for me to get it this far. I'll go ahead and close #81 since it isn't needed anymore.

@carson-katri
Owner

Because it's a subprocess now, you can probably remove all of the asyncio code (async_loop.py and users of it).

@NullSenseStudio
Collaborator Author

NullSenseStudio commented Sep 24, 2022

Blender is freezing while generating without async. It still generates, but it won't be able to show progress like this.
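
One way to avoid that freeze is to never block Blender's main thread on the pipe: read the subprocess output on a background thread and hand messages to the UI through a queue. A rough sketch under those assumptions (the names are illustrative, not the add-on's actual API):

```python
import queue
import threading

progress_queue = queue.Queue()

def _read_messages(pipe):
    # Runs on a background thread, so the blocking readline() never stalls the UI.
    for line in iter(pipe.readline, b""):
        progress_queue.put(line)  # e.g. a step number or preview image payload

def start_reader(process):
    thread = threading.Thread(target=_read_messages, args=(process.stdout,), daemon=True)
    thread.start()
    return thread

def poll_progress():
    # Called from the UI side (a timer or modal operator); never blocks.
    try:
        return progress_queue.get_nowait()
    except queue.Empty:
        return None
```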

@NullSenseStudio
Collaborator Author

NullSenseStudio commented Sep 24, 2022

Progress and show steps are working again, and I added bonus information when starting the generator. Blender feels much more responsive while generating now.

(screen recording at 10x speed)

@NullSenseStudio
Collaborator Author

I'm going to add some error checking, and then I think this PR will be complete.

carson-katri added the enhancement (New feature or request) label on Sep 24, 2022
@NullSenseStudio
Collaborator Author

Errors within the subprocess will now be shown in a prompt. I've added a special case for low RAM that gives a minimal message about that issue; any other errors will show the full stack trace and be dumped to the Blender process's stderr. I also set a minimum resolution limit so that can't cause an error.

It got a little awkward getting the prompt to show without Blender crashing due to the multithreading. I'll have to try again to convert the generate operator into a modal operator in a new PR eventually.
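
Roughly the shape of that error handling, as a sketch of what runs inside the subprocess (the low-RAM check here is simplified to MemoryError, and run_request/generate are invented names, not the real code):

```python
import sys
import traceback

def run_request(request, generate):
    # Wrap one generation request so failures become messages the add-on can
    # show in a prompt instead of the child process dying silently.
    try:
        return {"type": "result", "payload": generate(**request)}
    except MemoryError:
        # Low RAM gets a short, readable message instead of a full trace.
        return {"type": "error", "message": "Not enough memory to generate at this resolution."}
    except Exception:
        # Everything else: full stack trace, also printed to stderr
        # (inherited from the Blender process, so it lands in its console).
        trace = traceback.format_exc()
        print(trace, file=sys.stderr)
        return {"type": "error", "message": trace}
```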

@carson-katri
Owner

Is it possible to use self.report({"ERROR"}, "...")?

@NullSenseStudio
Collaborator Author

I tried, but had no luck with that. If I can get it converted to a modal operator, it should work then.

@NullSenseStudio
Collaborator Author

Alright, change of plans on taking a break and saving the modal conversion for another PR: I got that done now, and self.report({"ERROR"}, "...") works flawlessly with it. I removed all the async code, but it still uses a background thread for handling IPC. I also found another low-RAM case to simplify the error message for.
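
A rough sketch of the modal-operator shape being described, with invented names: it polls a queue filled by the background IPC thread, so self.report runs on Blender's main thread where it is safe.

```python
import queue

import bpy

# Filled by the background IPC reader thread; a stand-in for the real plumbing.
messages = queue.Queue()

class GenerateSketchOperator(bpy.types.Operator):
    bl_idname = "object.generate_sketch"
    bl_label = "Generate (sketch)"

    def execute(self, context):
        # A timer keeps modal() firing so the UI stays responsive while we wait.
        self._timer = context.window_manager.event_timer_add(0.1, window=context.window)
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def modal(self, context, event):
        if event.type != 'TIMER':
            return {'PASS_THROUGH'}
        try:
            message = messages.get_nowait()
        except queue.Empty:
            return {'RUNNING_MODAL'}
        context.window_manager.event_timer_remove(self._timer)
        if message.get("type") == "error":
            # modal() runs on Blender's main thread, so self.report is safe here.
            self.report({'ERROR'}, message["message"])
            return {'CANCELLED'}
        return {'FINISHED'}

def register():
    bpy.utils.register_class(GenerateSketchOperator)
```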

@carson-katri
Owner

Awesome! Is it ready for review?

@NullSenseStudio
Collaborator Author

Yes, I believe these are the last changes that were needed.

from enum import IntEnum as Lawsuit

# IPC message types from subprocess
class Action(Lawsuit): # can't help myself
@carson-katri
Owner

😂 love it

carson-katri merged commit 53b4a58 into carson-katri:main on Sep 25, 2022
NullSenseStudio deleted the generator-subprocess branch on September 25, 2022
JasonHoku pushed a commit to JasonHoku/dream-textures that referenced this pull request Dec 20, 2022
* Moved generator into its own subprocess

* cleanup

* further cleanup

* minimized arguments

* forgot cfg_scale in panel.py

* inf seed error fix

* shuffle generator_process.py imports

Co-authored-by: Carson Katri <[email protected]>

* modify subprocess sys.path

Co-authored-by: Carson Katri <[email protected]>

* remove __init__.py sys.path modification

* reimplement progress and show steps

* hopefully final tweaks

* get_image() was refactored

* hopefully final tweaks 3

* update progress when show_steps is false

* error prompt

* removed accidental import

* convert to modal operator

* forgot find() returns index and not bool

* refactoring and minor changes

Co-authored-by: Carson Katri <[email protected]>
Labels
enhancement New feature or request