Help #8
init.mp4
What are your parameters? Judging by the name "init.mp4", I think you may just be getting the initialised strokes rather than the optimised sketches.
I believe the parameters are correct. The main issue is that the program terminates before optimization completes. It is intended to run for 2000 epochs, with an MP4 file output every 100 epochs.
Yeah, I see. Do you know any solutions to this issue? I tried debugging it but without success.
Logically speaking, such a thing shouldn't happen, and I haven't encountered this problem before either. You can set breakpoints to check why the process exits after initialization and at which step it exits.
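For reference, a minimal sketch of that breakpoint approach (the function names below are placeholders, not the repo's actual code): pause execution right after initialisation, then single-step to see which call causes the early exit.

```python
import pdb

def init_strokes():
    # placeholder for the stroke initialisation step
    print("initialised strokes")

def optimise_sketch():
    # placeholder for the 2000-epoch optimisation loop
    print("optimising...")

def main():
    init_strokes()
    pdb.set_trace()   # execution pauses here; step with `n` / `s` to find the exit point
    optimise_sketch()

if __name__ == "__main__":
    main()
```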
Thanks for the advice. I've also tried CLIPasso with a Docker install, which worked fine. Could my GPU having too little VRAM be part of the problem?
Perhaps. You can try decreasing the number of strokes; it may help with low GPU resources.
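As a rough illustration of why fewer strokes eases GPU memory pressure (a standalone sketch, not the repo's code, with hypothetical sizes): the per-frame stroke parameters the optimiser holds, and the rasterisation work that follows them, scale with the stroke count.

```python
import torch

num_frames = 20               # hypothetical clip length
ctrl_points_per_stroke = 4    # Bezier control points per stroke (assumed)

for num_strokes in (48, 32, 16):
    # per-frame control-point coordinates the optimiser must keep (plus gradients)
    params = torch.zeros(num_frames, num_strokes, ctrl_points_per_stroke, 2,
                         requires_grad=True)
    print(f"{num_strokes} strokes -> {params.numel()} learnable coordinates")
```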
Quick question: what exact Python version are you running this on, and is this on Linux or Windows?
Python 3.7.5; Linux.
Ubuntu?
Ubuntu 20.04, CUDA 11.3, torch 1.12.1.
Thanks.
Hi, me again. I lowered the values and now it seems to be working. Could you tell me if it's even worth rendering with these bad parameters?
I believe the main parameters influencing time are the number of strokes, number of frames, and number of iterations. However, the 3-hour runtime might be due to the capability of your GPU, as the requirements of these parameters are relatively low. Unfortunately, I cannot help further since I haven't fully optimized the code's efficiency. The consist_param and frames_param mainly influence the stability or flexibility of the video, while clip_RN_layer_weights and clip_fc_loss_weight influence the semantics. I advise you to only change consist_param, frames_param, or width, and to use more strokes (16, 32, or 48) and iterations (e.g. 401).
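As a rough summary of that advice, a hypothetical settings sketch (the key names follow the comment above; the actual config keys, flags, and defaults in the repo may differ, and every value here is a placeholder):

```python
settings = {
    "num_strokes": 32,              # 16 / 32 / 48 suggested; fewer strokes eases VRAM pressure
    "width": 1.5,                   # stroke width
    "num_iter": 401,                # beyond ~800 iterations gives little extra benefit
    "consist_param": 1.0,           # temporal consistency weight (stability of the video)
    "frames_param": 1.0,            # per-frame weight (flexibility of the video)
    # semantic terms; the advice above is to leave these at their defaults
    "clip_RN_layer_weights": None,
    "clip_fc_loss_weight": None,
}
print(settings)
```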
Thanks for all the help, currently renting an RTX 3090 to experiment with this <3
Has the program output the sketch videos yet? Is this feedback related to training the neural atlas or optimizing the sketch video? |
It did not output any sketch videos. I tried running this on my own dataset. Running process_dataset and operate_atlas worked without any errors, but this happened when running operate_clipavideo: it finished the first 1200 iterations and then errored during the last 500 iterations.
I haven't tried high iteration counts like 1200, as that might cause some problems, such as exceeding the default maximum iterations. However, I think more than 800 iterations will not make much difference. Some artifacts and poor results typically stem from the capabilities of the neural atlas and can be reduced by adjusting parameters like the number and width of strokes.
Hi,
I know this is not really an issue, but could you give me an example command for running the soapbox example? Whenever I run it, I don't get results that look like the one in the GitHub example.