Larger RAM usage with the new config system #159
Comments
Hi @barikata1984 ! Can you try with a reduced sample-per-ray count?
I've also tracked all config updates here:
Hi @orperel, Thanks for your response. I reduced sample-per-ray from 512 to 16, halving the value iteratively, but the process still got killed. It looks like something happens when running

```diff
+ print("Instantiating dataset_transform")
  dataset_transform = instantiate(cfg.dataset_transform)  # SampleRays creates batches of rays from the dataset
+ print("Instantiating train_dataset")
  train_dataset = instantiate(cfg.dataset, transform=dataset_transform)  # A Multiview dataset
```

and

```diff
+ print("================= Flag 0 =================")
  instance = instantiate(config, **overriden_args)
+ print("================= Flag 1 =================")
```

(the `+` lines are debug prints I added). Do you have any other ideas to clear this issue?
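To localize where the memory blows up, the `instantiate` calls above could also be bracketed with resident-memory prints instead of plain flags. A minimal standard-library sketch, where the `rss_mb` helper, the commented-out `instantiate` call, and the stand-in allocation are all illustrative rather than wisp API (note `resource` is Unix-only, and `ru_maxrss` is reported in KB on Linux but bytes on macOS):

```python
import resource

def rss_mb() -> float:
    # Peak resident set size of this process; Linux reports ru_maxrss in KB.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

print(f"RSS before: {rss_mb():.1f} MB")
# dataset_transform = instantiate(cfg.dataset_transform)  # the suspect call
buffers = [bytearray(10**6) for _ in range(50)]  # stand-in ~50 MB allocation
print(f"RSS after:  {rss_mb():.1f} MB")
```

Because `ru_maxrss` is a peak value it only ever grows, so a large jump between the two prints points at the bracketed call.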
Hi @barikata1984, thanks for this bug report. I ran some memory profiling, and indeed the main branch uses upwards of 14GB of resident memory at peak, which really shouldn't be the case. I dug into it a bit and fixed some benign issues in #164. Now the resident memory, at least according to my profiling, is 8GB (a 6GB reduction). If you want further savings, I would pass in

Let me know if this works for you!
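For attributing memory to specific Python allocation sites (as opposed to the total resident memory discussed above), the standard-library `tracemalloc` module can report the top allocators. A sketch with a stand-in workload in place of the actual dataset loading:

```python
import tracemalloc

tracemalloc.start()
workload = [list(range(1_000)) for _ in range(1_000)]  # stand-in for dataset loading
snapshot = tracemalloc.take_snapshot()
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)  # top allocation sites by cumulative size
tracemalloc.stop()
```

Note that `tracemalloc` only sees allocations made through Python's allocator; memory held by native extensions (e.g. CUDA or C++ tensor buffers) shows up in resident memory but not here.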
Hi @tovacinni, thanks a lot for the solution! As you suggested,
Description
Hi,
I tried to run main_nerf.py on the main branch, but it suddenly stopped, printing a single word:

Killed

This is presumably due to a RAM shortage, according to Google. I checked the memory usage, and it hit its limit immediately before the app stopped. Do you have any idea how to deal with this issue?

I followed all the installation procedures, including requirements_app.txt. main_nerf.py on the stable branch works without any problems, so if the config system is the only major change between the main and stable branches, the issue should be caused by the new config system. I suppose you can reproduce the larger RAM usage in your environment.
I installed pyopengl_accelerate separately because a message saying the module was missing appeared when I first ran the stable main_nerf.py, but otherwise the conda env should be clean enough to run the wisp apps.
I know the easiest solution is adding RAM, but the stable config system works fine even with limited RAM. It would be great if I could also use the new one on the same machine, since it looks much cleaner.
Thanks in advance!
Machine spec
Reproduction steps
```
pip install pyopengl_accelerate
python app/nerf/main_nerf.py --dataset-path /path/to/lego/ --config app/nerf/configs/nerf_hash.yaml
```