
Any way to reduce Batch_Size in run.py? #76

Open
cchanyu opened this issue Mar 20, 2021 · 1 comment
cchanyu commented Mar 20, 2021

My PC fails to run run.py because of a lack of GPU memory:
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 2.00 GiB total capacity; 1.00 GiB already allocated; 60.86 MiB free; 1.06 GiB reserved in total by PyTorch)
So from my understanding, run.py needs another 64 MiB of GPU memory, but only 60.86 MiB is free, which leaves me about 3 MiB short. My Nvidia graphics card has 6 GB, but 4 GB of that is shared memory and only 2 GB is dedicated; within that dedicated 2 GB, 1 GB is already allocated and about 1 GB is reserved by PyTorch. I tried reducing the shared memory, but it doesn't get added to the dedicated memory.

I googled and found people with similar issues, but they all suggested reducing the batch_size. Looking through run.py, the relevant batch_size code just says:
parser.add_argument('-batch_size', type=int, default=1, help='Batch size to use during optimization')
kwargs = vars(parser.parse_args())
dataloader = DataLoader(dataset, batch_size=kwargs["batch_size"])

for ref_im, ref_im_name in dataloader:
    if(kwargs["save_intermediate"]):
        padding = ceil(log10(100))
        for i in range(kwargs["batch_size"]):
            int_path_HR = Path(out_path / ref_im_name[i] / "HR")
            int_path_LR = Path(out_path / ref_im_name[i] / "LR")
            int_path_HR.mkdir(parents=True, exist_ok=True)
            int_path_LR.mkdir(parents=True, exist_ok=True)
        for j, (HR, LR) in enumerate(model(ref_im, **kwargs)):
            for i in range(kwargs["batch_size"]):
                toPIL(HR[i].cpu().detach().clamp(0, 1)).save(
                    int_path_HR / f"{ref_im_name[i]}_{j:0{padding}}.png")
                toPIL(LR[i].cpu().detach().clamp(0, 1)).save(
                    int_path_LR / f"{ref_im_name[i]}_{j:0{padding}}.png")
    else:
        # out_im = model(ref_im, **kwargs)
        for j, (HR, LR) in enumerate(model(ref_im, **kwargs)):
            for i in range(kwargs["batch_size"]):
                toPIL(HR[i].cpu().detach().clamp(0, 1)).save(
                    out_path / f"{ref_im_name[i]}.png")
Is there any way to reduce the batch_size?
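
For anyone hitting the same wall: the numbers in that error message can be reproduced straight from PyTorch, which makes the dedicated-vs-shared distinction concrete. A minimal sketch, assuming the card is CUDA device 0; these are standard torch.cuda query calls, not code from run.py:

    import torch

    # These figures come from the dedicated GPU only; "shared" system memory
    # is not visible to CUDA, which is why shrinking it does not free anything up.
    props = torch.cuda.get_device_properties(0)
    print(f"total capacity: {props.total_memory / 2**20:.0f} MiB")
    print(f"allocated:      {torch.cuda.memory_allocated(0) / 2**20:.2f} MiB")
    print(f"reserved:       {torch.cuda.memory_reserved(0) / 2**20:.2f} MiB")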

@ComingPeopleHW

In the default configuration, batch_size is already 1, so I don't think there is a way to reduce it further.
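
You can confirm this from the argparse definition itself. A minimal sketch that replicates just the batch_size flag quoted from run.py above (the surrounding parser setup here is my own, not the project's):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-batch_size', type=int, default=1,
                        help='Batch size to use during optimization')

    # Parsing an empty argument list simulates running run.py with no flags:
    kwargs = vars(parser.parse_args([]))
    print(kwargs["batch_size"])  # prints 1 -- already the minimum

So the out-of-memory error is not caused by the batch size; it is already at its floor.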
