My PC fails to run run.py because it runs out of GPU memory:
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 2.00 GiB total capacity; 1.00 GiB already allocated; 60.86 MiB free; 1.06 GiB reserved in total by PyTorch
So from my understanding, run.py needs another 64 MiB of GPU memory. My Nvidia graphics card has 6 GB, but 4 GB of that is shared memory and only 2 GB is dedicated; within that dedicated 2 GB, about 1 GB is already allocated and roughly 1 GB is reserved by PyTorch. I'm only about 4 MiB short. I tried reducing the shared memory, but that didn't get added to the dedicated memory.
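As a sanity check on those numbers, something like this should show what PyTorch itself sees on GPU 0 (my own snippet, not part of run.py):

import torch

# Quick check of what PyTorch reports for GPU 0 (not part of run.py).
props = torch.cuda.get_device_properties(0)
print(f"total:     {props.total_memory / 1024**2:.0f} MiB")
print(f"allocated: {torch.cuda.memory_allocated(0) / 1024**2:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved(0) / 1024**2:.0f} MiB")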
I googled and found people with similar issues, but they all suggested reducing the batch_size. I looked through run.py for where batch_size is handled, and it just says:
parser.add_argument('-batch_size', type=int, default=1, help='Batch size to use during optimization')
kwargs = vars(parser.parse_args())
dataloader = DataLoader(dataset, batch_size=kwargs["batch_size"])
for ref_im, ref_im_name in dataloader:
    if(kwargs["save_intermediate"]):
        padding = ceil(log10(100))
        for i in range(kwargs["batch_size"]):
            int_path_HR = Path(out_path / ref_im_name[i] / "HR")
            int_path_LR = Path(out_path / ref_im_name[i] / "LR")
            int_path_HR.mkdir(parents=True, exist_ok=True)
            int_path_LR.mkdir(parents=True, exist_ok=True)
        for j,(HR,LR) in enumerate(model(ref_im,**kwargs)):
            for i in range(kwargs["batch_size"]):
                toPIL(HR[i].cpu().detach().clamp(0, 1)).save(
                    int_path_HR / f"{ref_im_name[i]}_{j:0{padding}}.png")
                toPIL(LR[i].cpu().detach().clamp(0, 1)).save(
                    int_path_LR / f"{ref_im_name[i]}_{j:0{padding}}.png")
    else:
        #out_im = model(ref_im,**kwargs)
        for j,(HR,LR) in enumerate(model(ref_im,**kwargs)):
            for i in range(kwargs["batch_size"]):
                toPIL(HR[i].cpu().detach().clamp(0, 1)).save(
                    out_path / f"{ref_im_name[i]}.png")
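Just to make sure I'm reading that argparse part correctly, here is a minimal stand-alone sketch (toy dataset, not the real one from run.py); with the default of 1 the DataLoader already yields a single image per step:

import argparse
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the real dataset in run.py (just random "images").
dataset = TensorDataset(torch.randn(4, 3, 32, 32), torch.arange(4))

parser = argparse.ArgumentParser()
parser.add_argument('-batch_size', type=int, default=1,
                    help='Batch size to use during optimization')
kwargs = vars(parser.parse_args([]))  # [] -> falls back to the default of 1

dataloader = DataLoader(dataset, batch_size=kwargs["batch_size"])
for images, labels in dataloader:
    print(images.shape)  # torch.Size([1, 3, 32, 32]) -> one image per batch
    break

So it looks like -batch_size is already 1 by default.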
Is there any way to reduce the batch_size?