2023-07-13 10:53:32.451890: lr: 0.01
Exception in background worker 3:
[WinError 8] Not enough memory resources are available to process this command.
Traceback (most recent call last):
File "C:\nnformer\nnformer\run\run_training.py", line 214, in
main()
File "C:\nnformer\nnformer\run\run_training.py", line 194, in main
trainer.run_training()
File "C:\nnformer\nnformer\training\network_training\nnFormerTrainerV2.py", line 445, in run_training
ret = super().run_training()
File "C:\nnformer\nnformer\training\network_training\nnFormerTrainer.py", line 319, in run_training
super(nnFormerTrainer, self).run_training()
File "C:\nnformer\nnformer\training\network_training\network_trainer.py", line 443, in run_training
_ = self.tr_gen.next()
File "C:\Monai\envs\nnformer\lib\site-packages\batchgenerators\dataloading\multi_threaded_augmenter.py", line 181, in next
return self.__next__()
File "C:\Monai\envs\nnformer\lib\site-packages\batchgenerators\dataloading\multi_threaded_augmenter.py", line 205, in next
item = self.__get_next_item()
File "C:\Monai\envs\nnformer\lib\site-packages\batchgenerators\dataloading\multi_threaded_augmenter.py", line 189, in __get_next_item
raise RuntimeError("MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of "
RuntimeError: MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of your workers crashed. This is not the actual error message! Look further up your stdout to see what caused the error. Please also check whether your RAM was full
Thank you!!!
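The traceback shows the crash happening inside batchgenerators' MultiThreadedAugmenter, whose background workers each keep augmented batches queued in RAM. Below is a minimal, self-contained sketch (independent of nnFormer, with made-up patch sizes) of the two knobs that bound that memory, num_processes and num_cached_per_queue; when the system reports WinError 8, these are the usual values to lower.

```python
import numpy as np
from batchgenerators.dataloading.data_loader import SlimDataLoaderBase
from batchgenerators.dataloading.multi_threaded_augmenter import MultiThreadedAugmenter


class DummyLoader(SlimDataLoaderBase):
    """Yields random 3D patches roughly shaped like a training batch."""

    def generate_train_batch(self):
        # batch_size x channels x D x H x W, float32: about 8 MB per batch here
        data = np.random.rand(self.batch_size, 1, 64, 128, 128).astype(np.float32)
        return {'data': data}


if __name__ == '__main__':  # required on Windows, where worker processes are spawned
    loader = DummyLoader(None, batch_size=2, number_of_threads_in_multithreaded=2)
    # RAM held by the pipeline is roughly
    #   num_processes * num_cached_per_queue * (size of one augmented batch),
    # so fewer workers and a shorter queue directly reduce the footprint.
    gen = MultiThreadedAugmenter(loader, transform=None,
                                 num_processes=2, num_cached_per_queue=1,
                                 pin_memory=False)
    batch = next(gen)
    print(batch['data'].shape)  # (2, 1, 64, 128, 128)
    gen._finish()  # shut the worker processes down cleanly
```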
Please, how can I solve this problem?
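A common first step, assuming the cause really is the data-augmentation workers exhausting system RAM, is to run training with fewer workers. nnU-Net v1, from which nnFormer is forked, reads the worker count from the nnUNet_n_proc_DA environment variable in default_data_augmentation.py; whether nnFormer kept that variable name is an assumption and should be verified in the nnFormer sources. A hedged sketch of a launch wrapper under that assumption:

```python
import os

# Hypothetical launch wrapper, assuming nnFormer inherited nnU-Net v1's
# nnUNet_n_proc_DA mechanism; verify the variable name in nnFormer's
# default_data_augmentation.py before relying on it.
# It must be set before any nnformer module is imported, because the
# augmentation defaults are read at import time in nnU-Net v1.
os.environ.setdefault('nnUNet_n_proc_DA', '4')  # assumption: 4 workers fit in RAM

if __name__ == '__main__':
    # Module path inferred from the traceback (C:\nnformer\nnformer\run\run_training.py).
    from nnformer.run.run_training import main
    main()  # parses the usual nnFormer command-line arguments from sys.argv
```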