The default batch size is 1 because the GPU memory available at training time is unknown (it should probably be 2).
- Add predict_batch_size and train_batch_size config args.
- Update the defaults to 2 for train and 8 for predict.
- Update the config documentation.
- Write tests showing that each dataloader yields batches of the correct size (see the sketch after this list).
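A minimal sketch of how the two args and the batch-size test could look, assuming a plain dict-style config and standard torch DataLoader objects; the names and wiring here are hypothetical, not DeepForest's actual implementation.

```python
# Illustrative sketch only: train_batch_size / predict_batch_size are the proposed
# config args; the dataset and dataloader wiring below is hypothetical.
import torch
from torch.utils.data import DataLoader, TensorDataset

config = {
    "train_batch_size": 2,    # proposed default for training
    "predict_batch_size": 8,  # proposed default for prediction (and possibly validation)
}

dataset = TensorDataset(torch.zeros(32, 3, 400, 400))

train_loader = DataLoader(dataset, batch_size=config["train_batch_size"], shuffle=True)
predict_loader = DataLoader(dataset, batch_size=config["predict_batch_size"], shuffle=False)


def test_dataloader_batch_sizes():
    # Each dataloader should yield batches matching its own config arg.
    train_batch = next(iter(train_loader))[0]
    predict_batch = next(iter(predict_loader))[0]
    assert train_batch.shape[0] == config["train_batch_size"]
    assert predict_batch.shape[0] == config["predict_batch_size"]
```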
I'm unsure about the val dataloader batch size; maybe it should be higher, but the GPU memory use isn't clear to me. I think the val batch size should match the predict batch size, since no weights are updated during validation.
Go for it. Do you have access to a GPU? I'm not yet sure whether the validation batch_size and predict batch_size should be the same or separate arguments. Make sure to profile the example code. Do you need a large tile to test on? You won't notice much on the sample package data.
Updating model weights takes a lot more GPU memory than a forward pass alone, since gradients and optimizer state also have to be stored.
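As a rough way to see this (and to profile as suggested above), assuming a CUDA GPU is available, torch.cuda.max_memory_allocated can compare the peak memory of a full training step against a forward-only pass; the toy model below is purely illustrative and not DeepForest code.

```python
# Hypothetical sketch: compare peak GPU memory of a training step (forward +
# backward + optimizer step) against a forward-only pass under torch.no_grad().
import torch

device = torch.device("cuda")
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, 3, padding=1),
).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batch = torch.randn(8, 3, 400, 400, device=device)

# Training step: activations, gradients, and optimizer state all live on the GPU.
torch.cuda.reset_peak_memory_stats(device)
loss = model(batch).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad(set_to_none=True)  # free gradient memory before the next measurement
train_peak = torch.cuda.max_memory_allocated(device)

# Prediction: no_grad skips storing activations for backprop, so the peak is much lower.
torch.cuda.reset_peak_memory_stats(device)
with torch.no_grad():
    model(batch)
predict_peak = torch.cuda.max_memory_allocated(device)

print(f"train step peak:   {train_peak / 1e6:.1f} MB")
print(f"forward-only peak: {predict_peak / 1e6:.1f} MB")
```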
predict.tile is slower than it needs to be because it uses trainer.predict, which inherits a dataloader whose batch size is set by the global config:
DeepForest/src/deepforest/main.py, line 348 at 3dbc834
while in training the batch size comes from load_dataset:
DeepForest/src/deepforest/main.py, line 335 at 3dbc834
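To illustrate the coupling described above (not DeepForest's actual code), here is a minimal PyTorch Lightning sketch in which both the training and prediction dataloaders read the same batch_size from one config, so trainer.predict is stuck with whatever value was chosen for training.

```python
# Illustrative only: a single shared config["batch_size"] feeds both dataloaders,
# which is the coupling this issue proposes to split into train/predict args.
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ToyModule(pl.LightningModule):
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.layer = torch.nn.Linear(4, 1)

    def forward(self, x):
        return self.layer(x)

    def predict_step(self, batch, batch_idx):
        (x,) = batch
        return self(x)

    def train_dataloader(self):
        # Training reads the single global batch_size...
        return DataLoader(self._dataset(), batch_size=self.config["batch_size"])

    def predict_dataloader(self):
        # ...and prediction inherits the same value, tuned for training memory.
        return DataLoader(self._dataset(), batch_size=self.config["batch_size"])

    @staticmethod
    def _dataset():
        return TensorDataset(torch.randn(16, 4))


config = {"batch_size": 1}  # one knob for both training and prediction
trainer = pl.Trainer(accelerator="cpu", logger=False, enable_checkpointing=False)
predictions = trainer.predict(ToyModule(config))
print(len(predictions))  # 16 batches of size 1 instead of a few larger ones
```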