Graph Multi Attention Network PyTorch Implementation with GPU Utilization #8
GPU Utilization
In the previous version of the code, tensors were created with `torch.FloatTensor`, so all computations ran on the CPU and training times were longer. This version adds a CUDA implementation that utilizes the GPU (when available) via `torch.cuda.FloatTensor`.

Related Issues: #6
[ + ] main.py: A `device` variable is added so that it can be passed as an argument to the `train(...)` and `test(...)` functions.
[ + ] model.py: The spatial and temporal embedding inputs (subclasses of `nn.Module`) are changed to utilize the GPU if one exists.
[ + ] train.py: The function signature now accepts the `device` variable as input, and input data such as `X, TE, labels` are moved with `.to(device)`.
[ + ] test.py: Similar changes as in train.py.
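The changes above follow the standard PyTorch device-passing pattern. A minimal sketch (the `train` signature and tensor shapes here are illustrative, not the repo's actual API):

```python
import torch

# main.py: select GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def train(X, TE, labels, device):
    # train.py: move input data onto the chosen device before the forward pass
    X, TE, labels = X.to(device), TE.to(device), labels.to(device)
    # all tensors are now co-located on the same device
    return X.device == TE.device == labels.device

X = torch.randn(4, 3)       # illustrative input features
TE = torch.randn(4, 2)      # illustrative temporal embedding
labels = torch.randn(4, 1)  # illustrative targets
print(train(X, TE, labels, device))  # True
```

A model moved with `model.to(device)` then receives inputs already on the same device, avoiding the implicit CPU execution the old `torch.FloatTensor` path caused.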
Time Slot Change
`time.freq.delta.total_seconds()` was causing problems, so it is replaced with `args.time_slot * 60`, which is a simpler and more general approach.
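The replacement amounts to computing the slot length in seconds directly from the command-line argument (in minutes) rather than deriving it from a pandas frequency object. A hedged sketch, assuming `time_slot` is the repo's existing argument name:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--time_slot", type=int, default=5,
                    help="length of one time slot in minutes")
# parse defaults here for illustration; the repo parses sys.argv
args = parser.parse_args([])

# 5-minute slots -> 300 seconds, with no dependency on the index's freq attribute
slot_seconds = args.time_slot * 60
print(slot_seconds)  # 300
```

This works for any regularly sampled dataset, whereas `freq.delta.total_seconds()` fails when the DatetimeIndex has no inferred frequency.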