
Graph Multi Attention Network PyTorch Implementation with GPU Utilization #8

Open
wants to merge 8 commits into master

Conversation

muratbayrktr

GPU Utilization

In the previous version of the code, Torch was only using plain CPU tensors (torch.Tensor.float), so all computation ran on the CPU and training times were longer. This version adds a CUDA code path: a device is selected once and the model and data are moved to it, so the GPU is used whenever one is available (a sketch of the pattern follows the list below).

Related Issues: #6

  • [ + ] main.py: added a device variable so that it can be passed as an argument to the train(...) and test(...) functions.

  • [ + ] model.py: the spatial and temporal embedding inputs (sub-classes of nn.Module) now run on the GPU when one is available.

  • [ + ] train.py: the function signature now accepts the device variable, and the input data (X, TE, labels) is moved with .to(device).

  • [ + ] test.py: same changes as in train.py.
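
A minimal sketch of the device-passing pattern described above (the function signature and loss shown here are assumptions for illustration, not the exact GMAN code):

```python
import torch
import torch.nn as nn

# main.py: select the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def train(model, X, TE, labels, device):
    # move the model and every input tensor to the same device before the forward pass
    model = model.to(device)
    X, TE, labels = X.to(device), TE.to(device), labels.to(device)
    pred = model(X, TE)
    loss = nn.functional.l1_loss(pred, labels)
    loss.backward()
    return loss.item()
```

Passing device explicitly keeps main.py in control of tensor placement instead of relying on a global default tensor type.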

Time Slot Change

  • [ + ] utils_.py: the line time.freq.delta.total_seconds() was causing problems, so it has been replaced with args.time_slot * 60, which is a more robust and general approach (see the sketch after this list).
  • utils_.py: needs more attention; the time and frequency calculation approach should be reviewed.
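
A hedged sketch of the replacement (the time variable and the --time_slot argument mirror the names used in this description; the surrounding context in utils_.py is assumed):

```python
import argparse
import pandas as pd

parser = argparse.ArgumentParser()
parser.add_argument("--time_slot", type=int, default=5,
                    help="length of one time slot in minutes")
args = parser.parse_args([])

# stand-in for the datetime index of the traffic data
time = pd.date_range("2024-01-01", periods=4, freq="5min")

# old approach: relies on the index carrying an explicit frequency,
# which fails when pandas cannot infer one
# time_delta = time.freq.delta.total_seconds()

# new approach: derive the slot length in seconds from the CLI argument
time_delta = args.time_slot * 60

# slot-of-day index, as used when building the temporal embedding
timeofday = (time.hour * 3600 + time.minute * 60 + time.second) // time_delta
print(timeofday.values)  # [0 1 2 3]
```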
