Features/361 pad #572
Conversation
Good job @lenablind, needs a few more changes!
GPU cluster tests are currently disabled on this Pull Request.
@mtar Thank you for letting me know. Is there a reason for that? Or, to put it differently: are these tests needed for this PR, and if so, could you explain to me why?
The CI system that I was setting up recently has a life of its own 😃
ok to test |
rerun tests |
Description
Implementation of the function `pad` for mode "constant".
The syntax is nearly the same as for numpy; internally I use torch.nn.functional.pad.
Syntactical differences
Although numpy uses different values keywords for the corresponding mode types (more specifically `constant_values`, `end_values`), I decided to use simply one (`values`) for ease of use, as only one mode can be used at a time anyway.

Also, my implementation lacks two numpy keywords, as the corresponding modes are currently not available in this version.
Strategy
Hint: Torch allows only one padding value to be specified for all dimensions, whereas numpy offers the possibility to define one for each dimension. Therefore, to simulate the numpy functionality but keep the performance of torch, I decided to call torch once for each value specified in `values`.

Preparation
- Take `pad_width` and transform it into one torch pad tuple (-> shortcuts: see numpy docs); a sketch of this reshuffling follows below.
- Take `values` and transform it into one tuple (`value_tuple`) if various values are included (-> shortcuts: see numpy docs).
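A minimal sketch of the `pad_width` reshuffling (assuming the fully expanded `((before_0, after_0), ...)` form; the helper name is illustrative, not the actual one in this PR):

```python
import torch
import torch.nn.functional as F

def to_torch_pad_tuple(pad_width):
    # torch's flat pad tuple lists (before, after) pairs starting
    # with the LAST dimension, so reverse the numpy-style spec
    pad = []
    for before, after in reversed(pad_width):
        pad.extend((before, after))
    return tuple(pad)

t = torch.zeros(2, 3)
pad_tuple = to_torch_pad_tuple(((1, 1), (2, 0)))      # -> (2, 0, 1, 1)
F.pad(t, pad_tuple, mode='constant', value=0).shape   # torch.Size([4, 5])
```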
Actual Padding
CASE 0: input tensor contains no data
CASE 1: Padding in a non-split dimension or no distribution at all
In other words, you pad each dimension with the value specified for it in `value_tuple`.
This is necessary to provide the numpy functionality (-> Hint above); a rough sketch follows below.
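Roughly, the "one torch call per value" idea could look like this (a sketch only; the actual implementation may structure it differently):

```python
import torch.nn.functional as F

def pad_local_constant(t, pad_tuple, value_tuple):
    # torch accepts only a single fill value per call, so pad one
    # (before, after) pair at a time, each with its own value
    for i, value in enumerate(value_tuple):
        partial = [0] * len(pad_tuple)
        partial[2 * i] = pad_tuple[2 * i]
        partial[2 * i + 1] = pad_tuple[2 * i + 1]
        t = F.pad(t, partial, mode='constant', value=value)
    return t
```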
CASE 2: Padding in the split dimension while the function runs on more than 1 process
Therefore: calculate the index of the first element in the pad tuple that has to be changed/set to zero (the element following it is the second).
The pad tuples can thereby be divided into three categories:
This is only a mathematical transcription of the manner in which each tensor chunk has to be padded.
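As an illustrative sketch of that bookkeeping (function and variable names are placeholders, not Heat's internals):

```python
def local_pad_tuple(pad_tuple, ndim, split, rank, nprocs):
    # torch's pad tuple starts with the LAST dimension, so the pair
    # belonging to the split dimension starts at this index
    first = 2 * (ndim - 1 - split)
    second = first + 1
    local = list(pad_tuple)
    if rank != 0:             # only the first process pads "before" the split dim
        local[first] = 0
    if rank != nprocs - 1:    # only the last process pads "after" the split dim
        local[second] = 0
    return tuple(local)       # first / last / middle ranks -> the three categories
```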
Docs numpy: https://numpy.org/devdocs/reference/generated/numpy.pad.html
Docs pytorch: https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.pad
Issue/s resolved: #361
Changes proposed:
Overview of modes (and their differences in numpy and torch)
Equivalent modes in numpy and torch
These might be implemented most easily, though there are some restrictions.
More specifically, only 3D, 4D and 5D padding with non-constant padding is currently supported by torch. Additionally, some scalability issues might occur for these modes.
To make this concrete: padding a tensor that is only 9 elements long with 'reflect' might already result in a RuntimeError.
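For instance (behaviour of torch's functional padding; the exact error message may vary by version):

```python
import torch
import torch.nn.functional as F

t = torch.arange(9.0).reshape(1, 1, 9)   # 'reflect' needs a 3D/4D/5D input
F.pad(t, (4, 4), mode='reflect')          # fine: pad size < input size
F.pad(t, (9, 9), mode='reflect')          # RuntimeError: padding size must be
                                          # less than the corresponding input dimension
```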
Numpy modes which might result in constant padding with calculated padding values
For these, the 'part of the vector' that is referred to can furthermore be specified with the corresponding keyword
(-> see numpy docs).
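For example, with numpy's 'mean' mode, where `stat_length` plays that role:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
# 'mean' pads with a constant computed from the edge of the vector;
# stat_length restricts how much of the vector enters that computation
np.pad(a, 2, mode='mean', stat_length=3)
# -> array([2, 2, 1, 2, 3, 4, 5, 4, 4])   (mean of the first / last 3 elements)
```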
Type of change
Due Diligence
Does this change modify the behaviour of other functions? If so, which?
no