Update on "[PyTorch] Fix const correctness for resize native functions"
We incorrectly used `Tensor&` to mean "the underlying
TensorImpl cannot be changed", as explained in
zdevito/ATen#27 (comment).
This diff gets us on the path to fixing this problem: we have an
incremental way to fix individual native functions so that we can
apply any handwritten fixes a few at a time. It gets the migration
started with the `resize` family of native functions.

Differential Revision: [D27583983](https://our.internmc.facebook.com/intern/diff/D27583983/)

[ghstack-poisoned]
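
As a rough illustration of the convention change (a minimal sketch with hypothetical helper names, not the actual ATen declarations): taking `const Tensor&` still permits resizing, because const-ness applies to the `Tensor` handle, not to the sizes or storage owned by the underlying TensorImpl.

```cpp
#include <ATen/ATen.h>

// Old-style convention: a non-const reference is used to signal "may mutate".
void resize_old_style(at::Tensor& self, at::IntArrayRef size) {
  self.resize_(size);
}

// Const-correct convention: the handle cannot be reseated to a different
// TensorImpl, but the metadata and storage it owns can still change.
void resize_new_style(const at::Tensor& self, at::IntArrayRef size) {
  self.resize_(size);
}
```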
swolchok committed Apr 21, 2021
2 parents eb94856 + 1520b48 commit 4c75799
Showing 293 changed files with 5,267 additions and 2,110 deletions.
10 changes: 10 additions & 0 deletions .circleci/config.yml
@@ -1417,6 +1417,16 @@ jobs:
chmod a+x .jenkins/pytorch/macos-test.sh
unbuffer .jenkins/pytorch/macos-test.sh 2>&1 | ts
- run:
name: Report results
no_output_timeout: "5m"
command: |
set -ex
source /Users/distiller/workspace/miniconda3/bin/activate
pip install boto3
export PYTHONPATH="$PWD"
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
when: always
- store_test_results:
path: test/test-reports

10 changes: 10 additions & 0 deletions .circleci/verbatim-sources/job-specs/job-specs-custom.yml
@@ -201,6 +201,16 @@
chmod a+x .jenkins/pytorch/macos-test.sh
unbuffer .jenkins/pytorch/macos-test.sh 2>&1 | ts
- run:
name: Report results
no_output_timeout: "5m"
command: |
set -ex
source /Users/distiller/workspace/miniconda3/bin/activate
pip install boto3
export PYTHONPATH="$PWD"
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
when: always
- store_test_results:
path: test/test-reports

2 changes: 1 addition & 1 deletion .github/workflows/auto_label.yml
@@ -25,7 +25,7 @@ jobs:
ISSUE_NUMBER="${PR_NUMBER}"
else
TITLE="${ISSUE_TITLE}"
ISSUE_NUMBER="${ISSUE_NUMBER}"
# ISSUE_NUMBER is already set
fi
echo ::set-output name=TITLE::"${TITLE}"
echo ::set-output name=ISSUE_NUMBER::"${ISSUE_NUMBER}"
50 changes: 31 additions & 19 deletions .github/workflows/lint.yml
@@ -34,34 +34,38 @@ jobs:
- name: Extract scripts from GitHub Actions workflows
run: tools/extract_scripts.py --out=.extracted_scripts
- name: ShellCheck
# https://github.com/koalaman/shellcheck/tree/v0.7.1#installing-a-pre-compiled-binary
# https://github.com/koalaman/shellcheck/tree/v0.7.2#installing-a-pre-compiled-binary
run: |
set -x
scversion="v0.7.1"
scversion="v0.7.2"
wget -qO- "https://github.com/koalaman/shellcheck/releases/download/${scversion?}/shellcheck-${scversion?}.linux.x86_64.tar.xz" | tar -xJv
sudo cp "shellcheck-${scversion}/shellcheck" /usr/bin/
rm -r "shellcheck-${scversion}"
shellcheck --version
tools/run_shellcheck.sh .jenkins/pytorch .extracted_scripts
- name: Ensure correct trailing newlines
run: |
(! git grep -Il '' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' ':(exclude)**.expect' ':(exclude)tools/clang_format_hash' | tools/trailing_newlines.py || (echo "The above files do not have correct trailing newlines; please normalize them"; false))
(! git --no-pager grep -Il '' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' ':(exclude)**.expect' ':(exclude)tools/clang_format_hash' | tools/trailing_newlines.py || (echo "The above files do not have correct trailing newlines; please normalize them"; false))
- name: Ensure no trailing spaces
run: |
(! git grep -In '[[:blank:]]$' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' || (echo "The above lines have trailing spaces; please remove them"; false))
(! git --no-pager grep -In '[[:blank:]]$' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' || (echo "The above lines have trailing spaces; please remove them"; false))
- name: Ensure no tabs
run: |
(! git grep -In $'\t' -- . ':(exclude)*.svg' ':(exclude)**Makefile' ':(exclude)**/contrib/**' ':(exclude)third_party' ':(exclude).gitattributes' ':(exclude).gitmodules' || (echo "The above lines have tabs; please convert them to spaces"; false))
(! git --no-pager grep -In $'\t' -- . ':(exclude)*.svg' ':(exclude)**Makefile' ':(exclude)**/contrib/**' ':(exclude)third_party' ':(exclude).gitattributes' ':(exclude).gitmodules' || (echo "The above lines have tabs; please convert them to spaces"; false))
- name: Ensure no non-breaking spaces
run: |
(! git grep -In $'\u00a0' -- . || (echo "The above lines have non-breaking spaces (U+00A0); please convert them to spaces (U+0020)"; false))
(! git --no-pager grep -In $'\u00a0' -- . || (echo "The above lines have non-breaking spaces (U+00A0); please convert them to spaces (U+0020)"; false))
- name: Ensure canonical include
run: |
(! git grep -In $'#include "' -- ./c10 ./aten ./torch/csrc ':(exclude)aten/src/ATen/native/quantized/cpu/qnnpack/**' || (echo "The above lines have include with quotes; please convert them to #include <xxxx>"; false))
(! git --no-pager grep -In $'#include "' -- ./c10 ./aten ./torch/csrc ':(exclude)aten/src/ATen/native/quantized/cpu/qnnpack/**' || (echo "The above lines have include with quotes; please convert them to #include <xxxx>"; false))
- name: Ensure no unqualified noqa
run: |
# shellcheck disable=SC2016
(! git grep -InP '# noqa(?!: [A-Z]+\d{3})' -- '**.py' ':(exclude)caffe2' || (echo 'The above lines have unqualified `noqa`; please convert them to `noqa: XXXX`'; false))
(! git --no-pager grep -InP '# noqa(?!: [A-Z]+\d{3})' -- '**.py' ':(exclude)caffe2' || (echo 'The above lines have unqualified `noqa`; please convert them to `noqa: XXXX`'; false))
- name: Ensure no unqualified type ignore
run: |
# shellcheck disable=SC2016
(! git --no-pager grep -InP '# type:\s*ignore(?!\[)' -- '**.py' ':(exclude)test/test_jit.py' || (echo 'The above lines have unqualified `type: ignore`; please convert them to `type: ignore[xxxx]`'; false))
# note that this next step depends on a clean checkout;
# if you run it locally then it will likely complain
# about all the generated files in torch/test
@@ -79,7 +83,7 @@ jobs:
python torch/testing/check_kernel_launches.py |& tee "${GITHUB_WORKSPACE}"/cuda_kernel_launch_checks.txt
- name: Ensure no direct cub include
run: |
(! git grep -I -no $'#include <cub/' -- ./aten ':(exclude)aten/src/ATen/cuda/cub.cuh' || (echo "The above files have direct cub include; please include ATen/cuda/cub.cuh instead and wrap your cub calls in at::native namespace if necessary"; false))
(! git --no-pager grep -I -no $'#include <cub/' -- ./aten ':(exclude)aten/src/ATen/cuda/cub.cuh' || (echo "The above files have direct cub include; please include ATen/cuda/cub.cuh instead and wrap your cub calls in at::native namespace if necessary"; false))
python2-setup-compat:
runs-on: ubuntu-18.04
@@ -128,7 +132,7 @@ jobs:
run: |
set -eux
export PATH=~/.npm-global/bin:"$PATH"
for FILE in {CONTRIBUTING,README}.md; do
for FILE in $(git grep -Il '<!-- toc -->' -- '**.md'); do
markdown-toc --bullets='-' -i "$FILE"
done
- name: Assert that regenerating the ToCs didn't change them
@@ -153,21 +157,23 @@ jobs:
mkdir flake8-output
cd flake8-output
echo "$HEAD_SHA" > commit-sha.txt
- name: Run flake8
- name: Install dependencies
run: |
set -eux
pip install typing-extensions # for tools/translate_annotations.py
pip install -r requirements-flake8.txt
flake8 --version
- name: Run flake8
run: |
set -eux
flake8 | tee "${GITHUB_WORKSPACE}"/flake8-output.txt
cp flake8-output.txt flake8-output/annotations.json
- name: Translate annotations
if: github.event_name == 'pull_request'
env:
HEAD_SHA: ${{ github.event.pull_request.head.sha }}
run: |
tools/translate_annotations.py \
--file=flake8-output.txt \
--file="${GITHUB_WORKSPACE}"/flake8-output.txt \
--regex='^(?P<filename>.*?):(?P<lineNumber>\d+):(?P<columnNumber>\d+): (?P<errorCode>\w+\d+) (?P<errorDesc>.*)' \
--commit="$HEAD_SHA" \
> flake8-output/annotations.json
@@ -218,10 +224,7 @@ jobs:
sudo apt-get update
sudo apt-get install -y clang-tidy-11
sudo update-alternatives --install /usr/bin/clang-tidy clang-tidy /usr/bin/clang-tidy-11 1000
- name: Run clang-tidy
env:
BASE_SHA: ${{ github.event.pull_request.base.sha }}
HEAD_SHA: ${{ github.event.pull_request.head.sha }}
- name: Generate build files
run: |
set -eux
git remote add upstream https://github.com/pytorch/pytorch
@@ -245,6 +248,12 @@
--native-functions-path aten/src/ATen/native/native_functions.yaml \
--nn-path aten/src
fi
- name: Run clang-tidy
env:
BASE_SHA: ${{ github.event.pull_request.base.sha }}
HEAD_SHA: ${{ github.event.pull_request.head.sha }}
run: |
set -eux
# Run Clang-Tidy
# The negative filters below are to exclude files that include onnx_pb.h or
@@ -299,13 +308,16 @@ jobs:
architecture: x64
- name: Fetch PyTorch
uses: actions/checkout@v2
- name: Run cmakelint
- name: Install dependencies
run: |
set -eux
pip install cmakelint
cmakelint --version
- name: Run cmakelint
run: |
set -eux
git ls-files -z -- bootstrap '*.cmake' '*.cmake.in' '*CMakeLists.txt' | \
grep -E -z -v '^(cmake/Modules/|cmake/Modules_CUDA_fix/)' | \
grep -E -z -v '^(cmake/Modules/|cmake/Modules_CUDA_fix/|cmake/Caffe2Config.cmake.in|aten/src/ATen/ATenConfig.cmake.in|cmake/Caffe2ConfigVersion.cmake.in|cmake/TorchConfig.cmake.in|cmake/TorchConfigVersion.cmake.in|cmake/cmake_uninstall.cmake.in)' | \
xargs -0 cmakelint --config=.cmakelintrc --spaces=2 --quiet
mypy:
2 changes: 1 addition & 1 deletion .jenkins/pytorch/macos-common.sh
@@ -26,7 +26,7 @@ if [ ! -d "${WORKSPACE_DIR}/miniconda3" ]; then
retry bash "${WORKSPACE_DIR}"/miniconda3.sh -b -p "${WORKSPACE_DIR}"/miniconda3
fi
export PATH="${WORKSPACE_DIR}/miniconda3/bin:$PATH"
# shellcheck disable=SC1090
# shellcheck disable=SC1091
source "${WORKSPACE_DIR}"/miniconda3/bin/activate
retry conda install -y mkl mkl-include numpy=1.18.5 pyyaml=5.3 setuptools=46.0.0 cmake cffi ninja typing_extensions dataclasses pip
# The torch.hub tests make requests to GitHub.
34 changes: 33 additions & 1 deletion CONTRIBUTING.md
@@ -10,6 +10,7 @@
- [Unit testing](#unit-testing)
- [Python Unit Testing](#python-unit-testing)
- [Better local unit tests with `pytest`](#better-local-unit-tests-with-pytest)
- [Local linting](#local-linting)
- [Running `mypy`](#running-mypy)
- [C++ Unit Testing](#c-unit-testing)
- [Writing documentation](#writing-documentation)
@@ -357,13 +358,44 @@ The above is an example of testing a change to all Loss functions: this
command runs tests such as `TestNN.test_BCELoss` and
`TestNN.test_MSELoss` and can be useful to save keystrokes.


### Local linting

You can run the same linting steps that are used in CI locally via `make`:

```bash
make lint -j 6 # run lint (using 6 parallel jobs)
```

These jobs may require extra dependencies that aren't dependencies of PyTorch
itself, so you can install them via this command, which you should only have to
run once:

```bash
make setup_lint
```

To run a specific linting step, use one of these targets or see the
[`Makefile`](Makefile) for a complete list of options.

```bash
# Check for tabs, trailing newlines, etc.
make quick_checks

make flake8

make mypy

make cmakelint
```

### Running `mypy`

`mypy` is an optional static type checker for Python. We have multiple `mypy`
configs for the PyTorch codebase, so you can run them all using this command:

```bash
for CONFIG in mypy*.ini; do mypy --config="$CONFIG"; done
make mypy
```

See [Guide for adding type annotations to
5 changes: 4 additions & 1 deletion GLOSSARY.md
@@ -1,6 +1,7 @@
# PyTorch Glossary

- [PyTorch Glossary](#pytorch-glossary)
<!-- toc -->

- [Operation and Kernel](#operation-and-kernel)
- [ATen](#aten)
- [Operation](#operation)
@@ -19,6 +20,8 @@
- [Tracing](#tracing)
- [Scripting](#scripting)

<!-- tocstop -->

# Operation and Kernel

## ATen
45 changes: 45 additions & 0 deletions Makefile
@@ -30,3 +30,48 @@ shellcheck-gha:
generate-gha-workflows:
./.github/scripts/generate_linux_ci_workflows.py
$(MAKE) shellcheck-gha

setup_lint:
python tools/actions_local_runner.py --file .github/workflows/lint.yml \
--job 'flake8-py3' --step 'Install dependencies'
python tools/actions_local_runner.py --file .github/workflows/lint.yml \
--job 'cmakelint' --step 'Install dependencies'
pip install jinja2

quick_checks:
# TODO: This is broken when 'git config submodule.recurse' is 'true'
@python tools/actions_local_runner.py \
--file .github/workflows/lint.yml \
--job 'quick-checks' \
--step 'Ensure no trailing spaces' \
--step 'Ensure no tabs' \
--step 'Ensure no non-breaking spaces' \
--step 'Ensure canonical include' \
--step 'Ensure no unqualified noqa' \
--step 'Ensure no unqualified type ignore' \
--step 'Ensure no direct cub include' \
--step 'Ensure correct trailing newlines'

flake8:
@python tools/actions_local_runner.py \
--file .github/workflows/lint.yml \
--job 'flake8-py3' \
--step 'Run flake8'

mypy:
@python tools/actions_local_runner.py \
--file .github/workflows/lint.yml \
--job 'mypy' \
--step 'Run mypy'

cmakelint:
@python tools/actions_local_runner.py \
--file .github/workflows/lint.yml \
--job 'cmakelint' \
--step 'Run cmakelint'

clang_tidy:
echo "clang-tidy local lint is not yet implemented"
exit 1

lint: flake8 mypy quick_checks cmakelint generate-gha-workflows
2 changes: 1 addition & 1 deletion aten/src/ATen/ScalarOps.cpp
@@ -31,7 +31,7 @@ Tensor& scalar_fill(Tensor& self, const Scalar& value) {

Tensor scalar_tensor_static(const Scalar& s, c10::optional<ScalarType> dtype_opt, c10::optional<Device> device_opt) {
at::tracer::impl::NoTracerDispatchMode tracer_guard;
at::AutoNonVariableTypeMode non_var_type_mode(true);
at::AutoDispatchBelowAutograd non_var_type_mode(true);
auto result = at::detail::empty_cpu({}, dtype_opt, c10::nullopt, device_opt, c10::nullopt, c10::nullopt);
scalar_fill(result, s);
return result;
2 changes: 1 addition & 1 deletion aten/src/ATen/TensorIndexing.cpp
@@ -45,7 +45,7 @@ static inline void set_item(const Tensor& self, ArrayRef<TensorIndex> indices, c
Tensor value;

{
at::AutoNonVariableTypeMode guard;
at::AutoDispatchBelowAutograd guard;
// TODO: This qint special case looks very suspicious...
if (isQIntType(self.scalar_type())) {
value = at::indexing::scalarToTensor(v, device(kCPU).dtype(kFloat), at::Device(kCPU));
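
The two hunks above (ScalarOps.cpp and TensorIndexing.cpp) replace the `AutoNonVariableTypeMode` guard with `AutoDispatchBelowAutograd`. A minimal usage sketch, assuming the new guard keeps the same RAII pattern; the `make_untracked_scalar` helper below is hypothetical:

```cpp
#include <ATen/ATen.h>

at::Tensor make_untracked_scalar(const at::Scalar& value) {
  // While `guard` is in scope, dispatch skips the autograd keys, so the
  // factory call below is not recorded for autograd.
  at::AutoDispatchBelowAutograd guard;
  return at::scalar_tensor(value, at::TensorOptions().dtype(at::kFloat));
}
```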
5 changes: 5 additions & 0 deletions aten/src/ATen/TensorIterator.cpp
@@ -390,6 +390,7 @@ void TensorIteratorBase::compute_types(const TensorIteratorConfig& config) {
op.target_dtype = common_dtype_;
}
}
common_device_ = common_device;
}
}

@@ -1230,6 +1231,10 @@ void TensorIteratorBase::build(TensorIteratorConfig& config) {

if (is_meta_) return;

// XLA tensors don't have storage, so they don't have an underlying data pointer.
// Nothing beyond this point is important for meta functions, so it's fine to exit early here.
if (common_device_.type() == DeviceType::XLA) return;

for (auto& op : operands_) {
TORCH_INTERNAL_ASSERT(op.tensor.defined());
op.data = op.tensor.data_ptr();
4 changes: 4 additions & 0 deletions aten/src/ATen/TensorIterator.h
@@ -432,6 +432,10 @@ struct TORCH_API TensorIteratorBase : public impl::MetaBase {
/// this matches the dtype of the output tensors, but not always!
ScalarType common_dtype_ = ScalarType::Undefined;

/// This is currently defined as kCPU, or the device of the first non-CPU
/// tensor argument. See TensorIteratorBase::compute_types for details.
Device common_device_ = kCPU;

/// Set by split(), see should_accumulate() and is_final_output()
bool accumulate_ = false;
bool final_output_ = true;
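
The new `common_device_` member is documented as defaulting to `kCPU`, or the device of the first non-CPU tensor argument. A standalone sketch of that rule (not the actual `TensorIteratorBase::compute_types` logic, and the helper name is hypothetical):

```cpp
#include <ATen/ATen.h>
#include <vector>

at::Device compute_common_device(const std::vector<at::Tensor>& operands) {
  at::Device common_device = at::kCPU;
  for (const auto& t : operands) {
    // Take the device of the first defined, non-CPU operand.
    if (t.defined() && t.device().type() != at::kCPU) {
      common_device = t.device();
      break;
    }
  }
  return common_device;
}
```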