
Make 1D integer sorting parallel #2

Closed
wants to merge 5 commits

Conversation

DamianSzwichtenberg
Owner

@DamianSzwichtenberg DamianSzwichtenberg commented Feb 24, 2023

In GNN workloads we use torch.argsort to calculate the permutation from CSR to CSC sparse matrix storage format. Until now, sorting one-dimensional data was performed sequentially. This change reuses the radix sort from fbgemm and makes torch.(arg)sort work in parallel.

Performance measurements (measured on an ICX platform, 40 cores):

| Size        | Before [s] | Now [s] | Speedup |
|-------------|------------|---------|---------|
| 10^5        | 0.0107     | 0.002   | 5.35x   |
| 4 * 10^5    | 0.0402     | 0.0059  | 6.81x   |
| 16 * 10^5   | 0.1756     | 0.0159  | 11.04x  |
| 64 * 10^5   | 0.7659     | 0.0626  | 12.23x  |
| 256 * 10^5  | 3.2334     | 0.2636  | 12.26x  |
| 1024 * 10^5 | 13.356     | 1.0283  | 12.98x  |
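
To make the CSR-to-CSC use case concrete, here is a minimal pure-Python sketch (the tiny matrix and the `argsort` helper are made up for illustration; the PR itself accelerates the equivalent `torch.argsort` call on 1D integer tensors):

```python
# Minimal sketch: a stable argsort over CSR column indices yields the
# CSR -> CSC permutation, because CSR entries are already ordered by row.
def argsort(values):
    # Stable argsort, like torch.argsort(..., stable=True).
    return sorted(range(len(values)), key=lambda i: values[i])

# A 3x3 sparse matrix in CSR form: row pointers, column indices, values.
crow = [0, 2, 3, 4]
col = [0, 2, 1, 0]               # entries: (0,0) (0,2) (1,1) (2,0)
val = [10.0, 20.0, 30.0, 40.0]

perm = argsort(col)              # permutation into column-major order
csc_val = [val[i] for i in perm]
```

Here `perm` comes out as `[0, 3, 2, 1]`: entries are regrouped by column, with ties broken by row thanks to the stable sort.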

@DamianSzwichtenberg DamianSzwichtenberg self-assigned this Feb 24, 2023
@DamianSzwichtenberg DamianSzwichtenberg force-pushed the par-sort-1d branch 2 times, most recently from ae0c71f to 01acb4c on March 8, 2023 at 14:25
DamianSzwichtenberg pushed a commit that referenced this pull request Mar 28, 2023
Fixes part of pytorch#96414

Replaces any calls to sizes with sym_sizes. Still seeing an error with the repro script:
```bash
Exception raised from sizes_default at /scratch/drisspg/work/pytorch/c10/core/TensorImpl.h:635 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x7d (0x7f697f4a141d in /scratch/drisspg/work/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0xdd (0x7f697f49fbcd in /scratch/drisspg/work/pytorch/torch/lib/libc10.so)
frame #2: c10::TensorImpl::sizes_custom() const + 0x95 (0x7f697f4824c5 in /scratch/drisspg/work/pytorch/torch/lib/libc10.so)
frame #3: at::native::empty_like(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, c10::optional<c10::MemoryFormat>) + 0x92c (0x7f69809d18ac in /scratch/drisspg/work/pytorch/torch/lib/libtorch_cpu.so)
frame pytorch#4: <unknown function> + 0x23f5ce7 (0x7f698193bce7 in /scratch/drisspg/work/pytorch/torch/lib/libtorch_cpu.so)
```

Still trying to track down this empty call.

From the looks of it, it might be coming from at::layer_norm?
The backtrace from lldb is 221 frames, however, so there is a lot of noise.

Pull Request resolved: pytorch#96674
Approved by: https://github.com/ezyang
facebook-github-bot pushed a commit to pytorch/FBGEMM that referenced this pull request Apr 20, 2023
… negative integers (#1672)

Summary:
Move the `radix_sort` implementation to common utilities, so it can be used in PyTorch in case it was not built with FBGEMM GPU.
Add the possibility to handle negative integers, which is crucial for reusing `radix_sort` in PyTorch's `sort` operation.

Details:
This PR addresses two issues:
1. `radix_sort` is currently used in [scatter_reduce](https://github.com/dszwicht/pytorch/blob/master/aten/src/ATen/native/cpu/ScatterGatherKernel.cpp#L630) (please view this [comment](https://github.com/pytorch/pytorch/pull/82703/files#r1045360609) for more information). Until now, `radix_sort` was under the `fbgemm_gpu` subproject, which means the implementation was not available in PyTorch when PyTorch was built for CPU only - that is why `radix_sort` was copy-pasted under the aten directory in PyTorch. This PR moves the `radix_sort` implementation to common utilities.
2. In GNN workloads we often sort 1D integer data with non-negative values, for example, when converting CSR to CSC format. Unfortunately, `torch.sort` for 1D data works sequentially. `radix_sort` seems to be a perfect match to accelerate the described case. However, suppose we want to do that on the PyTorch side. In that case, we have to either fall back to the regular path after detecting negative numbers in the tensor, or perform post-processing by swapping the positive and negative blocks of data (data like `[2, -1, -2, 1]` after sorting will be in the form `[1, 2, -2, -1]`, due to how negative numbers are represented in two's complement). Neither solution is elegant. As an alternative, I propose extending the `radix_sort` algorithm by giving it the capability to work with negative numbers. This can be enabled by passing an optional parameter, `maybe_with_neg_vals`. If set to `true`, we perform all passes (up to the most significant sign bit) and apply a special prefix sum combination in the last pass. An example of how we can reuse fbgemm in PyTorch can be found in my private fork, [here](DamianSzwichtenberg/pytorch#2) (I also provide speedup data).
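
The reordering issue and the sign-bit fix can be sketched in pure Python (an illustrative toy, not fbgemm's `radix_sort`, which is parallel and histogram-based):

```python
# Toy LSD radix sort over 32-bit values, one byte per pass. Treating the
# bit patterns as unsigned puts negative numbers after positive ones
# ([2, -1, -2, 1] -> [1, 2, -2, -1]); reordering the buckets of the final
# (sign-bit) pass restores true signed order, mirroring the special
# prefix-sum combination described above.
BITS, RADIX = 32, 256

def radix_sort(xs, with_neg=False):
    mask = (1 << BITS) - 1
    xs = list(xs)
    for shift in range(0, BITS, 8):
        buckets = [[] for _ in range(RADIX)]
        for x in xs:
            buckets[((x & mask) >> shift) & 0xFF].append(x)
        if with_neg and shift == BITS - 8:
            # Final pass covers the sign bit: emit buckets 128..255
            # (negative values) before buckets 0..127 (non-negative).
            order = list(range(128, 256)) + list(range(128))
        else:
            order = range(RADIX)
        xs = [x for b in order for x in buckets[b]]
    return xs

print(radix_sort([2, -1, -2, 1]))                 # [1, 2, -2, -1]
print(radix_sort([2, -1, -2, 1], with_neg=True))  # [-2, -1, 1, 2]
```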

The above changes have several consequences:
1. `TORCH_CHECK` was replaced with `assert` as fbgemm CPU does not have PyTorch in its dependencies.
2. `__builtin_clz` was replaced with a manual implementation, as `__builtin_clz` is not portable.
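
For illustration, a portable count-leading-zeros along the lines of such a manual replacement (a Python sketch of the binary-search bit-twiddling; the actual fbgemm code is C++, and radix sort typically uses clz on the maximum value to bound the number of passes):

```python
# Count leading zeros of a 32-bit value without compiler builtins,
# narrowing the position of the highest set bit by halves.
def clz32(x):
    assert 0 <= x < (1 << 32)
    if x == 0:
        return 32
    n = 0
    if x <= 0x0000FFFF: n += 16; x <<= 16
    if x <= 0x00FFFFFF: n += 8;  x <<= 8
    if x <= 0x0FFFFFFF: n += 4;  x <<= 4
    if x <= 0x3FFFFFFF: n += 2;  x <<= 2
    if x <= 0x7FFFFFFF: n += 1
    return n

print(clz32(1))           # 31
print(clz32(0x80000000))  # 0
```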

Additional information for reviewers:
I did perform benchmarks of `radix_sort` before and after my code modification. I didn't observe any performance drop.

Pull Request resolved: #1672

Reviewed By: sryap

Differential Revision: D44616959

Pulled By: q10

fbshipit-source-id: f34594478c94ec6610c05545feb2044b58d79d66
liligwu added a commit to ROCm/FBGEMM that referenced this pull request Apr 20, 2023
* using different mechanism for host mapped pinned memory (pytorch#1638)

Summary:
Pull Request resolved: pytorch#1638

This diff adds another mechanism for allocating the host-mapped pinned memory to reduce the adverse effect on other processes running on the same host when one process is doing some large allocations.

Reviewed By: zyan0, jianyuh

Differential Revision: D43950253

fbshipit-source-id: 41a434cb63354509d32e00c851c5f3a2d68be686

* disable use_cpu test (pytorch#1635)

Summary:
This PR addresses the issue pytorch#1636

akin to https://github.com/pytorch/FBGEMM/blob/8616ed701015f8b9e4c2825ce592b204b4cfaf28/fbgemm_gpu/test/split_table_batched_embeddings_test.py#L1009

Pull Request resolved: pytorch#1635

Reviewed By: shintaro-iwasaki

Differential Revision: D44033725

Pulled By: q10

fbshipit-source-id: 49f28fc2f1c20948a42728eebf3defc5195baa5d

* Update API interface and reroute backend for exact_rowwise_adagrad FE when using freq based methods (pytorch#1352)

Summary:
Pull Request resolved: pytorch#1352

1. Update interface to accommodate rowwise_adagrad_with_counter.
2. Route backend for rowwise_adagrad to the new rowwise_adagrad_with_counter when freq based methods (e.g. freq sgd, counter adjusted regularization) are used.

Reviewed By: csmiler

Differential Revision: D36788395

fbshipit-source-id: 8eb5da8a5c8b52bc1e237af1054aac9f7245c443

* Remove sync point in jagged_dense_elementwise_add_jagged_output backward (pytorch#1642)

Summary:
Pull Request resolved: pytorch#1642

Remove sync point in jagged_dense_elementwise_add_jagged_output backward

Reviewed By: brad-mengchi

Differential Revision: D44039901

fbshipit-source-id: 8e7e23e4d9e01359e67e5b166adc57f894a1224d

* Add Comprehensive Build Instructions and Isolate CPU and ROCm Builds (pytorch#1639)

Summary:
- Remove `.post0` suffix from the autogenerated package version
- Document the full FBGEMM_GPU OSS build process in a separate Markdown file
- Remove installation of packages not needed for ROCm builds
- Migrate CPU and ROCm jobs to run on top of Docker containers instead of bare metal instances
- Update GitHub workflow configuration to cancel previous jobs for a PR if a new commit is pushed to the PR

Pull Request resolved: pytorch#1639

Reviewed By: shintaro-iwasaki

Differential Revision: D44076312

Pulled By: q10

fbshipit-source-id: 6b2d083022feb7421b26da2d998678e00c11f283

* include cstdint (pytorch#1640)

Summary:
fix build with gcc-13

Pull Request resolved: pytorch#1640

Reviewed By: shintaro-iwasaki

Differential Revision: D44044422

Pulled By: q10

fbshipit-source-id: 692ec9c34f4aaf726294a2b643fbceabf8159033

* Add support for group size > 54 in group_index_select (pytorch#1611)

Summary:
Pull Request resolved: pytorch#1611

If group size is larger than 54, internally breaks the group down into
smaller groups (each subgroup size is less than or equal to 54).
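
As a rough illustration of that splitting policy (pure Python; the limit of 54 comes from the commit message, while the helper name and chunking are assumptions):

```python
# Break a group into subgroups of at most max_size elements,
# mirroring the "each subgroup size <= 54" policy described above.
def split_group(group, max_size=54):
    return [group[i:i + max_size] for i in range(0, len(group), max_size)]

parts = split_group(list(range(120)))
sizes = [len(p) for p in parts]   # [54, 54, 12]
```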

Reviewed By: jianyuh

Differential Revision: D43585937

fbshipit-source-id: bf14eeb79881a5737dcf7660e3e0f56d21f7b326

* Implement cache miss emulation in UVM_CACHING (pytorch#1637)

Summary:
Pull Request resolved: pytorch#1637

Enforce cache misses (even if trace-driven testing doesn't experience cache miss due to limited trace size) so that we can evaluate performance under cache misses.

Note that these are not exactly cache misses; we enforce access to UVM by overriding lxu_cache_locations for N / 256 requests.

Reviewed By: YuzeDaiMeta

Differential Revision: D42194019

fbshipit-source-id: ab04c1cc7a749e84d605cfe4f1687489ceab5725

* Add TensorAccessor with memcheck (pytorch#1602)

Summary:
Pull Request resolved: pytorch#1602

Illegal memory access is a common problem during GPU kernel execution.
The FBGEMM GPU relies on PyTorch's `C10_CUDA_KERNEL_LAUNCH_CHECK()` and
the CUDA runtime to detect such problems and throw an error.  However,
there are a few known issues with this approach.

(1) `C10_CUDA_KERNEL_LAUNCH_CHECK()` detects errors on the host.
However, due to the non-blocking, asynchronous nature of GPU kernel
execution, the error is caught on the host at a later point than where
the problematic kernel was launched.  This can cause the stack trace
to be inaccurate and make debugging more difficult.  Although the
issue can be fixed by running the code with `CUDA_LAUNCH_BLOCKING=1`,
this can change the state of the execution and cause Heisenbugs.

(2) Not all illegal memory accesses are caught by the runtime.  This
means that the system may not always throw an error when illegal
memory access occurs.

(3) Although the runtime throws an error for illegal memory access, it
is difficult to pinpoint the specific kernel and memory buffer/address
that is causing the problem.

For all the aforementioned reasons, we attempt to catch and throw an
error as soon as possible in the kernel when illegal memory accesses
occur in FBGEMM GPU.  We introduce the `FBGEMM_GPU_MEMCHECK` flag
to enable memory checking during compile time.  We copy PyTorch's
`TensorAccessor.h` into the FBGEMM GPU and extend it to check every
memory access through the `PackedTensorAccessor`.  If an invalid memory
access occurs, we throw an error using `CUDA_KERNEL_ASSERT`.  The error
message includes the name of the tensor and the kernel that caused the
problem.

If `FBGEMM_GPU_MEMCHECK` is enabled, FBGEMM operators will use
`fbgemm::PackedTensorAccessor`.  Otherwise, they will use
`at::PackedTensorAccessor`

`FBGEMM_GPU_MEMCHECK` integration in FBGEMM ops will be done in
subsequent diffs

Reviewed By: r-barnes

Differential Revision: D43421838

fbshipit-source-id: c8ef04970d94bb097cb5f09b42f994db72845167

* Fix compiling with Xcode 14.3 (pytorch#1648)

Summary:
Pull Request resolved: pytorch#1648

This hack is not needed in Xcode 14.3 anymore, where the clang version is 14.0.3. So change the workaround to only include up to 14.0.2.

Reviewed By: MatzeB

Differential Revision: D44130421

fbshipit-source-id: 1fb2948567941bdf6ee9487ccfaa9dfb2caf92dd

* Add support for building FBGEMM_GPU against Python 3.11 in OSS (pytorch#1646)

Summary:
- Parallelize the FBGEMM CI builds to build and test static and shared libraries independently instead of in serial
- Move the FBGEMM CI builds to run inside Docker containers
- Add support for building FBGEMM_GPU against Python 3.11 in OSS
- Move all FBGEMM_GPU nightly and release build jobs to run inside `amazonlinux:2023` Docker container
- Assuming no build errors or resource starvation, the full OSS build process now runs in under 30 minutes.

Pull Request resolved: pytorch#1646

Reviewed By: shintaro-iwasaki

Differential Revision: D44157228

Pulled By: q10

fbshipit-source-id: 6403ea9955856157785c50837b0b8e4c0cd26d53

* Remove magic numbers from fbgemm/Types.h (pytorch#1629)

Summary:
Pull Request resolved: pytorch#1629

Replaces magic numbers with constexpr variables

Reviewed By: sryap

Differential Revision: D43776442

fbshipit-source-id: 5cef7566816f8730f5daa08948ee3260367787aa

* added check to avoid div 0 errors in cache report (pytorch#1645)

Summary:
Pull Request resolved: pytorch#1645

as in title

Reviewed By: jianyuh

Differential Revision: D44096435

fbshipit-source-id: a7a87a14ffecc2fb6e0be74d199d385357946672

* jagged_dense_bmm operator optimization (pytorch#1643)

Summary:
Pull Request resolved: pytorch#1643

This diff optimizes the jagged_dense_bmm operator with the following techniques:
* tiling across thread blocks, using GPU shared memory within each thread block
* tiling across threads within a thread block, using registers within each thread
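
The tiling idea can be sketched in pure Python (blocked matrix multiplication only; the real kernel is CUDA, where thread blocks map to tiles and operands are staged in shared memory and registers):

```python
# Blocked (tiled) matrix multiply: each (i0, j0) tile of C is computed by
# sweeping tiles of A and B along the K dimension, so a small working set
# is reused many times -- the locality a GPU kernel exploits via shared
# memory (per thread block) and registers (per thread).
TILE = 2

def tiled_matmul(A, B):
    m, k = len(A), len(A[0])
    n = len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i0 in range(0, m, TILE):          # tile over rows of C
        for j0 in range(0, n, TILE):      # tile over cols of C
            for k0 in range(0, k, TILE):  # accumulate over K tiles
                for i in range(i0, min(i0 + TILE, m)):
                    for j in range(j0, min(j0 + TILE, n)):
                        for kk in range(k0, min(k0 + TILE, k)):
                            C[i][j] += A[i][kk] * B[kk][j]
    return C

print(tiled_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```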

Reviewed By: brad-mengchi

Differential Revision: D43674845

fbshipit-source-id: 85f0abf89fa958f79636ef59c3070a1c569b73c2

* jagged_dense_bmm: fix ROCm test failures (pytorch#1655)

Summary:
This patch fixes test failures on AMD GPUs.

1. Remove `__restrict__`. I don't think it is needed even for CUDA, but it confuses HIPCC.
2. Use `uint32_t` instead of `auto`: old ROCm (including ROCm <= 5.3) does not have a `+=` operator for the type of `blockIdx.z`, causing a compilation error. We observed that this issue is fixed in ROCm 5.4.3, but let's use `uint32_t` for now. We should revisit and use `auto` later. See this for details: ROCm/hipamd@86a1634

Pull Request resolved: pytorch#1655

Test Plan: GitHub Actions' AMD CI

Reviewed By: q10, brad-mengchi

Differential Revision: D44242622

Pulled By: shintaro-iwasaki

fbshipit-source-id: c9b88155ebf1ed881b2d03e3be0e8991b4b30174

* Support embedding dim 1024 ~ 2048 (pytorch#1656)

Summary:
Pull Request resolved: pytorch#1656

wushirong reported the failure on https://fburl.com/code/hae91ra7.

- The embedding config is from f418615450.
- `max_int8_128b_rows` is 10 --> D = 1280

Our embedding dim has grown to 1024 + ?

Note that the static shared memory can only go up to 48 KB:

> Kernels relying on shared memory allocations over 48 KB per block are architecture-specific, as such they must use dynamic shared memory (rather than statically sized arrays)

in https://docs.nvidia.com/cuda/cuda-c-programming-guide/

For the ptxas shared memory error:
```
[2023-03-21T22:04:33.899-07:00] ptxas error   : Entry function '_ZN4nbit60INT8_split_embedding_codegen_forward_weighted_kernel_small_LIiN3c104HalfELm2ELm4ELm4E
Lm8ELm16ELb1EEEvN2at27GenericPackedTensorAccessorIhLm1ENS3_17RestrictPtrTraitsElEES6_NS4_IiLm1ES5_iEENS4_IlLm1ES5_iEENS4_IhLm1ES5_iEES7_N10fbgemm_gpu12FixedDiv
isorENS4_IT_Lm1ES5_iEESD_llNS4_IfLm1ES5_iEENS4_IT0_Lm2ES5_iEENS4_IhLm2ES5_lEES7_' uses too much shared data (0x10080 bytes, 0xc000 max)
```

Currently we reduce `InputRowsInFlight` to bypass the issue. The static shared memory used in the kernel is:
```
  typedef uint4 AllBuffers[WarpsPerBlock][OutputRowsPerThread][InputRowsInFlight][NumUint4LoadsPerRow];
  __shared__ AllBuffers buffers;
```

Long term, we can change the static shared memory to dynamic shared memory, and increase the shared memory size to be 64 KB +.
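
Decoding the ptxas numbers above (simple arithmetic only; no FBGEMM specifics assumed):

```python
# The kernel's reported static shared memory usage vs. the per-block cap
# for statically sized arrays on this architecture.
used = 0x10080            # bytes reported by ptxas ("uses too much shared data")
limit = 0xc000            # 48 KB = 49152 bytes, the static shared memory cap
over = used - limit       # bytes over the cap

print(used, limit, over)  # 65664 49152 16512
```

So the kernel overshoots the 48 KB static limit by roughly 16 KB, which is why shrinking `InputRowsInFlight` (or moving to dynamic shared memory) resolves the error.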

Reviewed By: wushirong

Differential Revision: D44270081

fbshipit-source-id: 367ae838ea073dfe58d859ea3c0e6c7190beca6a

* Containerize the remaining FBGEMM_GPU CI jobs (pytorch#1658)

Summary:
- Containerize the remaining FBGEMM_GPU CI jobs
- Add Conda cleanups to make PyTorch and CUDA installs more reliable
- Update post-install checks for PyTorch to work with ROCm
- Update the CI to continue running on jobs that fail on just a few variants
- Use PIP to install PyTorch GPU nightly as the nightly packages show up in PIP more reliably than in Conda

Pull Request resolved: pytorch#1658

Reviewed By: shintaro-iwasaki

Differential Revision: D44306708

Pulled By: q10

fbshipit-source-id: 5f0862f18eca7151759d9983aa97849222539d7d

* Add tbe_input_combine_with_length for GPU (pytorch#1647)

Summary:
Pull Request resolved: pytorch#1647

Implement `tbe_input_combine_with_length` for GPU.  The operator takes
3 lists of tensors (`indices`, `lengths`, and `per_sample_weights`)
and concatenates each one into a single tensor.  Implicit type casting
is also performed if the input types are different from the output
types.  `indices` and `lengths` tensors can be of type `int32_t` or
`int64_t`.  The outputs for `indices` concatenation and `lengths`
concatenation are fixed to `int32_t`.  `per_sample_weights` must be
`float`.
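
At a high level, the concatenation can be sketched in pure Python (a sketch only; names are made up, and the real op additionally casts indices/lengths to `int32_t` and runs on GPU):

```python
# Flatten each list of per-table tensors into one combined output,
# one combined tensor per input list (indices, lengths, weights).
def combine(tensor_lists):
    return [[v for t in lst for v in t] for lst in tensor_lists]

indices = [[1, 2], [3], [4, 5, 6]]
lengths = [[2], [1], [3]]
per_sample_weights = [[1.0, 1.0], [0.5], [1.0, 1.0, 1.0]]

combined_indices, combined_lengths, combined_weights = combine(
    [indices, lengths, per_sample_weights])
# combined_indices == [1, 2, 3, 4, 5, 6]
```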

Reviewed By: bangshengtang

Differential Revision: D44076452

fbshipit-source-id: f6ce8628e7345093bb55835f9523870c2914516f

* jagged_jagged_bmm operator optimization (pytorch#1644)

Summary:
Pull Request resolved: pytorch#1644

This diff optimizes the jagged_jagged_bmm operator using tiling across thread blocks and GPU shared memory.

Reviewed By: brad-mengchi

Differential Revision: D44029528

fbshipit-source-id: fa5cd5a26893f935427bce5efb7dfcc731c3f47d

* Specify device to emulate_cache_miss kernel (pytorch#1660)

Summary:
Pull Request resolved: pytorch#1660

When emulate cache miss was enabled, it caused illegal memory access if more than one GPU was in use. It turns out that the previous diff didn't specify the device within the emulate_cache_miss kernel.

This diff fixes it. In addition, it cleans things up a bit (e.g., there is no need to use an index_t-based kernel launch for the emulate_cache_miss kernel, as lxu_cache_locations is always int32_t).

Reviewed By: sryap, YuzeDaiMeta

Differential Revision: D44340131

fbshipit-source-id: d99ba2364e9030cbca6c1166e578d24d99646bb1

* Add C++17 Support to FBGEMM and FBGEMM_GPU OSS builds (pytorch#1652)

Summary:
- Add C++17 support for the entire FBGEMM_GPU build
- Add C++17 support for the entire FBGEMM build
- Update FBGEMM tests and benchmarks to be C++17-compatible
- Make FBGEMM builds output more logging
- Cherry-pick code changes from D43776442 v4 now that C++17 is fully supported

Pull Request resolved: pytorch#1652

Reviewed By: shintaro-iwasaki

Differential Revision: D44287321

Pulled By: q10

fbshipit-source-id: 4bf2bcf66d528939865d42b6deafc470bee55d17

* Prune CPU/GPU TBE optimizer codegen (pytorch#1659)

Summary:
Pull Request resolved: pytorch#1659

This diff aims to reduce the build time and library size of
`//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops`.

The diff modifies the build target to generate and compile only the
necessary files. This is based on the fact that CPU and GPU do not
support all optimizers in `SplitTBE`.  (Before this diff, all optimizers
were generated and compiled for both CPU and GPU.)

The following is the list of supported optimizers

|OptimType|Generated optimizer|Supported on CPU|Supported on GPU|
|---|---|---|---|
|EXACT_ADAGRAD|adagrad|x|x|
|EXACT_ROWWISE_ADAGRAD|rowwise_adagrad_with_counter|x|x|
||rowwise_adagrad|x|x|
|EXACT_ROWWISE_WEIGHTED_ADAGRAD|rowwise_weighted_adagrad|x|x|
|EXACT_SGD|sgd|x|x|
|SGD|approx_sgd|x|x|
|ROWWISE_ADAGRAD|approx_rowwise_adagrad_with_counter|x||
||approx_rowwise_adagrad|x||
|ADAM|adam||x|
|LAMB|lamb||x|
|LARS_SGD|lars_sgd||x|
|PARTIAL_ROWWISE_ADAM|partial_rowwise_adam||x|
|PARTIAL_ROWWISE_LAMB|partial_rowwise_lamb||x|
|-|rowwise_adagrad_with_weight_decay|||
|-|approx_rowwise_adagrad_with_weight_decay|||

Note: x = supported

Reviewed By: jianyuh

Differential Revision: D44326540

fbshipit-source-id: 02413256b4a675f13ada8e8820820cb5112cb405

* Fix the Documentation Build Job (pytorch#1673)

Summary:
- Rewrite the documentation builds job to use the build infrastructure tooling
- Rename workflow files for consistency

Pull Request resolved: pytorch#1673

Reviewed By: shintaro-iwasaki

Differential Revision: D44472660

Pulled By: q10

fbshipit-source-id: 60434c1f7098b7efa8c750133bb22f14fc98d5dc

* Back out "Prune CPU/GPU TBE optimizer codegen" (pytorch#1675)

Summary:
Pull Request resolved: pytorch#1675

Original commit changeset: 02413256b4a6

Original Phabricator Diff: D44326540

Reviewed By: q10, jianyuh

Differential Revision: D44475251

fbshipit-source-id: 5be66944a833e03a2737fc6d1baaa5c351455b2c

* Prepare bounds_check_indices for VBE (pytorch#1633)

Summary:
Pull Request resolved: pytorch#1633

Prepare `bounds_check_indices` for variable batch size TBE (VBE).

- Update the frontend API to accept VBE args
- Update the backend logic to process VBE data

Reviewed By: jianyuh

Differential Revision: D43253703

fbshipit-source-id: 2870f0c41a96265650281a9b6362d4e6dc48009b

* Move pruning/index_remapping support to embedding inplace update files (pytorch#1667)

Summary:
Pull Request resolved: pytorch#1667

As title. This diff moves pruning/index_remapping support to embedding inplace update files.

Reviewed By: jianyuh

Differential Revision: D44409419

fbshipit-source-id: 93fc91d83502eb95cb0feca2a8a03b003c336078

* jagged_softmax forward optimization (pytorch#1661)

Summary:
Pull Request resolved: pytorch#1661

This diff optimizes jagged_softmax forward with more efficient reduction from cub library.

Reviewed By: brad-mengchi

Differential Revision: D44161021

fbshipit-source-id: bf2e059d14ef4d7ad311edac65155a463ba653ff

* jagged_softmax backward optimization (pytorch#1662)

Summary:
Pull Request resolved: pytorch#1662

This diff optimizes jagged_softmax backward with more efficient reduction from cub library

Reviewed By: brad-mengchi

Differential Revision: D44205819

fbshipit-source-id: cd1d7a886d6ba68201dc1ad782c2e8cde7ff706b

* multi-gpu all_to_one improvements (pytorch#1674)

Summary:
Pull Request resolved: pytorch#1674

Improved multi-gpu all_to_one with:
1. new intermediate hop selection taking advantage of distinct NVLinks
2. overlapping of intermediate hop transfers with each other and with direct-peer transfers

Reviewed By: doehyun

Differential Revision: D44285941

fbshipit-source-id: 0202083f04388b5ba60b8155809433f334993ef4

* Extract and export weights offsets/placements initialization functions (pytorch#1669)

Summary:
Pull Request resolved: pytorch#1669

Extract portions initializing the weights_placements/offsets tensors into separate functions and jit.export them.
SplitState is converted to a NamedTuple since we can't jit.script a dataclass that also holds an enum.

Reviewed By: houseroad

Differential Revision: D44338256

fbshipit-source-id: e1c12e5956f7217d51cd190958c3764d220e521d

* Fix the ROCm Test Job (pytorch#1668)

Summary:
- Clean up the ROCm test job and re-enable ROCm testing on the rocm instances.
- Update the build scripts framework to build FBGEMM_GPU against the correct hardware target that it is intended to be tested on.  One thing that was discovered was that if FBGEMM_GPU was built with `PYTORCH_ROCM_ARCH=gfx90a` but run on `gfx908` target, the tests will fail with a segfault.  While the failure is expected, the segfault can be unfriendly and confusing for users.
- Enable correct compilation of `merge_pooled_embeddings` operator under ROCm
- Fix existing code in `jagged_tensor_ops` from PR pytorch#1661 and pytorch#1662 that break its compilation under ROCm 5.3

Pull Request resolved: pytorch#1668

Reviewed By: shintaro-iwasaki

Differential Revision: D44453594

Pulled By: q10

fbshipit-source-id: 2030cd0e00c6ff9694c2783dfd62c31cf5543da2

* Use exported functions instead of calling initialize_weights in weights loading (pytorch#1676)

Summary:
Pull Request resolved: pytorch#1676

Export a function to reset the embedding specs by target location

Reviewed By: RoshanPAN, houseroad

Differential Revision: D44338258

fbshipit-source-id: 502733e9f3a164450a02656d2822492fbf69f994

* Extract index remappings array initialization and jit.export it (pytorch#1670)

Summary:
Pull Request resolved: pytorch#1670

ATT

Reviewed By: RoshanPAN, houseroad

Differential Revision: D44338257

fbshipit-source-id: c091666c7a4d294c283f5e3774d0494089fc3478

* Disable COUNTER in FBGEMM test (pytorch#1683)

Summary:
Pull Request resolved: pytorch#1683

Disable FBGEMM test on COUNTER mode temporarily.

Reviewed By: sryap

Differential Revision: D44589052

fbshipit-source-id: f2af6f9e3cce75d4c599c4708055e5f52ac705e2

* update hipify_torch and remove manual mapping of C10 macros (pytorch#1682)

Summary: Pull Request resolved: pytorch#1682

Reviewed By: shintaro-iwasaki

Differential Revision: D44599348

Pulled By: q10

fbshipit-source-id: 8f968a7c21b09358eac070a35ee15d5b767ea94c

* Install NVIDIA Drivers on Instances Missing the Drivers (pytorch#1684)

Summary:
- Use the pytorch/test-infra action ot install NVIDIA drivers properly if the instance is missing the drivers

Pull Request resolved: pytorch#1684

Reviewed By: shintaro-iwasaki

Differential Revision: D44603925

Pulled By: q10

fbshipit-source-id: 712bdf5c2af67c5a6f540567abcc47ed892912c1

* Clean up the linting job (pytorch#1686)

Summary:

- Clean up the linting job to use the build scripts infrastructure
- Delete the Conda prefix directory before creating a new environment, if it exists

Pull Request resolved: pytorch#1686

Reviewed By: shintaro-iwasaki

Differential Revision: D44646234

Pulled By: q10

fbshipit-source-id: d754efeadffb265c9e55bc302606fc1e60ef8b51

* reduce_to_one (pytorch#1571)

Summary:
Pull Request resolved: pytorch#1571

reduce_to_one for row-wise sharding in inference
Similar approach to all_to_one, but without having the source wait for the target to be ready (a wait that guards against potential WAR and WAW dependency violations), because this reduce_to_one implementation creates a new destination tensor.

Reviewed By: xing-liu, jianyuh

Differential Revision: D34263436

fbshipit-source-id: 7b1630b395311cfd6fef124113436f87f51a6fba

* Reorganize the build scripts (pytorch#1685)

Summary: Pull Request resolved: pytorch#1685

Reviewed By: r-barnes, shintaro-iwasaki

Differential Revision: D44654808

Pulled By: q10

fbshipit-source-id: a58987b4a3970139bba72db8cecc89c0256fba76

* Prune CPU/GPU TBE optimizer codegen (pytorch#1678)

Summary:
Pull Request resolved: pytorch#1678

This diff aims to reduce the build time and library size of
`//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops`.

[1/2] Update `lookup_invoker` to enable the function invoker based on
`has_cpu_support` and `has_gpu_support`
[2/2] Update the code generation part

The diff modifies the build target to generate and compile only the
necessary files. This is based on the fact that CPU and GPU do not
support all optimizers in `SplitTBE`.  (Before this diff, all optimizers
were generated and compiled for both CPU and GPU.)

The following is the list of supported optimizers

|OptimType|Generated optimizer|Supported on CPU|Supported on GPU|
|---|---|---|---|
|EXACT_ADAGRAD|adagrad|x|x|
|EXACT_ROWWISE_ADAGRAD|rowwise_adagrad_with_counter|x|x|
||rowwise_adagrad|x|x|
|EXACT_ROWWISE_WEIGHTED_ADAGRAD|rowwise_weighted_adagrad|x|x|
|EXACT_SGD|sgd|x|x|
|SGD|approx_sgd|x|x|
|ROWWISE_ADAGRAD|approx_rowwise_adagrad_with_counter|x||
||approx_rowwise_adagrad|x||
|ADAM|adam||x|
|LAMB|lamb||x|
|LARS_SGD|lars_sgd||x|
|PARTIAL_ROWWISE_ADAM|partial_rowwise_adam||x|
|PARTIAL_ROWWISE_LAMB|partial_rowwise_lamb||x|
|-|rowwise_adagrad_with_weight_decay|||
|-|approx_rowwise_adagrad_with_weight_decay|||

Note: x = supported

Reviewed By: q10

Differential Revision: D44484764

fbshipit-source-id: f04710e66498bdcbdad619d48411c2403316901c

* thread tiling for jagged_jagged_bmm (pytorch#1691)

Summary:
Pull Request resolved: pytorch#1691

This diff adds thread tiling optimization in jagged_jagged_bmm operator, where each thread will process a tile of elements instead of one. The implementation is similar to the one applied to jagged_dense_bmm: D43674845.

Reviewed By: brad-mengchi

Differential Revision: D44764339

fbshipit-source-id: ca4cf257bac755ab97754fdc6605072cfbfb1c4d

* tune the tile sizes for jagged_dense_bmm (pytorch#1692)

Summary:
Pull Request resolved: pytorch#1692

Tune the tile sizes based on the input tensor size. If M > N, then use larger tile size in M dimension, otherwise use larger tile size in N dimension.

Reviewed By: brad-mengchi

Differential Revision: D44791699

fbshipit-source-id: 348a66089d781e9fef141b63d7a56e6dfa5da905

* Populate supported optims to match OSS Pytorch state dict (pytorch#1632)

Summary:
Pull Request resolved: pytorch#1632

ATT.

Reviewed By: jianyuh

Differential Revision: D43887969

fbshipit-source-id: 048ff61a925113b29c547abf20d7acdc4a50b8d7

* Build Scripts and README Improvements (pytorch#1695)

Summary:
- Update build scripts to print out cc, c++, and nvcc preprocessor defines
- Print out all undefined symbols in the output library after build to inspect whether or not templates have been un-instantiated
- Handle the case where `TORCH_CUDA_ARCH_LIST` is pre-defined in the environment
- Clean up the FBGEMM_GPU READMEs to consolidate all FBGEMM_GPU build instructions into `docs/BuildInstructions.md`
- Fix the build badges for FBGEMM and FBGEMM_GPU
- Add Slack contact information to the READMEs
- Remove deprecated GitHub workflows and build scripts in favor of the new scripts, which cover all the functionality of the old scripts

Pull Request resolved: pytorch#1695

Reviewed By: shintaro-iwasaki

Differential Revision: D44901368

Pulled By: q10

fbshipit-source-id: bef6045347c905a051970e4e5f8630175e0f5ef6

* Add Documentation to Work Around GCC 12 Regressions (pytorch#1697)

Summary: Pull Request resolved: pytorch#1697

Reviewed By: shintaro-iwasaki

Differential Revision: D44935915

Pulled By: q10

fbshipit-source-id: e1bdd4ebff18bd9708208a5b659ef9a93ebc866a

* Fix build instructions (pytorch#1701)

Summary:
This change fixes a missing step (cd) in the build instructions.

Pull Request resolved: pytorch#1701

Reviewed By: sryap

Differential Revision: D45011147

Pulled By: q10

fbshipit-source-id: 704ce5bd3cfbd62c31f434c830a7300e5d645024

* Fix a build error from -Wno-unused-but-set-variable (pytorch#1702)

Summary:
This project is compiled with -Wall and -Werror (see pytorch#868) and is throwing an error for the unused variable here. This appears to be debugging code that was used to verify that the containing function was originally implemented properly, so the most straightforward solution is to just remove it.

Pull Request resolved: pytorch#1702

Reviewed By: sryap

Differential Revision: D45011174

Pulled By: q10

fbshipit-source-id: 2c252cfa6063789371f5fba5f642c2f4fb72455f

* Fix exception in QuantUtilsTest (pytorch#1703)

Summary:
This test mistakenly calls reserve() to set a vector's length instead of resize(). reserve() allocates memory for the specified number of elements, but does not actually increase the number of elements that can legally be stored in the vector. This test runs with ASAN enabled which is catching this illegal access and causing the test to fail.

This change fixes the code to instead call resize(); the test now passes.

Pull Request resolved: pytorch#1703

Reviewed By: sryap

Differential Revision: D45011317

Pulled By: q10

fbshipit-source-id: 2840d7bfcfb46ca1523f55e77a3834a1d561c045

* Support EXACT_ADAGRAD in `get_optimizer_state` (pytorch#1700)

Summary:
Pull Request resolved: pytorch#1700

This diff adds `get_optimizer_state` support for exact_adagrad.
Exact_adagrad was not previously supported in `get_optimizer_state`, but this is needed for creating a fused optimizer in torchrec.

Reviewed By: r-barnes

Differential Revision: D44963975

fbshipit-source-id: e2f523dfc1e1d17a4925e7ce4a9e65829f1cf1b0

* Split the Rendering of `embedding_forward_quantized_split_template.cu` into Smaller Files (pytorch#1694)

Summary:
`embedding_forward_quantized_split_template.cu` is a very large jinja template that renders 30+ C++ templates, which are then instantiated into 600+ kernel functions. There are three sets of jinja templates in `embedding_forward_quantized_split_template.cu`: those related to `int_nbit_split_embedding_*`, `pruned_hashmap_lookup_*`, and `pruned_array_lookup_*`.

Currently, the rendering produces a single file, which takes a large amount of time to compile.   This PR does two things at a high level.  First, it breaks up the jinja template into multiple jinja templates.  Then, it forces each of these smaller jinja templates to render multiple source files instead of a single source file.  This change will enable build parallelization and overall build time savings.

Details:

- Port improvements to `embedding_forward_quantized_split_template.cu` from D44707812
- Move the non-jinja-template code inside `embedding_forward_quantized_split_template.cu` over to `embedding_forward_template_helpers.cuh`
- Move `pruned_hashmap_lookup_*` and `pruned_array_lookup_*` sets of jinja templates out to  non-jinja-template `embedding_forward_quantized_split_lookup.cu`, since the template generated functions are redundant.
- Break the `int_nbit_split_embedding_*` set of jinja templates into two files, one for rendering kernel-side code (`embedding_forward_quantized_split_nbit_kernel_template.cu`) and the other for rendering host-side code (`embedding_forward_quantized_split_nbit_host_template.cu`)
- For the `int_nbit_split_embedding_*` host-side jinja template, make it render `weighted`, `unweighted`, and `unweighted nobag` variants into separate source files
- For the `int_nbit_split_embedding_*` kernel-side jinja template, make it render into N = [`weighted`, `unweighted`, and `unweighted nobag` variants] x [6 embedding types] separate source files, each containing a single C++ template kernel function.  Also generate the code to explicitly instantiate the kernel templates.  For each of the C++ templates being generated, there will be 2 (device-only bool) x 3-4 (output types) x 3-5 (cases) = 18-40 actual template instantiations
- To help with debugging missing template instantiations, print out all undefined symbols in the output library after build to inspect whether or not templates have been un-instantiated
- Update build scripts to print out `cc`, `c++`, and `nvcc` preprocessor defines
- Handle the case where `TORCH_CUDA_ARCH_LIST` is pre-defined in the environment

Pull Request resolved: pytorch#1694

Reviewed By: sryap, r-barnes

Differential Revision: D44842524

Pulled By: q10

fbshipit-source-id: 96f92e40ab2fec598aeb8c483e94997ac050aae7

* Back out "Prune CPU/GPU TBE optimizer codegen" (pytorch#1706)

Summary:
Pull Request resolved: pytorch#1706

Original commit changeset: f04710e66498

Original Phabricator Diff: D44484764

Reviewed By: q10, brad-mengchi, jianyuh, shintaro-iwasaki

Differential Revision: D45054051

fbshipit-source-id: 9d14504c76eb93b2f1b14f4c2ec4c5b807c7fc4a

* Use CUB kernel for 2D asynchronous_complete_cumsum (pytorch#1707)

Summary:
Pull Request resolved: pytorch#1707

Temporarily use the CUB kernel instead of the custom kernel for 2D
`asynchronous_complete_cumsum`
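For context, a "complete" cumsum here refers (following FBGEMM's `asynchronous_complete_cumsum` convention) to a prefix sum with a leading zero, so the output is one element longer than the input; the 2D variant applies this row-wise. A minimal plain-Python sketch of that semantics (illustrative only, not the CUB-based kernel):

```python
def complete_cumsum(row):
    # Length-(n + 1) prefix sum: out[0] = 0 and out[i] = sum(row[:i]).
    out = [0]
    for v in row:
        out.append(out[-1] + v)
    return out

def complete_cumsum_2d(rows):
    # Row-wise variant for 2D input.
    return [complete_cumsum(r) for r in rows]

print(complete_cumsum([3, 1, 4]))  # [0, 3, 4, 8]
print(complete_cumsum_2d([[1, 2], [5, 0]]))
```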

Reviewed By: q10, brad-mengchi, jianyuh

Differential Revision: D45062784

fbshipit-source-id: cebe3992ff8ebec9c0f554e729b8d79a1eced1de

* Split the Code Generation for `embedding_backward_split_template.cu` into Smaller Files (pytorch#1705)

Summary:
`embedding_backward_split_template.cu` contains both jinja-template and non-jinja-template code, and some of the templating is unnecessary.  Furthermore, the template generates both the vanilla and `nobag` variants of unweighted into the same source file.  This PR moves the non-jinja-template code out of the template, de-duplicates code that is unnecessarily templated, and splits the generation of the code into three files per optimizer, one each for `weighted`, `unweighted nobag`, and `unweighted`.

Details:

- Migrate non-jinja-templated code out of `embedding_backward_split_template.cu` and into `embedding_backward_template_helpers.cuh`
- De-templatize `split_embedding_backward_codegen_{{ optimizer }}_{{ wdesc }}_find_long_segments` into `split_embedding_backward_codegen_find_long_segments` since there is no implementation difference between the optimizers and weighted vs unweighted
- Migrate `grad_mean_kernel` and `split_embedding_backward_codegen_find_long_segments` into a separate non-template source file to de-duplicate code generation and compilation
- Split the code generation of `embedding_backward_split_template.cu` into 3 files per optimizer, according to weighted, unweighted_nobag, and unweighted

Pull Request resolved: pytorch#1705

Reviewed By: sryap

Differential Revision: D45073273

Pulled By: q10

fbshipit-source-id: e82ea643f8e67ad5aa0b3de03562532c5735453d

* Add jagged slice op for cpu (pytorch#1690)

Summary:
Pull Request resolved: pytorch#1690

The context for why this is needed is as follows:
1) For really long sparse features, we want to split them into multiple chunks that can be fed into the model.
2) Slicing requires users to provide a per-row start point and a maximum length L.

Based on these requirements, a custom op mimicking the slice semantics of a normal tensor works best.

An example usage using pseudo code

```
input_jagged_tensor = [[1, 2, 3, 4], [1, 2, 3], [1, 2, 3, 4, 5, 6], [1], [1, 2]]
start = [0, 0, 0, 0, 0]
slice_length = 3

>> jagged_slice(input_jagged_tensor, start, slice_length)

output_jagged_tensor = [[1, 2, 3], [1, 2, 3], [1, 2, 3], [1], [1, 2]]

```

A corresponding operation for dense tensor would look like
```
dense_tensor = torch.randn((8, 10))
slice_dense_tensor = dense_tensor[:, 1:3]
```
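The semantics above can be sketched in plain Python, with a list of lists standing in for a jagged tensor (an illustrative reimplementation, not the actual FBGEMM op):

```python
def jagged_slice(jagged, start, slice_length):
    # For each row i, take up to `slice_length` elements beginning at start[i];
    # rows shorter than start[i] + slice_length are simply truncated.
    return [row[s:s + slice_length] for row, s in zip(jagged, start)]

x = [[1, 2, 3, 4], [1, 2, 3], [1, 2, 3, 4, 5, 6], [1], [1, 2]]
print(jagged_slice(x, [0, 0, 0, 0, 0], 3))
# [[1, 2, 3], [1, 2, 3], [1, 2, 3], [1], [1, 2]]
```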

Reviewed By: sryap

Differential Revision: D44299744

fbshipit-source-id: 44996f2f2ec5fc5f31dda4cb3bd8f0241497df66

* Move radix sort to common utilities and add the possibility to handle negative integers (pytorch#1672)

Summary:
Move the `radix_sort` implementation to common utilities, so it can be used in PyTorch in case it was not built with FBGEMM GPU.
Add the possibility to handle negative integers, which is crucial for reusing `radix_sort` in PyTorch's `sort` operation.

Details:
This PR addresses two issues:
1.  `radix_sort` is currently used in [scatter_reduce](https://github.com/dszwicht/pytorch/blob/master/aten/src/ATen/native/cpu/ScatterGatherKernel.cpp#L630) (please view this [comment](https://github.com/pytorch/pytorch/pull/82703/files#r1045360609) for more information). Until now, `radix_sort` lived under the `fbgemm_gpu` subproject, which means the implementation was not available in PyTorch when it was built for CPU only - that's why `radix_sort` was copy-pasted under the aten directory in PyTorch. This PR moves the `radix_sort` implementation to common utilities.
2. In GNN workloads we often sort 1D integer data with non-negative values, for example, when converting CSR to CSC format. Unfortunately, `torch.sort` for 1D data works sequentially. `radix_sort` seems to be a perfect match to accelerate the described case. However, if we want to use it on the PyTorch side, we have to either fall back to the regular path after detecting negative numbers in the tensor or perform post-processing by swapping the positive and negative blocks of data (data like `[2, -1, -2, 1]` after sorting will be in the form `[1, 2, -2, -1]`, because negative numbers have the sign bit set and therefore compare as larger when treated as unsigned). Neither of these solutions is elegant. As an alternative, I propose extending the `radix_sort` algorithm with the capability to handle negative numbers. This can be enabled by passing an optional parameter, `maybe_with_neg_vals`. If set to `true`, we will perform all passes (up to the most significant sign bit) and apply a special prefix sum combination in the last pass. An example of how we can reuse fbgemm in PyTorch can be found in my private fork, [here](DamianSzwichtenberg/pytorch#2) (I also provide speedup data).

The above changes have several consequences:
1. `TORCH_CHECK` was replaced with `assert` as fbgemm CPU does not have PyTorch in its dependencies.
2. `__builtin_clz` was replaced with a manual implementation, as `__builtin_clz` is not portable.

Additional information for reviewers:
I did perform benchmarks of `radix_sort` before and after my code modification. I didn't observe any performance drop.
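The negative-value handling described above can be illustrated with a small sequential Python sketch (the real `radix_sort` is a parallel C++ implementation using histograms and prefix sums; here the special last-pass treatment is expressed simply as emitting the sign-bit buckets in rotated order, so buckets 128-255, which hold negative two's-complement patterns, come out before buckets 0-127):

```python
def radix_sort_with_neg(vals, bits=32):
    # LSD radix sort on 8-bit digits of the two's-complement bit patterns.
    mask = (1 << bits) - 1
    keys = [v & mask for v in vals]  # signed -> unsigned bit pattern
    for shift in range(0, bits, 8):
        buckets = [[] for _ in range(256)]
        for k in keys:
            buckets[(k >> shift) & 0xFF].append(k)
        if shift == bits - 8:
            # Final pass sees the sign bit: emit negative buckets (128-255)
            # before non-negative ones (0-127).
            order = list(range(128, 256)) + list(range(128))
        else:
            order = range(256)
        keys = [k for b in order for k in buckets[b]]
    # Map bit patterns back to signed integers.
    sign = 1 << (bits - 1)
    return [k - (1 << bits) if k & sign else k for k in keys]

print(radix_sort_with_neg([2, -1, -2, 1]))  # [-2, -1, 1, 2]
```

Within the negative range, the unsigned order of two's-complement patterns already matches the signed order, so rotating the buckets in the final pass is sufficient.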

Pull Request resolved: pytorch#1672

Reviewed By: sryap

Differential Revision: D44616959

Pulled By: q10

fbshipit-source-id: f34594478c94ec6610c05545feb2044b58d79d66

* Daily `arc lint --take CLANGFORMAT`

Reviewed By: bigfootjon

Differential Revision: D45141964

fbshipit-source-id: 58308a31522a3b1446835e358a93483b611c4b15

---------

Co-authored-by: Banit Agrawal <[email protected]>
Co-authored-by: Sabin Devkota <[email protected]>
Co-authored-by: Junjie Yang <[email protected]>
Co-authored-by: Benson Ma <[email protected]>
Co-authored-by: Alfredo Tupone <[email protected]>
Co-authored-by: Sarunya Pumma <[email protected]>
Co-authored-by: Doe Hyun Yoon <[email protected]>
Co-authored-by: Matt Galloway <[email protected]>
Co-authored-by: Richard Barnes <[email protected]>
Co-authored-by: Xiao Sun <[email protected]>
Co-authored-by: Rengan Xu <[email protected]>
Co-authored-by: siwasaki <[email protected]>
Co-authored-by: Jianyu Huang <[email protected]>
Co-authored-by: Yue Dong <[email protected]>
Co-authored-by: Geet Sethi <[email protected]>
Co-authored-by: Janet Yang <[email protected]>
Co-authored-by: Wang Zhou <[email protected]>
Co-authored-by: Jongsoo Park <[email protected]>
Co-authored-by: Tran Le <[email protected]>
Co-authored-by: Ryan Landay <[email protected]>
Co-authored-by: Devashish Tyagi <[email protected]>
Co-authored-by: Szwichtenberg, Damian <[email protected]>
Co-authored-by: generatedunixname89002005325676 <[email protected]>
liligwu added a commit to ROCm/FBGEMM that referenced this pull request May 5, 2023
* using different mechanism for host mapped pinned memory (#1638)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1638

This diff adds another mechanism for allocating host-mapped pinned memory, to reduce the adverse effect on other processes running on the same host when one process is doing some large allocations.

Reviewed By: zyan0, jianyuh

Differential Revision: D43950253

fbshipit-source-id: 41a434cb63354509d32e00c851c5f3a2d68be686

* disable use_cpu test (#1635)

Summary:
This PR addresses the issue https://github.com/pytorch/FBGEMM/issues/1636

akin to https://github.com/pytorch/FBGEMM/blob/8616ed701015f8b9e4c2825ce592b204b4cfaf28/fbgemm_gpu/test/split_table_batched_embeddings_test.py#L1009

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1635

Reviewed By: shintaro-iwasaki

Differential Revision: D44033725

Pulled By: q10

fbshipit-source-id: 49f28fc2f1c20948a42728eebf3defc5195baa5d

* Update API interface and reroute backend for exact_rowwise_adagrad FE when using freq based methods (#1352)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1352

1. Update the interface to accommodate rowwise_adagrad_with_counter.
2. Route backend for rowwise_adagrad to the new rowwise_adagrad_with_counter when freq based methods (e.g. freq sgd, counter adjusted regularization) are used.

Reviewed By: csmiler

Differential Revision: D36788395

fbshipit-source-id: 8eb5da8a5c8b52bc1e237af1054aac9f7245c443

* Remove sync point in jagged_dense_elementwise_add_jagged_output backward (#1642)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1642

Remove sync point in jagged_dense_elementwise_add_jagged_output backward

Reviewed By: brad-mengchi

Differential Revision: D44039901

fbshipit-source-id: 8e7e23e4d9e01359e67e5b166adc57f894a1224d

* Add Comprehensive Build Instructions and Isolate CPU and ROCm Builds (#1639)

Summary:
- Remove `.post0` suffix from the autogenerated package version
- Document the full FBGEMM_GPU OSS build process in a separate Markdown file
- Remove installation of packages not needed for ROCm builds
- Migrate CPU and ROCm jobs to run on top of Docker containers instead of bare metal instances
- Update GitHub workflow configuration to cancel previous jobs for a PR if a new commit is pushed to the PR

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1639

Reviewed By: shintaro-iwasaki

Differential Revision: D44076312

Pulled By: q10

fbshipit-source-id: 6b2d083022feb7421b26da2d998678e00c11f283

* include cstdint (#1640)

Summary:
fix build with gcc-13

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1640

Reviewed By: shintaro-iwasaki

Differential Revision: D44044422

Pulled By: q10

fbshipit-source-id: 692ec9c34f4aaf726294a2b643fbceabf8159033

* Add support for group size > 54 in group_index_select (#1611)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1611

If the group size is larger than 54, the op internally breaks the group down into
smaller subgroups (each subgroup size is less than or equal to 54).
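The splitting amounts to simple chunking; a hypothetical sketch (the helper name and the use of plain lists are illustrative, and 54 is the limit stated above):

```python
MAX_GROUP_SIZE = 54  # limit stated in the summary

def split_into_subgroups(group):
    # Break a group into subgroups of at most MAX_GROUP_SIZE members each;
    # only the last subgroup may be smaller.
    return [group[i:i + MAX_GROUP_SIZE]
            for i in range(0, len(group), MAX_GROUP_SIZE)]

subgroups = split_into_subgroups(list(range(130)))
print([len(g) for g in subgroups])  # [54, 54, 22]
```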

Reviewed By: jianyuh

Differential Revision: D43585937

fbshipit-source-id: bf14eeb79881a5737dcf7660e3e0f56d21f7b326

* Implement cache miss emulation in UVM_CACHING (#1637)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1637

Enforce cache misses (even if trace-driven testing doesn't experience cache miss due to limited trace size) so that we can evaluate performance under cache misses.

Note that these are not exactly cache misses; we enforce access to UVM by overriding lxu_cache_locations for N / 256 requests.

Reviewed By: YuzeDaiMeta

Differential Revision: D42194019

fbshipit-source-id: ab04c1cc7a749e84d605cfe4f1687489ceab5725

* Add TensorAccessor with memcheck (#1602)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1602

Illegal memory access is a common problem during GPU kernel execution.
The FBGEMM GPU relies on PyTorch's `C10_CUDA_KERNEL_LAUNCH_CHECK()` and
the CUDA runtime to detect such problems and throw an error.  However,
there are a few known issues with this approach.

(1) `C10_CUDA_KERNEL_LAUNCH_CHECK()` detects errors on the host.
However, due to the non-blocking, asynchronous nature of GPU kernel
execution, the error is caught on the host at a later point than where
the problematic kernel was launched.  This can cause the stack trace
to be inaccurate and make debugging more difficult.  Although the
issue can be fixed by running the code with `CUDA_LAUNCH_BLOCKING=1`,
this can change the state of the execution and cause Heisenbugs.

(2) Not all illegal memory accesses are caught by the runtime.  This
means that the system may not always throw an error when illegal
memory access occurs.

(3) Although the runtime throws an error for illegal memory access, it
is difficult to pinpoint the specific kernel and memory buffer/address
that is causing the problem.

For all the aforementioned reasons, we attempt to catch and throw an
error as soon as possible in the kernel when illegal memory accesses
occur in FBGEMM GPU.  We introduce the `FBGEMM_GPU_MEMCHECK` flag
to enable memory checking during compile time.  We copy PyTorch's
`TensorAccessor.h` into the FBGEMM GPU and extend it to check every
memory access through the `PackedTensorAccessor`.  If an invalid memory
access occurs, we throw an error using `CUDA_KERNEL_ASSERT`.  The error
message includes the name of the tensor and the kernel that caused the
problem.

If `FBGEMM_GPU_MEMCHECK` is enabled, FBGEMM operators will use
`fbgemm::PackedTensorAccessor`.  Otherwise, they will use
`at::PackedTensorAccessor`

`FBGEMM_GPU_MEMCHECK` integration in FBGEMM ops will be done in
subsequent diffs

Reviewed By: r-barnes

Differential Revision: D43421838

fbshipit-source-id: c8ef04970d94bb097cb5f09b42f994db72845167

* Fix compiling with Xcode 14.3 (#1648)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1648

This hack is not needed in Xcode 14.3 anymore, where the clang version is 14.0.3. So change the workaround to only include up to 14.0.2.

Reviewed By: MatzeB

Differential Revision: D44130421

fbshipit-source-id: 1fb2948567941bdf6ee9487ccfaa9dfb2caf92dd

* Add support for building FBGEMM_GPU against Python 3.11 in OSS (#1646)

Summary:
- Parallelize the FBGEMM CI builds to build and test static and shared libraries independently instead of in serial
- Move the FBGEMM CI builds to run inside Docker containers
- Add support for building FBGEMM_GPU against Python 3.11 in OSS
- Move all FBGEMM_GPU nightly and release build jobs to run inside `amazonlinux:2023` Docker container
- Assuming no build errors or resource starvation, the full OSS build process now runs under 30 minutes.

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1646

Reviewed By: shintaro-iwasaki

Differential Revision: D44157228

Pulled By: q10

fbshipit-source-id: 6403ea9955856157785c50837b0b8e4c0cd26d53

* Remove magic numbers from fbgemm/Types.h (#1629)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1629

Replaces magic numbers with constexpr variables

Reviewed By: sryap

Differential Revision: D43776442

fbshipit-source-id: 5cef7566816f8730f5daa08948ee3260367787aa

* added check to avoid div 0 errors in cache report (#1645)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1645

as in title

Reviewed By: jianyuh

Differential Revision: D44096435

fbshipit-source-id: a7a87a14ffecc2fb6e0be74d199d385357946672

* jagged_dense_bmm operator optimization (#1643)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1643

This diff optimizes the jagged_dense_bmm operator with the following optimizations:
* tiling across thread blocks, using GPU shared memory within each thread block
* tiling across threads within a thread block, using registers for each thread
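For orientation, the semantics being optimized can be sketched naively (without any tiling) in plain Python - assuming the usual FBGEMM jagged layout, where `x_values` holds all segments back-to-back, `x_offsets` delimits each batch entry, and segment `b` is multiplied by the dense matrix `y[b]`:

```python
def jagged_dense_bmm(x_values, x_offsets, y):
    # x_values: list of rows, each of length K (all segments concatenated)
    # x_offsets: length B + 1; segment b is x_values[x_offsets[b]:x_offsets[b+1]]
    # y: list of B dense K x N matrices; output has one row per jagged row
    out = []
    for b in range(len(x_offsets) - 1):
        for row in x_values[x_offsets[b]:x_offsets[b + 1]]:
            out.append([sum(r * y[b][k][n] for k, r in enumerate(row))
                        for n in range(len(y[b][0]))])
    return out

x_values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # 3 rows, K = 2
x_offsets = [0, 2, 3]                             # batch 0: 2 rows, batch 1: 1 row
y = [[[1.0, 0.0], [0.0, 1.0]],                    # identity for batch 0
     [[2.0, 0.0], [0.0, 2.0]]]                    # 2 * identity for batch 1
print(jagged_dense_bmm(x_values, x_offsets, y))
# [[1.0, 2.0], [3.0, 4.0], [10.0, 12.0]]
```

The GPU kernel computes the same result but loads tiles of `x` and `y` into shared memory so each element is read from global memory far fewer times.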

Reviewed By: brad-mengchi

Differential Revision: D43674845

fbshipit-source-id: 85f0abf89fa958f79636ef59c3070a1c569b73c2

* jagged_dense_bmm: fix ROCm test failures (#1655)

Summary:
This patch fixes test failures on AMD GPUs.

1. Remove `__restrict__`. I don't think it is needed even for CUDA, but it confuses HIPCC.
2. Use `uint32_t` instead of `auto`: old ROCm (including ROCm <= 5.3) does not have `+=` operator for the type of `blockIdx.z`, causing a compilation error. We observed that this issue is fixed in ROCm 5.4.3, but let's use `uint32_t` for now. We should revisit and use `auto` later. See this for details: https://github.com/ROCm-Developer-Tools/hipamd/commit/86a1634c642daeda1e984d4124bcc2aeba5c4e19

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1655

Test Plan: GitHub Actions' AMD CI

Reviewed By: q10, brad-mengchi

Differential Revision: D44242622

Pulled By: shintaro-iwasaki

fbshipit-source-id: c9b88155ebf1ed881b2d03e3be0e8991b4b30174

* Support embedding dim 1024 ~ 2048 (#1656)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1656

wushirong reported the failure on https://fburl.com/code/hae91ra7 .

- The embedding config is from  f418615450 .
- `max_int8_128b_rows` is 10 --> D = 1280

Our embedding dim has grown to 1024 + ?

Note that the static shared memory can only go up to 48 KB:

> Kernels relying on shared memory allocations over 48 KB per block are architecture-specific, as such they must use dynamic shared memory (rather than statically sized arrays)

in https://docs.nvidia.com/cuda/cuda-c-programming-guide/

for ptx shared mem error:
```
[2023-03-21T22:04:33.899-07:00] ptxas error   : Entry function '_ZN4nbit60INT8_split_embedding_codegen_forward_weighted_kernel_small_LIiN3c104HalfELm2ELm4ELm4E
Lm8ELm16ELb1EEEvN2at27GenericPackedTensorAccessorIhLm1ENS3_17RestrictPtrTraitsElEES6_NS4_IiLm1ES5_iEENS4_IlLm1ES5_iEENS4_IhLm1ES5_iEES7_N10fbgemm_gpu12FixedDiv
isorENS4_IT_Lm1ES5_iEESD_llNS4_IfLm1ES5_iEENS4_IT0_Lm2ES5_iEENS4_IhLm2ES5_lEES7_' uses too much shared data (0x10080 bytes, 0xc000 max)
```

Currently we reduce `InputRowsInFlight` to bypass the issue. The static shared memory used in the kernel is:
```
  typedef uint4 AllBuffers[WarpsPerBlock][OutputRowsPerThread][InputRowsInFlight][NumUint4LoadsPerRow];
  __shared__ AllBuffers buffers;
```

Long term, we can change the static shared memory to dynamic shared memory, and increase the shared memory size to be 64 KB +.
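The arithmetic behind the limit can be sanity-checked with a tiny helper (the parameter values below are hypothetical, chosen only to show that halving `InputRowsInFlight` halves the footprint; `sizeof(uint4)` is 16 bytes):

```python
def smem_bytes(warps_per_block, output_rows_per_thread,
               input_rows_in_flight, num_uint4_loads_per_row):
    # Static shared memory for the buffer is the product of the array
    # dimensions times sizeof(uint4) = 16 bytes.
    return (warps_per_block * output_rows_per_thread *
            input_rows_in_flight * num_uint4_loads_per_row * 16)

STATIC_SMEM_LIMIT = 48 * 1024  # 0xc000 bytes of static shared memory per block

# Hypothetical parameters: halving input_rows_in_flight halves the footprint.
print(smem_bytes(4, 4, 32, 8), smem_bytes(4, 4, 16, 8))  # 65536 32768
```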

Reviewed By: wushirong

Differential Revision: D44270081

fbshipit-source-id: 367ae838ea073dfe58d859ea3c0e6c7190beca6a

* Containerize the remaining FBGEMM_GPU CI jobs (#1658)

Summary:
- Containerize the remaining FBGEMM_GPU CI jobs
- Add Conda cleanups to make PyTorch and CUDA installs more reliable
- Update post-install checks for PyTorch to work with ROCm
- Update the CI to continue running on jobs that fail on just a few variants
- Use PIP to install PyTorch GPU nightly as the nightly packages show up in PIP more reliably than in Conda

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1658

Reviewed By: shintaro-iwasaki

Differential Revision: D44306708

Pulled By: q10

fbshipit-source-id: 5f0862f18eca7151759d9983aa97849222539d7d

* Add tbe_input_combine_with_length for GPU (#1647)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1647

Implement `tbe_input_combine_with_length` for GPU.  The operator takes
3 lists of tensors (`indices`, `lengths`, and `per_sample_weights`)
and concatenates each one into a single tensor.  Implicit type casting
is also performed if the input types are different from the output
types.  `indices` and `lengths` tensors can be of type `int32_t` or
`int64_t`.  The outputs for `indices` concatenation and `lengths`
concatenation are fixed to `int32_t`.  `per_sample_weights` must be
`float`.
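A plain-Python sketch of the combine semantics (lists stand in for tensors; the explicit casts mirror the fixed int32/float output types described above):

```python
def tbe_input_combine_with_length(indices_list, lengths_list, weights_list):
    # Concatenate each list of per-table tensors into one flat tensor.
    # Outputs for indices/lengths are fixed to int32 (modeled here with int);
    # per_sample_weights stay float.
    combined_indices = [int(i) for t in indices_list for i in t]
    combined_lengths = [int(l) for t in lengths_list for l in t]
    combined_weights = [float(w) for t in weights_list for w in t]
    return combined_indices, combined_lengths, combined_weights

idx, lens, w = tbe_input_combine_with_length(
    [[0, 3], [7]], [[1, 1], [1]], [[1.0, 0.5], [2.0]])
print(idx, lens, w)  # [0, 3, 7] [1, 1, 1] [1.0, 0.5, 2.0]
```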

Reviewed By: bangshengtang

Differential Revision: D44076452

fbshipit-source-id: f6ce8628e7345093bb55835f9523870c2914516f

* jagged_jagged_bmm operator optimization (#1644)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1644

This diff optimizes the jagged_jagged_bmm operator using tiling across thread blocks and GPU shared memory.

Reviewed By: brad-mengchi

Differential Revision: D44029528

fbshipit-source-id: fa5cd5a26893f935427bce5efb7dfcc731c3f47d

* Specify device to emulate_cache_miss kernel (#1660)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1660

When emulate cache miss was enabled, it caused illegal memory accesses when using more than one GPU. It turns out that the previous diff didn't specify the device within the emulate_cache_miss kernel.

This diff fixes it. In addition, it cleans up a bit (e.g., there is no need to use an index_t-based kernel launch for the emulate_cache_miss kernel, as lxu_cache_locations is always int32_t).

Reviewed By: sryap, YuzeDaiMeta

Differential Revision: D44340131

fbshipit-source-id: d99ba2364e9030cbca6c1166e578d24d99646bb1

* Add C++17 Support to FBGEMM and FBGEMM_GPU OSS builds (#1652)

Summary:
- Add C++17 support for the entire FBGEMM_GPU build
- Add C++17 support for the entire FBGEMM build
- Update FBGEMM tests and benchmarks to be C++17-compatible
- Make FBGEMM builds output more logging
- Cherry-pick code changes from D43776442 v4 now that C++17 is fully supported

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1652

Reviewed By: shintaro-iwasaki

Differential Revision: D44287321

Pulled By: q10

fbshipit-source-id: 4bf2bcf66d528939865d42b6deafc470bee55d17

* Prune CPU/GPU TBE optimizer codegen (#1659)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1659

This diff aims to reduce the build time and library size of
`//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops`.

The diff modifies the build target to generate and compile only the
necessary files. This is based on the fact that CPU and GPU do not
support all optimizers in `SplitTBE`.  (Before this diff, all optimizers
were generated and compiled for both CPU and GPU.)

The following is the list of supported optimizers

|OptimType|Generated optimizer|Supported on CPU|Supported on GPU|
|---|---|---|---|
|EXACT_ADAGRAD|adagrad|x|x|
|EXACT_ROWWISE_ADAGRAD|rowwise_adagrad_with_counter|x|x|
||rowwise_adagrad|x|x|
|EXACT_ROWWISE_WEIGHTED_ADAGRAD|rowwise_weighted_adagrad|x|x|
|EXACT_SGD|sgd|x|x|
|SGD|approx_sgd|x|x|
|ROWWISE_ADAGRAD|approx_rowwise_adagrad_with_counter|x||
||approx_rowwise_adagrad|x||
|ADAM|adam||x|
|LAMB|lamb||x|
|LARS_SGD|lars_sgd||x|
|PARTIAL_ROWWISE_ADAM|partial_rowwise_adam||x|
|PARTIAL_ROWWISE_LAMB|partial_rowwise_lamb||x|
|-|rowwise_adagrad_with_weight_decay|||
|-|approx_rowwise_adagrad_with_weight_decay|||
Note: x = supported

Reviewed By: jianyuh

Differential Revision: D44326540

fbshipit-source-id: 02413256b4a675f13ada8e8820820cb5112cb405

* Fix the Documentation Build Job (#1673)

Summary:
- Rewrite the documentation builds job to use the build infrastructure tooling
- Rename workflow files for consistency

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1673

Reviewed By: shintaro-iwasaki

Differential Revision: D44472660

Pulled By: q10

fbshipit-source-id: 60434c1f7098b7efa8c750133bb22f14fc98d5dc

* Back out "Prune CPU/GPU TBE optimizer codegen" (#1675)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1675

Original commit changeset: 02413256b4a6

Original Phabricator Diff: D44326540

Reviewed By: q10, jianyuh

Differential Revision: D44475251

fbshipit-source-id: 5be66944a833e03a2737fc6d1baaa5c351455b2c

* Prepare bounds_check_indices for VBE (#1633)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1633

Prepare `bounds_check_indices` for variable batch size TBE (VBE).

- Update the frontend API to accept VBE args
- Update the backend logic to process VBE data

Reviewed By: jianyuh

Differential Revision: D43253703

fbshipit-source-id: 2870f0c41a96265650281a9b6362d4e6dc48009b

* Move pruning/index_remapping support to embedding inplace update files (#1667)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1667

As title. This diff moves pruning/index_remapping support to embedding inplace update files.

Reviewed By: jianyuh

Differential Revision: D44409419

fbshipit-source-id: 93fc91d83502eb95cb0feca2a8a03b003c336078

* jagged_softmax forward optimization (#1661)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1661

This diff optimizes jagged_softmax forward with a more efficient reduction from the cub library.
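For orientation, the forward semantics can be sketched in plain Python - assuming the common jagged layout where `values` holds all segments back-to-back and `offsets` delimits them, with softmax computed independently per segment (the actual kernel parallelizes this and uses cub block reductions for the max and the sum):

```python
import math

def jagged_softmax(values, offsets):
    # Softmax over each jagged segment values[offsets[b]:offsets[b+1]].
    out = []
    for b in range(len(offsets) - 1):
        seg = values[offsets[b]:offsets[b + 1]]
        if not seg:
            continue
        m = max(seg)                      # subtract max for numerical stability
        exps = [math.exp(v - m) for v in seg]
        s = sum(exps)
        out.extend(e / s for e in exps)
    return out

print(jagged_softmax([0.0, 0.0, 1.0, 1.0, 1.0], [0, 2, 5]))
# segment 1 -> [0.5, 0.5]; segment 2 -> [1/3, 1/3, 1/3]
```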

Reviewed By: brad-mengchi

Differential Revision: D44161021

fbshipit-source-id: bf2e059d14ef4d7ad311edac65155a463ba653ff

* jagged_softmax backward optimization (#1662)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1662

This diff optimizes jagged_softmax backward with a more efficient reduction from the cub library.

Reviewed By: brad-mengchi

Differential Revision: D44205819

fbshipit-source-id: cd1d7a886d6ba68201dc1ad782c2e8cde7ff706b

* multi-gpu all_to_one improvements (#1674)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1674

improved multi-gpu all_to_one with:
	1. new intermediate hop selection taking advantage of distinct NVLinks
	2. overlapping of intermediate hop transfers with each-other and with direct-peer transfers

Reviewed By: doehyun

Differential Revision: D44285941

fbshipit-source-id: 0202083f04388b5ba60b8155809433f334993ef4

* Extract and export weights offsets/placements initialization functions (#1669)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1669

Extract portions initializing the weights_placements/offsets tensors into separate functions and jit.export them.
SplitState is converted to a NamedTuple since we can't jit.script a dataclass that also holds an enum.

Reviewed By: houseroad

Differential Revision: D44338256

fbshipit-source-id: e1c12e5956f7217d51cd190958c3764d220e521d

* Fix the ROCm Test Job (#1668)

Summary:
- Clean up the ROCm test job and re-enable ROCm testing on the rocm instances.
- Update the build scripts framework to build FBGEMM_GPU against the correct hardware target that it is intended to be tested on.  One thing that was discovered was that if FBGEMM_GPU was built with `PYTORCH_ROCM_ARCH=gfx90a` but run on `gfx908` target, the tests will fail with a segfault.  While the failure is expected, the segfault can be unfriendly and confusing for users.
- Enable correct compilation of `merge_pooled_embeddings` operator under ROCm
- Fix existing code in `jagged_tensor_ops` from PR https://github.com/pytorch/FBGEMM/issues/1661 and https://github.com/pytorch/FBGEMM/issues/1662 that break its compilation under ROCm 5.3

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1668

Reviewed By: shintaro-iwasaki

Differential Revision: D44453594

Pulled By: q10

fbshipit-source-id: 2030cd0e00c6ff9694c2783dfd62c31cf5543da2

* Use exported functions instead of calling initialize_weights in weights loading (#1676)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1676

Export a function to reset the embedding specs by target location

Reviewed By: RoshanPAN, houseroad

Differential Revision: D44338258

fbshipit-source-id: 502733e9f3a164450a02656d2822492fbf69f994

* Extract index remappings array initialization and jit.export it (#1670)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1670

ATT

Reviewed By: RoshanPAN, houseroad

Differential Revision: D44338257

fbshipit-source-id: c091666c7a4d294c283f5e3774d0494089fc3478

* Disable COUNTER in FBGEMM test (#1683)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1683

Disable FBGEMM test on COUNTER mode temporarily.

Reviewed By: sryap

Differential Revision: D44589052

fbshipit-source-id: f2af6f9e3cce75d4c599c4708055e5f52ac705e2

* update hipify_torch and remove manual mapping of C10 macros (#1682)

Summary: Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1682

Reviewed By: shintaro-iwasaki

Differential Revision: D44599348

Pulled By: q10

fbshipit-source-id: 8f968a7c21b09358eac070a35ee15d5b767ea94c

* Install NVIDIA Drivers on Instances Missing the Drivers (#1684)

Summary:
- Use the pytorch/test-infra action ot install NVIDIA drivers properly if the instance is missing the drivers

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1684

Reviewed By: shintaro-iwasaki

Differential Revision: D44603925

Pulled By: q10

fbshipit-source-id: 712bdf5c2af67c5a6f540567abcc47ed892912c1

* Clean up the linting job (#1686)

Summary:

- Clean up the linting job to use the build scripts infrastructure
- Delete the Conda prefix directory before creating a new environment, if it exists

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1686

Reviewed By: shintaro-iwasaki

Differential Revision: D44646234

Pulled By: q10

fbshipit-source-id: d754efeadffb265c9e55bc302606fc1e60ef8b51

* reduce_to_one (#1571)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1571

reduce_to_one for row-wise sharding in inference
Similar approach to all_to_one, but without having the source wait for the target to be ready (guarding against potential WAR and WAW dependency violations), because this reduce_to_one implementation creates a new destination tensor.

Reviewed By: xing-liu, jianyuh

Differential Revision: D34263436

fbshipit-source-id: 7b1630b395311cfd6fef124113436f87f51a6fba

* Reorganize the build scripts (#1685)

Summary: Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1685

Reviewed By: r-barnes, shintaro-iwasaki

Differential Revision: D44654808

Pulled By: q10

fbshipit-source-id: a58987b4a3970139bba72db8cecc89c0256fba76

* Prune CPU/GPU TBE optimizer codegen (#1678)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1678

This diff aims to reduce the build time and library size of
`//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops`.

[1/2] Update `lookup_invoker` to enable the function invoker based on
`has_cpu_support` and `has_gpu_support`
[2/2] Update the code generation part

The diff modifies the build target to generate and compile only the
necessary files. This is based on the fact that CPU and GPU do not
support all optimizers in `SplitTBE`.  (Before this diff, all optimizers
were generated and compiled for both CPU and GPU.)

The following is the list of supported optimizers

|OptimType|Generated optimizer|Supported on CPU|Supported on GPU|
|---|---|---|---|
|EXACT_ADAGRAD|adagrad|x|x|
|EXACT_ROWWISE_ADAGRAD|rowwise_adagrad_with_counter|x|x|
||rowwise_adagrad|x|x|
|EXACT_ROWWISE_WEIGHTED_ADAGRAD|rowwise_weighted_adagrad|x|x|
|EXACT_SGD|sgd|x|x|
|SGD|approx_sgd|x|x|
|ROWWISE_ADAGRAD|approx_rowwise_adagrad_with_counter|x||
||approx_rowwise_adagrad|x||
|ADAM|adam||x|
|LAMB|lamb||x|
|LARS_SGD|lars_sgd||x|
|PARTIAL_ROWWISE_ADAM|partial_rowwise_adam||x|
|PARTIAL_ROWWISE_LAMB|partial_rowwise_lamb||x|
|-|rowwise_adagrad_with_weight_decay|||
|-|approx_rowwise_adagrad_with_weight_decay|||

Reviewed By: q10

Differential Revision: D44484764

fbshipit-source-id: f04710e66498bdcbdad619d48411c2403316901c

* thread tiling for jagged_jagged_bmm (#1691)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1691

This diff adds thread tiling optimization in jagged_jagged_bmm operator, where each thread will process a tile of elements instead of one. The implementation is similar to the one applied to jagged_dense_bmm: D43674845.

Reviewed By: brad-mengchi

Differential Revision: D44764339

fbshipit-source-id: ca4cf257bac755ab97754fdc6605072cfbfb1c4d

* tune the tile sizes for jagged_dense_bmm (#1692)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1692

Tune the tile sizes based on the input tensor size. If M > N, then use larger tile size in M dimension, otherwise use larger tile size in N dimension.
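The M-vs-N heuristic above can be sketched as follows (the function name and tile values are illustrative, not the kernel's actual constants):

```python
def choose_tile_sizes(M, N, small=8, large=16):
    """Pick (tile_m, tile_n) for a jagged_dense_bmm-style kernel.

    Heuristic sketch: give the longer output dimension the larger tile
    so each thread block covers more of it.  The small/large values here
    are placeholders, not the tuned constants from the actual kernel.
    """
    if M > N:
        return large, small  # taller output: larger tile along M
    return small, large      # wider output: larger tile along N
```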

Reviewed By: brad-mengchi

Differential Revision: D44791699

fbshipit-source-id: 348a66089d781e9fef141b63d7a56e6dfa5da905

* Populate supported optims to match OSS Pytorch state dict (#1632)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1632

ATT.

Reviewed By: jianyuh

Differential Revision: D43887969

fbshipit-source-id: 048ff61a925113b29c547abf20d7acdc4a50b8d7

* Build Scripts and README Improvements (#1695)

Summary:
- Update build scripts to print out cc, c++, and nvcc preprocessor defines
- Print out all undefined symbols in the output library after build to inspect whether or not templates have been un-instantiated
- Handle the case where `TORCH_CUDA_ARCH_LIST` is pre-defined in the environment
- Clean up the FBGEMM_GPU READMEs to consolidate all FBGEMM_GPU build instructions into `docs/BuildInstructions.md`
- Fix the build badges for FBGEMM and FBGEMM_GPU
- Add Slack contact information to the READMEs
- Remove deprecated GitHub workflows and build scripts in favor of the new scripts, which cover all the functionality of the old scripts

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1695

Reviewed By: shintaro-iwasaki

Differential Revision: D44901368

Pulled By: q10

fbshipit-source-id: bef6045347c905a051970e4e5f8630175e0f5ef6

* Add Documentation to Work Around GCC 12 Regressions (#1697)

Summary: Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1697

Reviewed By: shintaro-iwasaki

Differential Revision: D44935915

Pulled By: q10

fbshipit-source-id: e1bdd4ebff18bd9708208a5b659ef9a93ebc866a

* Fix build instructions (#1701)

Summary:
This change fixes a missing step (cd) in the build instructions.

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1701

Reviewed By: sryap

Differential Revision: D45011147

Pulled By: q10

fbshipit-source-id: 704ce5bd3cfbd62c31f434c830a7300e5d645024

* Fix a build error from -Wno-unused-but-set-variable (#1702)

Summary:
This project is compiled with -Wall and -Werror (see https://github.com/pytorch/FBGEMM/pull/868) and is throwing an error for the unused variable here. This code appears to be debugging code that was used to verify that the function it's contained in was originally implemented properly, so the most straightforward solution is to just remove it.

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1702

Reviewed By: sryap

Differential Revision: D45011174

Pulled By: q10

fbshipit-source-id: 2c252cfa6063789371f5fba5f642c2f4fb72455f

* Fix exception in QuantUtilsTest (#1703)

Summary:
This test mistakenly calls reserve() to set a vector's length instead of resize(). reserve() allocates memory for the specified number of elements, but does not actually increase the number of elements that can legally be stored in the vector. This test runs with ASAN enabled which is catching this illegal access and causing the test to fail.

This change fixes the code to instead call resize(); the test now passes.

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1703

Reviewed By: sryap

Differential Revision: D45011317

Pulled By: q10

fbshipit-source-id: 2840d7bfcfb46ca1523f55e77a3834a1d561c045

* Support EXACT_ADAGRAD in `get_optimizer_state` (#1700)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1700

This diff supports `get_optimizer_state` for exact_adagrad.
Exact_adagrad was not supported in `get_optimizer_state`; however, this is needed for creating a fused optimizer in torchrec.

Reviewed By: r-barnes

Differential Revision: D44963975

fbshipit-source-id: e2f523dfc1e1d17a4925e7ce4a9e65829f1cf1b0

* Split the Rendering of `embedding_forward_quantized_split_template.cu` into Smaller Files (#1694)

Summary:
`embedding_forward_quantized_split_template.cu` is a very large jinja template that renders 30+ C++ templates, which are then instantiated into 600+ kernel functions.  There are three sets of jinja templates in `embedding_forward_quantized_split_template.cu`: those related to `int_nbit_split_embedding_*`, `pruned_hashmap_lookup_*`, and `pruned_array_lookup_*`.

Currently, the rendering produces a single file, which takes a large amount of time to compile.   This PR does two things at a high level.  First, it breaks up the jinja template into multiple jinja templates.  Then, it forces each of these smaller jinja templates to render multiple source files instead of a single source file.  This change will enable build parallelization and overall build time savings.

Details:

- Port improvements to `embedding_forward_quantized_split_template.cu` from D44707812
- Move the non-jinja-template code inside `embedding_forward_quantized_split_template.cu` over to `embedding_forward_template_helpers.cuh`
- Move the `pruned_hashmap_lookup_*` and `pruned_array_lookup_*` sets of jinja templates out to the non-jinja-template `embedding_forward_quantized_split_lookup.cu`, since the template-generated functions are redundant.
- Break the `int_nbit_split_embedding_*` set of jinja templates into two files, one for rendering kernel-side code (`embedding_forward_quantized_split_nbit_kernel_template.cu`) and the other for rendering host-side code (`embedding_forward_quantized_split_nbit_host_template.cu`)
- For the `int_nbit_split_embedding_*` host-side jinja template, make it render `weighted`, `unweighted`, and `unweighted nobag` variants into separate source files
- For the `int_nbit_split_embedding_*` kernel-side jinja template, make it render into N = [`weighted`, `unweighted`, and `unweighted nobag` variants ] x [ 6 embedding types ] separate source files, each containing a single C++ template kernel function.  Also generate the code to explicitly instantiate the kernel templates.  For each of the C++ templates being generated, there will be 2 {device-only bool} x [3-4] (output types) x [3-5] (cases) = 18-40 actual template instantiations
- To help with debugging missing template instantiations, print out all undefined symbols in the output library after build to inspect whether or not templates have been un-instantiated
- Update build scripts to print out `cc`, `c++`, and `nvcc` preprocessor defines
- Handle the case where `TORCH_CUDA_ARCH_LIST` is pre-defined in the environment

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1694

Reviewed By: sryap, r-barnes

Differential Revision: D44842524

Pulled By: q10

fbshipit-source-id: 96f92e40ab2fec598aeb8c483e94997ac050aae7

* Back out "Prune CPU/GPU TBE optimizer codegen" (#1706)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1706

Original commit changeset: f04710e66498

Original Phabricator Diff: D44484764

Reviewed By: q10, brad-mengchi, jianyuh, shintaro-iwasaki

Differential Revision: D45054051

fbshipit-source-id: 9d14504c76eb93b2f1b14f4c2ec4c5b807c7fc4a

* Use CUB kernel for 2D asynchronous_complete_cumsum (#1707)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1707

Temporarily use the CUB kernel instead of the custom kernel for 2D
`asynchronous_complete_cumsum`

Reviewed By: q10, brad-mengchi, jianyuh

Differential Revision: D45062784

fbshipit-source-id: cebe3992ff8ebec9c0f554e729b8d79a1eced1de

* Split the Code Generation for `embedding_backward_split_template.cu` into Smaller Files (#1705)

Summary:
`embedding_backward_split_template.cu` contains both jinja-template and non-jinja-template code, and some of the templating is unnecessary.  Furthermore, the template generates both the vanilla and `nobag` variants of unweighted into the same source file.  This PR moves the non-jinja-template code out of the template, de-duplicates code that is unnecessarily templated, and splits the generation of the code into three files per optimizer, one each for `weighted`, `unweighted nobag`, and `unweighted`.

Details:

- Migrate non-jinja-templated code out of `embedding_backward_split_template.cu` and into `embedding_backward_template_helpers.cuh`
- De-templatize `split_embedding_backward_codegen_{{ optimizer }}_{{ wdesc }}_find_long_segments` into `split_embedding_backward_codegen_find_long_segments` since there is no implementation difference between the optimizers and weighted vs unweighted
- Migrate `grad_mean_kernel` and `split_embedding_backward_codegen_find_long_segments` into a separate non-template source file to de-duplicate code generation and compilation
- Split the code generation of `embedding_backward_split_template.cu` into 3 files per optimizer, according to weighted, unweighted_nobag, and unweighted

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1705

Reviewed By: sryap

Differential Revision: D45073273

Pulled By: q10

fbshipit-source-id: e82ea643f8e67ad5aa0b3de03562532c5735453d

* Add jagged slice op for cpu (#1690)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1690

The context why this is needed is as follows
1) For really long sparse features we want to split them into multiple chunks that can be fed into the model.
2) Slicing requires users to provide a per-row start point and a maximum length L.

Based on these requirements, a custom op mimicking the slice semantics of a normal tensor works best.

An example usage using pseudo code

```
input_jagged_tensor = [[1, 2, 3, 4], [1, 2, 3], [1, 2, 3, 4, 5, 6], [1], [1, 2]]
start = [0, 0, 0, 0, 0]
slice_length = 3

>> jagged_slice(input_jagged_tensor, start, slice_length)

output_jagged_tensor = [[1, 2, 3], [1, 2, 3], [1, 2, 3], [1], [1, 2]]

```

A corresponding operation for dense tensor would look like
```
dense_tensor = torch.randn((8, 10))
slice_dense_tensor = dense_tensor[:, 1:3]
```
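A runnable sketch of these semantics (a hypothetical pure-Python helper, not the actual fbgemm op):

```python
def jagged_slice(rows, start, slice_length):
    """Per-row slice of a jagged tensor, mirroring the pseudo code above:
    row i keeps at most slice_length elements beginning at start[i].
    Rows shorter than start[i] + slice_length simply yield what remains."""
    return [row[s:s + slice_length] for row, s in zip(rows, start)]

out = jagged_slice([[1, 2, 3, 4], [1, 2, 3], [1, 2, 3, 4, 5, 6], [1], [1, 2]],
                   [0, 0, 0, 0, 0], 3)
```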

Reviewed By: sryap

Differential Revision: D44299744

fbshipit-source-id: 44996f2f2ec5fc5f31dda4cb3bd8f0241497df66

* Move radix sort to common utilities and add the possibility to handle negative integers (#1672)

Summary:
Move the `radix_sort` implementation to common utilities, so it can be used in PyTorch in case it was not built with FBGEMM GPU.
Add the possibility to handle negative integers, which is crucial for reusing `radix_sort` in PyTorch's `sort` operation.

Details:
This PR addresses two issues:
1.  `radix_sort` is currently used in [scatter_reduce](https://github.com/dszwicht/pytorch/blob/master/aten/src/ATen/native/cpu/ScatterGatherKernel.cpp#L630) (please view this [comment](https://github.com/pytorch/pytorch/pull/82703/files#r1045360609) for more information). Until now, `radix_sort` lived under the `fbgemm_gpu` subproject, which means the implementation was not available in PyTorch when it was built for CPU only - that's why `radix_sort` was copy-pasted under the aten directory in PyTorch. This PR moves the `radix_sort` implementation to common utilities.
2. In GNN workloads we often sort 1D integer data with non-negative values, for example, when converting CSR to CSC format. Unfortunately, `torch.sort` for 1D data works sequentially. `radix_sort` seems to be a perfect match to accelerate the described case. However, suppose we want to do that on the PyTorch side. In that case, we have to either fall back to the regular path after detecting negative numbers in the tensor, or perform post-processing by swapping the positive and negative blocks of data (data like `[2, -1, -2, 1]` after sorting will be in the form `[1, 2, -2, -1]`, due to how negative numbers are represented in two's complement). Neither solution is elegant. As an alternative, I propose extending the `radix_sort` algorithm with the capability to work with negative numbers. This can be enabled by passing an optional parameter, `maybe_with_neg_vals`. If set to `true`, we will perform all passes (up to the most significant sign bit) and apply a special prefix sum combination in the last pass. An example of how we can reuse fbgemm in PyTorch can be found in my private fork, [here](https://github.com/dszwicht/pytorch/pull/2) (I also provide speedup data).
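A minimal sketch of the idea, assuming 64-bit two's-complement keys and byte-wide digits (the real fbgemm implementation is parallel and histogram-based, not Python lists):

```python
def radix_sort_with_negatives(vals, bits=64):
    """LSD radix sort sketch mirroring the described extension.

    Sorting by the raw two's-complement bytes orders values as unsigned,
    so [2, -1, -2, 1] comes out as [1, 2, -2, -1]; the final step moves
    the negative block (sign bit set) back to the front, which is what
    the special prefix-sum combination in the last pass achieves.
    """
    mask = (1 << bits) - 1
    keys = [(v & mask, v) for v in vals]   # two's-complement bit pattern
    for shift in range(0, bits, 8):        # one stable pass per byte, LSB first
        buckets = [[] for _ in range(256)]
        for k, v in keys:
            buckets[(k >> shift) & 0xFF].append((k, v))
        keys = [kv for b in buckets for kv in b]
    # Unsigned order puts sign-bit-set (negative) values last; rotate them
    # to the front.  Their relative order is already correct, since
    # unsigned order within the negative block matches signed order.
    neg = [v for k, v in keys if k >> (bits - 1)]
    pos = [v for k, v in keys if not (k >> (bits - 1))]
    return neg + pos
```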

The above changes have several consequences:
1. `TORCH_CHECK` was replaced with `assert` as fbgemm CPU does not have PyTorch in its dependencies.
2. `__builtin_clz` was replaced with manual implementation as `__builtin_clz` is not portable.

Additional information for reviewers:
I did perform benchmarks of `radix_sort` before and after my code modification. I didn't observe any performance drop.

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1672

Reviewed By: sryap

Differential Revision: D44616959

Pulled By: q10

fbshipit-source-id: f34594478c94ec6610c05545feb2044b58d79d66

* Daily `arc lint --take CLANGFORMAT`

Reviewed By: bigfootjon

Differential Revision: D45141964

fbshipit-source-id: 58308a31522a3b1446835e358a93483b611c4b15

* `CMakeLists.txt` Cleanups (#1712)

Summary:
- Re-organize and comment the `CMakeLists.txt` for FBGEMM_GPU for better clarity
- Disable verbose HIPCC warnings that are non-actionable when building the ROCm variant of FBGEMM_GPU

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1712

Reviewed By: shintaro-iwasaki

Differential Revision: D45189904

Pulled By: q10

fbshipit-source-id: 3df6ff3b957886c64bc13fc6bc7a0147b74ee783

* support indices broadcast for reorder_batched_ad_indices (#1711)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1711

this is to support the request-only combined input sparse feature broadcast case

when `broadcast_indices` is enabled, the assumption for the inputs:
- `cat_ad_offsets` and `cat_ad_indices` only contain the offsets and indices for the combined batches, where each batch contains only one instance (potentially multiple tables)
- `reordered_cat_ad_offsets` needs to be the post-broadcast version, and contains `num_ads_in_batch * num_tables + 1` elements
- `batch_offsets` is also the post-broadcast version
- `num_indices_after_broadcast` is required to allocate the output buffer

added coverage for the newly added branch

Reviewed By: r-barnes

Differential Revision: D45155887

fbshipit-source-id: 67f96d60168aa83cf24fef459addee89f06e1c6b

* Add a check that get_filelist python exec process worked (#1715)

Summary:
Add a check that the `get_filelist` python exec process worked.
With bad params (python, args, ...), `get_filelist()` previously continued without noticing/warning/erroring out,
making cmake fail later for weird reasons ("no sources").
Adds a safety check on the RESULT_VARIABLE of cmake `execute_process()`.

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1715

Reviewed By: shintaro-iwasaki

Differential Revision: D45235231

Pulled By: q10

fbshipit-source-id: 049eae1fc5d7f42d73048e81c02c2f282d8859b0

* Fix compilation error under ROCm 5.3 (#1719)

Summary:
- Fix bug introduced by PR 1711 (D45155887), which broke compilation of FBGEMM_GPU under ROCm 5.3

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1719

Reviewed By: sryap

Differential Revision: D45238536

Pulled By: q10

fbshipit-source-id: de9d2aa01ced0a37be1ea7903a361e3a24beed8d

* Backward Split, pt. 2: Migrate `*_warp_per_row` and `*_cta_per_row` kernel templates out of `embedding_backward_split_template.cu` (#1710)

Summary:
- Migrate the definition of `split_embedding_*_backward_codegen_*_*_kernel_warp_per_row_1` from `embedding_backward_split_template.cu` over to `embedding_backward_split_kernel_warp_template.cu` and explicitly instantiate the templates separately
- Migrate the definition of `split_embedding_*_backward_codegen_*_*_kernel_cta_per_row_1` from `embedding_backward_split_template.cu` over to `embedding_backward_split_kernel_cta_template.cu` and explicitly instantiate the templates separately

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1710

Reviewed By: sryap

Differential Revision: D45205217

Pulled By: q10

fbshipit-source-id: 96b34e9389e70b64d8391f2c9d39f4009f3d65ce

* Add CLI support (M,N,K) to GEMMsBenchmark (#1721)

Summary:
Add CLI support (M,N,K) to GEMMsBenchmark

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1721

Reviewed By: sryap

Differential Revision: D45281533

Pulled By: q10

fbshipit-source-id: 0ce5b38f54877acb26421dead1d2dc63cd11a2a1

* Fix data conversion in radix sort that can cause data loss (#1718)

Summary:
Fix data conversion in `radix_sort` that can cause data loss.

Details:
When `elements_count` is passed to the internal kernel implementation it is implicitly converted from `int64_t` to `int`. It can cause data loss, resulting in a partially sorted array. This PR fixes this issue. As a result of changing the `elements_count` type in internal functions to `int64_t`, `histogram` and `histogram_ps` types also were updated (to not generate further conversions).
This is a follow-up for https://github.com/pytorch/FBGEMM/issues/1672.
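The bug class can be illustrated with a small sketch (the function name is hypothetical; in the real code the narrowing happened implicitly at a C++ call boundary where an `int64_t` argument met an `int` parameter):

```python
import ctypes

def kernel_elements_count(elements_count):
    """Simulates an int64_t element count implicitly narrowed to a
    32-bit int at a kernel boundary: ctypes.c_int32 keeps only the low
    32 bits, so counts of 2**31 or more wrap around, and the kernel
    would then sort only part of the array (or nothing at all)."""
    return ctypes.c_int32(elements_count).value
```

For example, a count of 3,000,000,000 elements fits comfortably in `int64_t` but wraps to a negative value when squeezed into a 32-bit `int`.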

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1718

Reviewed By: sryap

Differential Revision: D45253811

Pulled By: q10

fbshipit-source-id: a5368a4401f05ebc471cb17107297a48f43a75c0

* support lengths broadcast for reorder_batched_ad_lengths (#1716)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1716

similar to D45155887

when `broadcast_lengths` is enabled, the lengths are copied from the only instance of each batch; this also facilitates request-only broadcast

Reviewed By: r-barnes

Differential Revision: D45208736

fbshipit-source-id: 2c06cd4e9aae0c9c4e0668098de7db6f6da8c06b

* remove optional for two ops (#1722)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1722

remove unnecessary optional decorators for the two newly added sparse ops

Reviewed By: r-barnes

Differential Revision: D45286152

fbshipit-source-id: 26109548db1acbc8fdf1a5183977eb8c64b45d41

* Prepare bounds_check_indices for VBE (#1713)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1713

Prepare bounds_check_indices for variable batch size TBE (VBE).

- Update arg names

Reviewed By: jspark1105, r-barnes

Differential Revision: D45203680

fbshipit-source-id: 396c4122058db8dd1fc9eb5f0d620e8179c3e7a9

* Add check on configs and logging (#1728)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1728

Freq-SGD requires setting both `weight_decay_mode=WeightDecayMode.COUNTER` and `counter_based_regularization` to kick in. Previously we checked the case where `weight_decay_mode` was set but no config was provided. There is another missing case, where the config is provided but users forget to set `weight_decay_mode`. We add that check in this diff.

In addition, added logging to print out whether the counter is used **internally** or not, to make debugging easier.

Reviewed By: dvksabin

Differential Revision: D45329516

fbshipit-source-id: 30389671c34a17d4baf48726f28096a670ede0b6

* Prepare transpose_embedding_input for VBE (#1717)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1717

Prepare `transpose_embedding_input` for variable batch size TBE (VBE).

- Update the frontend API with new args

Reviewed By: yuguo68

Differential Revision: D45212897

fbshipit-source-id: 5ad11a737130777fbe119aed6c7086e892752f4a

* Convert GEMMsBench timebreakdown to a runtime cli option (#1725)

Summary:
Convert timebreakdown to a runtime CLI option.
Note: there is currently no code to measure packing, compute, or kernel time,
so these are reported as 0 for now; only total time is measured.
```
     M,      N,      K,             Type,     Packing (us),      Kernel(us),    Postproc (us),       Total (us),  GOPs
    64,    800,    320,  FBGEMM_i8_acc32,                0,                 0,                0,          218.593, 149.9
    64,    800,    320,  FBGEMM_i8_acc16,              0.0,               0.0,              0.0,            187.6, 174.7
```

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1725

Reviewed By: sryap

Differential Revision: D45361847

Pulled By: q10

fbshipit-source-id: 4f2991a6208f0a5ae780729ce19bee611720953b

* Fix error with empty row_counter_dev (#1730)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1730

In some cases, `torch.max(row_counter_dev)` causes failure because `row_counter_dev` is an empty tensor, example flow (f431977946).

Here we guard the op by first checking if `row_counter_dev` is empty.

Reviewed By: sryap

Differential Revision: D45342010

fbshipit-source-id: 756a481c1098095f71dbb278ea84a01e89783790

* padding for fp8-rowwise quantization for varying length of 1D Tensor (#1729)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1729

As all-gather becomes expensive for tensor/sequential parallel training, we create padded row-wise quantization/dequantization kernels for flattened tensors to convert between fp8 (stored as uint8 for GPUs <= A100) and fp32 formats.
Since the activations/grads will be concatenated into a 1D tensor for all-gather, scaling to fit the fp8 format's range is tricky: small elements will be quantized to zero if the scale is chosen to accommodate the largest element in the model.

Thus, we continue to use the row-wise quantization used in the previous all2all kernel. Every block of size `row_dim` is quantized with the scale chosen to accommodate the largest value in the block.

Since the total length of the flattened tensor will not always be divisible by `row_dim`, we pad the 1D tensor to a multiple of `row_dim`. The padding/unpadding is handled by the quantize/dequantize kernels and is invisible to the APIs calling them.
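A rough sketch of the padded row-wise scheme described above (pure Python; `fp8_max = 448` matches the e4m3 format's maximum, but the real kernels store fp8 in uint8 and handle scales their own way, so treat the details as illustrative):

```python
def padded_rowwise_quantize(x, row_dim, fp8_max=448.0):
    """Zero-pad the flat input to a multiple of row_dim, then give each
    row_dim-sized block its own scale, so small values in one block are
    not quantized to zero just because another block holds a large value."""
    pad = (-len(x)) % row_dim
    xp = list(x) + [0.0] * pad
    blocks = [xp[i:i + row_dim] for i in range(0, len(xp), row_dim)]
    scales = [max(max(abs(v) for v in b), 1e-20) / fp8_max for b in blocks]
    q = [[v / s for v in b] for b, s in zip(blocks, scales)]
    return q, scales, len(x)  # original length makes unpadding invisible

def padded_rowwise_dequantize(q, scales, orig_len):
    """Rescale each block and drop the padding added during quantization."""
    flat = [v * s for b, s in zip(q, scales) for v in b]
    return flat[:orig_len]
```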

Reviewed By: rohan-varma

Differential Revision: D42721325

Privacy Context Container: L1138451

fbshipit-source-id: 33c712ba2fae709d29babee5ee4a8af6c7637b68

* Improve `TORCH_CHECK` diagnostics in files including deeplearning/fbgemm/fbgemm_gpu/codegen/embedding_forward_split_cpu.cpp (#1732)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1732

`TORCH_CHECK` produces pretty generic error messages. Using, eg, `TORCH_CHECK_GE` produces a message that shows the names of the variables being compared as well as their values at the time of comparison. This makes debugging easier.

 - If you approve of this diff, please use the "Accept & Ship" button :-)

(7 files modified.)

Reviewed By: bangshengtang

Differential Revision: D45402701

fbshipit-source-id: 42501350543e31455e430b240e53f8e1883eb1ba

* Improve `TORCH_CHECK` diagnostics in files including deeplearning/fbgemm/fbgemm_gpu/codegen/embedding_backward_dense_host.cpp (#1733)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1733

`TORCH_CHECK` produces pretty generic error messages. Using, eg, `TORCH_CHECK_GE` produces a message that shows the names of the variables being compared as well as their values at the time of comparison. This makes debugging easier.

 - If you approve of this diff, please use the "Accept & Ship" button :-)

(7 files modified.)

Reviewed By: bangshengtang

Differential Revision: D45402700

fbshipit-source-id: 275bf837341a00d1cd4642b31bf9168455fa6c77

* Build cleanups (#1731)

Summary:
- Further break up `setup_env.bash` into separate domain scripts for easier maintenance
- Update FBGEMM `CMakeLists.txt` to remove warning (https://github.com/pytorch/FBGEMM/issues/1714)

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1731

Reviewed By: sryap

Differential Revision: D45406676

Pulled By: q10

fbshipit-source-id: 3ff5a7e2486b6898cb450d268a092371da5c2717

* Improve `TORCH_CHECK` diagnostics in files including deeplearning/fbgemm/fbgemm_gpu/fb/src/split_embeddings_utils.cu (#1735)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1735

`TORCH_CHECK` produces pretty generic error messages. Using, eg, `TORCH_CHECK_GE` produces a message that shows the names of the variables being compared as well as their values at the time of comparison. This makes debugging easier.

 - If you approve of this diff, please use the "Accept & Ship" button :-)

(7 files modified.)

Reviewed By: bangshengtang

Differential Revision: D45402704

fbshipit-source-id: 9e9b1c1f526a398bbe50c99055187195ab751fa2

* Improve `TORCH_CHECK` diagnostics in files including deeplearning/fbgemm/fbgemm_gpu/src/split_embeddings_utils.cu (#1737)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1737

`TORCH_CHECK` produces pretty generic error messages. Using, eg, `TORCH_CHECK_GE` produces a message that shows the names of the variables being compared as well as their values at the time of comparison. This makes debugging easier.

 - If you approve of this diff, please use the "Accept & Ship" button :-)

(3 files modified.)

Reviewed By: bangshengtang

Differential Revision: D45402697

fbshipit-source-id: c490d39bc826eab44ec16cbcc86273f8d7258fd9

* Use volatile pointer in inclusive_sum_scan_kernel (#1739)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1739

In the multi-block cumsum case, the `inclusive_sum_scan_kernel`
implements the stream-scan technique in which each thread block has to
consume the preceding sum result from the previous block. The sum
result is passed via the `block_sums` buffer (global memory). To ensure
that the sum results are visible for inter-thread-block consumption,
the buffer has to be declared as `volatile` to prevent the compiler from
caching the results in registers. This diff adds the `volatile` keyword
to `block_sums`.
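The stream-scan dependency can be sketched sequentially (a conceptual Python sketch: the loop-carried `carry` plays the role of the running sum that, on the GPU, travels between thread blocks through the global `block_sums` buffer and therefore must be read through a `volatile` pointer):

```python
def stream_scan_inclusive(x, block_size):
    """Block-wise inclusive prefix sum in the stream-scan style: each
    block computes a local inclusive scan, then adds the sum carried
    over from all preceding blocks before publishing its own total."""
    out, carry = [], 0
    for start in range(0, len(x), block_size):
        block, local = x[start:start + block_size], 0
        for v in block:                 # local inclusive scan
            local += v
            out.append(carry + local)
        carry += local                  # the "block_sums" handoff
    return out
```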

Reviewed By: q10

Differential Revision: D45435897

fbshipit-source-id: f81a25b43eda18ae1eb18bed33f595fc27ef2707

* BF16 support for HBC ops. (#1744)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1744

Adding BF16 support for HBC ops, and updates on tests.

Reviewed By: q10, sryap

Differential Revision: D45449360

fbshipit-source-id: 8321155b426143d80064f12a910c0626bdfafbba

* Use designated initializers & kernel launch checks in deeplearning/fbgemm/include/fbgemm/Utils.h (#1746)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1746

Designated initializers can make the code cleaner

 - If you approve of this diff, please use the "Accept & Ship" button :-)

(1 files modified.)

Reviewed By: sryap

Differential Revision: D45464948

fbshipit-source-id: 28e38dc60b893fe7c91db0d791e069a6de87b420

* Dynamically determine platform name in FBGEMM scripts (#1742)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1742

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1738

Instead of hardcoding x86_64 when installing dependencies, let's now dynamically determine the platform name

Reviewed By: excelle08

Differential Revision: D45246996

fbshipit-source-id: d9031e76a915c2362be62c85a3c1f0786828ca8b

* Split the Rendering of `embedding_forward_split_template.cu` into Smaller Files (#1723)

Summary:
- Migrate `*_embedding_*_codegen_forward_*_kernel` out of `embedding_forward_split_template.cu` and into `embedding_forward_split_kernel_template.cu`
- Migrate `*_embedding_nobag_codegen_forward_unweighted_small_kernel` out of `embedding_forward_split_template.cu` and into `embedding_forward_split_kernel_small_template.cu`

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1723

Reviewed By: sryap

Differential Revision: D45363388

Pulled By: q10

fbshipit-source-id: 563ca610b15830aca854bc00d6a31fd6e8cb8a53

* Installation instructions for OSS (#1750)

Summary:
- Add installation instructions for OSS
- Migrate Installation, Test, and Documentation information out of the README
- Add link to GitHub Discussions in the README
- Migrate the Netlify configuration from website to TOML file in the repo so that build jobs are configurable by developers

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/1750

Reviewed By: sryap, shintaro-iwasaki

Differential Revision: D45540724

Pulled By: q10

fbshipit-source-id: beaab824cc5d441b96b89daea2a71f541e21f2ec

---------

Co-authored-by: Banit Agrawal <[email protected]>
Co-authored-by: Sabin Devkota <[email protected]>
Co-authored-by: Junjie Yang <[email protected]>
Co-authored-by: Benson Ma <[email protected]>
Co-authored-by: Alfredo Tupone <[email protected]>
Co-authored-by: Sarunya Pumma <[email protected]>
Co-authored-by: Doe Hyun Yoon <[email protected]>
Co-authored-by: Matt Galloway <[email protected]>
Co-authored-by: Richard Barnes <[email protected]>
Co-authored-by: Xiao Sun <[email protected]>
Co-authored-by: Rengan Xu <[email protected]>
Co-authored-by: siwasaki <[email protected]>
Co-authored-by: Jianyu Huang <[email protected]>
Co-authored-by: Yue Dong <[email protected]>
Co-authored-by: Geet Sethi <[email protected]>
Co-authored-by: Janet Yang <[email protected]>
Co-authored-by: Wang Zhou <[email protected]>
Co-authored-by: Jongsoo Park <[email protected]>
Co-authored-by: Tran Le <[email protected]>
Co-authored-by: Ryan Landay <[email protected]>
Co-authored-by: Devashish Tyagi <[email protected]>
Co-authored-by: Szwichtenberg, Damian <[email protected]>
Co-authored-by: generatedunixname89002005325676 <[email protected]>
Co-authored-by: Bangsheng Tang <[email protected]>
Co-authored-by: William Tambellini <[email protected]>
Co-authored-by: Jason Park <[email protected]>
DamianSzwichtenberg pushed a commit that referenced this pull request May 5, 2023
…#94297)

Hi!

I've been fuzzing different pytorch modules, and found a crash inside one of them.

Specifically, I'm talking about a module that processes `script_call` rpc requests and a function `ScriptCall::fromIValues(std::vector<at::IValue>& ivalues)`.

Running this test case causes a crash that occurs when `ivalues.back()` is called [script_call.cpp:90](https://github.com/pytorch/pytorch/blob/abc54f93145830b502400faa92bec86e05422fbd/torch/csrc/distributed/rpc/script_call.cpp#L90). The crash occurs because the vector `ivalues` is empty.

All tests were performed on this pytorch version: [abc54f9](https://github.com/pytorch/pytorch/tree/abc54f93145830b502400faa92bec86e05422fbd)

The provided patch checks if there are enough elements in the ivalues vector.

### How to reproduce

1. To reproduce the crash, use provided docker: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch)

2. Build the container: `docker build -t oss-sydr-fuzz-pytorch-reproduce .`

3. Copy crash file to the current directory:

    - [crash-9f76d4e37a2391136a4ce07d47269db1e063e4b4.zip](https://github.com/pytorch/pytorch/files/10674059/crash-9f76d4e37a2391136a4ce07d47269db1e063e4b4.zip)

4. Run the container: ``docker run --privileged --network host -v `pwd`:/homedir --rm -it oss-sydr-fuzz-pytorch-reproduce /bin/bash``

5. And execute the binary: `/message_deserialize_fuzz /homedir/crash-9f76d4e37a2391136a4ce07d47269db1e063e4b4`

After execution completes, you will see this stack trace:

```asan
AddressSanitizer:DEADLYSIGNAL
=================================================================
==57==ERROR: AddressSanitizer: SEGV on unknown address (pc 0x0000008e7b19 bp 0x7ffd2fdded70 sp 0x7ffd2fddec40 T0)
==57==The signal is caused by a READ memory access.
==57==Hint: this fault was caused by a dereference of a high value address (see register values below).  Disassemble the provided pc to learn which register was used.
    #0 0x8e7b19 in c10::IValue::isString() const /pytorch_fuzz/aten/src/ATen/core/ivalue.h:639:27
    #1 0x8e7b19 in c10::IValue::toStringRef[abi:cxx11]() const /pytorch_fuzz/aten/src/ATen/core/ivalue_inl.h:2179:3
    #2 0xe04fb58 in torch::distributed::rpc::ScriptCall::fromIValues(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /pytorch_fuzz/torch/csrc/distributed/rpc/script_call.cpp:90:53
    #3 0xe0511f0 in torch::distributed::rpc::ScriptCall::fromMessage(torch::distributed::rpc::Message const&) /pytorch_fuzz/torch/csrc/distributed/rpc/script_call.cpp:133:10
    #4 0xe0ff71e in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch_fuzz/torch/csrc/distributed/rpc/utils.cpp:102:14
    #5 0x602a41 in LLVMFuzzerTestOneInput /message_deserialize_fuzz.cc:192:27
    #6 0x52ce61 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
    #7 0x516d7c in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
    #8 0x51cacb in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
    #9 0x546062 in main /llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
    #10 0x7f41e42a8082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
    #11 0x51169d in _start (/message_deserialize_fuzz+0x51169d)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /pytorch_fuzz/aten/src/ATen/core/ivalue.h:639:27 in c10::IValue::isString() const
==57==ABORTING
```
Pull Request resolved: pytorch#94297
Approved by: https://github.com/ezyang
DamianSzwichtenberg pushed a commit that referenced this pull request May 5, 2023
…ytorch#94300)

Hi!

I've been fuzzing different pytorch modules, and found a crash inside one of them.

Specifically, I'm talking about the unpickling module and a function called `Unpickler::readInstruction()`. Running this function with the provided crash file results in a crash that occurs while calling `auto dict = stack_.at(dict_pos).toGenericDict();` [unpickler.cpp:561](https://github.com/pytorch/pytorch/blob/0e94fbc0c8ab1572c88159c1a4c397b6eb824c01/torch/csrc/jit/serialization/unpickler.cpp#L561). The crash occurs because the index `dict_pos` is out of bounds (which itself happens because the stack size is 0).
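The hardening reduces to validating the stack size before computing an index into it. A minimal stand-alone sketch, with `int` elements and a hypothetical `elementFromTop` helper standing in for the real `c10::IValue` stack:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical model of the vulnerable pattern: readInstruction() computes
// a position relative to the top of stack_; if the stack is shorter than
// the instruction assumes, size() - k wraps around (size_t underflow) and
// at() is called with a huge index such as 18446744073709551613.
int elementFromTop(const std::vector<int>& stack, std::size_t depth) {
  // Guard in the spirit of the fix: check the size before subtracting.
  if (stack.size() <= depth) {
    throw std::out_of_range("unpickler stack too small for this instruction");
  }
  return stack.at(stack.size() - 1 - depth);
}
```

With the guard in place, malformed input produces a catchable exception with a meaningful message instead of an `at()` call on a wrapped-around index.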

Besides this pull-request, there is another one related to unpickler hardening: pytorch#84343

All tests were performed on this pytorch version: [abc54f9](https://github.com/pytorch/pytorch/tree/abc54f93145830b502400faa92bec86e05422fbd)

### How to reproduce

1. To reproduce the crash, use provided docker: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch)

2. Build the container: `docker build -t oss-sydr-fuzz-pytorch-reproduce .`

3. Copy crash file to the current directory:

    - [crash-042dff5e121580425d9d34d0f293918f3c9fbf1e.zip](https://github.com/pytorch/pytorch/files/10674361/crash-042dff5e121580425d9d34d0f293918f3c9fbf1e.zip)

4. Run the container: ``docker run --privileged --network host -v `pwd`:/homedir --rm -it oss-sydr-fuzz-pytorch-reproduce /bin/bash``

5. And execute the binary: `/message_deserialize_sydr /homedir/crash-042dff5e121580425d9d34d0f293918f3c9fbf1e`

After execution completes, you will see this error message:

```txt
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 18446744073709551613) >= this->size() (which is 0)
```

And this stacktrace:

```asan
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 18446744073709551613) >= this->size() (which is 0)
==39== ERROR: libFuzzer: deadly signal
    #0 0x5d0df1 in __sanitizer_print_stack_trace /llvm-project/compiler-rt/lib/asan/asan_stack.cpp:87:3
    #1 0x545727 in fuzzer::PrintStackTrace() /llvm-project/compiler-rt/lib/fuzzer/FuzzerUtil.cpp:210:5
    #2 0x52b933 in fuzzer::Fuzzer::CrashCallback() /llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:233:3
    #3 0x7f9118e0341f  (/lib/x86_64-linux-gnu/libpthread.so.0+0x1441f)
    #4 0x7f9118c2300a in raise (/lib/x86_64-linux-gnu/libc.so.6+0x4300a)
    #5 0x7f9118c02858 in abort (/lib/x86_64-linux-gnu/libc.so.6+0x22858)
    #6 0x7f9119040910  (/lib/x86_64-linux-gnu/libstdc++.so.6+0x9e910)
    #7 0x7f911904c38b  (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa38b)
    #8 0x7f911904c3f6 in std::terminate() (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa3f6)
    #9 0x7f911904c6a8 in __cxa_throw (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa6a8)
    #10 0x7f91190433aa  (/lib/x86_64-linux-gnu/libstdc++.so.6+0xa13aa)
    #11 0x63acdf in std::vector<c10::IValue, std::allocator<c10::IValue> >::_M_range_check(unsigned long) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1073:4
    #12 0xce8f93e in std::vector<c10::IValue, std::allocator<c10::IValue> >::at(unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1094:2
    #13 0xce8f93e in torch::jit::Unpickler::readInstruction() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:546:26
    #14 0xce8d527 in torch::jit::Unpickler::run() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:235:27
    #15 0xce8d1c2 in torch::jit::Unpickler::parse_ivalue() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:192:3
    #16 0xcdf0792 in torch::jit::unpickle(std::function<unsigned long (char*, unsigned long)>, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch_fuzz/torch/csrc/jit/serialization/pickle.cpp:127:20
    #17 0xcdf104d in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch_fuzz/torch/csrc/jit/serialization/pickle.cpp:137:10
    #18 0xe0532db in torch::distributed::rpc::ScriptRemoteCall::fromMessage(torch::distributed::rpc::Message const&) /pytorch_fuzz/torch/csrc/distributed/rpc/script_remote_call.cpp:74:16
    #19 0xe0ffa10 in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch_fuzz/torch/csrc/distributed/rpc/utils.cpp:108:14
    #20 0x602a41 in LLVMFuzzerTestOneInput /message_deserialize_fuzz.cc:192:27
    #21 0x52ce61 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
    #22 0x516d7c in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
    #23 0x51cacb in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
    #24 0x546062 in main /llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
    #25 0x7f9118c04082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
    #26 0x51169d in _start (/message_deserialize_fuzz+0x51169d)

NOTE: libFuzzer has rudimentary signal handlers.
      Combine libFuzzer with AddressSanitizer or similar for better crash reports.
SUMMARY: libFuzzer: deadly signal
```
Pull Request resolved: pytorch#94300
Approved by: https://github.com/malfet, https://github.com/apach301
DamianSzwichtenberg pushed a commit that referenced this pull request Jun 1, 2023
When a tensor is resized, the reference array to its sizes may become invalid. Make a copy in advance.
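The lifetime bug and the fix can be illustrated stand-alone. `SizesView` and `FakeTensor` below are hypothetical miniatures of `c10::ArrayRef<c10::SymInt>` and the real tensor type; the point is only that a non-owning view must be materialized into an owning copy before the resize:

```cpp
#include <cstddef>
#include <vector>

// Non-owning view into another object's size buffer, like c10::ArrayRef.
struct SizesView {
  const long* data;
  std::size_t len;
};

struct FakeTensor {
  std::vector<long> sizes_;
  SizesView sizes() const { return {sizes_.data(), sizes_.size()}; }
  // Resizing may reallocate sizes_, invalidating any outstanding SizesView.
  void resize(std::vector<long> new_sizes) { sizes_ = std::move(new_sizes); }
};

// The fix sketched: copy the view into an owning vector *before* resize(),
// so the comparison afterwards reads valid memory instead of a stale view.
bool sizesChanged(FakeTensor& t, std::vector<long> new_sizes) {
  SizesView view = t.sizes();
  std::vector<long> old_sizes(view.data, view.data + view.len);  // copy in advance
  t.resize(std::move(new_sizes));
  return old_sizes != t.sizes_;  // compares the copy, not the dangling view
}
```

Comparing through the stale `SizesView` after `resize()` is exactly the heap-use-after-free that ASAN reports; comparing the pre-made copy is well-defined.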

<details>
<summary>ASAN report</summary>

```
=================================================================
==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390
READ of size 8 at 0x61000013d790 thread T0
    #0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154
    #1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215
    #2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69
    #3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177
    pytorch#4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-
v11/bits/stl_algobase.h:1162
    pytorch#5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/
stl_algobase.h:1211
    pytorch#6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/s
tl_algobase.h:1219
    pytorch#7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_alg
obase.h:1556
    pytorch#8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188
    pytorch#9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341
    pytorch#10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/Variab
leTypeManual.cpp:408
    pytorch#11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
> >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    pytorch#12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::Sy
mInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::Disp
atchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
    pytorch#13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    pytorch#14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    pytorch#15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Ten
sor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)
const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
    pytorch#16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c
10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
    pytorch#17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144
    pytorch#18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847
    pytorch#19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pyto
rch/torch/csrc/autograd/VariableTypeManual.cpp:243
    pytorch#20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10
::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFu
nctionIntoFunctor.h:13
    pytorch#21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor
 const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c
10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor
.h:480
    pytorch#22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    pytorch#23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    pytorch#24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co
nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at
en/src/ATen/core/dispatch/Dispatcher.h:639
    pytorch#25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>,
c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
    pytorch#26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137
    pytorch#27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452
    pytorch#28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us
er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417
    pytorch#29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419
    pytorch#30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344
    pytorch#31 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#32 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#33 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#34 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    pytorch#35 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#36 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#37 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#38 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#39 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#40 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    pytorch#41 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    pytorch#42 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#43 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#44 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#45 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#46 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#47 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#48 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    pytorch#49 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    pytorch#50 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#51 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#52 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#53 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#54 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#55 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#56 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#57 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    pytorch#58 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#59 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#60 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#61 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#62 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#63 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#64 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#65 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#66 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#67 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#68 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#69 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#70 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#71 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#72 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#73 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    pytorch#74 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#75 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#76 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#77 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#78 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#79 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#80 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#81 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#82 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#83 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#84 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#85 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#86 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#87 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#88 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#89 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#90 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#91 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#92 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#93 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#94 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#95 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#96 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#97 0x3ffa2c8ab9b in PyVectorcall_Call Objects/call.c:267
    pytorch#98 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#99 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    pytorch#100 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    pytorch#101 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#102 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#103 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#104 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#105 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    pytorch#106 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#107 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#108 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
    pytorch#109 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    pytorch#110 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#111 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#112 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#113 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#114 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#115 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#116 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#117 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#118 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#119 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    pytorch#120 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#121 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#122 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#123 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#124 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#125 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    pytorch#126 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    pytorch#127 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#128 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#129 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#130 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#131 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#132 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#133 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#134 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#135 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#136 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#137 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#138 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#139 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#140 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#141 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#142 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#143 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#144 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#145 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#146 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#147 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    pytorch#148 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#149 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#150 0x3ffa2c8ad17 in _PyObject_Call Objects/call.c:305
    pytorch#151 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    pytorch#152 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    pytorch#153 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#154 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#155 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#156 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#157 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#158 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#159 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#160 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#161 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#162 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#163 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#164 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#165 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#166 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#167 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#168 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#169 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#170 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#171 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#172 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#173 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#174 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#175 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    pytorch#176 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    pytorch#177 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#178 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#179 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#180 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#181 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#182 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#183 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#184 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#185 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#186 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#187 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#188 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#189 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#190 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#191 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#192 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#193 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#194 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#195 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#196 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#197 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    pytorch#198 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    pytorch#199 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#200 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#201 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#202 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#203 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#204 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#205 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#206 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#207 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#208 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#209 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#210 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#211 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#212 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#213 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#214 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#215 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#216 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#217 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#218 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#219 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    pytorch#220 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#221 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#222 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
    pytorch#223 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    pytorch#224 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#225 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#226 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#227 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#228 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#229 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#230 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#231 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#232 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    pytorch#233 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    pytorch#234 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#235 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#236 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#237 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#238 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#239 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#240 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#241 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#242 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#243 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#244 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#245 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#246 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#247 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#248 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#249 0x3ffa2e05447 in call_function Python/ceval.c:5891
    pytorch#250 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#251 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#252 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#253 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#254 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    pytorch#255 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#256 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#257 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215

0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800)
freed by thread T0 here:
    #0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl() /home/user/pytorch/c10/core/TensorImpl.cpp:75

previously allocated by thread T0 here:
    #0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_nul
l_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::S
torageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498
    #2 0x3ff76f79e17  (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17)

SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const
Shadow bytes around the buggy address:
  0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa
  0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1115867==ABORTING
```
</details>

<details>
<summary>Additional backtraces (not full)</summary>

Memory deallocation:
```
#0  operator delete (ptr=0x61000013d740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75
#2  0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291
#3  0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370
pytorch#4  0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80
pytorch#5  0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90
pytorch#6  0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173
pytorch#7  0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (
    this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
pytorch#8  c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=...,
    args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
pytorch#9  0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
pytorch#10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96
pytorch#11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
pytorch#12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
pytorch#13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
pytorch#14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
pytorch#15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401
```

Memory access:
```
#0  c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215
#1  0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) at /home/user/pytorch/c10/core/SymInt.cpp:69
#2  0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) at /home/user/pytorch/c10/core/SymInt.h:177
#3  0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162
pytorch#4  0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211
pytorch#5  0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219
pytorch#6  0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556
pytorch#7  0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188
pytorch#8  0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341
pytorch#9  0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408
pytorch#10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c
10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
 > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
pytorch#11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt
>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<
c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
pytorch#12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::Ar
rayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKern
el*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
pytorch#13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
pytorch#14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::Arr
ayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
pytorch#15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&,
c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
pytorch#16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
pytorch#17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
pytorch#18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...)
    at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243
```
</details>
Pull Request resolved: pytorch#101064
Approved by: https://github.com/Skylion007, https://github.com/albanD
DamianSzwichtenberg pushed a commit that referenced this pull request Jun 1, 2023
arguments() returns a reference to a vector member of the object returned by the schema() call.
When the object returned by schema() is destroyed, the vector is deallocated as well;
its lifetime isn't extended.

This issue was detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` with ASAN.

<details>
<summary>ASAN output</summary>

```
==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8
READ of size 8 at 0x60d0005a5790 thread T0
    #0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-i
bm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028
    #1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821
    #2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617
    #3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
    pytorch#4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
    pytorch#5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybin
d11/include/pybind11/pybind11.h:249
    pytorch#6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/incl
ude/pybind11/pybind11.h:224
    pytorch#7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
    pytorch#8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543
    pytorch#9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    pytorch#10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    pytorch#11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#14 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#17 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142
    pytorch#20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    pytorch#23 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    pytorch#24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#25 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#34 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#41 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    pytorch#43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#50 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#59 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267
    pytorch#67 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    pytorch#69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    pytorch#70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    pytorch#75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    pytorch#78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    pytorch#79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#80 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#87 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    pytorch#89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    pytorch#95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    pytorch#96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#102 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#109 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#111 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    pytorch#117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#119 0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305
    pytorch#120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    pytorch#121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    pytorch#122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#128 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#137 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    pytorch#145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    pytorch#146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#152 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#159 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#166 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    pytorch#167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    pytorch#168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#170 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#174 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#183 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    pytorch#189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    pytorch#192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    pytorch#193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#194 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    pytorch#201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    pytorch#202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    pytorch#203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#209 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    pytorch#216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#218 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#219 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#222 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    pytorch#224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    pytorch#226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    pytorch#227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    pytorch#228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#229 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    pytorch#231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#236 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#243 0x3ffab105447 in call_function Python/ceval.c:5891
    pytorch#244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    pytorch#247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    pytorch#249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290

0x60d0005a5790 is located 80 bytes inside of 136-byte region [0x60d0005a5740,0x60d0005a57c8)
freed by thread T0 here:
    #0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145

previously allocated by thread T0 here:
    #0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
    #2 0x3fff5849ecf  ([stack]+0xb2ecf)

SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&)
Shadow bytes around the buggy address:
  0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
  0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd
  0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
  0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa
  0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1134126==ABORTING
```

Additional backtraces (not full):
Allocation:
```
#0  __memset_z196 () at ../sysdeps/s390/memset-z900.S:144
#1  0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>,
    stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599
#2  0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW)
    at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039
#3  0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
pytorch#4  0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
pytorch#5  0x000003ff414042a0 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=...,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464
pytorch#6  0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98
pytorch#7  0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648
pytorch#8  0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342
pytorch#9  0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (
    this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409
pytorch#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=...,
    __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862
pytorch#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878
pytorch#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...},
    field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769
pytorch#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...})
    at /home/user/pytorch/aten/src/ATen/core/type.cpp:725
pytorch#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383
pytorch#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781
pytorch#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
pytorch#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
pytorch#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
pytorch#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
pytorch#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
pytorch#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>,
    args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
```

Deallocation:
```
#0  operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020,
    __p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
#2  0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate (
    __a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496
#3  0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr (
    this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74
pytorch#4  0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538
pytorch#5  0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184
pytorch#6  0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
pytorch#7  0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
pytorch#8  0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
pytorch#9  0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
pytorch#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
pytorch#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348
pytorch#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168
pytorch#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
pytorch#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
pytorch#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
pytorch#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
pytorch#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
```
</details>
Pull Request resolved: pytorch#101400
Approved by: https://github.com/zou3519
DamianSzwichtenberg pushed a commit that referenced this pull request Jun 1, 2023
Three disabled functions attempt out-of-bounds reads. Disable them until the sleef library is fixed.

<details>
<summary>ASAN report</summary>

```
=================================================================
==2030580==ERROR: AddressSanitizer: global-buffer-overflow on address 0x03ff70f54570 at pc 0x03ff6704e960 bp 0x03ffce128940 sp 0x03ffce128930
READ of size 4 at 0x03ff70f54570 thread T0
    #0 0x3ff6704e95f in vgather_vf_p_vi2 /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129
    #1 0x3ff6704e95f in rempif /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:550
    #2 0x3ff6704e95f in Sleef_cosf4_u10vxe2 /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:1021
    #3 0x3ff67029cfb in Sleef_cosf4_u10 /home/user/pytorch/build/sleef/src/libm/disps390x_128.c:182
    pytorch#4 0x3ff55d21941 in at::vec::ZVECTOR::Vectorized<float, void> at::vec::ZVECTOR::Vectorized<float, void>::mapSleef<float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __
vector(2)), float, 0>(float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2))) const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:991
    pytorch#5 0x3ff5689ad01 in at::vec::ZVECTOR::Vectorized<float, void>::cos() const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:1074
    pytorch#6 0x3ff5685df97 in at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}::operator()(at::vec::ZVECTOR::Vectorized<float, void>) const /home/
user/pytorch/aten/src/ATen/cpu/vml.h:71
    pytorch#7 0x3ff5689b691 in void at::vec::map<float, at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}, 0>(at::vml::ZVECTOR::vcos<float>(float*,
float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1} const&, float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vec/functional_base.h:239
    pytorch#8 0x3ff5685e0df in void at::vml::ZVECTOR::vcos<float>(float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vml.h:71
    pytorch#9 0x3ff563fdde3 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    pytorch#10 0x3ff5648e4a3 in operator() /home/user/pytorch/aten/src/ATen/TensorIterator.h:406
    pytorch#11 0x3ff5663cae1 in callback_fn<at::TensorIteratorBase::loop_2d_from_1d<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> >(c
onst at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)>&)::<lambda(char**, const int64_t*, int64_t, int64_t)> > /home/user/pytorch/
c10/util/FunctionRef.h:43
    pytorch#12 0x3ff4d45a933 in c10::function_ref<void (char**, long const*, long, long)>::operator()(char**, long const*, long, long) const /home/user/pytorch/c10/util/FunctionRef.h:64
    pytorch#13 0x3ff4d455133 in at::internal::serial_for_each(c10::ArrayRef<long>, c10::ArrayRef<long>, char**, unsigned long, c10::function_ref<void (char**, long const*, long, long)>, at::Range) /home/user/pyt
orch/aten/src/ATen/TensorIteratorInternal.h:52
    pytorch#14 0x3ff4d43b703 in at::TensorIteratorBase::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:777
    pytorch#15 0x3ff4d43ab59 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:749
    pytorch#16 0x3ff5648e851 in for_each<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> > /home/user/pytorch/aten/src/ATen/TensorItera
tor.h:421
    pytorch#17 0x3ff563fe5f9 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    pytorch#18 0x3ff56400915 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    pytorch#19 0x3ff56400f1d in at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&) /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    pytorch#20 0x3ff4f303007 in void at::native::DispatchStub<void (*)(at::TensorIteratorBase&), at::native::cos_stub>::operator()<at::native::structured_cos_out&>(c10::DeviceType, at::native::structured_cos_out
&) /home/user/pytorch/aten/src/ATen/native/DispatchStub.h:158
    pytorch#21 0x3ff4f2edb3f in at::native::structured_cos_out::impl(at::Tensor const&, at::Tensor const&) /home/user/pytorch/aten/src/ATen/native/UnaryOps.cpp:330
    pytorch#22 0x3ff526ef739 in wrapper_CPU_cos /home/user/pytorch/build/aten/src/ATen/RegisterCPU.cpp:4307
    pytorch#23 0x3ff52c651d9 in operator() /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    pytorch#24 0x3ff52c651d9 in call /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463
    pytorch#25 0x3ff5076df2f in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) /home/user/pytorch/aten/src/ATen/core
/boxing/KernelFunction_impl.h:50
    pytorch#26 0x3ff5009a93f in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core
/boxing/KernelFunction_impl.h:103
    pytorch#27 0x3ff5009a93f in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const /home/user/pytorch/aten/s
rc/ATen/core/dispatch/Dispatcher.h:639
    pytorch#28 0x3ff5009a93f in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)>::call(at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
    pytorch#29 0x3ff5009a93f in at::_ops::cos::call(at::Tensor const&) /home/user/pytorch/build/aten/src/ATen/Operators_0.cpp:2215
    pytorch#30 0x3ff7d813741 in at::Tensor::cos() const /home/user/pytorch/build/aten/src/ATen/core/TensorBody.h:2107
    pytorch#31 0x3ff7dc0f2b7 in operator() /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2953
    pytorch#32 0x3ff7dc0faf7 in THPVariable_cos /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2955
    pytorch#33 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    pytorch#34 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    pytorch#35 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#36 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    pytorch#37 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#38 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#39 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#40 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#41 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    pytorch#42 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#43 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#44 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/
torch/csrc/utils/python_dispatch.cpp:175
    pytorch#45 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::
PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::Op
eratorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
    pytorch#46 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::
PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::Operator
Handle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
    pytorch#47 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/b
oxing/BoxedKernel_impl.h:41
    pytorch#48 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/cor
e/boxing/KernelFunction_impl.h:43
    pytorch#49 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:6
91
    pytorch#50 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
    pytorch#51 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
    pytorch#52 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
    pytorch#53 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c1
0::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
    pytorch#54 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::
IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
    pytorch#55 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
    pytorch#56 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-lin
ux-gnu/11/include/g++-v11/bits/std_function.h:590
    pytorch#57 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
    pytorch#58 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::
kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
    pytorch#59 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol,
pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
    pytorch#60 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    pytorch#61 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::vo
id_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    pytorch#62 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /h
ome/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    pytorch#63 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    pytorch#64 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    pytorch#65 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    pytorch#66 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    pytorch#67 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    pytorch#68 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#69 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    pytorch#70 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#71 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#72 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#73 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#74 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    pytorch#75 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#76 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    pytorch#77 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    pytorch#78 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#79 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#80 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#81 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#82 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#83 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#84 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#85 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#86 0x3ffa5feb289 in call_function Python/ceval.c:5891
    pytorch#87 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#88 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#89 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#90 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#91 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    pytorch#92 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#93 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#94 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#95 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#96 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#97 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#98 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#99 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    pytorch#100 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#101 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#102 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch
/torch/csrc/utils/python_dispatch.cpp:175
    pytorch#103 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch:
:PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::O
peratorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
    pytorch#104 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch:
:PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::Operato
rHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
    pytorch#105 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/
boxing/BoxedKernel_impl.h:41
    pytorch#106 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/co
re/boxing/KernelFunction_impl.h:43
    pytorch#107 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:
691
    pytorch#108 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
    pytorch#109 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
    pytorch#110 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
    pytorch#111 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c
10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
    pytorch#112 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10:
:IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
    pytorch#113 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
    pytorch#114 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-li
nux-gnu/11/include/g++-v11/bits/std_function.h:590
    pytorch#115 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
    pytorch#116 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11:
:kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
    pytorch#117 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol,
 pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
    pytorch#118 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    pytorch#119 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::v
oid_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    pytorch#120 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /
home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    pytorch#121 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    pytorch#122 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    pytorch#123 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    pytorch#124 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    pytorch#125 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    pytorch#126 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#127 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    pytorch#128 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#129 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#130 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#131 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#132 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    pytorch#133 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#134 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    pytorch#135 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    pytorch#136 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#137 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#138 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#139 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#140 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#141 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#142 0x3ffa5e87d2b in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#143 0x3ffa5e882dd in method_vectorcall Objects/classobject.c:83
    pytorch#144 0x3ffa5e836d3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#145 0x3ffa5e84b6f in _PyObject_CallFunctionVa Objects/call.c:485
    pytorch#146 0x3ffa5e84f2d in callmethod Objects/call.c:557
    pytorch#147 0x3ffa5e85039 in PyObject_CallMethod Objects/call.c:577
    pytorch#148 0x3ff7f7efa05 in torch::handle_torch_function_no_python_arg_parser(c10::ArrayRef<pybind11::handle>, _object*, _object*, char const*, _object*, char const*, torch::TorchFunctionName) /home/user/py
torch/torch/csrc/utils/python_arg_parser.cpp:338
    pytorch#149 0x3ff7eb09b67 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol,
 pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:827
    pytorch#150 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    pytorch#151 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::v
oid_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    pytorch#152 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /
home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    pytorch#153 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    pytorch#154 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    pytorch#155 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    pytorch#156 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    pytorch#157 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    pytorch#158 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#159 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    pytorch#160 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#161 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#162 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#163 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#164 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    pytorch#165 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#166 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    pytorch#167 0x3ffa5e84027 in _PyObject_MakeTpCall Objects/call.c:215
    pytorch#168 0x3ffa5fd767b in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    pytorch#169 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#170 0x3ffa5feb289 in call_function Python/ceval.c:5891
    pytorch#171 0x3ffa5fe5ad1 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    pytorch#172 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#173 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#174 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#175 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#176 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#177 0x3ffa5feb289 in call_function Python/ceval.c:5891
    pytorch#178 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
    pytorch#179 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#180 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#181 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#182 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    pytorch#183 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#184 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#185 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#186 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#187 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#188 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#189 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#190 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    pytorch#191 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#192 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#193 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#194 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#195 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#196 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#197 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#198 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    pytorch#199 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#200 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#201 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#202 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#203 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#204 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#205 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#206 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    pytorch#207 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#208 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#209 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#210 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#211 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#212 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#213 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#214 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    pytorch#215 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    pytorch#216 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    pytorch#217 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    pytorch#218 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#219 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#220 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#221 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#222 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#223 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#224 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    pytorch#225 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    pytorch#226 0x3ffa5feb289 in call_function Python/ceval.c:5891
    pytorch#227 0x3ffa5fe5b21 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    pytorch#228 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#229 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#230 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#231 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    pytorch#232 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#233 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#234 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#235 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#236 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#237 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#238 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#239 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    pytorch#240 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#241 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#242 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#243 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#244 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#245 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#246 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#247 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    pytorch#248 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    pytorch#249 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    pytorch#250 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    pytorch#251 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    pytorch#252 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    pytorch#253 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    pytorch#254 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    pytorch#255 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267

0x03ff70f54570 is located 0 bytes to the right of global variable 'Sleef_rempitabsp' defined in '/home/user/pytorch/third_party/sleef/src/libm/rempitab.c:986:34' (0x3ff70f53f00) of size 1648
SUMMARY: AddressSanitizer: global-buffer-overflow /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 in vgather_vf_p_vi2
Shadow bytes around the buggy address:
  0x10007fee1ea850: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea860: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea870: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea890: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10007fee1ea8a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[f9]f9
  0x10007fee1ea8b0: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==2030580==ABORTING
```
</details>

It reproduces when running `pytest -v test/test_ops.py -k test_python_ref__refs_cos_cpu_bfloat16` under AddressSanitizer on s390x.

See also: shibatch/sleef#464
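The report above flags a vector gather (`vgather_vf_p_vi2`) reading zero bytes past the end of the fixed-size global table `Sleef_rempitabsp`. A minimal scalar sketch of the safe variant — clamping the lookup index to the table bounds instead of letting it run past the end — might look like this (names and the table size are illustrative, not sleef's real code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical scalar model of a table gather like vgather_vf_p_vi2.
// The real Sleef_rempitabsp is a fixed-size global; ASan flagged a read
// one element to the right of it. Clamping the index avoids that read.
inline float gather_checked(const std::vector<float>& tab, std::ptrdiff_t idx) {
    if (idx < 0) {
        idx = 0;  // clamp negative indices to the first element
    }
    if (static_cast<std::size_t>(idx) >= tab.size()) {
        // clamp indices past the end to the last element
        idx = static_cast<std::ptrdiff_t>(tab.size()) - 1;
    }
    return tab[static_cast<std::size_t>(idx)];
}
```

The actual fix in sleef operates on SIMD index vectors rather than scalars, but the invariant is the same: every lane's index must stay inside the table.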

Pull Request resolved: pytorch#102266
Approved by: https://github.com/malfet
DamianSzwichtenberg pushed a commit that referenced this pull request Jun 1, 2023
…2156)

Hi!

I've been fuzzing different PyTorch modules with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch), and found multiple crashes in the torch::jit::load() function.

All of the errors found can be reproduced with the provided Docker image: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch).

### Crash in torch/csrc/jit/serialization/unpickler.cpp:1075

[crash-1f59083b8396c5b62b4705c7556e68f129e833b1.zip](https://github.com/pytorch/pytorch/files/11552947/crash-1f59083b8396c5b62b4705c7556e68f129e833b1.zip)

```asan
    "#0  0x00007ffff7a5600b in raise () from /lib/x86_64-linux-gnu/libc.so.6",
    "#1  0x00007ffff7a35859 in abort () from /lib/x86_64-linux-gnu/libc.so.6",
    "#2  0x00007ffff7ce3911 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#3  0x00007ffff7cef38c in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#4  0x00007ffff7cef3f7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#5  0x00007ffff7cef6a9 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#6  0x00007ffff7ce6326 in std::__throw_length_error(char const*) () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#7  0x00007ffff7d87edc in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_create(unsigned long&, unsigned long) () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#8  0x00007ffff7d88880 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::reserve(unsigned long) () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#9  0x000000000ea52931 in torch::jit::Unpickler::readBytes[abi:cxx11](unsigned long) (this=this@entry=0x7fffffffac10, length=length@entry=8358680908539635837) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:1075",
    "pytorch#10 0x000000000ea4c3a0 in torch::jit::Unpickler::readInstruction (this=0x7fffffff90d0) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:355",
    "pytorch#11 0x000000000ea49eb8 in torch::jit::Unpickler::run (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251",
    "pytorch#12 0x000000000ea49b12 in torch::jit::Unpickler::parse_ivalue (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204",
    "pytorch#13 0x000000000e960a9f in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) (archive_name=..., pickle_prefix=..., tensor_prefix=..., type_resolver=..., obj_loader=..., device=..., stream_reader=..., type_parser=<optimized out>, storage_context=...) at /pytorch/torch/csrc/jit/serialization/import_read.cpp:53",
    "pytorch#14 0x000000000e8ef599 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive (this=0x7fffffffbc60, archive_name=...) at /pytorch/torch/csrc/jit/serialization/import.cpp:184",
    "pytorch#15 0x000000000e8eb886 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize (this=<optimized out>, device=..., extra_files=..., restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:287",
    "pytorch#16 0x000000000e8e9cc5 in torch::jit::import_ir_module (cu=..., in=..., device=..., extra_files=..., load_debug_files=<optimized out>, restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:386",
    "pytorch#17 0x000000000e8f37bf in torch::jit::import_ir_module (cu=..., in=..., device=..., load_debug_files=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:322",
    "pytorch#18 0x000000000e8f615a in torch::jit::load (in=..., device=..., load_debug_files=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:482",
    "pytorch#19 0x00000000005c2d61 in LLVMFuzzerTestOneInput (data=<optimized out>, size=1663) at /load.cc:42",
    "pytorch#20 0x00000000005c2a8e in ExecuteFilesOnyByOne (argc=2, argv=0x7fffffffc6b8, callback=callback@entry=0x5c2ae0 <LLVMFuzzerTestOneInput(uint8_t const*, size_t)>) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255",
    "pytorch#21 0x00000000005c2899 in LLVMFuzzerRunDriver (argcp=argcp@entry=0x7fffffffc5b4, argvp=argvp@entry=0x7fffffffc5b8, callback=0x5c2ae0 <LLVMFuzzerTestOneInput(uint8_t const*, size_t)>) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:364",
    "pytorch#22 0x00000000005c2459 in main (argc=2, argv=0x7fffffffc6b8) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300"

```
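The backtrace shows `Unpickler::readBytes` passing an attacker-controlled length (~8.3e18, read straight from the corrupt archive) to `std::string::reserve`, which throws `std::length_error` and terminates. A hedged sketch of the kind of guard that prevents this — validating an untrusted length against the remaining input before allocating (names and signature are illustrative, not the actual PyTorch fix):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <string>

// Hypothetical bounds check for a length field read from an untrusted
// pickle stream: reject any length that exceeds the bytes actually
// remaining, instead of handing it to std::string::reserve.
std::string read_bytes_checked(const std::string& stream, std::size_t pos,
                               std::uint64_t length) {
    if (pos > stream.size() || length > stream.size() - pos) {
        throw std::runtime_error(
            "corrupt archive: byte string length exceeds remaining input");
    }
    return stream.substr(pos, static_cast<std::size_t>(length));
}
```

Throwing a recoverable error for malformed input is the key design point: the fuzzer-found crash is not memory corruption but an unhandled `std::length_error` escaping to `std::terminate`.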

### Crash in torch/csrc/jit/serialization/unpickler.cpp:386

[crash-2e9923de375c393e700e8c0441f0ebe8252ca364.zip](https://github.com/pytorch/pytorch/files/11552950/crash-2e9923de375c393e700e8c0441f0ebe8252ca364.zip)

```asan
    "#0  0x00007ffff7a5600b in raise () from /lib/x86_64-linux-gnu/libc.so.6",
    "#1  0x00007ffff7a35859 in abort () from /lib/x86_64-linux-gnu/libc.so.6",
    "#2  0x00007ffff7ce3911 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#3  0x00007ffff7cef38c in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#4  0x00007ffff7cef3f7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#5  0x00007ffff7cef6a9 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#6  0x00007ffff7ce6326 in std::__throw_length_error(char const*) () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "pytorch#7  0x0000000000670aff in std::vector<c10::IValue, std::allocator<c10::IValue> >::reserve (this=this@entry=0x7fffffff9750, __n=__n@entry=18446744073709551614) at /usr/include/c++/10/bits/vector.tcc:70",
    "pytorch#8  0x000000000ea4d5cd in torch::jit::Unpickler::readInstruction (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:386",
    "pytorch#9  0x000000000ea49eb8 in torch::jit::Unpickler::run (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251",
    "pytorch#10 0x000000000ea49b12 in torch::jit::Unpickler::parse_ivalue (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204",
    "pytorch#11 0x000000000e960a9f in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) (archive_name=..., pickle_prefix=..., tensor_prefix=..., type_resolver=..., obj_loader=..., device=..., stream_reader=..., type_parser=<optimized out>, storage_context=...) at /pytorch/torch/csrc/jit/serialization/import_read.cpp:53",
    "pytorch#12 0x000000000e8ef599 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive (this=0x7fffffffbc60, archive_name=...) at /pytorch/torch/csrc/jit/serialization/import.cpp:184",
    "pytorch#13 0x000000000e8eb886 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize (this=<optimized out>, device=..., extra_files=..., restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:287",
    "pytorch#14 0x000000000e8e9cc5 in torch::jit::import_ir_module (cu=..., in=..., device=..., extra_files=..., load_debug_files=<optimized out>, restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:386",
    "pytorch#15 0x000000000e8f37bf in torch::jit::import_ir_module (cu=..., in=..., device=..., load_debug_files=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:322",
    "pytorch#16 0x000000000e8f615a in torch::jit::load (in=..., device=..., load_debug_files=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:482",
    "pytorch#17 0x00000000005c2d61 in LLVMFuzzerTestOneInput (data=<optimized out>, size=5498) at /load.cc:42",
    "pytorch#18 0x00000000005c2a8e in ExecuteFilesOnyByOne (argc=2, argv=0x7fffffffc6b8, callback=callback@entry=0x5c2ae0 <LLVMFuzzerTestOneInput(uint8_t const*, size_t)>) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255",
    "pytorch#19 0x00000000005c2899 in LLVMFuzzerRunDriver (argcp=argcp@entry=0x7fffffffc5b4, argvp=argvp@entry=0x7fffffffc5b8, callback=0x5c2ae0 <LLVMFuzzerTestOneInput(uint8_t const*, size_t)>) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:364",
    "pytorch#20 0x00000000005c2459 in main (argc=2, argv=0x7fffffffc6b8) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300"
```

### Crash in torch/csrc/jit/serialization/source_range_serialization.cpp:211

[crash-5598d386057152f606bfa69d85605499e8852625.zip](https://github.com/pytorch/pytorch/files/11552952/crash-5598d386057152f606bfa69d85605499e8852625.zip)

```asan
    "#0  torch::jit::ConcreteSourceRangeUnpickler::unpickle (this=0x99b8d80) at /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:211",
    "#1  0x0000000004042566 in torch::jit::ConcreteSourceRangeUnpickler::findSourceRangeThatGenerated (this=0x99aa1c0, range=...) at /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:229",
    "#2  0x00000000007b5cc8 in torch::jit::Source::findSourceRangeThatGenerated (this=<optimized out>, range=...) at /pytorch/torch/csrc/jit/frontend/source_range.cpp:144",
    "#3  torch::jit::SourceRange::findSourceRangeThatGenerated (this=0x7fffffffa650) at /pytorch/torch/csrc/jit/frontend/source_range.h:384",
    "pytorch#4  torch::jit::SourceRange::highlight (this=0x7fffffffa650, out=...) at /pytorch/torch/csrc/jit/frontend/source_range.cpp:149",
    "pytorch#5  0x00000000007a0e74 in torch::jit::Lexer::expected (this=this@entry=0x99979a0, what=..., t=...) at /pytorch/torch/csrc/jit/frontend/lexer.h:461",
    "pytorch#6  0x000000000079fcaa in torch::jit::Lexer::lexRaw (this=this@entry=0x99979a0, whitespace_token=false) at /pytorch/torch/csrc/jit/frontend/lexer.h:552",
    "pytorch#7  0x000000000079fd23 in torch::jit::Lexer::lex (this=this@entry=0x99979a0) at /pytorch/torch/csrc/jit/frontend/lexer.h:487",
    "pytorch#8  0x00000000007a1da1 in torch::jit::Lexer::next (this=this@entry=0x99979a0) at /pytorch/torch/csrc/jit/frontend/lexer.h:436",
    "pytorch#9  0x0000000003bff6a8 in torch::jit::Lexer::nextIf (this=0x99979a0, kind=330) at /pytorch/torch/csrc/jit/frontend/lexer.h:444",
    "pytorch#10 torch::jit::ParserImpl::parseReturnAnnotation (this=this@entry=0x99979a0) at /pytorch/torch/csrc/jit/frontend/parser.cpp:703",
    "pytorch#11 0x0000000003bfd500 in torch::jit::ParserImpl::parseDecl (this=this@entry=0x99979a0) at /pytorch/torch/csrc/jit/frontend/parser.cpp:729",
    "pytorch#12 0x0000000003bfb725 in torch::jit::ParserImpl::parseFunction (this=this@entry=0x99979a0, is_method=true) at /pytorch/torch/csrc/jit/frontend/parser.cpp:755",
    "pytorch#13 0x0000000003bfdc28 in torch::jit::ParserImpl::parseStmt (this=this@entry=0x99979a0, in_class=<optimized out>) at /pytorch/torch/csrc/jit/frontend/parser.cpp:599",
    "pytorch#14 0x0000000003bfd8dd in torch::jit::ParserImpl::parseStatements (this=this@entry=0x99979a0, expect_indent=<optimized out>, in_class=<optimized out>) at /pytorch/torch/csrc/jit/frontend/parser.cpp:697",
    "pytorch#15 0x0000000003bfc4ba in torch::jit::ParserImpl::parseClass (this=0x99979a0) at /pytorch/torch/csrc/jit/frontend/parser.cpp:747",
    "pytorch#16 0x0000000003bfaddc in torch::jit::Parser::parseClass (this=<optimized out>) at /pytorch/torch/csrc/jit/frontend/parser.cpp:812",
    "pytorch#17 0x0000000004008e2d in torch::jit::SourceImporterImpl::parseSourceIfNeeded (this=this@entry=0x95d41f0, qualifier=...) at /pytorch/torch/csrc/jit/serialization/import_source.cpp:182",
    "pytorch#18 0x0000000004008ab7 in torch::jit::SourceImporterImpl::findNamedType (this=this@entry=0x95d41f0, name=...) at /pytorch/torch/csrc/jit/serialization/import_source.cpp:135",
    "pytorch#19 0x000000000400d010 in torch::jit::SourceImporterImpl::resolveType (this=0x95d41f0, name=..., loc=...) at /pytorch/torch/csrc/jit/serialization/import_source.cpp:261",
    "pytorch#20 0x0000000003c20821 in torch::jit::ScriptTypeParser::parseTypeFromExpr (this=this@entry=0x7fffffffb658, expr=...) at /pytorch/torch/csrc/jit/frontend/script_type_parser.cpp:238",
    "pytorch#21 0x0000000003c20acc in torch::jit::ScriptTypeParser::parseType (this=0x7fffffffb658, str=...) at /pytorch/torch/csrc/jit/frontend/script_type_parser.cpp:312",
    "pytorch#22 0x0000000004019416 in torch::jit::SourceImporter::loadType (this=<optimized out>, name=...) at /pytorch/torch/csrc/jit/serialization/import_source.cpp:786",
    "pytorch#23 0x0000000003ff365e in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0::operator()(c10::QualifiedName const&) const (this=<optimized out>, qn=...) at /pytorch/torch/csrc/jit/serialization/import.cpp:146",
    "pytorch#24 std::__invoke_impl<c10::StrongTypePtr, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&>(std::__invoke_other, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&) (__f=..., __args=...) at /usr/include/c++/10/bits/invoke.h:60",
    "pytorch#25 std::__invoke_r<c10::StrongTypePtr, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&>(torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&) (__fn=..., __args=...) at /usr/include/c++/10/bits/invoke.h:113",
    "pytorch#26 std::_Function_handler<c10::StrongTypePtr (c10::QualifiedName const&), torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0>::_M_invoke(std::_Any_data const&, c10::QualifiedName const&) (__functor=..., __args=...) at /usr/include/c++/10/bits/std_function.h:291",
    "pytorch#27 0x000000000404e5c4 in std::function<c10::StrongTypePtr (c10::QualifiedName const&)>::operator()(c10::QualifiedName const&) const (this=0x7fffffffbf28, __args=...) at /usr/include/c++/10/bits/std_function.h:622",
    "pytorch#28 torch::jit::Unpickler::readGlobal (this=this@entry=0x7fffffffbd50, module_name=..., class_name=...) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:820",
    "pytorch#29 0x0000000004049ce5 in torch::jit::Unpickler::readInstruction (this=this@entry=0x7fffffffbd50) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:496",
    "pytorch#30 0x00000000040497a8 in torch::jit::Unpickler::run (this=0x7fffffffbd50) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251",
    "pytorch#31 0x00000000040494f9 in torch::jit::Unpickler::parse_ivalue (this=0x99aa1c0) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204",
    "pytorch#32 0x00000000040075f8 in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) (archive_name=..., pickle_prefix=..., tensor_prefix=..., type_resolver=..., obj_loader=..., device=..., stream_reader=..., type_parser=0x0, storage_context=...) at /pytorch/torch/csrc/jit/serialization/import_read.cpp:53",
    "pytorch#33 0x0000000003ff3545 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive (this=this@entry=0x7fffffffc2b8, archive_name=...) at /pytorch/torch/csrc/jit/serialization/import.cpp:184",
    "pytorch#34 0x0000000003fed8bf in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize (this=this@entry=0x7fffffffc2b8, device=device@entry=..., extra_files=..., restore_shapes=220) at /pytorch/torch/csrc/jit/serialization/import.cpp:287",
    "pytorch#35 0x0000000003febb0f in torch::jit::import_ir_module (cu=..., in=..., device=..., device@entry=..., extra_files=..., load_debug_files=true, restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:386",
    "pytorch#36 0x0000000003feb7a1 in torch::jit::import_ir_module (cu=..., in=..., device=..., device@entry=..., load_debug_files=false) at /pytorch/torch/csrc/jit/serialization/import.cpp:322",
    "pytorch#37 0x0000000003ff015a in torch::jit::load (in=..., device=device@entry=..., load_debug_files=true) at /pytorch/torch/csrc/jit/serialization/import.cpp:482",
    "pytorch#38 0x00000000004a1655 in LLVMFuzzerTestOneInput (data=0x981a680 \"PK\\003\\004\", size=1609) at /load.cc:42",
    "pytorch#39 0x00000000004a1dbf in main ()"
```

### Segmentation fault in /pytorch/aten/src/ATen/core/ivalue.h:526

[crash-9bd059c1ae85ab9cdb41d786932214d942baa189.zip](https://github.com/pytorch/pytorch/files/11552956/crash-9bd059c1ae85ab9cdb41d786932214d942baa189.zip)

```asan
    "==8528==ERROR: AddressSanitizer: SEGV on unknown address (pc 0x00000e55d97e bp 0x7fffffffb4d0 sp 0x7fffffffb360 T0)",
    "==8528==The signal is caused by a READ memory access.",
    "==8528==Hint: this fault was caused by a dereference of a high value address (see register values below).  Disassemble the provided pc to learn which register was used.",
    "    #0 0xe55d97e in c10::IValue::isTuple() const /pytorch/aten/src/ATen/core/ivalue.h:526:26",
    "    #1 0xe55d97e in torch::distributed::rpc::GloballyUniqueId::fromIValue(c10::IValue const&) /pytorch/torch/csrc/distributed/rpc/types.cpp:60:3",
    "    #2 0xe4b04fb in torch::distributed::rpc::ScriptRemoteCall::fromIValues(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /pytorch/torch/csrc/distributed/rpc/script_remote_call.cpp:33:20",
    "    #3 0xe4b1ed5 in torch::distributed::rpc::ScriptRemoteCall::fromMessage(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/script_remote_call.cpp:80:10",
    "    pytorch#4 0xe55f8a0 in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/utils.cpp:108:14",
    "    pytorch#5 0x6120a8 in LLVMFuzzerTestOneInput /message_deserialize.cc:192:27",
    "    pytorch#6 0x535de1 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15",
    "    pytorch#7 0x51fcec in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6",
    "    pytorch#8 0x525a3b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9",
    "    pytorch#9 0x54eff2 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10",
    "    pytorch#10 0x7ffff7a37082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)",
    "    pytorch#11 0x51a60d in _start (/message_deserialize_fuzz+0x51a60d)",
    "",
    "AddressSanitizer can not provide additional info.",
    "SUMMARY: AddressSanitizer: SEGV /pytorch/aten/src/ATen/core/ivalue.h:526:26 in c10::IValue::isTuple() const",
    "==8528==ABORTING"
```
Pull Request resolved: pytorch#102156
Approved by: https://github.com/ezyang
DamianSzwichtenberg pushed a commit that referenced this pull request Jun 1, 2023
Pass size argument.

<details>
<summary>ASAN report</summary>

```
==1640574==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x609000022160 at pc 0x03ff31a04b42 bp 0x03ff69885dc0 sp 0x03ff69885db0
READ of size 16 at 0x609000022160 thread T1
    #0 0x3ff31a04b41 in at::vec::ZVECTOR::Vectorized<unsigned char, void>::loadu(void const*, int) /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:397
    #1 0x3ff31a04b41 in at::vec::ZVECTOR::Vectorized<c10::quint8, void>::loadu(void const*, int) /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:1574
    #2 0x3ff31a04b41 in operator() /home/user/pytorch/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp:2668
    #3 0x3ff31cefa5d in void at::internal::invoke_parallel<at::native::(anonymous namespace)::quantized_normalize_kernel(at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, int, int, long, long
, double, at::Tensor*)::{lambda()#1}::operator()() const::{lambda()#2}::operator()() const::{lambda(long, long)#1}>(long, long, long, at::native::(anonymous namespace)::quantized_normalize_kernel(at::Tens
or const&, at::Tensor const&, at::Tensor const&, bool, int, int, long, long, double, at::Tensor*)::{lambda()#1}::operator()() const::{lambda()#2}::operator()() const::{lambda(long, long)#1} const&) [clone
 ._omp_fn.0] /home/user/pytorch/aten/src/ATen/ParallelOpenMP.h:42
    pytorch#4 0x3ff6f31f52d in gomp_thread_start /var/tmp/portage/sys-devel/gcc-12.2.1_p20230304/work/gcc-12-20230304/libgomp/team.c:129
    pytorch#5 0x3ff82218381 in start_thread /usr/src/debug/sys-libs/glibc-2.37-r1/glibc-2.37/nptl/pthread_create.c:444
    pytorch#6 0x3ff822943f1  (/lib64/libc.so.6+0x1143f1)

0x609000022160 is located 0 bytes to the right of 32-byte region [0x609000022140,0x609000022160)
allocated by thread T0 here:
    #0 0x3ff82a3663f in __interceptor_posix_memalign /usr/src/debug/sys-devel/gcc-11.3.1_p20230303/gcc-11-20230303/libsanitizer/asan/asan_malloc_linux.cpp:226
    #1 0x3ff6f53ad95 in c10::alloc_cpu(unsigned long) /home/user/pytorch/c10/core/impl/alloc_cpu.cpp:74

Thread T1 created by T0 here:
    #0 0x3ff829dc263 in __interceptor_pthread_create /usr/src/debug/sys-devel/gcc-11.3.1_p20230303/gcc-11-20230303/libsanitizer/asan/asan_interceptors.cpp:216
    #1 0x3ff6f31fad5 in gomp_team_start /var/tmp/portage/sys-devel/gcc-12.2.1_p20230304/work/gcc-12-20230304/libgomp/team.c:858

SUMMARY: AddressSanitizer: heap-buffer-overflow /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:397 in at::vec::ZVECTOR::Vectorized<unsigned char, void>::loadu(void const*, int)
Shadow bytes around the buggy address:
  0x100c12000043d0: 00 fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c12000043e0: fd fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c12000043f0: fd fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1200004400: fd fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1200004410: fa fa fa fa fa fa fa fa fd fa fa fa fa fa fa fa
=>0x100c1200004420: fa fa fa fa fa fa fa fa 00 00 00 00[fa]fa fa fa
  0x100c1200004430: fa fa fa fa fa fa fa fa fd fd fa fa fa fa fa fa
  0x100c1200004440: fa fa fa fa fa fa fa fa fd fd fa fa fa fa fa fa
  0x100c1200004450: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1200004460: 00 00 fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1200004470: 00 00 fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1640574==ABORTING
```
</details>

Pull Request resolved: pytorch#101970
Approved by: https://github.com/Skylion007, https://github.com/jgong5
DamianSzwichtenberg pushed a commit that referenced this pull request Jul 24, 2023
…kler (pytorch#103667)

Hi!

I've been fuzzing different pytorch modules with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch), and found a heap-buffer-overflow error caused by an incorrect loop condition in torch::jit::unpickler.cpp. This bug was found in several fuzzing targets: it can be triggered by the torch::jit::load() method when loading a .pt model and by the torch::distributed::rpc::deserializeRequest() method in the RPC module.

All found errors can be reproduced with the provided docker: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch).

### PoC for deserializeRequest():
[crash-0722408578cd2f26593b5a01e26d2a078d3dc5f6.zip](https://github.com/pytorch/pytorch/files/11756694/crash-0722408578cd2f26593b5a01e26d2a078d3dc5f6.zip)

```
=================================================================
==29858==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020004ed808 at pc 0x000000680084 bp 0x7ffcbd8220d0 sp 0x7ffcbd8220c8
READ of size 4 at 0x6020004ed808 thread T0
    #0 0x680083 in c10::IValue::IValue(c10::IValue const&) /pytorch/aten/src/ATen/core/ivalue.h:224:33
    #1 0xdc4beb8 in std::pair<c10::impl::DictIterator<c10::IValue, c10::IValue, ska_ordered::detailv3::sherwood_v3_table<std::pair<c10::IValue, c10::IValue>, c10::IValue, c10::detail::DictKeyHash, ska_ordered::detailv3::KeyOrValueHasher<c10::IValue, std::pair<c10::IValue, c10::IValue>, c10::detail::DictKeyHash>, c10::detail::DictKeyEqualTo, ska_ordered::detailv3::KeyOrValueEquality<c10::IValue, std::pair<c10::IValue, c10::IValue>, c10::detail::DictKeyEqualTo>, std::allocator<std::pair<c10::IValue, c10::IValue> >, std::allocator<ska_ordered::detailv3::sherwood_v3_entry<std::pair<c10::IValue, c10::IValue> > > >::templated_iterator<std::pair<c10::IValue, c10::IValue> > >, bool> c10::Dict<c10::IValue, c10::IValue>::insert_or_assign<c10::IValue&, c10::IValue&>(c10::IValue&, c10::IValue&) const /pytorch/aten/src/ATen/core/Dict_inl.h:136:5
    #2 0xea680a7 in torch::jit::Unpickler::readInstruction() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:452:14
    #3 0xea64e07 in torch::jit::Unpickler::run() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251:27
    pytorch#4 0xea64a61 in torch::jit::Unpickler::parse_ivalue() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204:3
    pytorch#5 0xe9b13ce in torch::jit::unpickle(std::function<unsigned long (char*, unsigned long)>, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch/torch/csrc/jit/serialization/pickle.cpp:126:20
    pytorch#6 0xe9b178c in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch/torch/csrc/jit/serialization/pickle.cpp:136:10
    pytorch#7 0xfdc8aa1 in torch::distributed::rpc::(anonymous namespace)::toIValues(torch::distributed::rpc::Message const&, torch::distributed::rpc::MessageType) /pytorch/torch/csrc/distributed/rpc/rref_proto.cpp:23:16
    pytorch#8 0xfdca3ca in torch::distributed::rpc::PythonRRefFetchCall::fromMessage(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/rref_proto.cpp:105:17
    pytorch#9 0xfe7f347 in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/utils.cpp:117:14
    pytorch#10 0x5c5d13 in LLVMFuzzerTestOneInput /message_deserialize.cc:192:27
    pytorch#11 0x5c2bfd in ExecuteFilesOnyByOne /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255:7
    pytorch#12 0x5c2a08 in LLVMFuzzerRunDriver /AFLplusplus/utils/aflpp_driver/aflpp_driver.c
    pytorch#13 0x5c25c8 in main /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300:10
    pytorch#14 0x7feb90908082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
    pytorch#15 0x50237d in _start (/message_deserialize_afl+0x50237d)

0x6020004ed808 is located 8 bytes to the right of 16-byte region [0x6020004ed7f0,0x6020004ed800)
allocated by thread T0 here:
    #0 0x5bfc1d in operator new(unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_new_delete.cpp:95:3
    #1 0x32ad8d1 in std::_Vector_base<c10::IValue, std::allocator<c10::IValue> >::_M_allocate(unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:346:20
    #2 0x32ad8d1 in void std::vector<c10::IValue, std::allocator<c10::IValue> >::_M_realloc_insert<double>(__gnu_cxx::__normal_iterator<c10::IValue*, std::vector<c10::IValue, std::allocator<c10::IValue> > >, double&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/vector.tcc:440:33

SUMMARY: AddressSanitizer: heap-buffer-overflow /pytorch/aten/src/ATen/core/ivalue.h:224:33 in c10::IValue::IValue(c10::IValue const&)
Shadow bytes around the buggy address:
  0x0c0480095ab0: fa fa fd fd fa fa fd fd fa fa fd fd fa fa 00 00
  0x0c0480095ac0: fa fa 00 00 fa fa 00 00 fa fa 04 fa fa fa 04 fa
  0x0c0480095ad0: fa fa 00 fa fa fa fd fa fa fa 04 fa fa fa 00 fa
  0x0c0480095ae0: fa fa 00 fa fa fa fd fa fa fa fd fa fa fa fd fa
  0x0c0480095af0: fa fa fd fd fa fa 00 00 fa fa 00 fa fa fa 00 00
=>0x0c0480095b00: fa[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0480095b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0480095b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0480095b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0480095b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0480095b50: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==29858==ABORTING
```

### PoC for load():
[crash-2bd32e496811fb06de24a2bb720dc6490218009f.zip](/uploads/53d108cdd434ec4b11a2034bbca3cfd8/crash-2bd32e496811fb06de24a2bb720dc6490218009f.zip)

```
==29865==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60c00031f388 at pc 0x000000669984 bp 0x7ffd6c6de630 sp 0x7ffd6c6de628
READ of size 4 at 0x60c00031f388 thread T0
    #0 0x669983 in c10::IValue::IValue(c10::IValue const&) /pytorch/aten/src/ATen/core/ivalue.h:224:33
    #1 0xdc3de68 in std::pair<c10::impl::DictIterator<c10::IValue, c10::IValue, ska_ordered::detailv3::sherwood_v3_table<std::pair<c10::IValue, c10::IValue>, c10::IValue, c10::detail::DictKeyHash, ska_ordered::detailv3::KeyOrValueHasher<c10::IValue, std::pair<c10::IValue, c10::IValue>, c10::detail::DictKeyHash>, c10::detail::DictKeyEqualTo, ska_ordered::detailv3::KeyOrValueEquality<c10::IValue, std::pair<c10::IValue, c10::IValue>, c10::detail::DictKeyEqualTo>, std::allocator<std::pair<c10::IValue, c10::IValue> >, std::allocator<ska_ordered::detailv3::sherwood_v3_entry<std::pair<c10::IValue, c10::IValue> > > >::templated_iterator<std::pair<c10::IValue, c10::IValue> > >, bool> c10::Dict<c10::IValue, c10::IValue>::insert_or_assign<c10::IValue&, c10::IValue&>(c10::IValue&, c10::IValue&) const /pytorch/aten/src/ATen/core/Dict_inl.h:136:5
    #2 0xea5a207 in torch::jit::Unpickler::readInstruction() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:452:14
    #3 0xea56f67 in torch::jit::Unpickler::run() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251:27
    pytorch#4 0xea56bc1 in torch::jit::Unpickler::parse_ivalue() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204:3
    pytorch#5 0xe96db4e in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) /pytorch/torch/csrc/jit/serialization/import_read.cpp:53:20
    pytorch#6 0xe8fc648 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/serialization/import.cpp:184:10
    pytorch#7 0xe8f8935 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize(c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:287:19
    pytorch#8 0xe8f6d74 in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::istream&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:386:25
    pytorch#9 0xe90086e in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::istream&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:322:10
    pytorch#10 0xe903209 in torch::jit::load(std::istream&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:482:10
    pytorch#11 0x5c2d60 in LLVMFuzzerTestOneInput /load.cc:42:14
    pytorch#12 0x5c2a8d in ExecuteFilesOnyByOne /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255:7
    pytorch#13 0x5c2898 in LLVMFuzzerRunDriver /AFLplusplus/utils/aflpp_driver/aflpp_driver.c
    pytorch#14 0x5c2458 in main /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300:10
    pytorch#15 0x7f156ae33082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
    pytorch#16 0x50220d in _start (/load_afl+0x50220d)

0x60c00031f388 is located 8 bytes to the right of 128-byte region [0x60c00031f300,0x60c00031f380)
allocated by thread T0 here:
    #0 0x5bfaad in operator new(unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_new_delete.cpp:95:3
    #1 0xa86231 in std::_Vector_base<c10::IValue, std::allocator<c10::IValue> >::_M_allocate(unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:346:20
    #2 0xa86231 in void std::vector<c10::IValue, std::allocator<c10::IValue> >::_M_realloc_insert<c10::IValue&>(__gnu_cxx::__normal_iterator<c10::IValue*, std::vector<c10::IValue, std::allocator<c10::IValue> > >, c10::IValue&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/vector.tcc:440:33

SUMMARY: AddressSanitizer: heap-buffer-overflow /pytorch/aten/src/ATen/core/ivalue.h:224:33 in c10::IValue::IValue(c10::IValue const&)
Shadow bytes around the buggy address:
  0x0c188005be20: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
  0x0c188005be30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c188005be40: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c188005be50: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
  0x0c188005be60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c188005be70: fa[fa]fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c188005be80: 00 00 00 00 00 00 00 00 fa fa fa fa fa fa fa fa
  0x0c188005be90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c188005bea0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c188005beb0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c188005bec0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==29865==ABORTING
```
Pull Request resolved: pytorch#103667
Approved by: https://github.com/albanD
DamianSzwichtenberg pushed a commit that referenced this pull request Jul 24, 2023
…103969)

Hi! We've been fuzzing torchvision project with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz).
We've found a heap-buffer-overflow error at `source_range_serialization.cpp:73` in the pytorch project.

The error occurs because `deserialize_source` does not check that `fnameIndex` is within the bounds of `text_table_`; when `text_table_` has fewer entries than the index requires, the lookup reads past the end of the vector. To prevent the error, the corresponding bounds check must be added.
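A minimal sketch of the kind of bounds check described above, under the assumption that the table is a vector of strings indexed by a deserialized integer (`lookup` and its parameter names are hypothetical stand-ins, not the actual pytorch code):

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical stand-in for the deserializer's text-table lookup:
// validate the deserialized index before dereferencing, so an index
// taken from untrusted input cannot read past the end of the table.
const std::string& lookup(const std::vector<std::string>& text_table,
                          std::size_t fname_index) {
  if (fname_index >= text_table.size()) {
    throw std::out_of_range("fnameIndex out of range of text_table_");
  }
  return text_table[fname_index];
}
```

With a check like this, a malformed archive produces a recoverable error instead of an out-of-bounds read.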

torchvision version: 9d0a93eee90bf7c401b74ebf9c8be80346254f15
pytorch version: 0f1621d

OS: Ubuntu 20.04

How to reproduce

1. Build the docker image from [here](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/torchvision) and run the container:

        sudo docker build -t oss-sydr-fuzz-torchvision .
        sudo docker run --privileged --rm -v `pwd`:/fuzz -it oss-sydr-fuzz-torchvision /bin/bash

2. Run the target on this input:  [serialization-crash.txt](https://github.com/pytorch/pytorch/files/11819901/serialization-crash.txt)

        /encode_png_fuzz serialization-crash.txt

3. You will see the following output:

        =================================================================
        ==13==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200055a630 at pc 0x0000010197b7 bp 0x7ffd4cfb15f0 sp 0x7ffd4cfb15e8
        READ of size 8 at 0x60200055a630 thread T0
            #0 0x10197b6 in std::__shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, (__gnu_cxx::_Lock_policy)2>::get() const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr_base.h:1325:16
            #1 0x10197b6 in std::__shared_ptr_access<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, (__gnu_cxx::_Lock_policy)2, false, false>::_M_get() const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr_base.h:1024:66
            #2 0x10197b6 in std::__shared_ptr_access<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, (__gnu_cxx::_Lock_policy)2, false, false>::operator*() const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr_base.h:1011:10
            #3 0xde888c2 in torch::jit::SourceRangeDeserializer::deserialize_source(c10::IValue const&) /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:73:16
            pytorch#4 0xde8802b in torch::jit::SourceRangeDeserializer::deserialize(c10::IValue const&) /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:51:37
            pytorch#5 0xde8e9c7 in torch::jit::ConcreteSourceRangeUnpickler::unpickle() /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:224:39
            pytorch#6 0xde8fb19 in torch::jit::ConcreteSourceRangeUnpickler::findSourceRangeThatGenerated(torch::jit::SourceRange const&) /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:231:3
            pytorch#7 0x10798e7 in torch::jit::Source::findSourceRangeThatGenerated(torch::jit::SourceRange const&) /pytorch/torch/csrc/jit/frontend/source_range.cpp:144:23
            pytorch#8 0x1079d9a in torch::jit::SourceRange::findSourceRangeThatGenerated() const /pytorch/torch/csrc/jit/frontend/source_range.h:384:26
            pytorch#9 0x1079acd in torch::jit::SourceRange::highlight(std::ostream&) const /pytorch/torch/csrc/jit/frontend/source_range.cpp:149:32
            pytorch#10 0x1026fe2 in torch::jit::Lexer::expected(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, torch::jit::Token const&) /pytorch/torch/csrc/jit/frontend/lexer.h:461:13
            pytorch#11 0x10417d9 in torch::jit::Lexer::expected(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/frontend/lexer.h:465:5
            pytorch#12 0x102e52c in torch::jit::Lexer::expect(int) /pytorch/torch/csrc/jit/frontend/lexer.h:471:7
            pytorch#13 0xcee774c in torch::jit::ParserImpl::parseIdent() /pytorch/torch/csrc/jit/frontend/parser.cpp:52:16
            pytorch#14 0xcef4ea8 in torch::jit::ParserImpl::parseBaseExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:195:22
            pytorch#15 0xcef2c1b in torch::jit::ParserImpl::parseExp(int) /pytorch/torch/csrc/jit/frontend/parser.cpp:284:16
            pytorch#16 0xcefac6a in torch::jit::ParserImpl::parseExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:262:12
            pytorch#17 0xcefac6a in torch::jit::ParserImpl::parseSubscriptExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:403:15
            pytorch#18 0xceff39f in torch::jit::List<torch::jit::Expr> torch::jit::ParserImpl::parseList<torch::jit::Expr>(int, int, int, torch::jit::Expr (torch::jit::ParserImpl::*)())::'lambda'()::operator()() const /pytorch/torch/csrc/jit/frontend/parser.cpp:354:54
            pytorch#19 0xceff39f in torch::jit::Expr std::__invoke_impl<void, torch::jit::List<torch::jit::Expr> torch::jit::ParserImpl::parseList<torch::jit::Expr>(int, int, int, torch::jit::Expr (torch::jit::ParserImpl::*)())::'lambda'()&>(std::__invoke_other, torch::jit::List<torch::jit::Expr> torch::jit::ParserImpl::parseList<torch::jit::Expr>(int, int, int, torch::jit::Expr (torch::jit::ParserImpl::*)())::'lambda'()&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14
            pytorch#20 0xceea935 in torch::jit::ParserImpl::parseSequence(int, int, int, std::function<void ()> const&) /pytorch/torch/csrc/jit/frontend/parser.cpp:339:7
            pytorch#21 0xceefd69 in torch::jit::List<torch::jit::Expr> torch::jit::ParserImpl::parseList<torch::jit::Expr>(int, int, int, torch::jit::Expr (torch::jit::ParserImpl::*)()) /pytorch/torch/csrc/jit/frontend/parser.cpp:353:5
            pytorch#22 0xcef895a in torch::jit::ParserImpl::parseSubscript(c10::intrusive_ptr<torch::jit::Tree, c10::detail::intrusive_target_default_null_type<torch::jit::Tree> > const&) /pytorch/torch/csrc/jit/frontend/parser.cpp:430:9
            pytorch#23 0xcef5e5c in torch::jit::ParserImpl::parseBaseExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:206:18
            pytorch#24 0xcef2c1b in torch::jit::ParserImpl::parseExp(int) /pytorch/torch/csrc/jit/frontend/parser.cpp:284:16
            pytorch#25 0xceeeb9d in torch::jit::ParserImpl::parseExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:262:12
            pytorch#26 0xceeeb9d in torch::jit::ParserImpl::parseExpOrExpTuple() /pytorch/torch/csrc/jit/frontend/parser.cpp:94:19
            pytorch#27 0xcee8a36 in torch::jit::ParserImpl::parseStmt(bool) /pytorch/torch/csrc/jit/frontend/parser.cpp:612:20
            pytorch#28 0xcee7e72 in torch::jit::ParserImpl::parseStatements(bool, bool) /pytorch/torch/csrc/jit/frontend/parser.cpp:697:23
            pytorch#29 0xcee56f5 in torch::jit::ParserImpl::parseClass() /pytorch/torch/csrc/jit/frontend/parser.cpp:747:9
            pytorch#30 0xcee544a in torch::jit::Parser::parseClass() /pytorch/torch/csrc/jit/frontend/parser.cpp:812:17
            pytorch#31 0xdddbea9 in torch::jit::SourceImporterImpl::parseSourceIfNeeded(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/serialization/import_source.cpp:182:42
            pytorch#32 0xdddadbc in torch::jit::SourceImporterImpl::findNamedType(c10::QualifiedName const&) /pytorch/torch/csrc/jit/serialization/import_source.cpp:135:3
            pytorch#33 0xdde1d88 in torch::jit::SourceImporterImpl::resolveType(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, torch::jit::SourceRange const&) /pytorch/torch/csrc/jit/serialization/import_source.cpp:261:10
            pytorch#34 0xcf2ba5f in torch::jit::ScriptTypeParser::parseTypeFromExpr(torch::jit::Expr const&) const /pytorch/torch/csrc/jit/frontend/script_type_parser.cpp:238:24
            pytorch#35 0xcf2bec7 in torch::jit::ScriptTypeParser::parseType(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/frontend/script_type_parser.cpp:312:10
            pytorch#36 0xddf4284 in torch::jit::SourceImporter::loadType(c10::QualifiedName const&) const /pytorch/torch/csrc/jit/serialization/import_source.cpp:786:27
            pytorch#37 0xdd739f7 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0::operator()(c10::QualifiedName const&) const /pytorch/torch/csrc/jit/serialization/import.cpp:146:33
            pytorch#38 0xdd739f7 in c10::StrongTypePtr std::__invoke_impl<c10::StrongTypePtr, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&>(std::__invoke_other, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14
            pytorch#39 0xdd73880 in std::enable_if<is_invocable_r_v<c10::StrongTypePtr, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&>, c10::StrongTypePtr>::type std::__invoke_r<c10::StrongTypePtr, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&>(torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:113:9
            pytorch#40 0xdd736d6 in std::_Function_handler<c10::StrongTypePtr (c10::QualifiedName const&), torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0>::_M_invoke(std::_Any_data const&, c10::QualifiedName const&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:291:9
            pytorch#41 0xdd76349 in std::function<c10::StrongTypePtr (c10::QualifiedName const&)>::operator()(c10::QualifiedName const&) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:622:14
            pytorch#42 0xdeb9f48 in torch::jit::Unpickler::readGlobal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/serialization/unpickler.cpp:835:9
            pytorch#43 0xdeb012d in torch::jit::Unpickler::readInstruction() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:511:7
            pytorch#44 0xdeae437 in torch::jit::Unpickler::run() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251:27
            pytorch#45 0xdeae0d2 in torch::jit::Unpickler::parse_ivalue() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204:3
            pytorch#46 0xddd6de3 in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) /pytorch/torch/csrc/jit/serialization/import_read.cpp:53:20
            pytorch#47 0xdd732dd in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/serialization/import.cpp:184:10
            pytorch#48 0xdd69885 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize(c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:287:19
            pytorch#49 0xdd6c855 in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:438:25
            pytorch#50 0xdd6c1c7 in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:421:10
            pytorch#51 0xdd6dce4 in torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:503:10
            pytorch#52 0xf2d3f75 in torch::serialize::InputArchive::load_from(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>) /pytorch/torch/csrc/api/src/serialize/input-archive.cpp:97:13
            pytorch#53 0x60509c in void torch::load<at::Tensor, char*&>(at::Tensor&, char*&) /pytorch/torch/include/torch/csrc/api/include/torch/serialize.h:107:11
            pytorch#54 0x6036be in LLVMFuzzerTestOneInput /vision/encode_png.cc:38:5
            pytorch#55 0x66b041 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
            pytorch#56 0x6544cc in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
            pytorch#57 0x65a61b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
            pytorch#58 0x654222 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
            pytorch#59 0x7f3d12cc7082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
            pytorch#60 0x542cdd in _start (/encode_png_fuzz+0x542cdd)

        0x60200055a630 is located 16 bytes to the right of 16-byte region [0x60200055a610,0x60200055a620)
        allocated by thread T0 here:
            #0 0x60057d in operator new(unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_new_delete.cpp:95:3
            #1 0xde9185d in std::_Vector_base<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::_M_allocate(unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:346:20
            #2 0xde9185d in void std::vector<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::_M_realloc_insert<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >(__gnu_cxx::__normal_iterator<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >*, std::vector<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > >, std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/vector.tcc:440:33
            #3 0xde916a1 in std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >& std::vector<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::emplace_back<std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >(std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/vector.tcc:121:4
            pytorch#4 0xde8f445 in torch::jit::SourceRangeDeserializer::SourceRangeDeserializer(c10::IValue) /pytorch/torch/csrc/jit/serialization/source_range_serialization.h:42:19
            pytorch#5 0xde8e141 in torch::jit::ConcreteSourceRangeUnpickler::unpickle() /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:215:28
            pytorch#6 0xde8fb19 in torch::jit::ConcreteSourceRangeUnpickler::findSourceRangeThatGenerated(torch::jit::SourceRange const&) /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:231:3
            pytorch#7 0x10798e7 in torch::jit::Source::findSourceRangeThatGenerated(torch::jit::SourceRange const&) /pytorch/torch/csrc/jit/frontend/source_range.cpp:144:23
            pytorch#8 0x1079d9a in torch::jit::SourceRange::findSourceRangeThatGenerated() const /pytorch/torch/csrc/jit/frontend/source_range.h:384:26
            pytorch#9 0x1079acd in torch::jit::SourceRange::highlight(std::ostream&) const /pytorch/torch/csrc/jit/frontend/source_range.cpp:149:32
            pytorch#10 0x1026fe2 in torch::jit::Lexer::expected(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, torch::jit::Token const&) /pytorch/torch/csrc/jit/frontend/lexer.h:461:13
            pytorch#11 0x10417d9 in torch::jit::Lexer::expected(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/frontend/lexer.h:465:5
            pytorch#12 0xcee774c in torch::jit::ParserImpl::parseIdent() /pytorch/torch/csrc/jit/frontend/parser.cpp:52:16
            pytorch#13 0xcef4ea8 in torch::jit::ParserImpl::parseBaseExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:195:22
            pytorch#14 0xcef2c1b in torch::jit::ParserImpl::parseExp(int) /pytorch/torch/csrc/jit/frontend/parser.cpp:284:16
            pytorch#15 0xcefac6a in torch::jit::ParserImpl::parseExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:262:12
            pytorch#16 0xcefac6a in torch::jit::ParserImpl::parseSubscriptExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:403:15
            pytorch#17 0xceff39f in torch::jit::List<torch::jit::Expr> torch::jit::ParserImpl::parseList<torch::jit::Expr>(int, int, int, torch::jit::Expr (torch::jit::ParserImpl::*)())::'lambda'()::operator()() const /pytorch/torch/csrc/jit/frontend/parser.cpp:354:54
            pytorch#18 0xceff39f in torch::jit::Expr std::__invoke_impl<void, torch::jit::List<torch::jit::Expr> torch::jit::ParserImpl::parseList<torch::jit::Expr>(int, int, int, torch::jit::Expr (torch::jit::ParserImpl::*)())::'lambda'()&>(std::__invoke_other, torch::jit::List<torch::jit::Expr> torch::jit::ParserImpl::parseList<torch::jit::Expr>(int, int, int, torch::jit::Expr (torch::jit::ParserImpl::*)())::'lambda'()&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14
            pytorch#19 0xceea935 in torch::jit::ParserImpl::parseSequence(int, int, int, std::function<void ()> const&) /pytorch/torch/csrc/jit/frontend/parser.cpp:339:7
            pytorch#20 0xceefd69 in torch::jit::List<torch::jit::Expr> torch::jit::ParserImpl::parseList<torch::jit::Expr>(int, int, int, torch::jit::Expr (torch::jit::ParserImpl::*)()) /pytorch/torch/csrc/jit/frontend/parser.cpp:353:5
            pytorch#21 0xcef895a in torch::jit::ParserImpl::parseSubscript(c10::intrusive_ptr<torch::jit::Tree, c10::detail::intrusive_target_default_null_type<torch::jit::Tree> > const&) /pytorch/torch/csrc/jit/frontend/parser.cpp:430:9
            pytorch#22 0xcef5e5c in torch::jit::ParserImpl::parseBaseExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:206:18
            pytorch#23 0xcef2c1b in torch::jit::ParserImpl::parseExp(int) /pytorch/torch/csrc/jit/frontend/parser.cpp:284:16
            pytorch#24 0xceeeb9d in torch::jit::ParserImpl::parseExp() /pytorch/torch/csrc/jit/frontend/parser.cpp:262:12
            pytorch#25 0xceeeb9d in torch::jit::ParserImpl::parseExpOrExpTuple() /pytorch/torch/csrc/jit/frontend/parser.cpp:94:19
            pytorch#26 0xcee8a36 in torch::jit::ParserImpl::parseStmt(bool) /pytorch/torch/csrc/jit/frontend/parser.cpp:612:20
            pytorch#27 0xcee7e72 in torch::jit::ParserImpl::parseStatements(bool, bool) /pytorch/torch/csrc/jit/frontend/parser.cpp:697:23
            pytorch#28 0xcee56f5 in torch::jit::ParserImpl::parseClass() /pytorch/torch/csrc/jit/frontend/parser.cpp:747:9
            pytorch#29 0xcee544a in torch::jit::Parser::parseClass() /pytorch/torch/csrc/jit/frontend/parser.cpp:812:17
            pytorch#30 0xdddbea9 in torch::jit::SourceImporterImpl::parseSourceIfNeeded(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/serialization/import_source.cpp:182:42
            pytorch#31 0xdddadbc in torch::jit::SourceImporterImpl::findNamedType(c10::QualifiedName const&) /pytorch/torch/csrc/jit/serialization/import_source.cpp:135:3
            pytorch#32 0xdde1d88 in torch::jit::SourceImporterImpl::resolveType(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, torch::jit::SourceRange const&) /pytorch/torch/csrc/jit/serialization/import_source.cpp:261:10
            pytorch#33 0xcf2ba5f in torch::jit::ScriptTypeParser::parseTypeFromExpr(torch::jit::Expr const&) const /pytorch/torch/csrc/jit/frontend/script_type_parser.cpp:238:24

        SUMMARY: AddressSanitizer: heap-buffer-overflow /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr_base.h:1325:16 in std::__shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, (__gnu_cxx::_Lock_policy)2>::get() const
        Shadow bytes around the buggy address:
          0x0c04800a3470: fa fa 00 00 fa fa 00 00 fa fa fd fa fa fa 00 00
          0x0c04800a3480: fa fa fd fa fa fa fd fd fa fa fd fd fa fa fd fa
          0x0c04800a3490: fa fa fd fd fa fa 00 00 fa fa 00 00 fa fa 00 00
          0x0c04800a34a0: fa fa fd fa fa fa fd fd fa fa fd fa fa fa 00 fa
          0x0c04800a34b0: fa fa fd fd fa fa fd fd fa fa fd fa fa fa fd fd
        =>0x0c04800a34c0: fa fa 00 00 fa fa[fa]fa fa fa fa fa fa fa fa fa
          0x0c04800a34d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
          0x0c04800a34e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
          0x0c04800a34f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
          0x0c04800a3500: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
          0x0c04800a3510: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        Shadow byte legend (one shadow byte represents 8 application bytes):
          Addressable:           00
          Partially addressable: 01 02 03 04 05 06 07
          Heap left redzone:       fa
          Freed heap region:       fd
          Stack left redzone:      f1
          Stack mid redzone:       f2
          Stack right redzone:     f3
          Stack after return:      f5
          Stack use after scope:   f8
          Global redzone:          f9
          Global init order:       f6
          Poisoned by user:        f7
          Container overflow:      fc
          Array cookie:            ac
          Intra object redzone:    bb
          ASan internal:           fe
          Left alloca redzone:     ca
          Right alloca redzone:    cb
        ==13==ABORTING
Pull Request resolved: pytorch#103969
Approved by: https://github.com/davidberard98
DamianSzwichtenberg pushed a commit that referenced this pull request Jul 24, 2023
Fixes ASAN stack-use-after-scope in MKLDNN.
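The general shape of the bug ASAN flags here is a non-owning view outliving the buffer it points into. A minimal sketch of that pattern and its fix, where `IntView` is a simplified hypothetical stand-in for a view type like `c10::ArrayRef` (an assumption for illustration, not the actual MKLDNN code):

```cpp
#include <cstddef>
#include <vector>

// Minimal non-owning view over a vector of longs; it stores a raw
// pointer and length, so it is only valid while the owner is alive.
struct IntView {
  const long* data;
  std::size_t len;
  IntView(const std::vector<long>& v) : data(v.data()), len(v.size()) {}
  long sum() const {
    long s = 0;
    for (std::size_t i = 0; i < len; ++i) s += data[i];
    return s;
  }
};

std::vector<long> make_sizes() { return {2, 3, 4}; }

// Bug pattern: `IntView v = make_sizes();` binds the view to a temporary
// vector that is destroyed at the end of the full expression, so any
// later read through `v` is a use-after-scope.
//
// Fix pattern: keep the owning vector alive for the view's lifetime.
long safe_sum() {
  std::vector<long> sizes = make_sizes();  // owner outlives the view
  IntView v(sizes);
  return v.sum();
}
```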
The stack trace is:
```
2023-06-27T16:37:20.9099950Z ==1424==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7f0c5dc20980 at pc 0x7f0c61286a73 bp 0x7ffef8e76990 sp 0x7ffef8e76118
2023-06-27T16:37:20.9100054Z READ of size 24 at 0x7f0c5dc20980 thread T0
2023-06-27T16:37:20.9100327Z     #0 0x7f0c61286a72 in memcmp (/usr/lib/llvm-7/lib/clang/7.0.1/lib/linux/libclang_rt.asan-x86_64.so+0x5da72)
2023-06-27T16:37:20.9100701Z     #1 0x7f0c2f395d0b in c10::ArrayRef<long>::equals(c10::ArrayRef<long>) const (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xcb8bd0b)
2023-06-27T16:37:20.9101196Z     #2 0x7f0c314a1bb1 in at::native::mkldnn_matmul(at::Tensor const&, at::Tensor const&, at::Tensor const&, float, float) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xec97bb1)
2023-06-27T16:37:20.9101714Z     #3 0x7f0c301f49c5 in at::native::bmm_out_or_baddbmm_(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::Scalar const&, c10::Scalar const&, bool) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xd9ea9c5)
2023-06-27T16:37:20.9102153Z     pytorch#4 0x7f0c301f85ab in at::native::structured_bmm_out_cpu::impl(at::Tensor const&, at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xd9ee5ab)
2023-06-27T16:37:20.9102601Z     pytorch#5 0x7f0c32cb3cb6 in at::(anonymous namespace)::wrapper_CPU_bmm(at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0x104a9cb6)
2023-06-27T16:37:20.9103662Z     pytorch#6 0x7f0c32ea1f43 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&), &(at::(anonymous namespace)::wrapper_CPU_bmm(at::Tensor const&, at::Tensor const&))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&> >, at::Tensor (at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0x10697f43)
2023-06-27T16:37:20.9104330Z     #7 0x7f0c3187252a in at::Tensor c10::Dispatcher::redispatch<at::Tensor, at::Tensor const&, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, at::Tensor const&)> const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) const (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xf06852a)
2023-06-27T16:37:20.9104756Z     #8 0x7f0c3257e097 in at::_ops::bmm::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xfd74097)
2023-06-27T16:37:20.9105237Z     #9 0x7f0c383c31c3 in torch::autograd::VariableType::(anonymous namespace)::bmm(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0x15bb91c3)
2023-06-27T16:37:20.9106496Z     #10 0x7f0c383c25b9 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&), &(torch::autograd::VariableType::(anonymous namespace)::bmm(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0x15bb85b9)
2023-06-27T16:37:20.9106874Z     #11 0x7f0c3257da60 in at::_ops::bmm::call(at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xfd73a60)
2023-06-27T16:37:20.9107275Z     #12 0x7f0c301fc0e2 in at::native::_matmul_impl(at::Tensor&, at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xd9f20e2)
2023-06-27T16:37:20.9107647Z     #13 0x7f0c301f9c21 in at::native::matmul(at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xd9efc21)
2023-06-27T16:37:20.9108853Z     #14 0x7f0c33dca7e3 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&), &(at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__matmul(at::Tensor const&, at::Tensor const&))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&> >, at::Tensor (at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0x115c07e3)
2023-06-27T16:37:20.9109255Z     #15 0x7f0c32958ef0 in at::_ops::matmul::call(at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0x1014eef0)
2023-06-27T16:37:20.9110023Z     #16 0x7f0c2f596b62 in at::autocast::WrapFunction_<(at::autocast::CastPolicy)0, (c10::DeviceType)0, at::Tensor (at::Tensor const&, at::Tensor const&), &(at::_ops::matmul::call(at::Tensor const&, at::Tensor const&)), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&> >::call(at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xcd8cb62)
2023-06-27T16:37:20.9110723Z     #17 0x7f0c2f348403 in c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&, at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&> >::operator()(at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xcb3e403)
2023-06-27T16:37:20.9111596Z     #18 0x7f0c2f348063 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&, at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&> >, at::Tensor (at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0xcb3e063)
2023-06-27T16:37:20.9111976Z     #19 0x7f0c32958ef0 in at::_ops::matmul::call(at::Tensor const&, at::Tensor const&) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so+0x1014eef0)
2023-06-27T16:37:20.9112383Z     #20 0x7f0c5803dc3e in torch::autograd::THPVariable_matmul(_object*, _object*, _object*) (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib/libtorch_python.so+0x2b2cc3e)
2023-06-27T16:37:20.9112561Z warning: parsing line table prologue at 0x00000000 should have ended at 0x0000050b but it ended at 0x0000050a
2023-06-27T16:37:20.9112713Z     #21 0x5074a6 in cfunction_call (/opt/conda/envs/py_3.9/bin/python3.9+0x5074a6)
2023-06-27T16:37:20.9112857Z     #22 0x505997 in _PyObject_Call (/opt/conda/envs/py_3.9/bin/python3.9+0x505997)
2023-06-27T16:37:20.9113114Z     #23 0x505997 in PyObject_Call /croot/python-split_1684193875530/work/build-static/<invalid>:293:12
2023-06-27T16:37:20.9113258Z     #24 0x4ed302 in do_call_core (/opt/conda/envs/py_3.9/bin/python3.9+0x4ed302)
2023-06-27T16:37:20.9113633Z     #25 0x4ed302 in _PyEval_EvalFrameDefault /croot/python-split_1684193875530/work/build-static/<invalid>:3582:22
2023-06-27T16:37:20.9113780Z     #26 0x4e6729 in _PyEval_EvalFrame (/opt/conda/envs/py_3.9/bin/python3.9+0x4e6729)
2023-06-27T16:37:20.9114041Z     #27 0x4e6729 in _PyEval_EvalCode /croot/python-split_1684193875530/work/build-static/<invalid>:4329:14
2023-06-27T16:37:20.9114202Z     #28 0x4efd7d in _PyFunction_Vectorcall (/opt/conda/envs/py_3.9/bin/python3.9+0x4efd7d)
```

Pull Request resolved: pytorch#104331
Approved by: https://github.com/soulitzer
DamianSzwichtenberg pushed a commit that referenced this pull request Jul 24, 2023
Hi! We've been fuzzing the torchvision project with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz).
We've found a SEGV at address 0x0 at `vector.h:163` in flatbuffers, a pytorch third-party project.

The error occurs because the `ivalues` field of a flatbuffer module can be null, so a corresponding null check must be inserted.
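The shape of that guard can be sketched as follows. This is a minimal stand-in, not the actual patch: the real field is a `flatbuffers::Vector<Offset<IValue>>*` returned by `module->ivalues()`, and the names below are illustrative.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Stand-in for the flatbuffers vector; in the real loader,
// module->ivalues() may return nullptr for a malformed file.
using IValueTable = std::vector<int>;

// Reject a malformed module instead of dereferencing a null table.
std::size_t checkedIValueCount(const IValueTable* ivalues) {
  if (ivalues == nullptr) {
    throw std::runtime_error("malformed flatbuffer module: missing ivalues");
  }
  return ivalues->size();
}
```

The point is that the null case is turned into a loader error rather than a SEGV inside `Vector::size()`.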

torchvision version: 9d0a93eee90bf7c401b74ebf9c8be80346254f15

pytorch version: 0f1621d

OS: Ubuntu 20.04

How to reproduce

1. Build docker from [here](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/torchvision) and run the container:

        sudo docker build -t oss-sydr-fuzz-torchvision .
        sudo docker run --privileged --rm -v `pwd`:/fuzz -it oss-sydr-fuzz-torchvision /bin/bash

2. Run the target on this input:
[malformed-module.txt](https://github.com/pytorch/pytorch/files/11879653/malformed-module.txt)

        /encode_png_fuzz malformed-module.txt

3. You will see the following output:

        AddressSanitizer:DEADLYSIGNAL
        =================================================================
        ==1154==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000d17cc61 bp 0x7ffcbe8637f0 sp 0x7ffcbe863660 T0)
        ==1154==The signal is caused by a READ memory access.
        ==1154==Hint: address points to the zero page.
            #0 0xd17cc61 in flatbuffers::Vector<flatbuffers::Offset<torch::jit::mobile::serialization::IValue> >::size() const /pytorch/third_party/flatbuffers/include/flatbuffers/vector.h:163:48
            #1 0xd17cc61 in torch::jit::(anonymous namespace)::FlatbufferLoader::parseModule(torch::jit::mobile::serialization::Module*) /pytorch/torch/csrc/jit/mobile/flatbuffer_loader.cpp:293:32
            #2 0xd17dd23 in torch::jit::parse_and_initialize_mobile_module_for_jit(void*, unsigned long, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, std::vector<c10::IValue, std::allocator<c10::IValue> >&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >*) /pytorch/torch/csrc/jit/mobile/flatbuffer_loader.cpp:809:29
            #3 0xdd661b4 in torch::jit::parse_and_initialize_jit_module(std::shared_ptr<char>, unsigned long, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, c10::optional<c10::Device>) /pytorch/torch/csrc/jit/serialization/import.cpp:345:28
            #4 0xdd6b24a in torch::jit::_load_jit_module_from_bytes(std::shared_ptr<char>, unsigned long, std::shared_ptr<torch::jit::CompilationUnit>, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:547:14
            #5 0xdd6c6df in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:443:10
            #6 0xdd6c1c7 in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:421:10
            #7 0xdd6dce4 in torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:503:10
            #8 0xf2d3f75 in torch::serialize::InputArchive::load_from(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>) /pytorch/torch/csrc/api/src/serialize/input-archive.cpp:97:13
            #9 0x60509c in void torch::load<at::Tensor, char*&>(at::Tensor&, char*&) /pytorch/torch/include/torch/csrc/api/include/torch/serialize.h:107:11
            #10 0x6036be in LLVMFuzzerTestOneInput /vision/encode_png.cc:38:5
            #11 0x66b041 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
            #12 0x6544cc in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
            #13 0x65a61b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
            #14 0x654222 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
            #15 0x7f0c87b9c082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
            #16 0x542cdd in _start (/encode_png_fuzz+0x542cdd)

        AddressSanitizer can not provide additional info.
        SUMMARY: AddressSanitizer: SEGV /pytorch/third_party/flatbuffers/include/flatbuffers/vector.h:163:48 in flatbuffers::Vector<flatbuffers::Offset<torch::jit::mobile::serialization::IValue> >::size() const
        ==1154==ABORTING

Pull Request resolved: pytorch#104243
Approved by: https://github.com/kit1980
DamianSzwichtenberg pushed a commit that referenced this pull request Jul 24, 2023
Hi! We've been fuzzing the PyTorch project with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch).
We've found a couple of heap-buffer-overflows in the `distributed/rpc` module.

PyTorch version: 0f1621d

OS: Ubuntu 20.04

### How to reproduce

1.  Build docker from this [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch) and run the container.
2.  Then run `message_deserialize-afl++` fuzzing target on provided crash-inputs ([crash-056826339f6da8dbb97c944178e94494369a9e22.zip](https://github.com/pytorch/pytorch/files/12096151/crash-056826339f6da8dbb97c944178e94494369a9e22.zip), [crash-4f85db9f19fe152c0018f6675c3b4c122227058f.zip](https://github.com/pytorch/pytorch/files/12096160/crash-4f85db9f19fe152c0018f6675c3b4c122227058f.zip)):
```
unzip crash-4f85db9f19fe152c0018f6675c3b4c122227058f.zip
/message_deserialize-afl++ crash-4f85db9f19fe152c0018f6675c3b4c122227058f
```

### Heap-buffer-overflow in torch/csrc/jit/serialization/pickle.cpp:144

[crash-056826339f6da8dbb97c944178e94494369a9e22.zip](https://github.com/pytorch/pytorch/files/12096151/crash-056826339f6da8dbb97c944178e94494369a9e22.zip)

```asan
    "==7614==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60b001b58355 at pc 0x0000005d1147 bp 0x7fffffffa610 sp 0x7fffffff9de0",
    "READ of size 256 at 0x60b001b58355 thread T0",
    "    #0 0x5d1146 in __asan_memcpy /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:22:3",
    "    #1 0xd1cd19f in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&))::$_3::operator()(char*, unsigned long) const /pytorch/torch/csrc/jit/serialization/pickle.cpp:144:9",
    "    #2 0xd1cd19f in unsigned long std::__invoke_impl<unsigned long, torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&))::$_3&, char*, unsigned long>(std::__invoke_other, torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&))::$_3&, char*&&, unsigned long&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14",
    "    #3 0xd27aa48 in std::function<unsigned long (char*, unsigned long)>::operator()(char*, unsigned long) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:622:14",
    "    #4 0xd27a61c in torch::jit::Unpickler::readSlowWithBuffer(char*, unsigned long) /pytorch/torch/csrc/jit/serialization/unpickler.cpp:1047:23",
    "    #5 0xd2698b8 in unsigned char torch::jit::Unpickler::read<unsigned char>() /pytorch/torch/csrc/jit/serialization/unpickler.h:111:7",
    "    #6 0xd268816 in torch::jit::Unpickler::readOpCode() /pytorch/torch/csrc/jit/serialization/unpickler.h:130:38",
    "    #7 0xd268816 in torch::jit::Unpickler::run() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:238:17",
    "    #8 0xd268522 in torch::jit::Unpickler::parse_ivalue() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204:3",
    "    #9 0xd1c8502 in torch::jit::unpickle(std::function<unsigned long (char*, unsigned long)>, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch/torch/csrc/jit/serialization/pickle.cpp:126:20",
    "    #10 0xd1c8dbd in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch/torch/csrc/jit/serialization/pickle.cpp:136:10",
    "    #11 0xe56b16d in torch::distributed::rpc::readWrappedPayload(std::vector<char, std::allocator<char> >&, torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/utils.cpp:515:18",
    "    #12 0xe3d8f29 in torch::distributed::autograd::RpcWithProfilingReq::fromMessage(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/autograd/rpc_messages/rpc_with_profiling_req.cpp:112:24",
    "    #13 0xe55f692 in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/utils.cpp:138:14",
    "    #14 0x6120a8 in LLVMFuzzerTestOneInput /message_deserialize.cc:192:27",
    "    #15 0x535de1 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15",
    "    #16 0x51fcec in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6",
    "    #17 0x525a3b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9",
    "    #18 0x54eff2 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10",
    "    #19 0x7ffff7a37082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)",
    "    #20 0x51a60d in _start (/message_deserialize_fuzz+0x51a60d)",
    "",
    "0x60b001b58355 is located 0 bytes to the right of 101-byte region [0x60b001b582f0,0x60b001b58355)",
    "allocated by thread T0 here:",
    "    #0 0x60c7bd in operator new(unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_new_delete.cpp:95:3",
    "    #1 0x62c7fd in std::_Vector_base<char, std::allocator<char> >::_M_allocate(unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:346:20",
    "    #2 0x62c7fd in void std::vector<char, std::allocator<char> >::_M_range_initialize<unsigned char const*>(unsigned char const*, unsigned char const*, std::forward_iterator_tag) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1582:14",
    "    #3 0x612913 in std::vector<char, std::allocator<char> >::vector<unsigned char const*, void>(unsigned char const*, unsigned char const*, std::allocator<char> const&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:657:4",
    "    #4 0x611c4a in LLVMFuzzerTestOneInput /message_deserialize.cc:181:21",
    "    #5 0x535de1 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15",
    "    #6 0x51fcec in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6",
    "    #7 0x525a3b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9",
    "    #8 0x54eff2 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10",
    "    #9 0x7ffff7a37082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)",
    "",
    "SUMMARY: AddressSanitizer: heap-buffer-overflow /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:22:3 in __asan_memcpy",
    "Shadow bytes around the buggy address:",
    "  0x0c1680363010: 00 00 00 fa fa fa fa fa fa fa fa fa 00 00 00 00",
    "  0x0c1680363020: 00 00 00 00 00 00 00 00 00 00 fa fa fa fa fa fa",
    "  0x0c1680363030: fa fa 00 00 00 00 00 00 00 00 00 00 00 00 00 fa",
    "  0x0c1680363040: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00",
    "  0x0c1680363050: 00 00 00 00 00 fa fa fa fa fa fa fa fa fa 00 00",
    "=>0x0c1680363060: 00 00 00 00 00 00 00 00 00 00[05]fa fa fa fa fa",
    "  0x0c1680363070: fa fa fa fa 00 00 00 00 00 00 00 00 00 00 00 00",
    "  0x0c1680363080: 05 fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c1680363090: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c16803630a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c16803630b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "Shadow byte legend (one shadow byte represents 8 application bytes):",
    "  Addressable:           00",
    "  Partially addressable: 01 02 03 04 05 06 07",
    "  Heap left redzone:       fa",
    "  Freed heap region:       fd",
    "  Stack left redzone:      f1",
    "  Stack mid redzone:       f2",
    "  Stack right redzone:     f3",
    "  Stack after return:      f5",
    "  Stack use after scope:   f8",
    "  Global redzone:          f9",
    "  Global init order:       f6",
    "  Poisoned by user:        f7",
    "  Container overflow:      fc",
    "  Array cookie:            ac",
    "  Intra object redzone:    bb",
    "  ASan internal:           fe",
    "  Left alloca redzone:     ca",
    "  Right alloca redzone:    cb",
    "==7614==ABORTING"
```
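The trace above shows the unpickler's read callback copying a fixed-size chunk past the end of the pickle buffer. The bounds check such a fix needs can be sketched like this; the names are hypothetical (the real callback is a lambda built inside `torch::jit::unpickle`):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>

// Hypothetical bounds-checked reader: never copy more bytes than remain
// in the input, and report how many were actually copied so the caller
// can detect a truncated (malformed) pickle instead of overflowing.
struct BoundedReader {
  const char* data;
  std::size_t size;
  std::size_t pos = 0;

  std::size_t read(char* out, std::size_t n) {
    const std::size_t take = std::min(n, size - pos);
    std::memcpy(out, data + pos, take);
    pos += take;
    return take;  // a short read signals truncated input
  }
};
```

A caller that requests more bytes than remain gets a short read and can raise a deserialization error, rather than the unchecked `memcpy` the sanitizer caught.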

### Heap-buffer-overflow in aten/src/ATen/core/ivalue.h:432

[crash-4f85db9f19fe152c0018f6675c3b4c122227058f.zip](https://github.com/pytorch/pytorch/files/11553011/crash-4f85db9f19fe152c0018f6675c3b4c122227058f.zip)

```asan
    "==60983==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6150001e4108 at pc 0x000000601877 bp 0x7fffffff9fd0 sp 0x7fffffff9fc8",
    "READ of size 4 at 0x6150001e4108 thread T0",
    "    #0 0x601876 in c10::IValue::isTensor() const /pytorch/aten/src/ATen/core/ivalue.h:432:27",
    "    #1 0x601876 in c10::IValue::destroy() /pytorch/aten/src/ATen/core/ivalue.h:1148:9",
    "    #2 0x699f72 in c10::IValue::~IValue() /pytorch/aten/src/ATen/core/ivalue.h:236:5",
    "    #3 0x699f72 in void std::_Destroy<c10::IValue>(c10::IValue*) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_construct.h:140:19",
    "    #4 0x699f72 in void std::_Destroy_aux<false>::__destroy<c10::IValue*>(c10::IValue*, c10::IValue*) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_construct.h:152:6",
    "    #5 0x699f72 in void std::_Destroy<c10::IValue*>(c10::IValue*, c10::IValue*) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_construct.h:184:7",
    "    #6 0x699f72 in void std::_Destroy<c10::IValue*, c10::IValue>(c10::IValue*, c10::IValue*, std::allocator<c10::IValue>&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/alloc_traits.h:738:7",
    "    #7 0x699f72 in std::vector<c10::IValue, std::allocator<c10::IValue> >::_M_erase_at_end(c10::IValue*) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1796:6",
    "    #8 0x699e4a in std::vector<c10::IValue, std::allocator<c10::IValue> >::_M_erase(__gnu_cxx::__normal_iterator<c10::IValue*, std::vector<c10::IValue, std::allocator<c10::IValue> > >, __gnu_cxx::__normal_iterator<c10::IValue*, std::vector<c10::IValue, std::allocator<c10::IValue> > >) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/vector.tcc:191:4",
    "    #9 0xea5b11e in torch::jit::Unpickler::readInstruction() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:454:14",
    "    #10 0xea57d97 in torch::jit::Unpickler::run() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251:27",
    "    #11 0xea579f1 in torch::jit::Unpickler::parse_ivalue() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204:3",
    "    #12 0xe9a435e in torch::jit::unpickle(std::function<unsigned long (char*, unsigned long)>, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch/torch/csrc/jit/serialization/pickle.cpp:126:20",
    "    #13 0xe9a471c in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch/torch/csrc/jit/serialization/pickle.cpp:136:10",
    "    #14 0xfcd034b in torch::distributed::autograd::PropagateGradientsReq::fromMessage(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/autograd/rpc_messages/propagate_gradients_req.cpp:54:18",
    "    #15 0xfe720ff in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/utils.cpp:132:14",
    "    #16 0x5c5c93 in LLVMFuzzerTestOneInput /message_deserialize.cc:192:27",
    "    #17 0x5c2bfd in ExecuteFilesOnyByOne /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255:7",
    "    #18 0x5c2a08 in LLVMFuzzerRunDriver /AFLplusplus/utils/aflpp_driver/aflpp_driver.c",
    "    #19 0x5c25c8 in main /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300:10",
    "    #20 0x7ffff7a37082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)",
    "    #21 0x50237d in _start (/message_deserialize_afl+0x50237d)",
    "",
    "0x6150001e4108 is located 8 bytes to the right of 512-byte region [0x6150001e3f00,0x6150001e4100)",
    "allocated by thread T0 here:",
    "    #0 0x5bfbfa in operator new(unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_new_delete.cpp:95:3",
    "",
    "SUMMARY: AddressSanitizer: heap-buffer-overflow /pytorch/aten/src/ATen/core/ivalue.h:432:27 in c10::IValue::isTensor() const",
    "Shadow bytes around the buggy address:",
    "  0x0c2a800347d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c2a800347e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00",
    "  0x0c2a800347f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00",
    "  0x0c2a80034800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00",
    "  0x0c2a80034810: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00",
    "=>0x0c2a80034820: fa[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c2a80034830: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c2a80034840: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c2a80034850: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c2a80034860: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "  0x0c2a80034870: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa",
    "Shadow byte legend (one shadow byte represents 8 application bytes):",
    "  Addressable:           00",
    "  Partially addressable: 01 02 03 04 05 06 07",
    "  Heap left redzone:       fa",
    "  Freed heap region:       fd",
    "  Stack left redzone:      f1",
    "  Stack mid redzone:       f2",
    "  Stack right redzone:     f3",
    "  Stack after return:      f5",
    "  Stack use after scope:   f8",
    "  Global redzone:          f9",
    "  Global init order:       f6",
    "  Poisoned by user:        f7",
    "  Container overflow:      fc",
    "  Array cookie:            ac",
    "  Intra object redzone:    bb",
    "  ASan internal:           fe",
    "  Left alloca redzone:     ca",
    "  Right alloca redzone:    cb",
    "==60983==ABORTING"
```
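The second trace points at `Unpickler::readInstruction()` erasing a stack range whose start comes from untrusted input. The guard such a fix needs can be sketched with a hypothetical helper (`popToMark` is illustrative, not the actual patch):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Validate the recorded start position before erasing, so a corrupt
// MARK offset in malformed input raises an error instead of erasing
// past the end of the stack.
void popToMark(std::vector<int>& stack, std::size_t start) {
  if (start > stack.size()) {
    throw std::runtime_error("unpickler: MARK offset out of range");
  }
  stack.erase(stack.begin() + static_cast<std::ptrdiff_t>(start), stack.end());
}
```

With the check in place, the fuzzer input that previously triggered the out-of-bounds erase is rejected as a malformed message.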
Pull Request resolved: pytorch#105537
Approved by: https://github.com/albanD