
Add a list slicing op #803

Merged
merged 9 commits into from
May 15, 2021
Conversation

benfred
Member

@benfred benfred commented May 11, 2021

This adds an operator to slice rows of list columns. This lets us truncate list
column rows to keep only the first N or last N items, for instance.

Closes #734
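To illustrate the intended semantics, here is a plain-Python sketch (not the NVTabular implementation, which operates on list columns on the GPU) of slicing every row of a list column with the usual `[start:end]` rules; the helper name `slice_list_rows` is made up for this example:

```python
def slice_list_rows(rows, start, end=None):
    """Apply Python slice semantics [start:end] to each row of a list column."""
    return [row[slice(start, end)] for row in rows]

rows = [[1, 2, 3, 4, 5], [6, 7], [8, 9, 10]]

# Keep the first 2 items of each row.
print(slice_list_rows(rows, 0, 2))   # [[1, 2], [6, 7], [8, 9]]

# Keep the last 2 items of each row (rows shorter than 2 stay intact).
print(slice_list_rows(rows, -2))     # [[4, 5], [6, 7], [9, 10]]
```

A negative `start` with no `end` takes the trailing N items, matching how Python slicing handles rows shorter than N.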

@benfred benfred requested a review from rjzamora May 11, 2021 18:13
@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #803 of commit fc6872997d538d16649b2d77a748a9b89655d0ec, no merge conflicts.
Running as SYSTEM
Setting status of fc6872997d538d16649b2d77a748a9b89655d0ec to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2394/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/803/*:refs/remotes/origin/pr/803/* # timeout=10
 > git rev-parse fc6872997d538d16649b2d77a748a9b89655d0ec^{commit} # timeout=10
Checking out Revision fc6872997d538d16649b2d77a748a9b89655d0ec (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fc6872997d538d16649b2d77a748a9b89655d0ec # timeout=10
Commit message: "Add a list slicing op"
 > git rev-list --no-walk 362543c3541a826b67eb8fc469f858a47ac76f90 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins4709713148646693552.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
WARNING: You are using pip version 21.0.1; however, version 21.1.1 is available.
You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
Running black --check
All done! ✨ 🍰 ✨
106 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-05-11 18:14:09.186050: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-11 18:14:10.411709: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-11 18:14:10.411770: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-05-11 18:14:10.412825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-11 18:14:10.413812: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-11 18:14:10.413838: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-11 18:14:10.413890: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-05-11 18:14:10.413928: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-05-11 18:14:10.413966: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-05-11 18:14:10.414003: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-05-11 18:14:10.414107: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-05-11 18:14:10.414142: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-05-11 18:14:10.414161: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-05-11 18:14:10.418337: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88, got 80
:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 80 from C header, got 88 from PyObject
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))

Warning, treated as error:
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs/source/resources/api/ops/index.rst:4:toctree contains reference to nonexisting document 'resources/api/ops/listslice'
make: *** [Makefile:20: html] Error 2
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins3055368620135297747.sh
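The Sphinx error above ("toctree contains reference to nonexisting document 'resources/api/ops/listslice'") means the ops `index.rst` lists a page that wasn't committed. Given that the follow-up commit is titled "Add missing file", the fix was presumably adding the missing stub; a minimal sketch (the exact file contents and directives are an assumption) might look like:

```rst
.. hypothetical docs/source/resources/api/ops/listslice.rst stub

ListSlice
---------

.. autoclass:: nvtabular.ops.ListSlice
   :members:
```

With the stub present, the toctree entry in `index.rst` resolves and the docs build can proceed.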

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #803 of commit 84335bc73e254e3ad1bed1d672ec4fb37c331392, no merge conflicts.
Running as SYSTEM
Setting status of 84335bc73e254e3ad1bed1d672ec4fb37c331392 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2395/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/803/*:refs/remotes/origin/pr/803/* # timeout=10
 > git rev-parse 84335bc73e254e3ad1bed1d672ec4fb37c331392^{commit} # timeout=10
Checking out Revision 84335bc73e254e3ad1bed1d672ec4fb37c331392 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 84335bc73e254e3ad1bed1d672ec4fb37c331392 # timeout=10
Commit message: "Add missing file"
 > git rev-list --no-walk fc6872997d538d16649b2d77a748a9b89655d0ec # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins2362642539300681440.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
WARNING: You are using pip version 21.0.1; however, version 21.1.1 is available.
You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
Running black --check
All done! ✨ 🍰 ✨
106 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-05-11 18:18:03.921895: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-11 18:18:05.159834: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-11 18:18:05.159894: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-05-11 18:18:05.161021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-11 18:18:05.162070: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-11 18:18:05.162097: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-11 18:18:05.162149: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-05-11 18:18:05.162185: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-05-11 18:18:05.162221: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-05-11 18:18:05.162255: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-05-11 18:18:05.162360: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-05-11 18:18:05.162396: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-05-11 18:18:05.162413: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-05-11 18:18:05.166497: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88, got 80
:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 80 from C header, got 88 from PyObject
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.11.1, xdist-2.2.1, forked-1.3.0
collected 797 items

tests/unit/test_column_group.py . [ 0%]
tests/unit/test_column_similarity.py ...... [ 0%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 15%]
tests/unit/test_dataloader_backend.py . [ 15%]
tests/unit/test_io.py .................................................. [ 21%]
..................................................................ssssss [ 30%]
ss.................................................. [ 37%]
tests/unit/test_notebooks.py ...... [ 37%]
tests/unit/test_ops.py ................................................. [ 43%]
........................................................................ [ 52%]
........................................................................ [ 61%]
........................................... [ 67%]
tests/unit/test_s3.py .. [ 67%]
tests/unit/test_tf_dataloader.py ...............................s [ 71%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 81%]
tests/unit/test_tools.py ...................... [ 84%]
tests/unit/test_torch_dataloader.py ...............................s. [ 88%]
tests/unit/test_triton_inference.py .. [ 88%]
tests/unit/test_workflow.py ............................................ [ 94%]
............................................... [100%]

=============================== warnings summary ===============================
../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:17
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:17: DeprecationWarning: Call to deprecated create function FileDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
DESCRIPTOR = _descriptor.FileDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:35
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:35: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:42
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:42: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:49
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:49: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:28
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:28: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_INTEGERSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:80
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:80: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:87
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:87: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:94
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:94: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:73
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:73: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_DOUBLESTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:125
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:125: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:132
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:132: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:139
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:139: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:118
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:118: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_STRINGSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:170
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:170: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:163
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:163: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_BUCKETSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:201
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:201: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:208
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:208: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:215
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:215: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:194
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:194: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_DECIMALSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:246
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:246: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:253
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:253: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:239
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:239: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_DATESTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:284
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:284: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:291
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:291: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:298
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:298: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:305
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:305: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:277
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:277: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_TIMESTAMPSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:336
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:336: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:329
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:329: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_BINARYSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:367
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:367: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:374
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:374: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:381
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:381: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:388
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:388: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:395
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:395: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:402
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:402: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:409
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:409: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:416
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:416: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:423
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:423: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:430
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:430: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:360
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:360: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_COLUMNSTATISTICS = _descriptor.Descriptor(

tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6560: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

----------- coverage: platform linux, python 3.8.5-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 64 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 149 18 80 5 86% 54, 87, 128, 151-164, 191, 278
nvtabular/dispatch.py 81 11 38 5 83% 35, 45->47, 69, 94, 111, 118, 135-138, 167-170
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 13 83 7 90% 60, 68->49, 104, 112, 192, 244-252, 348->356, 370->373, 376-377, 380
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 27 1 10 1 95% 47
nvtabular/framework_utils/torch/models.py 41 0 22 0 100%
nvtabular/framework_utils/torch/utils.py 32 7 10 3 76% 53, 57-59, 68-70
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 272 154 116 15 43% 95-144, 187-247, 269-270, 275-278, 301-313, 317-333, 337-340, 344, 363-379, 383-387, 462-484, 488-555, 564->567, 567->563, 596-606, 610-611, 615, 625, 631, 633, 635, 637, 639, 641, 643, 646
nvtabular/inference/triton/model.py 56 56 22 0 0% 27-142
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/inference/triton/model_hugectr.py 56 56 18 0 0% 27-135
nvtabular/inference/triton/model_pytorch.py 37 37 12 0 0% 27-99
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 54 4 20 5 88% 95, 99->103, 104, 106, 120
nvtabular/io/dask.py 178 7 68 11 93% 109, 112, 148, 223, 380->378, 408->411, 419, 423->425, 425->421, 430, 432
nvtabular/io/dataframe_engine.py 58 3 28 6 90% 47, 66, 85->89, 89->94, 91->94, 94->113, 122
nvtabular/io/dataset.py 263 31 124 21 86% 254, 256, 269, 278, 296-310, 413->482, 418-421, 426->436, 431-432, 443->441, 457->461, 472, 518, 639->641, 641->650, 651, 658-659, 665, 671, 766-767, 879-884, 890, 924
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 486 19 154 12 95% 85-93, 117->119, 206-208, 331-336, 374-379, 495->502, 563->568, 569-570, 690, 694, 698, 736, 753, 757, 764->766, 884->889, 894->904, 931
nvtabular/io/shuffle.py 30 4 12 3 83% 41, 43-44, 48
nvtabular/io/writer.py 168 11 64 5 92% 46, 74, 120, 123, 200, 209, 212, 255, 276-278
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 296 12 116 8 95% 114, 131, 138-139, 224->226, 236-240, 286-287, 326->330, 401, 405-406, 507
nvtabular/loader/tensorflow.py 121 11 48 7 88% 56, 64-67, 77, 87, 283, 298-300, 310->314, 343
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 46 10 8 0 70% 25-27, 30-36
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 24 4 16 2 75% 45, 48-51
nvtabular/ops/categorify.py 513 68 296 45 84% 237, 254, 258, 266, 274, 276, 298, 317-318, 352-353, 415-417, 475-477, 482->484, 557, 595, 624->627, 628-630, 637-638, 651-653, 654->622, 670, 680, 682, 688, 704-705, 710, 713->716, 726, 750, 755, 771-774, 800, 804, 806, 818-821, 936, 938, 980->1001, 986->1001, 1002-1007, 1044, 1060->1065, 1064, 1074->1071, 1079->1071, 1087, 1095-1105
nvtabular/ops/clip.py 19 2 6 3 80% 45, 53->55, 56
nvtabular/ops/column_similarity.py 88 22 32 5 69% 84, 156-157, 166-168, 176-192, 207->217, 209->212, 213, 223
nvtabular/ops/data_stats.py 57 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 26 0 8 1 97% 67->69
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 58 2 20 1 96% 93, 119
nvtabular/ops/filter.py 21 1 6 1 93% 44
nvtabular/ops/groupby.py 93 4 56 6 92% 72, 81, 83, 93->95, 105->110, 181
nvtabular/ops/hash_bucket.py 32 2 18 2 88% 73, 102
nvtabular/ops/hashed_cross.py 29 3 13 4 83% 51, 64, 78->exit, 79
nvtabular/ops/join_external.py 69 4 28 4 92% 96, 98, 116, 168
nvtabular/ops/join_groupby.py 82 5 28 2 94% 106, 109->116, 185-186, 189-190
nvtabular/ops/lambdaop.py 27 3 10 3 84% 61, 65, 78
nvtabular/ops/list_slice.py 65 22 26 1 59% 52-53, 106-120, 128-139
nvtabular/ops/logop.py 9 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 65 6 14 2 87% 61->60, 67-68, 101-102, 124-125
nvtabular/ops/operator.py 15 1 2 1 88% 24
nvtabular/ops/rename.py 18 3 10 3 71% 41, 54, 58
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 148 11 64 5 91% 143, 163->167, 170->179, 222-223, 226-227, 236-242, 333->336
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 2 99% 321->320, 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 79 44 36 6 37% 28-29, 33-34, 47, 56-59, 61-63, 66, 69, 75, 81, 87-123
nvtabular/worker.py 68 1 30 2 97% 73, 83->98
nvtabular/workflow.py 142 9 65 4 93% 39, 125, 137-139, 242, 270-271, 348

TOTAL 5622 1027 2175 234 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.06%
=========== 787 passed, 10 skipped, 42 warnings in 588.55s (0:09:48) ===========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins7464494742587395221.sh

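The list-slice semantics this PR adds (keeping the first N or last N items of each row in a list column) can be illustrated with a short plain-Python sketch. Note this is an illustration of the intended row-wise behavior described above, not the actual cuDF/NVTabular implementation; `slice_list_rows` is a hypothetical helper name, not part of the NVTabular API.

```python
# Row-wise slicing of a list column: each row is a Python list, and the
# same slice is applied to every row (like ListSlice(0, 2) for "first two"
# or a negative start for "last N").

def slice_list_rows(rows, start, end=None):
    """Apply Python slice semantics to every row of a list column."""
    return [row[slice(start, end)] for row in rows]

rows = [[1, 2, 3, 4], [5, 6], [7, 8, 9]]

# First two items of each row.
print(slice_list_rows(rows, 0, 2))   # [[1, 2], [5, 6], [7, 8]]

# Last two items of each row (rows shorter than two are kept whole).
print(slice_list_rows(rows, -2))     # [[3, 4], [5, 6], [8, 9]]
```

Because Python slices clamp rather than raise, rows shorter than the requested slice simply pass through unchanged, which matches the "truncate to at most N items" use case from the PR description.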
@benfred benfred requested a review from rnyak May 11, 2021 21:33
@nvidia-merlin-bot
Copy link
Contributor

Click to view CI Results
GitHub pull request #803 of commit 1268c376cce48588b40e31d737376b3a3c7c081e, no merge conflicts.
Running as SYSTEM
Setting status of 1268c376cce48588b40e31d737376b3a3c7c081e to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2399/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/803/*:refs/remotes/origin/pr/803/* # timeout=10
 > git rev-parse 1268c376cce48588b40e31d737376b3a3c7c081e^{commit} # timeout=10
Checking out Revision 1268c376cce48588b40e31d737376b3a3c7c081e (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1268c376cce48588b40e31d737376b3a3c7c081e # timeout=10
Commit message: "Merge branch 'main' into list_slice"
 > git rev-list --no-walk 9b70324e78782d7de9dfc411167e7354639949ad # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins8056416597341607287.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
WARNING: You are using pip version 21.0.1; however, version 21.1.1 is available.
You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
Running black --check
All done! ✨ 🍰 ✨
106 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-05-11 23:07:10.356780: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-11 23:07:11.592241: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-11 23:07:11.592300: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-05-11 23:07:11.593432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-11 23:07:11.594503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-11 23:07:11.594529: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-11 23:07:11.594581: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-05-11 23:07:11.594617: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-05-11 23:07:11.594653: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-05-11 23:07:11.594690: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-05-11 23:07:11.594795: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-05-11 23:07:11.594830: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-05-11 23:07:11.594848: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-05-11 23:07:11.598897: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
<frozen importlib._bootstrap>:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88, got 80
<frozen importlib._bootstrap>:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 80 from C header, got 88 from PyObject
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.11.1, xdist-2.2.1, forked-1.3.0
collected 797 items

tests/unit/test_column_group.py . [ 0%]
tests/unit/test_column_similarity.py ...... [ 0%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 15%]
tests/unit/test_dataloader_backend.py . [ 15%]
tests/unit/test_io.py .................................................. [ 21%]
..................................................................ssssss [ 30%]
ss.................................................. [ 37%]
tests/unit/test_notebooks.py ...... [ 37%]
tests/unit/test_ops.py ................................................. [ 43%]
........................................................................ [ 52%]
........................................................................ [ 61%]
........................................... [ 67%]
tests/unit/test_s3.py .. [ 67%]
tests/unit/test_tf_dataloader.py ...............................s [ 71%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 81%]
tests/unit/test_tools.py ...................... [ 84%]
tests/unit/test_torch_dataloader.py ...............................s. [ 88%]
tests/unit/test_triton_inference.py .. [ 88%]
tests/unit/test_workflow.py ............................................ [ 94%]
............................................... [100%]

=============================== warnings summary ===============================
../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:17
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:17: DeprecationWarning: Call to deprecated create function FileDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
DESCRIPTOR = _descriptor.FileDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:35
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:35: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:42
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:42: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:49
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:49: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:28
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:28: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_INTEGERSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:80
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:80: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:87
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:87: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:94
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:94: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:73
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:73: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_DOUBLESTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:125
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:125: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:132
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:132: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:139
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:139: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:118
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:118: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_STRINGSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:170
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:170: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:163
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:163: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_BUCKETSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:201
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:201: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:208
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:208: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:215
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:215: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:194
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:194: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_DECIMALSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:246
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:246: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:253
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:253: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:239
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:239: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_DATESTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:284
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:284: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:291
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:291: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:298
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:298: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:305
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:305: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:277
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:277: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_TIMESTAMPSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:336
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:336: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:329
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:329: DeprecationWarning: Call to deprecated create function Descriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_BINARYSTATISTICS = _descriptor.Descriptor(

../../../../../usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:367
/usr/local/lib/python3.8/dist-packages/cudf/utils/metadata/orc_column_statistics_pb2.py:367: DeprecationWarning: Call to deprecated create function FieldDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
_descriptor.FieldDescriptor(

tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6560: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

----------- coverage: platform linux, python 3.8.5-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 64 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 149 18 80 5 86% 54, 87, 128, 151-164, 191, 278
nvtabular/dispatch.py 81 11 38 5 83% 35, 45->47, 69, 94, 111, 118, 135-138, 167-170
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 13 83 7 90% 60, 68->49, 104, 112, 192, 244-252, 348->356, 370->373, 376-377, 380
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 27 1 10 1 95% 47
nvtabular/framework_utils/torch/models.py 41 0 22 0 100%
nvtabular/framework_utils/torch/utils.py 32 7 10 3 76% 53, 57-59, 68-70
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 272 154 116 15 43% 95-144, 187-247, 269-270, 275-278, 301-313, 317-333, 337-340, 344, 363-379, 383-387, 462-484, 488-555, 564->567, 567->563, 596-606, 610-611, 615, 625, 631, 633, 635, 637, 639, 641, 643, 646
nvtabular/inference/triton/model.py 56 56 22 0 0% 27-142
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/inference/triton/model_hugectr.py 56 56 18 0 0% 27-135
nvtabular/inference/triton/model_pytorch.py 37 37 12 0 0% 27-99
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 54 4 20 5 88% 95, 99->103, 104, 106, 120
nvtabular/io/dask.py 178 7 68 11 93% 109, 112, 148, 223, 380->378, 408->411, 419, 423->425, 425->421, 430, 432
nvtabular/io/dataframe_engine.py 58 3 28 6 90% 47, 66, 85->89, 89->94, 91->94, 94->113, 122
nvtabular/io/dataset.py 263 31 124 21 86% 254, 256, 269, 278, 296-310, 413->482, 418-421, 426->436, 431-432, 443->441, 457->461, 472, 518, 639->641, 641->650, 651, 658-659, 665, 671, 766-767, 879-884, 890, 924
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 486 19 154 12 95% 85-93, 117->119, 206-208, 331-336, 374-379, 495->502, 563->568, 569-570, 690, 694, 698, 736, 753, 757, 764->766, 884->889, 894->904, 931
nvtabular/io/shuffle.py 30 4 12 3 83% 41, 43-44, 48
nvtabular/io/writer.py 168 11 64 5 92% 46, 74, 120, 123, 200, 209, 212, 255, 276-278
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 296 12 116 8 95% 114, 131, 138-139, 224->226, 236-240, 286-287, 326->330, 401, 405-406, 507
nvtabular/loader/tensorflow.py 121 11 48 7 88% 56, 64-67, 77, 87, 283, 298-300, 310->314, 343
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 46 10 8 0 70% 25-27, 30-36
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 24 4 16 2 75% 45, 48-51
nvtabular/ops/categorify.py 513 68 296 45 84% 237, 254, 258, 266, 274, 276, 298, 317-318, 352-353, 415-417, 475-477, 482->484, 557, 595, 624->627, 628-630, 637-638, 651-653, 654->622, 670, 680, 682, 688, 704-705, 710, 713->716, 726, 750, 755, 771-774, 800, 804, 806, 818-821, 936, 938, 980->1001, 986->1001, 1002-1007, 1044, 1060->1065, 1064, 1074->1071, 1079->1071, 1087, 1095-1105
nvtabular/ops/clip.py 19 2 6 3 80% 45, 53->55, 56
nvtabular/ops/column_similarity.py 88 22 32 5 69% 84, 156-157, 166-168, 176-192, 207->217, 209->212, 213, 223
nvtabular/ops/data_stats.py 57 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 26 0 8 1 97% 67->69
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 58 2 20 1 96% 93, 119
nvtabular/ops/filter.py 21 1 6 1 93% 44
nvtabular/ops/groupby.py 93 4 56 6 92% 72, 81, 83, 93->95, 105->110, 181
nvtabular/ops/hash_bucket.py 32 2 18 2 88% 73, 102
nvtabular/ops/hashed_cross.py 29 3 13 4 83% 51, 64, 78->exit, 79
nvtabular/ops/join_external.py 69 4 28 4 92% 96, 98, 116, 168
nvtabular/ops/join_groupby.py 82 5 28 2 94% 106, 109->116, 185-186, 189-190
nvtabular/ops/lambdaop.py 27 3 10 3 84% 61, 65, 78
nvtabular/ops/list_slice.py 65 22 26 1 59% 52-53, 106-120, 128-139
nvtabular/ops/logop.py 9 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 65 6 14 2 87% 61->60, 67-68, 101-102, 124-125
nvtabular/ops/operator.py 15 1 2 1 88% 24
nvtabular/ops/rename.py 18 3 10 3 71% 41, 54, 58
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 148 11 64 5 91% 143, 163->167, 170->179, 222-223, 226-227, 236-242, 333->336
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 2 99% 321->320, 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 79 44 36 6 37% 28-29, 33-34, 47, 56-59, 61-63, 66, 69, 75, 81, 87-123
nvtabular/worker.py 68 1 30 2 97% 73, 83->98
nvtabular/workflow.py 142 9 65 4 93% 39, 125, 137-139, 242, 270-271, 348

TOTAL 5622 1027 2175 234 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.06%
=========== 787 passed, 10 skipped, 42 warnings in 588.95s (0:09:48) ===========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins1078450572964627541.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #803 of commit 6ddcc94ae6f58d314525ab83b00f729dbb305f91, no merge conflicts.
Running as SYSTEM
Setting status of 6ddcc94ae6f58d314525ab83b00f729dbb305f91 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2415/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/803/*:refs/remotes/origin/pr/803/* # timeout=10
 > git rev-parse 6ddcc94ae6f58d314525ab83b00f729dbb305f91^{commit} # timeout=10
Checking out Revision 6ddcc94ae6f58d314525ab83b00f729dbb305f91 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6ddcc94ae6f58d314525ab83b00f729dbb305f91 # timeout=10
Commit message: "Merge branch 'main' into list_slice"
 > git rev-list --no-walk 5e1c92bdcf31a6c3159ca7044d40ee2c758ead90 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins4206895360457724253.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
WARNING: You are using pip version 21.0.1; however, version 21.1.1 is available.
You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
Running black --check
All done! ✨ 🍰 ✨
106 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-05-13 03:04:53.171276: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-13 03:04:54.528656: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-13 03:04:54.528721: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-05-13 03:04:54.529873: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-13 03:04:54.530947: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-13 03:04:54.530975: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-13 03:04:54.531029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-05-13 03:04:54.531066: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-05-13 03:04:54.531114: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-05-13 03:04:54.531150: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-05-13 03:04:54.531260: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-05-13 03:04:54.531298: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-05-13 03:04:54.531317: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-05-13 03:04:54.535554: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88, got 80
:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 80 from C header, got 88 from PyObject
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.11.1, xdist-2.2.1, forked-1.3.0
collected 797 items

tests/unit/test_column_group.py . [ 0%]
tests/unit/test_column_similarity.py ...... [ 0%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 15%]
tests/unit/test_dataloader_backend.py . [ 15%]
tests/unit/test_io.py .................................................. [ 21%]
..................................................................ssssss [ 30%]
ss.................................................. [ 37%]
tests/unit/test_notebooks.py ...... [ 37%]
tests/unit/test_ops.py ................................................. [ 43%]
........................................................................ [ 52%]
........................................................................ [ 61%]
........................................... [ 67%]
tests/unit/test_s3.py .. [ 67%]
tests/unit/test_tf_dataloader.py ...............................s [ 71%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 81%]
tests/unit/test_tools.py ...................... [ 84%]
tests/unit/test_torch_dataloader.py ................................. [ 88%]
tests/unit/test_triton_inference.py .. [ 88%]
tests/unit/test_workflow.py ............................................ [ 94%]
............................................... [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6560: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

----------- coverage: platform linux, python 3.8.5-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 149 18 80 5 86% 54, 87, 128, 151-164, 191, 278
nvtabular/dispatch.py 81 11 38 5 83% 35, 45->47, 69, 94, 111, 118, 135-138, 167-170
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 13 83 7 90% 60, 68->49, 104, 112, 192, 244-252, 348->356, 370->373, 376-377, 380
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 27 1 10 1 95% 47
nvtabular/framework_utils/torch/models.py 41 0 22 0 100%
nvtabular/framework_utils/torch/utils.py 32 4 10 2 86% 53, 57-59
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 272 154 116 15 43% 95-144, 187-247, 269-270, 275-278, 301-313, 317-333, 337-340, 344, 363-379, 383-387, 462-484, 488-555, 564->567, 567->563, 596-606, 610-611, 615, 625, 631, 633, 635, 637, 639, 641, 643, 646
nvtabular/inference/triton/model.py 56 56 22 0 0% 27-142
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/inference/triton/model_hugectr.py 56 56 18 0 0% 27-135
nvtabular/inference/triton/model_pytorch.py 37 37 12 0 0% 27-99
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 54 4 20 5 88% 95, 99->103, 104, 106, 120
nvtabular/io/dask.py 178 7 68 11 93% 109, 112, 148, 223, 380->378, 408->411, 419, 423->425, 425->421, 430, 432
nvtabular/io/dataframe_engine.py 58 3 28 6 90% 47, 66, 85->89, 89->94, 91->94, 94->113, 122
nvtabular/io/dataset.py 263 31 124 21 86% 254, 256, 269, 278, 296-310, 413->482, 418-421, 426->436, 431-432, 443->441, 457->461, 472, 518, 639->641, 641->650, 651, 658-659, 665, 671, 766-767, 879-884, 890, 924
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 486 19 154 12 95% 85-93, 117->119, 206-208, 331-336, 374-379, 495->502, 563->568, 569-570, 690, 694, 698, 736, 753, 757, 764->766, 884->889, 894->904, 931
nvtabular/io/shuffle.py 30 4 12 3 83% 41, 43-44, 48
nvtabular/io/writer.py 168 11 64 5 92% 46, 74, 120, 123, 200, 209, 212, 255, 276-278
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 296 11 116 7 96% 131, 138-139, 224->226, 236-240, 286-287, 326->330, 401, 405-406, 507
nvtabular/loader/tensorflow.py 121 11 48 7 88% 56, 64-67, 77, 87, 283, 298-300, 310->314, 343
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 46 10 8 0 70% 25-27, 30-36
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 24 4 16 2 75% 45, 48-51
nvtabular/ops/categorify.py 516 65 300 44 85% 237, 254, 258, 266, 274, 276, 298, 317-318, 352-353, 415-417, 487->489, 562, 600, 629->632, 633-635, 642-643, 656-658, 659->627, 675, 685, 687, 693, 709-710, 715, 718->721, 731, 755, 760, 776-779, 805, 809, 811, 823-826, 941, 943, 985->1006, 991->1006, 1007-1012, 1049, 1065->1070, 1069, 1079->1076, 1084->1076, 1092, 1100-1110
nvtabular/ops/clip.py 19 2 6 3 80% 45, 53->55, 56
nvtabular/ops/column_similarity.py 88 22 32 5 69% 84, 156-157, 166-168, 176-192, 207->217, 209->212, 213, 223
nvtabular/ops/data_stats.py 57 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 26 0 8 1 97% 67->69
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 58 2 20 1 96% 93, 119
nvtabular/ops/filter.py 21 1 6 1 93% 44
nvtabular/ops/groupby.py 93 4 56 6 92% 72, 81, 83, 93->95, 105->110, 181
nvtabular/ops/hash_bucket.py 32 2 18 2 88% 73, 102
nvtabular/ops/hashed_cross.py 29 3 13 4 83% 51, 64, 78->exit, 79
nvtabular/ops/join_external.py 69 4 28 4 92% 96, 98, 116, 168
nvtabular/ops/join_groupby.py 82 5 28 2 94% 106, 109->116, 185-186, 189-190
nvtabular/ops/lambdaop.py 27 3 10 3 84% 61, 65, 78
nvtabular/ops/list_slice.py 65 22 26 1 59% 52-53, 106-120, 128-139
nvtabular/ops/logop.py 9 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 65 6 14 2 87% 61->60, 67-68, 101-102, 124-125
nvtabular/ops/operator.py 15 1 2 1 88% 24
nvtabular/ops/rename.py 18 3 10 3 71% 41, 54, 58
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 148 11 64 5 91% 143, 163->167, 170->179, 222-223, 226-227, 236-242, 333->336
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 2 99% 321->320, 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 79 44 36 6 37% 28-29, 33-34, 47, 56-59, 61-63, 66, 69, 75, 81, 87-123
nvtabular/worker.py 68 1 30 2 97% 73, 83->98
nvtabular/workflow.py 142 9 65 4 93% 39, 125, 137-139, 242, 270-271, 348

TOTAL 5626 1020 2179 231 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.26%
============ 788 passed, 9 skipped, 2 warnings in 587.41s (0:09:47) ============
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins6060797956032527066.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #803 of commit f47f6633d44c2cabbc0de8dfe063fecf367eafe7, no merge conflicts.
Running as SYSTEM
Setting status of f47f6633d44c2cabbc0de8dfe063fecf367eafe7 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2416/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/803/*:refs/remotes/origin/pr/803/* # timeout=10
 > git rev-parse f47f6633d44c2cabbc0de8dfe063fecf367eafe7^{commit} # timeout=10
Checking out Revision f47f6633d44c2cabbc0de8dfe063fecf367eafe7 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f47f6633d44c2cabbc0de8dfe063fecf367eafe7 # timeout=10
Commit message: "Merge branch 'main' into list_slice"
 > git rev-list --no-walk 6ddcc94ae6f58d314525ab83b00f729dbb305f91 # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins730987033621814363.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 0.5.1
    Can't uninstall 'nvtabular'. No files were found to uninstall.
  Running setup.py develop for nvtabular
Successfully installed nvtabular
WARNING: You are using pip version 21.0.1; however, version 21.1.1 is available.
You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
Running black --check
All done! ✨ 🍰 ✨
106 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-05-13 15:07:03.887648: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-13 15:07:05.118574: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-13 15:07:05.118634: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-05-13 15:07:05.119782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-13 15:07:05.120824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-13 15:07:05.120851: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-13 15:07:05.120903: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-05-13 15:07:05.120939: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-05-13 15:07:05.120976: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-05-13 15:07:05.121012: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-05-13 15:07:05.121140: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-05-13 15:07:05.121177: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-05-13 15:07:05.121195: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-05-13 15:07:05.125240: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88, got 80
:219: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 80 from C header, got 88 from PyObject
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.11.1, xdist-2.2.1, forked-1.3.0
collected 797 items

tests/unit/test_column_group.py . [ 0%]
tests/unit/test_column_similarity.py ...... [ 0%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 15%]
tests/unit/test_dataloader_backend.py . [ 15%]
tests/unit/test_io.py .................................................. [ 21%]
..................................................................ssssss [ 30%]
ss.................................................. [ 37%]
tests/unit/test_notebooks.py ...... [ 37%]
tests/unit/test_ops.py ................................................. [ 43%]
........................................................................ [ 52%]
........................................................................ [ 61%]
........................................... [ 67%]
tests/unit/test_s3.py .. [ 67%]
tests/unit/test_tf_dataloader.py ...............................s [ 71%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 81%]
tests/unit/test_tools.py ...................... [ 84%]
tests/unit/test_torch_dataloader.py ................................. [ 88%]
tests/unit/test_triton_inference.py .. [ 88%]
tests/unit/test_workflow.py ............................................ [ 94%]
............................................... [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6560: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

----------- coverage: platform linux, python 3.8.5-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/init.py 12 0 0 0 100%
nvtabular/column_group.py 149 18 80 5 86% 54, 87, 128, 151-164, 191, 278
nvtabular/dispatch.py 81 11 38 5 83% 35, 45->47, 69, 94, 111, 118, 135-138, 167-170
nvtabular/framework_utils/init.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/init.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/init.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 13 83 7 90% 60, 68->49, 104, 112, 192, 244-252, 348->356, 370->373, 376-377, 380
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/init.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/init.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 27 1 10 1 95% 47
nvtabular/framework_utils/torch/models.py 41 0 22 0 100%
nvtabular/framework_utils/torch/utils.py 32 4 10 2 86% 53, 57-59
nvtabular/inference/init.py 0 0 0 0 100%
nvtabular/inference/triton/init.py 272 154 116 15 43% 95-144, 187-247, 269-270, 275-278, 301-313, 317-333, 337-340, 344, 363-379, 383-387, 462-484, 488-555, 564->567, 567->563, 596-606, 610-611, 615, 625, 631, 633, 635, 637, 639, 641, 643, 646
nvtabular/inference/triton/model.py 56 56 22 0 0% 27-142
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/inference/triton/model_hugectr.py 56 56 18 0 0% 27-135
nvtabular/inference/triton/model_pytorch.py 37 37 12 0 0% 27-99
nvtabular/io/init.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 54 4 20 5 88% 95, 99->103, 104, 106, 120
nvtabular/io/dask.py 178 7 68 11 93% 109, 112, 148, 223, 380->378, 408->411, 419, 423->425, 425->421, 430, 432
nvtabular/io/dataframe_engine.py 58 3 28 6 90% 47, 66, 85->89, 89->94, 91->94, 94->113, 122
nvtabular/io/dataset.py 263 31 124 21 86% 254, 256, 269, 278, 296-310, 413->482, 418-421, 426->436, 431-432, 443->441, 457->461, 472, 518, 639->641, 641->650, 651, 658-659, 665, 671, 766-767, 879-884, 890, 924
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 486 19 154 12 95% 85-93, 117->119, 206-208, 331-336, 374-379, 495->502, 563->568, 569-570, 690, 694, 698, 736, 753, 757, 764->766, 884->889, 894->904, 931
nvtabular/io/shuffle.py 30 4 12 3 83% 41, 43-44, 48
nvtabular/io/writer.py 168 11 64 5 92% 46, 74, 120, 123, 200, 209, 212, 255, 276-278
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/init.py 0 0 0 0 100%
nvtabular/loader/backend.py 296 11 116 7 96% 131, 138-139, 224->226, 236-240, 286-287, 326->330, 401, 405-406, 507
nvtabular/loader/tensorflow.py 121 11 48 7 88% 56, 64-67, 77, 87, 283, 298-300, 310->314, 343
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 46 10 8 0 70% 25-27, 30-36
nvtabular/ops/init.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 24 4 16 2 75% 45, 48-51
nvtabular/ops/categorify.py 516 65 300 44 85% 237, 254, 258, 266, 274, 276, 298, 317-318, 352-353, 415-417, 487->489, 562, 600, 629->632, 633-635, 642-643, 656-658, 659->627, 675, 685, 687, 693, 709-710, 715, 718->721, 731, 755, 760, 776-779, 805, 809, 811, 823-826, 941, 943, 985->1006, 991->1006, 1007-1012, 1049, 1065->1070, 1069, 1079->1076, 1084->1076, 1092, 1100-1110
nvtabular/ops/clip.py 19 2 6 3 80% 45, 53->55, 56
nvtabular/ops/column_similarity.py 88 22 32 5 69% 84, 156-157, 166-168, 176-192, 207->217, 209->212, 213, 223
nvtabular/ops/data_stats.py 57 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 26 0 8 1 97% 67->69
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 58 2 20 1 96% 93, 119
nvtabular/ops/filter.py 21 1 6 1 93% 44
nvtabular/ops/groupby.py 93 4 56 6 92% 72, 81, 83, 93->95, 105->110, 181
nvtabular/ops/hash_bucket.py 32 2 18 2 88% 73, 102
nvtabular/ops/hashed_cross.py 29 3 13 4 83% 51, 64, 78->exit, 79
nvtabular/ops/join_external.py 69 4 28 4 92% 96, 98, 116, 168
nvtabular/ops/join_groupby.py 82 5 28 2 94% 106, 109->116, 185-186, 189-190
nvtabular/ops/lambdaop.py 27 3 10 3 84% 61, 65, 78
nvtabular/ops/list_slice.py 65 22 26 1 59% 52-53, 106-120, 128-139
nvtabular/ops/logop.py 9 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 65 6 14 2 87% 61->60, 67-68, 101-102, 124-125
nvtabular/ops/operator.py 15 1 2 1 88% 24
nvtabular/ops/rename.py 18 3 10 3 71% 41, 54, 58
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 148 11 64 5 91% 143, 163->167, 170->179, 222-223, 226-227, 236-242, 333->336
nvtabular/tools/init.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 2 99% 321->320, 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 79 44 36 6 37% 28-29, 33-34, 47, 56-59, 61-63, 66, 69, 75, 81, 87-123
nvtabular/worker.py 68 1 30 2 97% 73, 83->98
nvtabular/workflow.py 142 9 65 4 93% 39, 125, 137-139, 242, 270-271, 348

TOTAL 5626 1020 2179 231 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.26%
============ 788 passed, 9 skipped, 2 warnings in 586.98s (0:09:46) ============
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins6119763540387897522.sh

Review threads on nvtabular/ops/list_slice.py (outdated, resolved)
@karlhigley
Contributor

Okay, I had to learn about writing CUDA kernels with Numba in order to understand this, but I read through it and it makes sense to me.

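For readers in the same position, the core idea behind the kernel can be sketched on the CPU: cuDF stores a list column as a flat values buffer plus an offsets array, so slicing every row means copying a per-row sub-range of the values and rebuilding the offsets. A minimal pure-Python sketch (hypothetical helper name, not the PR's actual CUDA kernel):

```python
# Hypothetical CPU sketch (not the PR's actual Numba CUDA kernel) of how
# slicing a list column works. cuDF stores a list column as a flat `values`
# buffer plus an `offsets` array: row i is values[offsets[i]:offsets[i + 1]].
def slice_list_column(offsets, values, start, stop):
    """Apply Python-style [start:stop] slicing to every row of a list column."""
    new_offsets = [0]
    new_values = []
    for i in range(len(offsets) - 1):
        row = values[offsets[i]:offsets[i + 1]]
        sliced = row[start:stop]  # negative bounds count from the row's end
        new_values.extend(sliced)
        new_offsets.append(new_offsets[-1] + len(sliced))
    return new_offsets, new_values

# Two rows, [1, 2, 3, 4] and [5, 6]; keep only the first two items of each.
print(slice_list_column([0, 4, 6], [1, 2, 3, 4, 5, 6], 0, 2))
# → ([0, 2, 4], [1, 2, 5, 6])
```

The GPU version does the same thing with one thread block per row (and an extra pass to size the output buffers up front), but the offsets arithmetic is identical.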
@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #803 of commit 7ca8b15d10a1fdfdb3dbbf04f892f05b1bc0de0a, no merge conflicts.
Running as SYSTEM
Setting status of 7ca8b15d10a1fdfdb3dbbf04f892f05b1bc0de0a to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2426/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/803/*:refs/remotes/origin/pr/803/* # timeout=10
 > git rev-parse 7ca8b15d10a1fdfdb3dbbf04f892f05b1bc0de0a^{commit} # timeout=10
Checking out Revision 7ca8b15d10a1fdfdb3dbbf04f892f05b1bc0de0a (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7ca8b15d10a1fdfdb3dbbf04f892f05b1bc0de0a # timeout=10
Commit message: "Merge branch 'list_slice' of github.com:benfred/NVTabular into list_slice"
 > git rev-list --no-walk f83237d18db0352c9f6b51a88c803aa4857d975c # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins8139602252343281159.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
WARNING: You are using pip version 21.0.1; however, version 21.1.1 is available.
You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
Running black --check
All done! ✨ 🍰 ✨
106 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-05-14 23:57:05.934993: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-05-14 23:57:07.341606: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-05-14 23:57:07.342745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-14 23:57:07.343749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-14 23:57:07.343777: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-05-14 23:57:07.343830: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-05-14 23:57:07.343865: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-05-14 23:57:07.343901: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-05-14 23:57:07.343935: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-05-14 23:57:07.343985: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-05-14 23:57:07.344018: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-05-14 23:57:07.344035: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-05-14 23:57:07.347899: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.11.1, xdist-2.2.1, forked-1.3.0
collected 797 items

tests/unit/test_column_group.py . [ 0%]
tests/unit/test_column_similarity.py ...... [ 0%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 15%]
tests/unit/test_dataloader_backend.py . [ 15%]
tests/unit/test_io.py .................................................. [ 21%]
..................................................................ssssss [ 30%]
ss.................................................. [ 37%]
tests/unit/test_notebooks.py ...... [ 37%]
tests/unit/test_ops.py ................................................. [ 43%]
........................................................................ [ 52%]
........................................................................ [ 61%]
........................................... [ 67%]
tests/unit/test_s3.py .. [ 67%]
tests/unit/test_tf_dataloader.py ...............................s [ 71%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 81%]
tests/unit/test_tools.py ...................... [ 84%]
tests/unit/test_torch_dataloader.py ................................. [ 88%]
tests/unit/test_triton_inference.py .. [ 88%]
tests/unit/test_workflow.py ............................................ [ 94%]
............................................... [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6560: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

----------- coverage: platform linux, python 3.8.5-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/init.py 12 0 0 0 100%
nvtabular/column_group.py 149 18 80 5 86% 54, 87, 128, 151-164, 191, 278
nvtabular/dispatch.py 81 11 38 5 83% 35, 45->47, 69, 94, 111, 118, 135-138, 167-170
nvtabular/framework_utils/init.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/init.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/init.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 13 83 7 90% 60, 68->49, 104, 112, 192, 244-252, 348->356, 370->373, 376-377, 380
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/init.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/init.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 27 1 10 1 95% 47
nvtabular/framework_utils/torch/models.py 41 0 22 0 100%
nvtabular/framework_utils/torch/utils.py 32 4 10 2 86% 53, 57-59
nvtabular/inference/init.py 0 0 0 0 100%
nvtabular/inference/triton/init.py 272 154 116 15 43% 95-144, 187-247, 269-270, 275-278, 301-313, 317-333, 337-340, 344, 363-379, 383-387, 462-484, 488-555, 564->567, 567->563, 596-606, 610-611, 615, 625, 631, 633, 635, 637, 639, 641, 643, 646
nvtabular/inference/triton/model.py 56 56 22 0 0% 27-142
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/inference/triton/model_hugectr.py 56 56 18 0 0% 27-135
nvtabular/inference/triton/model_pytorch.py 37 37 12 0 0% 27-99
nvtabular/io/init.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 54 4 20 5 88% 95, 99->103, 104, 106, 120
nvtabular/io/dask.py 178 7 68 11 93% 109, 112, 148, 223, 380->378, 408->411, 419, 423->425, 425->421, 430, 432
nvtabular/io/dataframe_engine.py 58 3 28 6 90% 47, 66, 85->89, 89->94, 91->94, 94->113, 122
nvtabular/io/dataset.py 263 31 124 21 86% 254, 256, 269, 278, 296-310, 413->482, 418-421, 426->436, 431-432, 443->441, 457->461, 472, 518, 639->641, 641->650, 651, 658-659, 665, 671, 766-767, 879-884, 890, 924
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 486 19 154 12 95% 85-93, 117->119, 206-208, 331-336, 374-379, 495->502, 563->568, 569-570, 690, 694, 698, 736, 753, 757, 764->766, 884->889, 894->904, 931
nvtabular/io/shuffle.py 30 4 12 3 83% 41, 43-44, 48
nvtabular/io/writer.py 168 11 64 5 92% 46, 74, 120, 123, 200, 209, 212, 255, 276-278
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/init.py 0 0 0 0 100%
nvtabular/loader/backend.py 296 11 116 7 96% 131, 138-139, 224->226, 236-240, 286-287, 326->330, 401, 405-406, 507
nvtabular/loader/tensorflow.py 121 11 48 7 88% 56, 64-67, 77, 87, 283, 298-300, 310->314, 343
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 46 10 8 0 70% 25-27, 30-36
nvtabular/ops/init.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 24 4 16 2 75% 45, 48-51
nvtabular/ops/categorify.py 516 65 300 44 85% 237, 254, 258, 266, 274, 276, 298, 317-318, 352-353, 415-417, 487->489, 562, 600, 629->632, 633-635, 642-643, 656-658, 659->627, 675, 685, 687, 693, 709-710, 715, 718->721, 731, 755, 760, 776-779, 805, 809, 811, 823-826, 941, 943, 985->1006, 991->1006, 1007-1012, 1049, 1065->1070, 1069, 1079->1076, 1084->1076, 1092, 1100-1110
nvtabular/ops/clip.py 19 2 6 3 80% 45, 53->55, 56
nvtabular/ops/column_similarity.py 88 22 32 5 69% 84, 156-157, 166-168, 176-192, 207->217, 209->212, 213, 223
nvtabular/ops/data_stats.py 57 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 26 0 8 1 97% 67->69
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 58 2 20 1 96% 93, 119
nvtabular/ops/filter.py 21 1 6 1 93% 44
nvtabular/ops/groupby.py 93 4 56 6 92% 72, 81, 83, 93->95, 105->110, 181
nvtabular/ops/hash_bucket.py 32 2 18 2 88% 73, 102
nvtabular/ops/hashed_cross.py 29 3 13 4 83% 51, 64, 78->exit, 79
nvtabular/ops/join_external.py 69 4 28 4 92% 96, 98, 116, 168
nvtabular/ops/join_groupby.py 82 5 28 2 94% 106, 109->116, 185-186, 189-190
nvtabular/ops/lambdaop.py 27 3 10 3 84% 61, 65, 78
nvtabular/ops/list_slice.py 64 22 26 1 59% 52-53, 105-119, 127-138
nvtabular/ops/logop.py 9 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 65 6 14 2 87% 61->60, 67-68, 101-102, 124-125
nvtabular/ops/operator.py 15 1 2 1 88% 24
nvtabular/ops/rename.py 18 3 10 3 71% 41, 54, 58
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 148 11 64 5 91% 143, 163->167, 170->179, 222-223, 226-227, 236-242, 333->336
nvtabular/tools/init.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 2 99% 321->320, 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 79 44 36 6 37% 28-29, 33-34, 47, 56-59, 61-63, 66, 69, 75, 81, 87-123
nvtabular/worker.py 68 1 30 2 97% 73, 83->98
nvtabular/workflow.py 142 9 65 4 93% 39, 125, 137-139, 242, 270-271, 348

TOTAL 5625 1020 2179 231 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.25%
============ 788 passed, 9 skipped, 2 warnings in 588.73s (0:09:48) ============
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins3599570066038614064.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #803 of commit d2f574859685175ebe5da5af06757661e90ff0e6, no merge conflicts.
Running as SYSTEM
Setting status of d2f574859685175ebe5da5af06757661e90ff0e6 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2427/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/803/*:refs/remotes/origin/pr/803/* # timeout=10
 > git rev-parse d2f574859685175ebe5da5af06757661e90ff0e6^{commit} # timeout=10
Checking out Revision d2f574859685175ebe5da5af06757661e90ff0e6 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d2f574859685175ebe5da5af06757661e90ff0e6 # timeout=10
Commit message: "Merge branch 'main' into list_slice"
 > git rev-list --no-walk 7ca8b15d10a1fdfdb3dbbf04f892f05b1bc0de0a # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins971343890510979149.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
WARNING: You are using pip version 21.0.1; however, version 21.1.1 is available.
You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
Running black --check
All done! ✨ 🍰 ✨
106 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-05-15 00:10:43.929512: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-05-15 00:10:46.207454: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-05-15 00:10:46.208865: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-15 00:10:46.210066: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-15 00:10:46.210117: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-05-15 00:10:46.210205: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-05-15 00:10:46.210259: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-05-15 00:10:46.210321: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-05-15 00:10:46.210391: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-05-15 00:10:46.210481: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-05-15 00:10:46.210535: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-05-15 00:10:46.210565: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-05-15 00:10:46.215459: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.11.1, xdist-2.2.1, forked-1.3.0
collected 797 items

tests/unit/test_column_group.py . [ 0%]
tests/unit/test_column_similarity.py ...... [ 0%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 15%]
tests/unit/test_dataloader_backend.py . [ 15%]
tests/unit/test_io.py .................................................. [ 21%]
..................................................................ssssss [ 30%]
ss.................................................. [ 37%]
tests/unit/test_notebooks.py ...... [ 37%]
tests/unit/test_ops.py ................................................. [ 43%]
........................................................................ [ 52%]
........................................................................ [ 61%]
........................................... [ 67%]
tests/unit/test_s3.py .. [ 67%]
tests/unit/test_tf_dataloader.py ...............................s [ 71%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 81%]
tests/unit/test_tools.py ...................... [ 84%]
tests/unit/test_torch_dataloader.py ................................. [ 88%]
tests/unit/test_triton_inference.py .. [ 88%]
tests/unit/test_workflow.py ............................................ [ 94%]
............................................... [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6560: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

----------- coverage: platform linux, python 3.8.5-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/init.py 12 0 0 0 100%
nvtabular/column_group.py 149 18 80 5 86% 54, 87, 128, 151-164, 191, 278
nvtabular/dispatch.py 81 11 38 5 83% 35, 45->47, 69, 94, 111, 118, 135-138, 167-170
nvtabular/framework_utils/init.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/init.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/init.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 13 83 7 90% 60, 68->49, 104, 112, 192, 244-252, 348->356, 370->373, 376-377, 380
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/init.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/init.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 27 1 10 1 95% 47
nvtabular/framework_utils/torch/models.py 41 0 22 0 100%
nvtabular/framework_utils/torch/utils.py 32 4 10 2 86% 53, 57-59
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 272 154 116 15 43% 95-144, 187-247, 269-270, 275-278, 301-313, 317-333, 337-340, 344, 363-379, 383-387, 462-484, 488-555, 564->567, 567->563, 596-606, 610-611, 615, 625, 631, 633, 635, 637, 639, 641, 643, 646
nvtabular/inference/triton/model.py 56 56 22 0 0% 27-142
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/inference/triton/model_hugectr.py 56 56 18 0 0% 27-135
nvtabular/inference/triton/model_pytorch.py 37 37 12 0 0% 27-99
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 54 4 20 5 88% 95, 99->103, 104, 106, 120
nvtabular/io/dask.py 179 7 68 11 93% 110, 113, 149, 224, 384->382, 412->415, 423, 427->429, 429->425, 434, 436
nvtabular/io/dataframe_engine.py 58 3 28 6 90% 47, 66, 85->89, 89->94, 91->94, 94->113, 122
nvtabular/io/dataset.py 263 31 124 21 86% 254, 256, 269, 278, 296-310, 413->482, 418-421, 426->436, 431-432, 443->441, 457->461, 472, 518, 639->641, 641->650, 651, 658-659, 665, 671, 766-767, 879-884, 890, 924
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 486 19 154 12 95% 85-93, 117->119, 206-208, 331-336, 374-379, 495->502, 563->568, 569-570, 690, 694, 698, 736, 753, 757, 764->766, 884->889, 894->904, 931
nvtabular/io/shuffle.py 30 4 12 3 83% 41, 43-44, 48
nvtabular/io/writer.py 168 11 64 5 92% 46, 74, 120, 123, 200, 209, 212, 255, 276-278
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 296 11 116 7 96% 114, 138-139, 224->226, 236-240, 286-287, 326->330, 401, 405-406, 507
nvtabular/loader/tensorflow.py 121 11 48 7 88% 56, 64-67, 77, 87, 283, 298-300, 310->314, 343
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 46 10 8 0 70% 25-27, 30-36
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 24 4 16 2 75% 45, 48-51
nvtabular/ops/categorify.py 516 65 300 44 85% 237, 254, 258, 266, 274, 276, 298, 317-318, 352-353, 415-417, 487->489, 562, 600, 629->632, 633-635, 642-643, 656-658, 659->627, 675, 685, 687, 693, 709-710, 715, 718->721, 731, 755, 760, 776-779, 805, 809, 811, 823-826, 941, 943, 985->1006, 991->1006, 1007-1012, 1049, 1065->1070, 1069, 1079->1076, 1084->1076, 1092, 1100-1110
nvtabular/ops/clip.py 19 2 6 3 80% 45, 53->55, 56
nvtabular/ops/column_similarity.py 88 22 32 5 69% 84, 156-157, 166-168, 176-192, 207->217, 209->212, 213, 223
nvtabular/ops/data_stats.py 57 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 26 0 8 1 97% 67->69
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 58 2 20 1 96% 93, 119
nvtabular/ops/filter.py 21 1 6 1 93% 44
nvtabular/ops/groupby.py 93 4 56 6 92% 72, 81, 83, 93->95, 105->110, 181
nvtabular/ops/hash_bucket.py 32 2 18 2 88% 73, 102
nvtabular/ops/hashed_cross.py 29 3 13 4 83% 51, 64, 78->exit, 79
nvtabular/ops/join_external.py 69 4 28 4 92% 96, 98, 116, 168
nvtabular/ops/join_groupby.py 82 5 28 2 94% 106, 109->116, 185-186, 189-190
nvtabular/ops/lambdaop.py 27 3 10 3 84% 61, 65, 78
nvtabular/ops/list_slice.py 64 22 26 1 59% 52-53, 105-119, 127-138
nvtabular/ops/logop.py 9 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 65 6 14 2 87% 61->60, 67-68, 101-102, 124-125
nvtabular/ops/operator.py 15 1 2 1 88% 24
nvtabular/ops/rename.py 18 3 10 3 71% 41, 54, 58
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 148 11 64 5 91% 143, 163->167, 170->179, 222-223, 226-227, 236-242, 333->336
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 2 99% 321->320, 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 94 45 44 8 46% 30-31, 35-36, 49, 58-61, 63-65, 68, 71, 77, 83, 89-125, 144, 148->152
nvtabular/worker.py 68 1 30 2 97% 74->87, 94
nvtabular/workflow.py 143 9 65 4 93% 40, 126, 140-142, 245, 273-274, 351

TOTAL 5642 1021 2187 233 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.28%
============ 788 passed, 9 skipped, 2 warnings in 675.42s (0:11:15) ============
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.github.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins8001881191948863756.sh

@benfred benfred merged commit f298079 into NVIDIA-Merlin:main May 15, 2021
@benfred benfred deleted the list_slice branch May 15, 2021 02:37
mikemckiernan pushed a commit that referenced this pull request Nov 24, 2022
This adds an operator to slice rows of list columns. This will let us truncate list
column rows to only take the first N or last N items for instance.

Closes #734
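The row-wise slicing this op adds can be sketched in plain pandas. This is illustrative only: the operator added by this PR is `ListSlice` (in `nvtabular/ops/list_slice.py`) and runs on cuDF list columns inside an NVTabular `Workflow`; the `slice_list_column` helper below is a hypothetical stand-in that only mimics the row-wise semantics.

```python
import pandas as pd


def slice_list_column(series: pd.Series, start: int, end: int = None) -> pd.Series:
    """Hypothetical helper: apply Python slice semantics to each row of a
    list column, mirroring what a list-slicing op does per row."""
    return series.map(lambda row: row[slice(start, end)])


df = pd.DataFrame({"clicks": [[1, 2, 3, 4, 5], [6, 7], []]})

# Keep only the first 2 items of each row.
first_two = slice_list_column(df["clicks"], 0, 2)
# -> [[1, 2], [6, 7], []]

# Keep only the last 2 items of each row; short/empty rows are unchanged.
last_two = slice_list_column(df["clicks"], -2)
# -> [[4, 5], [6, 7], []]
```

Note that Python slice semantics clamp out-of-range bounds, so rows shorter than the requested window (including empty rows) pass through without error, which is the behavior you want when truncating ragged session histories.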
Development

Successfully merging this pull request may close these issues.

[FEA] Truncate List columns (sparse tensors) - related to the GroupBy op
3 participants