TEST Introduce Comprehensive Pathological Unit Tests for Issue #14409 #14467
Closed
Conversation
Forward-merge branch-23.12 to branch-24.02
Forward-merge branch-23.12 to branch-24.02
Forward-merge branch-23.12 to branch-24.02
Forward-merge branch-23.12 to branch-24.02
Forward-merge branch-23.12 to branch-24.02
Forward-merge branch-23.12 to branch-24.02
…idsai#14428) If we pass sort=True to merges we are on the hook to sort the result in order with respect to the key columns. If those key columns have repeated values there is still some space for ambiguity. Currently we get a result back whose order (for the repeated key values) is determined by the gather map that libcudf returns for the join. This does not come with any ordering guarantees. When sort=False, pandas has join-type dependent ordering guarantees which we also do not match. To fix this, in pandas-compatible mode only, reorder the gather maps according to the order of the input keys. When sort=False this means that our result matches pandas ordering. When sort=True, it ensures that (if we use a stable sort) the tie-break for equal sort keys is the input dataframe order. While we're here, switch from argsort + gather to sort_by_key when sorting results. - Closes rapidsai#14001 Authors: - Lawrence Mitchell (https://github.com/wence-) Approvers: - Ashwin Srinath (https://github.com/shwina) - Bradley Dice (https://github.com/bdice) URL: rapidsai#14428
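To make the ordering idea above concrete, here is a minimal NumPy sketch (illustrative only, not cuDF's actual implementation, which works on libcudf gather maps and uses sort_by_key): the gather maps are reordered by left input position, and a stable sort on the gathered keys then breaks ties by that input order.

```python
import numpy as np

# Gather maps as a hash join might return them (arbitrary row order).
left_map = np.array([2, 0, 3, 1])
right_map = np.array([5, 4, 6, 7])

# Reorder both maps so result rows follow the left input order; this is
# what gives pandas-like ordering when sort=False.
order = np.argsort(left_map, kind="stable")
left_map, right_map = left_map[order], right_map[order]

# With sort=True, a stable sort on the gathered key values then breaks
# ties between equal keys using that input order.
left_keys = np.array([3, 1, 3, 2])     # key column of the left input
gathered_keys = left_keys[left_map]
final = np.argsort(gathered_keys, kind="stable")
print(left_map[final], right_map[final])
```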
Forward-merge branch-23.12 to branch-24.02
Forward-merge branch-23.12 to branch-24.02
…rapidsai#14444) Added the true/false string scalars to `column_to_strings_fn` so they are created once, instead of creating new scalars for each boolean column (using default stream). Authors: - Vukasin Milovanovic (https://github.com/vuule) Approvers: - Nghia Truong (https://github.com/ttnghia) - https://github.com/shrshi URL: rapidsai#14444
Forward-merge branch-23.12 to branch-24.02
`pandas.core` is technically private and its methods could be moved at any time. This change avoids using it in places in the codebase where it can be avoided. Authors: - Matthew Roeschke (https://github.com/mroeschke) Approvers: - Bradley Dice (https://github.com/bdice) - Lawrence Mitchell (https://github.com/wence-) URL: rapidsai#14421
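For illustration, this is the kind of substitution implied (the specific function below is just an example of the pattern, not necessarily one touched by this PR):

```python
import pandas as pd

# Fragile: pandas.core is private, so this import can break between releases.
# from pandas.core.dtypes.common import is_integer_dtype

# Preferred: the equivalent public API.
from pandas.api.types import is_integer_dtype

print(is_integer_dtype(pd.Series([1, 2, 3])))  # True
```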
* add devcontainers
* fix tag for CUDA 12.0
* use CUDA 11.8 for now
* default to CUDA 12.0
* install cuda-cupti-dev in conda environment
* remove MODIFY_PREFIX_PATH so the driver is found
* install cuda-nvtx-dev in conda environment
* update conda env
* add MODIFY_PREFIX_PATH back
* temporarily default to my branch with the fix for MODIFY_PREFIX_PATH in conda envs
* remove temporary rapids-cmake pin
* build all RAPIDS archs to take maximum advantage of sccache
* add clangd and nsight vscode customizations
* copy in default clangd config
* remove options for pip vs. conda unless using the launch script
* fix unified mounts
* ensure dirs exist before mounting
* add compile_commands to .gitignore
* allow defining cudf and cudf_kafka include dirs via envvars
* add kvikio
* use volumes for isolated devcontainer source dirs
* update README.md
* update to rapidsai/devcontainers 23.10
* update rapids-build-utils version to 23.10
* add .clangd config file
* update RAPIDS versions in devcontainer files
* ensure the directory for the generated jitify kernels exists after configuring
* add clang and clang-tools 16
* remove isolated and unified devcontainers, make single the default
* separate CUDA 11.8 and 12.0 devcontainers
* fix version string for requirements.txt
* update conda envs
* clean up envvars, mounts, and build args, add codespaces post-attach command workaround
* consolidate common vscode customizations
* enumerate CUDA 11 packages, include up to CUDA 12.2
* include protoc-wheel when generating requirements.txt
* default to cuda-python for cu11
* separate devcontainer mounts by CUDA version
* add devcontainer build jobs to PR workflow
* use pypi.nvidia.com instead of pypi.ngc.nvidia.com
* fix venvs mount path
* fix lint
* ensure rmm-cuXX is included in pip requirements
* disable libcudf_kafka build for now
* build dask-cudf
* be more explicit in update-versions.sh, make devcontainer build required in pr jobs
* revert rename devcontainer job
* install librdkafka-dev in pip containers so we can build libcudf_kafka and cudf_kafka
* separate cupy, cudf, and cudf_kafka matrices for CUDA 11 and 12
* add fallback include path for RMM
* fallback to CUDA_PATH if CUDA_HOME is not set
* define envvars in dockerfile
* define envvars for cudf_kafka
* build verbose
* include wheel and setuptools in requirements.txt
* switch workflow to branch-23.10
* update clang-tools version to 16.0.6
* fix update-version.sh
* Use 24.02 branches.
* fix version numbers
* Fix dependencies.yaml.
* Update .devcontainer/Dockerfile
--------- Co-authored-by: Bradley Dice <[email protected]>
Forward-merge branch-23.12 to branch-24.02
Forward-merge branch-23.12 to branch-24.02
`volatile` should not be required in our code, unless there are compiler or synchronization issues. This PR removes its use in the Parquet reader and writer. Authors: - Vukasin Milovanovic (https://github.com/vuule) Approvers: - David Wendt (https://github.com/davidwendt) - Nghia Truong (https://github.com/ttnghia) URL: rapidsai#14448
Move `from_delayed` and `concat` to appropriate subsections. - Closes rapidsai#14299 Authors: - Lawrence Mitchell (https://github.com/wence-) - Vyas Ramasubramani (https://github.com/vyasr) Approvers: - Bradley Dice (https://github.com/bdice) - Vyas Ramasubramani (https://github.com/vyasr) URL: rapidsai#14454
…tion (rapidsai#14381) `.columns` used to always return `pd.Index([], dtype=object)` even if an empty, typed columns object was passed into the DataFrame constructor, e.g. `DatetimeIndex([])`. We needed to preserve some information about which columns dtype was passed in so we can return a correctly typed Index. Authors: - Matthew Roeschke (https://github.com/mroeschke) Approvers: - Lawrence Mitchell (https://github.com/wence-) URL: rapidsai#14381
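A rough illustration of the behaviour described above (a sketch only; the exact constructor call and output depend on the cuDF version):

```python
import cudf
import pandas as pd

# Column labels come from an empty but typed index.
df = cudf.DataFrame(columns=pd.DatetimeIndex([]))

# Previously this came back as Index([], dtype='object'); with this change
# the datetime64 dtype of the passed-in index should be preserved.
print(df.columns.dtype)
```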
github-actions bot added the libcudf (Affects libcudf (C++/CUDA) code), Python (Affects Python cuDF API), CMake (CMake build issue), conda, and Java (Affects Java cuDF API) labels on Nov 21, 2023
aocsa changed the title from "Introduce Comprehensive Pathological Unit Tests for Issue #14409" to "TEST Introduce Comprehensive Pathological Unit Tests for Issue #14409" on Nov 22, 2023
Description
This PR addresses issue #14409. It proposes adding unit tests that cover scenarios such as trees with 100 or 1000 elements, expressions reaching 100 levels of depth, different data types, and similar stress cases. The purpose of these tests is to comprehensively exercise and stress the Abstract Syntax Tree (AST), ultimately aiding in the identification and resolution of any potential issues.
By introducing these pathological tests, we aim to ensure the robustness and reliability of our codebase. These tests can help us uncover edge cases and performance bottlenecks that might otherwise go unnoticed.
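As a rough sketch of the kind of stress case proposed (the actual tests target libcudf's AST in C++; the Python expression below, routed through `DataFrame.eval`, is only meant to illustrate a deeply nested expression tree):

```python
import cudf

# Build a deeply nested arithmetic expression, e.g. (((a + b) + b) + b)...
depth = 100
expr = "a"
for _ in range(depth):
    expr = f"({expr} + b)"

# A 1000-row frame with two numeric columns to exercise the expression tree.
df = cudf.DataFrame({"a": list(range(1000)), "b": list(range(1000))})
result = df.eval(expr)
print(result.tail())
```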
Checklist