
Add a reminder for the illegal memory error #3822

Merged
merged 4 commits into from
May 27, 2024

Conversation

Collaborator

@Yi-FanLi Yi-FanLi commented May 25, 2024

When using the GPU version of the neighbor stat code, one may encounter the following issue and the training will stop:

```
[2024-05-24 23:00:42,027] DEEPMD INFO    Adjust batch size from 1024 to 2048
[2024-05-24 23:00:42,139] DEEPMD INFO    Adjust batch size from 2048 to 4096
[2024-05-24 23:00:42,285] DEEPMD INFO    Adjust batch size from 4096 to 8192
[2024-05-24 23:00:42,628] DEEPMD INFO    Adjust batch size from 8192 to 16384
[2024-05-24 23:00:43,180] DEEPMD INFO    Adjust batch size from 16384 to 32768
[2024-05-24 23:00:44,341] DEEPMD INFO    Adjust batch size from 32768 to 65536
[2024-05-24 23:00:46,713] DEEPMD INFO    Adjust batch size from 65536 to 131072
2024-05-24 23:00:52.071120: E tensorflow/compiler/xla/stream_executor/cuda/cuda_event.cc:29] Error polling for event status: failed to query event: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
2024-05-24 23:00:52.075435: F tensorflow/core/common_runtime/device/device_event_mgr.cc:223] Unexpected Event status: 1
/bin/sh: line 1: 1397100 Aborted
```

This is likely due to a TensorFlow issue. One may set the environment variable `DP_INFER_BATCH_SIZE` to work around it.

This PR reminds the user to set a small `DP_INFER_BATCH_SIZE` to avoid this issue.
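As a minimal sketch of the suggested workaround: a fixed inference batch size can be read from `DP_INFER_BATCH_SIZE` instead of letting the batch size auto-grow until the CUDA error triggers. The helper name below is illustrative, not deepmd-kit's actual API; only the environment variable name comes from this PR.

```python
import os

def resolve_batch_size(default: int = 1024) -> int:
    """Return the batch size fixed by DP_INFER_BATCH_SIZE, or a default.

    When the variable is set, the auto-doubling search that can hit
    CUDA_ERROR_ILLEGAL_ADDRESS is skipped entirely.
    """
    value = os.environ.get("DP_INFER_BATCH_SIZE")
    return int(value) if value else default

# e.g. the user exports DP_INFER_BATCH_SIZE=1024 before running training:
os.environ["DP_INFER_BATCH_SIZE"] = "1024"
print(resolve_batch_size())  # → 1024, used as a fixed batch size
```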

Summary by CodeRabbit

  • Bug Fixes
    • Added a log message to guide users on setting the DP_INFER_BATCH_SIZE environment variable to avoid TensorFlow illegal memory access issues.

Yi-FanLi and others added 4 commits May 24, 2024 22:52
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Yifan Li李一帆 <[email protected]>

coderabbitai bot commented May 25, 2024

Walkthrough

A log message has been introduced in the BatchSizeManager class within the deepmd/utils/batch_size.py file. This message advises users on how to set the DP_INFER_BATCH_SIZE environment variable to mitigate a TensorFlow issue related to illegal memory access. This change aims to help users avoid potential runtime errors by adjusting the batch size appropriately.

Changes

| File Path | Change Summary |
|---|---|
| `deepmd/utils/batch_size.py` | Added a log message in `BatchSizeManager.__init__` to guide users on setting `DP_INFER_BATCH_SIZE` to avoid TensorFlow illegal memory access issues. |
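The failure mode the log message addresses can be seen in a small sketch of the auto-batch-size pattern: start small, double after every successful batch, and fall back when a memory error is raised. Class and method names here are hypothetical, for illustration only, not deepmd-kit's actual implementation.

```python
class AutoBatchSize:
    """Greedy batch-size search: grow on success, shrink on memory errors."""

    def __init__(self, initial: int = 1024, factor: int = 2) -> None:
        self.current = initial
        self.factor = factor

    def execute(self, fn):
        """Run fn(batch_size); double on success, halve and retry on OOM."""
        try:
            result = fn(self.current)
        except MemoryError:
            # Too large: back off to a smaller batch and retry once.
            self.current = max(1, self.current // self.factor)
            return fn(self.current)
        # Success: try a larger batch next time.
        self.current *= self.factor
        return result

mgr = AutoBatchSize()
for _ in range(3):
    mgr.execute(lambda n: n)  # each success doubles the batch size
print(mgr.current)  # → 8192, matching the 1024 → 2048 → 4096 → 8192 log
```

The reported crash happens because the illegal-address error aborts the process before the usual OOM fallback can fire, which is why fixing the size via `DP_INFER_BATCH_SIZE` helps.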

Recent Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits: files that changed from the base of the PR, between 8cd3cba and e46b232.
Files selected for processing (1)
  • deepmd/utils/batch_size.py (1 hunks)
Files skipped from review due to trivial changes (1)
  • deepmd/utils/batch_size.py


@Yi-FanLi Yi-FanLi requested a review from njzjz May 25, 2024 23:10
@njzjz njzjz added this to the v2.2.11 milestone May 25, 2024
@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue May 27, 2024
Merged via the queue into deepmodeling:devel with commit d754672 May 27, 2024
162 of 164 checks passed
njzjz pushed a commit to njzjz/deepmd-kit that referenced this pull request Jul 2, 2024
@njzjz njzjz mentioned this pull request Jul 2, 2024
njzjz pushed a commit that referenced this pull request Jul 3, 2024
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this pull request Sep 18, 2024
github-merge-queue bot pushed a commit that referenced this pull request Oct 31, 2024
…under tf (#4283)

#3822 added a reminder for the illegal memory error. However, this reminder is only needed for the TensorFlow backend. This PR moves the illegal-memory reminder from the base class `AutoBatchSize` to the inherited class under tf.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced `AutoBatchSize` class to initialize batch size from an
environment variable, improving user guidance on memory management with
TensorFlow.
- **Bug Fixes**
- Removed redundant logging during initialization to streamline the
process when GPU resources are available.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
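The refactor described above follows a common pattern: keep the base class backend-agnostic and emit backend-specific guidance only in the subclass. A minimal sketch, with hypothetical class names (the actual deepmd-kit classes and module layout may differ):

```python
import logging

log = logging.getLogger(__name__)

class AutoBatchSizeBase:
    """Backend-agnostic base: no TensorFlow-specific messages here."""

    def __init__(self, initial: int = 1024) -> None:
        self.current = initial

class AutoBatchSizeTF(AutoBatchSizeBase):
    """TensorFlow subclass: the reminder is logged only where it applies."""

    def __init__(self, initial: int = 1024) -> None:
        super().__init__(initial)
        log.info(
            "If you encounter an illegal memory access error during the "
            "batch-size search, set DP_INFER_BATCH_SIZE to a small value."
        )
```

Other backends inherit `AutoBatchSizeBase` unchanged, so users of those backends no longer see a TensorFlow-only warning.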