
Ft future #1

Merged
merged 35 commits into main on Sep 6, 2024

Conversation

2lambda123 (Owner) commented Sep 6, 2024

Description

Related Issue

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Chore (non-breaking change that does not add functionality or fix an issue)

Checklist:

  • I have read the Code of Conduct
  • I have updated the documentation accordingly.
  • All commits are GPG signed

Note

I'm currently writing a description for your pull request. I should be done shortly (<1 minute). Please don't edit the description field until I'm finished, or we may overwrite each other.

Summary by Sourcery

Refactor the project structure by updating protobuf paths and package imports, introduce a poison pill mechanism for node termination, and enhance the build and setup scripts. Add new scripts for distributed execution and fault tolerance testing, and remove obsolete files from the old directory structure.

New Features:

  • Introduce a poison pill mechanism to terminate nodes in the data stream pipeline.
  • Add a new command-line flag to specify a node to be terminated in the data stream (a hedged sketch follows this list).
  • Implement a new script to automate the build process for runtime components.
  • Add scripts for distributed execution and fault tolerance testing in the evaluation benchmarks.
  • Introduce a script to repeatedly write a dictionary file to HDFS for testing purposes.
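
A minimal sketch of how the poison pill and kill flag above could fit together, assuming flag names and node identifiers that are not shown on this page (the actual implementation lives in runtime/pipe/datastream/datastream.go and may differ):

package main

import (
	"flag"
	"log"
	"os"
)

// Hypothetical flags; the real datastream binary may name them differently.
var (
	nodeID   = flag.String("id", "", "identifier of this node")
	killNode = flag.String("kill", "", "node that should terminate itself (poison pill)")
)

// maybeSwallowPoisonPill exits the process if this node is the one named on
// the command line, simulating a node failure for fault-tolerance testing.
func maybeSwallowPoisonPill() {
	if *killNode != "" && *killNode == *nodeID {
		log.Printf("node %q received poison pill, terminating", *nodeID)
		os.Exit(1)
	}
}

func main() {
	flag.Parse()
	log.Printf("flags: id=%q kill=%q", *nodeID, *killNode) // the PR also logs flag values
	maybeSwallowPoisonPill()
	// ... the normal data stream read/write path would continue here ...
}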

Enhancements:

  • Refactor protobuf file paths and package imports to align with the new directory structure.
  • Improve logging by adding detailed connection and error messages in the DFS split reader (see the sketch after this list).
  • Enhance the setup script to use a centralized build script instead of individual build commands.
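
A hedged sketch of the kind of connection and error logging described for the DFS split reader, assuming a gRPC dial (the real dfs_split_reader.go addresses, timeouts, and log messages are not shown here and may differ):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// dialWithLogging connects to a file-reader server and logs both the attempt
// and any failure; the address and timeout below are illustrative.
func dialWithLogging(addr string) (*grpc.ClientConn, error) {
	log.Printf("dfs_split_reader: connecting to %s", addr)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		log.Printf("dfs_split_reader: failed to connect to %s: %v", addr, err)
		return nil, err
	}
	log.Printf("dfs_split_reader: connected to %s", addr)
	return conn, nil
}

func main() {
	conn, err := dialWithLogging("localhost:50051")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}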

Build:

  • Add a Go module file to manage dependencies for the runtime components (an illustrative go.mod shape follows this list).
  • Update the setup script to use a new build script for compiling runtime components.
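
For orientation only, a go.mod for gRPC-based runtime components typically has the shape below; the module path and versions here are assumptions, not the contents of the runtime/go.mod added in this PR:

// Illustrative go.mod shape; see runtime/go.mod in the PR for the real file.
module dish/runtime

go 1.21

require (
	google.golang.org/grpc v1.59.0
	google.golang.org/protobuf v1.31.0
)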

Chores:

  • Remove obsolete files related to the old dspash directory structure.

Summary by CodeRabbit

  • New Features

    • Added scripts for building, reading, and writing data streams, enhancing automation and usability.
    • Introduced check_ft_correctness.sh for automated output comparison in distributed benchmarks.
    • Implemented words.sh to create and upload a repeated dictionary file to HDFS.
  • Improvements

    • Updated URLs in setup.sh for downloading necessary files.
    • Enhanced error logging and configuration handling in the dfs_split_reader.
  • Chores

    • Updated .gitignore files to streamline project management by excluding unnecessary files.
    • Consolidated build commands into a single script for efficiency.

Unable to locate .performanceTestingBot config file

cr-gpt bot commented Sep 6, 2024

It seems you are using me but haven't set OPENAI_API_KEY in the Variables/Secrets for this repo. You can follow the README for more information.

Processing PR updates...

git-greetings bot commented Sep 6, 2024

Thanks @2lambda123 for opening this PR!

For COLLABORATOR only:

  • To add labels, comment on the issue
    /label add label1,label2,label3

  • To remove labels, comment on the issue
    /label remove label1,label2,label3

korbit-ai bot commented Sep 6, 2024

My review is in progress 📖 - I will have feedback for you in a few minutes!

gitginie bot left a comment

@2lambda123
Thank you for your contribution to this repository! We appreciate your effort in opening this pull request.
Happy coding!

sourcery-ai bot commented Sep 6, 2024

Reviewer's Guide by Sourcery

This pull request implements significant changes to the project structure, build process, and functionality. It includes updates to Go package imports, modifications to protobuf files, changes to shell scripts, and the addition of new Go files and shell scripts. The changes appear to be part of a larger refactoring effort to improve the distributed execution capabilities of the system.

File-Level Changes

Each change below lists its details, followed by the files it touches.

Updated Go package imports and file paths
  • Changed import paths from 'dspash/datastream' to 'runtime/pipe/proto'
  • Changed import paths from 'dspash/filereader' to 'runtime/dfs/proto'
  • Updated protobuf file references in generated Go files
  Files: runtime/pipe/proto/data_stream.pb.go, runtime/dfs/proto/file_reader.pb.go, runtime/dfs/client/dfs_split_reader.go, runtime/dfs/server/filereader_server.go, runtime/pipe/discovery/discovery_server.go, runtime/pipe/proto/data_stream_grpc.pb.go

Modified datastream functionality (a hedged chunked-copy sketch follows this section)
  • Added poison pill functionality for killing nodes
  • Implemented chunked reading and writing
  • Added logging of flag values
  Files: runtime/pipe/datastream/datastream.go

Updated build and setup scripts
  • Created a new build script for compiling runtime components
  • Modified setup-dish.sh to use the new build script
  • Added new shell scripts for running specific components
  Files: scripts/setup-dish.sh, runtime/scripts/build.sh, runtime/scripts/killall.sh, runtime/scripts/dfs_split_reader.sh, runtime/scripts/remote_read.sh, runtime/scripts/remote_write.sh

Added new evaluation scripts and modified existing ones
  • Created a new script for running distributed fault-tolerant benchmarks
  • Added a script to check correctness of fault-tolerant executions
  • Modified the shortest-scripts.sh benchmark
  Files: evaluation/distr_benchmarks/oneliners/run.distr.faults.sh, evaluation/distr_benchmarks/oneliners/check_ft_correctness.sh, evaluation/distr_benchmarks/oneliners/shortest-scripts.sh

Added Go module configuration
  • Created a go.mod file for the runtime package
  • Specified required dependencies and versions
  Files: runtime/go.mod

Removed obsolete files and directories
  • Deleted files from the old dspash directory structure
  Files: runtime/dspash/socket_pipe.go, runtime/dspash/file_reader/client/client.go, runtime/dspash/file_reader/go.mod, runtime/dspash/go.mod, runtime/dspash/dfs_split_reader.sh, runtime/dspash/remote_read.sh, runtime/dspash/remote_write.sh
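
The "Modified datastream functionality" entry above mentions chunked reading and writing; a minimal, hedged sketch of that pattern follows (the chunk size and the concrete stream types used in datastream.go are assumptions):

package main

import (
	"io"
	"log"
	"os"
)

const chunkSize = 4096 // assumed; the PR's actual chunk size is not shown on this page

// copyInChunks moves data from src to dst in fixed-size chunks, the pattern a
// datastream reader/writer pair would use over a socket or gRPC stream.
func copyInChunks(dst io.Writer, src io.Reader) error {
	buf := make([]byte, chunkSize)
	for {
		n, err := src.Read(buf)
		if n > 0 {
			if _, werr := dst.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	if err := copyInChunks(os.Stdout, os.Stdin); err != nil {
		log.Fatal(err)
	}
}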

Tips
  • Trigger a new Sourcery review by commenting @sourcery-ai review on the pull request.
  • Continue your discussion with Sourcery by replying directly to review comments.
  • You can change your review settings at any time by accessing your dashboard:
    • Enable or disable the Sourcery-generated pull request summary or reviewer's guide;
    • Change the review language;
  • You can always contact us if you have any questions or feedback.

quine-bot bot commented Sep 6, 2024

👋 Figuring out if a PR is useful is hard; hopefully this will help.

  • @2lambda123 has been on GitHub since 2019 and in that time has had 2759 public PRs merged
  • They haven't contributed to this repo before
  • Here's a good example of their work: black-forest-labs-flux
  • From looking at their profile, they seem to be good with Shell and Python.

Their most recently accepted public PR is: 2lambda123/gchq-stroom-timeline#2

codeautopilot bot commented Sep 6, 2024

Your organization has reached the subscribed usage limit. You can upgrade your account by purchasing a subscription via the Stripe payment link.

Disclaimer: This comment was entirely generated using AI. Be aware that the information provided may be incorrect.

Current plan usage: 100.84%

Have feedback or need help?
Discord
Documentation
[email protected]

git-greetings bot commented Sep 6, 2024

First PR by @2lambda123

PR Details of @2lambda123 in binpash-dish:

OPEN: 1, CLOSED: 0, TOTAL: 1

2lambda123 merged commit c4caa02 into main on Sep 6, 2024
11 of 17 checks passed

gitginie bot left a comment

@2lambda123
Thank you for your contribution to this repository! We appreciate your effort in closing this pull request.
Happy coding!


coderabbitai bot commented Sep 6, 2024

Warning

Rate limit exceeded

@labels-and-badges[bot] has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 15 minutes and 20 seconds before requesting another review.

How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

Commits

Files that changed from the base of the PR and between commits 3473140 and 238333c.

Walkthrough

The changes encompass updates to various configuration files, the introduction of new scripts, and modifications to existing code in a project focused on distributed systems and data processing. Key alterations include adjustments to .gitignore files, updates to submodule branches in .gitmodules, and enhancements to shell scripts that facilitate data handling and process management. Additionally, there are significant updates in Go source files, including changes in package paths and configuration handling.

Changes

  • .gitignore, runtime/.gitignore: Added entries to ignore virtual environment directories and modified ignored entries in the runtime directory.
  • .gitmodules: Updated branch names for submodules "pash" and "docker-hadoop".
  • docker-hadoop, pash: Changed commit references for subprojects, indicating updates to their respective codebases.
  • evaluation/distr_benchmarks/oneliners/*.sh: Introduced new scripts for checking output correctness and running distributed one-liners, along with modifications to existing scripts for downloading files and managing execution.
  • runtime/dfs/client/dfs_split_reader.go, runtime/dfs/proto/*.proto: Updated import paths and modified configuration handling in Go files, including changes to function signatures and package declarations.
  • runtime/scripts/*.sh: Added several new scripts for building, executing, and managing processes, enhancing operational capabilities.
  • scripts/setup-dish.sh: Consolidated build commands into a single script for improved maintainability.
  • words.sh: Created a script to generate a repeated word file and upload it to HDFS.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Script
    participant HDFS
    User->>Script: Execute words.sh
    Script->>Script: Create words-repeated.txt
    Script->>Script: Append words to file
    Script->>HDFS: Upload words-repeated.txt
    Script->>Script: Remove local file

🐇 "In the code where changes bloom,
New scripts and paths make room.
With hops and jumps, we build anew,
In the world of data, we pursue.
From HDFS to scripts that sing,
A rabbit's joy in every spring!" 🐇


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

labels-and-badges bot added the labels NO JIRA (This PR does not have a Jira Ticket), PR:size/XXL (Denotes a Pull Request that changes 1000+ lines), and release (This PR is a release) on Sep 6, 2024

penify-dev bot (Contributor) commented Sep 6, 2024

Failed to generate code suggestions for PR

Comment on lines +7 to +25
for script_faults_out in "$folder"/*faults.out; do
    # Extract the script name without the extension
    script_name=$(basename "$script_faults_out" .faults.out)

    # Check if there is a corresponding .distr.out file
    script_distr_out="$folder/$script_name.distr.out"

    if [ -f "$script_distr_out" ]; then
        # Perform a diff between the two files
        echo "Comparing faults_out and distr_out for script $script_name.sh"
        if diff -q "$script_faults_out" "$script_distr_out"; then
            echo "Outputs are identical"
        else
            echo "Files are different. Differences are as follows:"
            diff -y "$script_faults_out" "$script_distr_out"
        fi
        echo "-------------------------------------------"
    fi
done

The script does not handle the case where there are no .faults.out files in the specified folder. This could lead to misleading output or no output at all, which might be confusing for users.

Recommended Solution:
Add a check to see if any .faults.out files exist in the folder before entering the loop. If no such files are found, print a message indicating that no files were found.

# Check if there are any .faults.out files in the folder
if compgen -G "$folder/*faults.out" > /dev/null; then
    # Loop through the files in the folder
    for script_faults_out in "$folder"/*faults.out; do
        # Extract the script name without the extension
        script_name=$(basename "$script_faults_out" .faults.out)

        # Check if there is a corresponding .distr.out file
        script_distr_out="$folder/$script_name.distr.out"

        if [ -f "$script_distr_out" ]; then
            # Perform a diff between the two files
            echo "Comparing faults_out and distr_out for script $script_name.sh"
            if diff -q "$script_faults_out" "$script_distr_out"; then
                echo "Outputs are identical"
            else
                echo "Files are different. Differences are as follows:"
                diff -y "$script_faults_out" "$script_distr_out"
            fi
            echo "-------------------------------------------"
        fi
    done
else
    echo "No .faults.out files found in the folder."
fi

Comment on lines 18 to 24
hdfs dfs -mkdir -p /oneliners

if [ ! -f ./1M.txt ]; then
-    curl -sf --connect-timeout 10 'http://ndr.md/data/dummy/1M.txt' > 1M.txt
+    curl -sf --connect-timeout 10 'atlas-group.cs.brown.edu/data/dummy/1M.txt' > 1M.txt
    if [ $? -ne 0 ]; then
        curl -f 'https://zenodo.org/record/7650885/files/1M.txt' > 1M.txt
        [ $? -ne 0 ] && eexit 'cannot find 1M.txt'

The logic for downloading the 1M.txt file does not handle the case where both URLs fail to download the file. The script attempts to download from the first URL and, if it fails, tries the second URL. If both attempts fail, it calls eexit, but eexit is not defined in the provided code snippet, which will cause the script to fail silently.

Recommended Solution:
Define the eexit function or replace it with a standard error handling mechanism like echo and exit 1.

eexit() {
    echo "$1" >&2
    exit 1
}

Or replace the eexit call with:

[ $? -ne 0 ] && { echo 'cannot find 1M.txt' >&2; exit 1; }

Comment on lines 41 to 47
fi

if [ ! -f ./1G.txt ]; then
-    curl -sf --connect-timeout 10 'http://ndr.md/data/dummy/1G.txt' > 1G.txt
+    curl -sf --connect-timeout 10 'atlas-group.cs.brown.edu/data/dummy/1G.txt' > 1G.txt
    if [ $? -ne 0 ]; then
        touch 1G.txt
        for (( i = 0; i < 10; i++ )); do

The logic for creating the 1G.txt file does not handle the case where the curl command fails to download the file from the specified URL. If the curl command fails, it proceeds to create the file by concatenating 100M.txt ten times. However, if 100M.txt does not exist, this will result in an empty 1G.txt file without any error notification.

Recommended Solution:
Add a check to ensure 100M.txt exists before attempting to concatenate it to create 1G.txt.

if [ ! -f ./1G.txt ]; then
    curl -sf --connect-timeout 10 'atlas-group.cs.brown.edu/data/dummy/1G.txt' > 1G.txt
    if [ $? -ne 0 ]; then
        if [ ! -f ./100M.txt ]; then
            echo "100M.txt not found, cannot create 1G.txt" >&2
            exit 1
        fi
        touch 1G.txt
        for (( i = 0; i < 10; i++ )); do
            cat 100M.txt >> 1G.txt
        done
    fi
fi

Comment on lines 68 to 81

# download wamerican-insane dictionary and sort according to machine
if [ ! -f ./dict.txt ]; then
-    curl -sf --connect-timeout 10 'http://ndr.md/data/dummy/dict.txt' | sort > dict.txt
+    curl -sf --connect-timeout 10 'atlas-group.cs.brown.edu/data/dummy/dict.txt' | sort > dict.txt
    if [ $? -ne 0 ]; then
        sort words > sorted_words
    fi
fi

if [ ! -f ./all_cmds.txt ]; then
-    curl -sf --connect-timeout 10 'http://ndr.md/data/dummy/all_cmds.txt' > all_cmds.txt
+    curl -sf --connect-timeout 10 'atlas-group.cs.brown.edu/data/dummy/all_cmds.txt' > all_cmds.txt
    if [ $? -ne 0 ]; then
        # This should be OK for tests, no need for abort
        ls /usr/bin/* > all_cmds.txt

The logic for downloading the dict.txt and all_cmds.txt files does not handle the case where the curl command fails to download the files from the specified URLs. If the curl command fails, it proceeds to use alternative methods to create the files, but it does not notify the user of the failure.

Recommended Solution:
Add error notifications to inform the user when the curl command fails to download the files.

if [ ! -f ./dict.txt ]; then
    curl -sf --connect-timeout 10 'atlas-group.cs.brown.edu/data/dummy/dict.txt' | sort > dict.txt
    if [ $? -ne 0 ]; then
        echo "Failed to download dict.txt, using local words file" >&2
        sort words > sorted_words
    fi
fi

if [ ! -f ./all_cmds.txt ]; then
    curl -sf --connect-timeout 10 'atlas-group.cs.brown.edu/data/dummy/all_cmds.txt' > all_cmds.txt
    if [ $? -ne 0 ]; then
        echo "Failed to download all_cmds.txt, using local /usr/bin/*" >&2
        ls /usr/bin/* > all_cmds.txt
    fi
    append_nl_if_not ./all_cmds.txt
fi

Comment on lines 101 to 105
for file in "${input_files[@]}"; do
    hdfs dfs -put $file /oneliners/$file
    rm -f $file
done

The logic for uploading files to HDFS and then deleting them locally does not handle errors that may occur during the hdfs dfs -put command. If the hdfs dfs -put command fails, the script will still proceed to delete the local files, potentially resulting in data loss.

Recommended Solution:
Add error handling to ensure that local files are only deleted if the hdfs dfs -put command succeeds.

for file in "${input_files[@]}"; do
    hdfs dfs -put $file /oneliners/$file
    if [ $? -eq 0 ]; then
        rm -f $file
    else
        echo "Failed to upload $file to HDFS" >&2
    fi
done

# pkill -9 -f worker.sh
# pkill -9 -f hdfs

ps aux | grep -E 'dish|pash|hdfs' | grep -Ev 'killall|dish\|pash\|hdfs|worker.py' | awk '{print $2}' | xargs kill -9

The use of kill -9 is very forceful and can lead to data corruption or other unintended side effects because it does not allow the processes to clean up. It is generally better to use a gentler signal like SIGTERM (signal 15) first, and only use SIGKILL (signal 9) if the process does not terminate.

Recommended Solution:
Use kill -15 initially and only escalate to kill -9 if necessary.

ps aux | grep -E 'dish|pash|hdfs' | grep -Ev 'killall|dish\|pash\|hdfs|worker.py' | awk '{print $2}' | xargs kill -15
sleep 5
ps aux | grep -E 'dish|pash|hdfs' | grep -Ev 'killall|dish\|pash\|hdfs|worker.py' | awk '{print $2}' | xargs kill -9

@@ -0,0 +1,3 @@
command="$DISH_TOP/runtime/bin/datastream --type read $@"

Using eval to construct and execute commands can be dangerous as it may lead to command injection vulnerabilities. Instead, consider using an array to safely handle arguments.

Recommended Solution:

command=("$DISH_TOP/runtime/bin/datastream" "--type" "read" "$@")
"${command[@]}"

Micro-Learning Topic: OS command injection (Detected by phrase)

Matched on "command injection"

What is this? (2min video)

In many situations, applications will rely on OS provided functions, scripts, macros and utilities instead of reimplementing them in code. While functions would typically be accessed through a native interface library, the remaining three OS provided features will normally be invoked via the command line or launched as a process. If unsafe inputs are used to construct commands or arguments, it may allow arbitrary OS operations to be performed that can compromise the server.

Try a challenge in Secure Code Warrior

Helpful references
  • OWASP Command Injection - OWASP community page with comprehensive information about command injection, and links to various OWASP resources to help detect or prevent it.
  • OWASP testing for Command Injection - This article is focused on providing testing techniques for identifying command injection flaws in your applications

@@ -0,0 +1,3 @@
command="$DISH_TOP/runtime/bin/datastream --type write $@"

Using eval to construct and execute commands can be dangerous as it may lead to command injection vulnerabilities. Instead, consider using an array to safely handle arguments.

Recommended Solution:

command=("$DISH_TOP/runtime/bin/datastream" "--type" "write" "$@")
"${command[@]}"

@@ -60,10 +60,12 @@ echo -e "\nexport PATH=\$PATH:$(go env GOPATH)/bin" >> ~/.bashrc
export PATH="$PATH:$(go env GOPATH)/bin"

The line export PATH="$PATH:$(go env GOPATH)/bin" is redundant as it appears earlier in the script at line 43. This redundancy can lead to confusion and maintenance issues.

Recommended Solution:
Remove the redundant line to ensure the script is clean and maintainable.

Comment on lines +7 to +10
for i in {1..100}
do
    cat /usr/share/dict/words >> words-repeated.txt
done

Repeatedly using cat in a loop to append the contents of a file can be inefficient, especially for large files. This approach can be optimized to improve performance.

Recommended Solution:
Instead of using a loop, you can use the yes command to repeat the file name and let a single cat invocation concatenate it the desired number of times:

yes /usr/share/dict/words | head -n 100 | xargs cat > words-repeated.txt

sourcery-ai bot left a comment

Hey @2lambda123 - I've reviewed your changes and they look great!

Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment to tell me if it was helpful.

Labels: NO JIRA, PR:size/XXL, release, size/XL
3 participants