Cleanup kube play workloads if error happens #16750
Conversation
@rhatdan PTAL
Changes LGTM, but note that the teardown-on-error had to happen in pkg/domain/infra/api if we want to include REST-API calls.
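A minimal sketch of that placement, assuming hypothetical names (Engine, PlayKube, teardown) rather than podman's actual API: because the entrypoint lives in the shared infra layer, a single deferred cleanup keyed off a named error return covers both the local CLI path and REST-API calls.

```go
package infra

import (
	"context"
	"io"

	"github.com/sirupsen/logrus"
)

// Illustrative stand-ins for podman's real types; all names are hypothetical.
type PlayKubeOptions struct{}
type PlayKubeReport struct{}

type Engine struct{}

// playKubePods stands in for the code that creates the pods, containers,
// and volumes described by the YAML.
func (e *Engine) playKubePods(ctx context.Context, body io.Reader, opts PlayKubeOptions) (*PlayKubeReport, error) {
	return &PlayKubeReport{}, nil
}

// teardown stands in for the code that removes whatever a failed run
// managed to create; quiet suppresses its output.
func (e *Engine) teardown(ctx context.Context, quiet bool) error {
	return nil
}

// PlayKube is the shared entrypoint, so the deferred cleanup below runs
// for both CLI invocations and REST-API calls. The named return value
// finalErr is what lets the defer see whether the play failed.
func (e *Engine) PlayKube(ctx context.Context, body io.Reader, opts PlayKubeOptions) (_ *PlayKubeReport, finalErr error) {
	defer func() {
		if finalErr != nil {
			// Best-effort cleanup of partially created workloads.
			if err := e.teardown(ctx, true); err != nil {
				logrus.Errorf("cleaning up partial kube play workloads: %v", err)
			}
		}
	}()
	return e.playKubePods(ctx, body, opts)
}
```

With this shape, the defer fires on any error return from the play, so no individual error path needs its own cleanup call.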
Force-pushed from 9e38fff to 6ccd312
@umohnani8 Rebase required.
Rebased
One non-blocking nit, LGTM
Force-pushed from 9ace32f to 94b6dd1
If an error happens while playing a kube yaml, clean up any pods, containers, and volumes that might have been created before the error was hit. This improves the user experience: when users re-run the same yaml with their fixes, podman doesn't complain about existing workloads from the previously failed run. Suppress the cleanup output when cleanup happens after an error, as the user doesn't need to see or know about it.
Signed-off-by: Urvashi Mohnani <[email protected]>
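To make the output suppression concrete, here is a small self-contained sketch (names hypothetical, not podman's actual code) where teardown prints what it removed only when the user explicitly requested it, and stays silent when it runs as after-error cleanup:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// teardownOptions is a hypothetical illustration of the quiet behavior
// the commit message describes.
type teardownOptions struct {
	Quiet bool      // true when cleaning up after a failed kube play
	Out   io.Writer // where an explicit teardown reports removals
}

// removeWorkloads deletes the named pods/containers/volumes and reports
// each removal only for a user-requested teardown, not error cleanup.
func removeWorkloads(names []string, opts teardownOptions) error {
	for _, name := range names {
		// ... the actual removal of the workload would happen here ...
		if !opts.Quiet {
			fmt.Fprintln(opts.Out, name)
		}
	}
	return nil
}

func main() {
	workloads := []string{"pod-a", "container-b", "volume-c"}
	// An explicit teardown lists what it removed...
	_ = removeWorkloads(workloads, teardownOptions{Out: os.Stdout})
	// ...while cleanup after a failed play produces no output.
	_ = removeWorkloads(workloads, teardownOptions{Quiet: true, Out: os.Stdout})
}
```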
LGTM
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: flouthoc, umohnani8, vrothberg. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /lgtm.
Does this PR introduce a user-facing change?