fix(cloud): emit ns statuses for in-cluster builds #4628
Conversation
It does, yeah—forgot to mention that in the description.
Force-pushed from 3ea34d4 to aa4bfa9.
Had one small comment.
Otherwise I feel like we need to re-think this in general but don't want to block the fix.
But shouldn't this be handled in a consistent manner at the action or router level, with events that include plugin-specific information, so that it extends beyond just K8s?
AEC would then be responsible for applying the cleanup for different plugins (or using the CLI to do it).
In this PR, we're catching […]
That totally makes sense—we can always change/extend the […].
In general, I understand your reservations about this PR. That said, my thinking was that the […].
Also, both before and after this PR, we're relying on integ tests to ensure that we don't accidentally make changes that result in us no longer sending the […].
I do think that going forward, it would be nice to have a plugin-level mechanism to register event types for the router to listen for.
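To illustrate that idea, here is a purely hypothetical sketch of such a registration mechanism. None of these names (`PluginEventForwarding`, `forwardPluginEvents`, `kubernetesForwarding`) exist in the Garden codebase; this is just one possible shape for letting plugins declare which of their broker events the router should forward.

```ts
// Hypothetical: a plugin declares which event types emitted on its plugin
// event broker (ctx.events) should be re-emitted on the core event bus.
interface PluginEventForwarding {
  forwardedEvents: string[]
}

// Example: the kubernetes plugin opting in to forwarding namespace statuses.
const kubernetesForwarding: PluginEventForwarding = {
  forwardedEvents: ["namespaceStatus"],
}

// A router could then forward events generically instead of hard-coding
// individual event names per plugin.
function forwardPluginEvents(
  ctxEvents: { on: (name: string, listener: (payload: unknown) => void) => void },
  gardenEvents: { emit: (name: string, payload: unknown) => void },
  forwarding: PluginEventForwarding
) {
  for (const eventName of forwarding.forwardedEvents) {
    ctxEvents.on(eventName, (payload) => gardenEvents.emit(eventName, payload))
  }
}
```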
Force-pushed from 547ab7b to 228270c.
Oh I don’t have reservations about the PR per se. Rather, it's this flow in general that needs to be revisited separately IMO, so we can extend AEC to more plugins.
Force-pushed from 495df1f to fd0fd3d.
Should be good to go now, @vvagaytsev.
Thank you! I've left one more comment here. If the duplicate events do no harm, then all is good!
Force-pushed from 3f17bed to 5bf7193.
What this PR does / why we need it:
We now emit a `namespaceStatus` event whenever one of our internal helpers for checking the status of a Kubernetes namespace is called. This is done whenever we e.g. perform a Deploy, Test or Run in that namespace, or when we perform an in-cluster Build in that namespace (e.g. via BuildKit or Kaniko).

Here, we also remove namespace statuses from the relevant result types (e.g. `ServiceStatus`, `RunStatus` and `EnvironmentStatus` and the associated schemas). These added unwanted noise to the interfaces and schemas. Instead, we emit the `namespaceStatus` events on the plugin event broker (`ctx.events`) and re-emit them to `garden.events` in the base router.

Which issue(s) this PR fixes:
Previously, automatic environment cleanup would not get triggered for namespaces where only Builds had taken place (i.e. no Deploys, Runs or Tests). This fixes that.
Fixes #4609
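For context, here is a rough sketch of the emit/re-emit flow described above. The emitter setup and the helper name `getNamespaceStatus` are illustrative assumptions, not the actual Garden implementation; it just shows how an event emitted on the plugin event broker can reach `garden.events` regardless of which action triggered the namespace check.

```ts
// Minimal sketch, assuming Node-style event emitters for both the plugin
// event broker (ctx.events) and the core event bus (garden.events).
import { EventEmitter } from "events"

interface NamespaceStatus {
  pluginName: string
  namespaceName: string
  state: "ready" | "missing"
}

const ctxEvents = new EventEmitter() // plugin event broker
const gardenEvents = new EventEmitter() // core event bus

// A namespace-status helper emits the event whenever it checks a namespace,
// whether the caller is a Deploy, Test, Run or an in-cluster Build.
function getNamespaceStatus(namespaceName: string): NamespaceStatus {
  const status: NamespaceStatus = { pluginName: "kubernetes", namespaceName, state: "ready" }
  ctxEvents.emit("namespaceStatus", status)
  return status
}

// The base router re-emits plugin-broker events on garden.events, so that
// e.g. automatic environment cleanup also sees namespaces used only for Builds.
ctxEvents.on("namespaceStatus", (status: NamespaceStatus) => {
  gardenEvents.emit("namespaceStatus", status)
})

// Example: checking a namespace from a build handler now results in a
// namespaceStatus event reaching garden.events.
gardenEvents.on("namespaceStatus", (status) => console.log("namespaceStatus:", status))
getNamespaceStatus("my-project-dev")
```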
Special notes for your reviewer:
I wrote a test case verifying that `namespaceStatus` events get emitted during BuildKit builds, but it kept hanging due to something Mutagen-related. I've verified manually that the events do get emitted (and so long as we're using the standard helpers for getting the Kubernetes namespace status in the BuildKit and Kaniko handlers, we should be good).

We should investigate this further, but I didn't want to get bogged down in debugging a test case that seemed like it might also introduce flakiness into our integ test suite.