Migrate triage dashboard to wg-k8s-infra #1305
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/assign @spiffxp
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale
Ahh right, I had hoped this would be just as simple as
The problem is it's much more complicated because the bucket uses ACLs, so there's no guarantee that a delete/recreate like I did last time would recreate all of the ACLs (and it's unclear whether there are google-internal things hidden in there).
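Before touching the bucket, it's worth snapshotting its existing access configuration. A minimal sketch of what that could look like with gsutil, assuming read access to the bucket (the output filenames are arbitrary):

```bash
# Dump the bucket's IAM policy, default object ACLs, and legacy bucket ACLs
# so they can be compared or restored if the bucket is ever recreated.
gsutil iam get gs://k8s-gubernator    > k8s-gubernator.iam.json
gsutil defacl get gs://k8s-gubernator > k8s-gubernator.defacl.json
gsutil acl get gs://k8s-gubernator    > k8s-gubernator.acl.json
```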
I'm going to at least start with:
I toyed around with trying to create
For now I'm opting to:
The canary job is refusing to schedule: https://testgrid.k8s.io/wg-k8s-infra-canaries#triage. So:
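For what it's worth, here is a minimal sketch of how one might inspect why the pod won't schedule, assuming access to the build cluster; the context name, namespace, and pod name below are placeholders, not values taken from this issue:

```bash
# List pods stuck in Pending in the namespace where prow jobs run.
kubectl --context=wg-k8s-infra-build -n test-pods get pods \
  --field-selector=status.phase=Pending

# Describe one of them; the Events section usually names the scheduling blocker
# (node selectors, taints, or unsatisfiable resource requests).
kubectl --context=wg-k8s-infra-build -n test-pods describe pod <triage-canary-pod-name>
```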
For some reason the triage image isn't in staging, which I would have expected after kubernetes/test-infra#23126.
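A quick way to confirm whether anything has been pushed is sketched below; the staging repository path is my assumption (guessing at gcr.io/k8s-staging-triage/triage), not something stated in this issue:

```bash
# List recent tags in the presumed staging repository; an empty result means
# no image has been pushed there yet.
gcloud container images list-tags gcr.io/k8s-staging-triage/triage --limit=10
```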
... that would be because nothing has landed that would trigger the push to staging after the job config was updated. I was originally going to make a dummy change just to push a new image, but in grepping around I found more to do.
Next steps:
We are very nearly done now:
When the final PR merges I will:
Will close after one last PR:
Arbitrary old link I verified the redirect with:
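As a sketch of the kind of check involved (the old-style URL here is the one quoted in the issue description, used purely as an example):

```bash
# Show where go.k8s.io/triage redirects to now.
curl -sI https://go.k8s.io/triage | grep -i '^location'

# Check what an old-style bucket URL returns (redirect vs. direct content).
curl -sI https://storage.googleapis.com/k8s-gubernator/triage/index.html | head -n 1
```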
I'm a bit hesitant to shout "success!" because kettle being down (kubernetes/test-infra#23135) makes triage look broken, but it's just as broken as it was before migration. We're reading and clustering data on community infra, but there's no new data to cluster right now...
/close
@spiffxp: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Part of migrating away from gcp-project k8s-gubernator: #1308
Triage is made up of a few components:
Where should we move things to? My suggestions:
One other wrinkle: visiting https://go.k8s.io/triage redirects to https://storage.googleapis.com/k8s-gubernator/triage/index.html, which exposes the bucket. If we change to a new bucket, we're probably going to break a lot of existing URIs. Ideally we can serve a 301 redirect pointing to the new location. Bonus points if we could mask out the bucket name (e.g. have triage.k8s.io be our location).
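For the data itself, the copy step could look something like the sketch below; the destination bucket name gs://k8s-triage is purely hypothetical, and the assumption that the data stays publicly readable mirrors how the current bucket is served:

```bash
# Mirror the existing triage data into a new bucket (destination name is hypothetical).
gsutil -m rsync -r gs://k8s-gubernator/triage gs://k8s-triage/

# Keep the results world-readable, as the current storage.googleapis.com URLs imply.
gsutil iam ch allUsers:objectViewer gs://k8s-triage
```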