[frontend] logs and artifacts stored in different MinIO locations and can't be accessed by frontend #6428
Comments
Might be related to #3818.
Not sure if it's related to that issue @Bobgy, but looking in
I tried changing this to
but the object path is wrong, as the UI looks for e.g.
@jonasdebeukelaer any workaround for this particular issue?
which produces: I tried changing the key format. From: To:
@andrijaperovic For now we're unfortunately having to access the artefacts through the MinIO interface by port-forwarding.
This is quite an annoying bug for us at the moment. I'd be happy to try to fix this issue if it still needs picking up. If someone can point me to where to look, that'd be ace.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
It has been one year. If anyone has found out how to resolve or work around this bug, that'd be great. I'm sure this is one of the major bugs.
Yes, it is. It is very annoying.
Having the same issue with Kubeflow Pipelines 0.2.5: if pods created by a workflow are deleted, logs cannot be accessed from the MinIO artifact store, even if they are archived there. Has anyone actually found a way to make this work?
I think I found the root cause of the problem. It is likely that the schema for the Argo Workflow status has changed, so the Argo Workflow fields that are retrieved by pipelines/frontend/server/workflow-helper.ts (line 157 in 4f8cae2) no longer match.
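If the status schema no longer matches what the frontend expects, that lookup would come back empty, which fits the "Unable to find pod log archive information" error. The real code is the TypeScript in workflow-helper.ts; below is only a hypothetical Python sketch of the equivalent lookup, assuming the Argo v3 status layout where archived logs appear as a `main-logs` output artifact under `status.nodes`:

```python
def find_pod_log_s3_key(workflow_status: dict, pod_name: str):
    """Sketch of what the frontend's workflow-helper does: look up the
    archived 'main-logs' artifact for a pod in the Argo workflow status.

    Returns None when the status does not carry the expected fields,
    which corresponds to the 'Unable to find pod log archive
    information from workflow status' error in the UI.
    """
    node = workflow_status.get("nodes", {}).get(pod_name, {})
    for artifact in node.get("outputs", {}).get("artifacts", []):
        # Archived container logs are conventionally named 'main-logs'
        # and carry an 's3' section with the object key.
        if artifact.get("name") == "main-logs" and "s3" in artifact:
            return artifact["s3"].get("key")
    return None
```

If the schema moved these fields (or renamed the artifact), this function would return None even though the logs are safely archived in MinIO.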
According to #8935 (comment), #10568 should have fixed it. Do you still have issues?
I still have the issue with KFP 2.2.0, which includes #10568; it does not look like the UI was updated to retrieve the S3 log artifacts stored by Argo. I have a fix in my fork that shows what is needed for the UI to work with the archived logs: master...pdettori:pipelines:s3-pod-logs-fix
Since this issue is about the original V1 behavior. However, as @pdettori (and the other recent comments) have raised, there is a similar issue present in V2. Let's continue the discussion on the other issue: /close
@thesuperzapper: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@pdettori I see in #6428 (comment) that you propose a solution to the V2 issue. Can you please continue this discussion on the V2 issue: #10036 |
Environment
To find the version number, see the version shown at the bottom of the KFP UI left sidenav.
Steps to reproduce
Then we see that:
Failed to get object in bucket mlpipeline at path v2/artifacts/pipeline/example pipeline/e154ea68-d59c-46ba-aea4-ead5b9eb0ea8/square/metrics: S3Error: The specified key does not exist.
Error response: Could not get main container logs: Error: Unable to find pod log archive information from workflow status.
Looking in MinIO, it seems
the artifact is stored at
mlpipeline/v2/artifacts/pipeline/example%20pipeline/e154ea68-d59c-46ba-aea4-ead5b9eb0ea8/square/
and the logs and metrics are stored at
mlpipeline/artifacts/example-pipeline-ljzls/2021/08/25/example-pipeline-ljzls-1034021739/
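The two locations differ both in prefix (`v2/artifacts/pipeline/...` for the artifact vs `artifacts/<workflow-name>/...` for the logs and metrics) and in how the pipeline name is encoded: the stored key URL-encodes the space (`example%20pipeline`), while the error message shows the raw name (`example pipeline`). A small illustration of the encoding half of the mismatch, using only the standard library:

```python
from urllib.parse import quote, unquote

# As seen in the MinIO object key:
stored_segment = "example%20pipeline"
# As seen in the UI's "specified key does not exist" error:
requested_segment = "example pipeline"

# The raw name only matches the stored key after URL-encoding,
# so a lookup with the raw name misses the object.
assert quote(requested_segment) == stored_segment
assert unquote(stored_segment) == requested_segment
```

So even within the same bucket, a client that does not encode (or decode) the name consistently with the writer will fail to find the object.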
minimal code sample:
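The original sample was not captured above. Based on the object paths (a run of "example pipeline" with a `square` step writing a metrics artifact), a plain-Python sketch of such a step might look like the following; the function name matches the path above, but everything else (KFP component decorators omitted, metric name, payload shape) is an assumption:

```python
def square(x: float) -> dict:
    """Hypothetical 'square' step: squares its input and returns a
    metrics payload like the one the UI tries to fetch at
    .../square/metrics."""
    value = x * x
    # Shape modeled on the classic KFP metrics JSON convention.
    return {"metrics": [{"name": "squared-value", "numberValue": value}]}
```

In a real pipeline this would be wrapped as a KFP component, and the returned metrics would be serialized to the artifact store by the workflow.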
Expected result
The logs and metrics can be retrieved from MinIO by the frontend even when pods are deleted.
Materials and Reference
Is this a configuration issue? Should I move things to point only to v2 path?
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.