
[backend] v1 caching broken in KFP 1.8 release branch #7266

Closed
chensun opened this issue Feb 5, 2022 · 0 comments · Fixed by #7267
Comments

chensun (Member) commented Feb 5, 2022

The root cause is argoproj/argo-workflows#6022, which introduced a behavior change: the content of the Argo workflow template is no longer recorded under pod.metadata.annotations; instead, the template is placed in a container environment variable.

The change was released in Argo Workflows v3.2.0-rc1, and #6920 upgraded the Argo Workflows version used in KFP from 3.1.14 to 3.2.3. This broke the v1 caching implementation, which relies on the Argo workflow template being present in pod.metadata.annotations.
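For illustration, the fix described above amounts to falling back to the container env var when the annotation is absent. The sketch below is a hypothetical simplification, not the actual KFP cache-server code: it uses minimal stand-in structs instead of the k8s.io/api/core/v1 Pod types, and assumes the annotation key `workflows.argoproj.io/template` (pre-3.2 location) and the env var name `ARGO_TEMPLATE` (3.2+ location) used by Argo Workflows.

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes Pod fields involved; the real
// implementation works with k8s.io/api/core/v1 types.
type EnvVar struct {
	Name  string
	Value string
}

type Container struct {
	Name string
	Env  []EnvVar
}

type Pod struct {
	Annotations map[string]string
	Containers  []Container
}

const (
	// Location of the serialized template before Argo Workflows v3.2.
	templateAnnotation = "workflows.argoproj.io/template"
	// Location of the serialized template from Argo Workflows v3.2 on.
	templateEnvVar = "ARGO_TEMPLATE"
)

// getArgoTemplate returns the serialized Argo workflow template for a pod,
// checking the legacy annotation first and falling back to the container
// env var introduced in Argo Workflows v3.2.
func getArgoTemplate(pod *Pod) (string, bool) {
	if tmpl, ok := pod.Annotations[templateAnnotation]; ok {
		return tmpl, true
	}
	for _, c := range pod.Containers {
		for _, e := range c.Env {
			if e.Name == templateEnvVar {
				return e.Value, true
			}
		}
	}
	return "", false
}

func main() {
	// A pod created by Argo Workflows >= 3.2: no template annotation,
	// template carried in the main container's env instead.
	pod := &Pod{
		Annotations: map[string]string{},
		Containers: []Container{
			{Name: "main", Env: []EnvVar{{Name: "ARGO_TEMPLATE", Value: `{"name":"echo"}`}}},
		},
	}
	tmpl, ok := getArgoTemplate(pod)
	fmt.Println(ok, tmpl)
}
```

With this fallback, caching keeps working for workflows created by both the old and the new Argo Workflows versions.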

@chensun chensun self-assigned this Feb 5, 2022
google-oss-prow bot pushed a commit that referenced this issue Feb 7, 2022
…Fixes #7266 (#7267)

* Fix v1 caching to read argo template from container env.

* fix test
chensun added a commit to chensun/pipelines that referenced this issue Feb 7, 2022
…Fixes kubeflow#7266 (kubeflow#7267)

* Fix v1 caching to read argo template from container env.

* fix test
chensun added a commit that referenced this issue Feb 8, 2022
…Fixes #7266 (#7267)

* Fix v1 caching to read argo template from container env.

* fix test
abaland pushed a commit to abaland/pipelines that referenced this issue May 29, 2022
…Fixes kubeflow#7266 (kubeflow#7267)

* Fix v1 caching to read argo template from container env.

* fix test