fix: explicitly apply minio-service with name #151
Conversation
The upstream pipelines project hardcodes the name of the object storage service to `minio-service`. The current implementation sets the service name to just `minio`, which conflicts with what pipelines expects. This PR ensures the Service is created with the expected name and ports. Refer to kubeflow/pipelines#9689 for more information.
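The name mismatch can be illustrated with a minimal sketch of the Service manifest the charm needs to apply. This is not the charm's actual code; the selector label and port values are assumptions based on a typical minio deployment:

```python
# Hedged sketch: the Service body needed so that upstream pipelines
# (which hardcodes "minio-service") can resolve the object storage.
# Port 9000 is minio's standard S3 API port; the selector label is an
# assumption about how the charm labels its pods.
expected_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "minio-service"},  # the name upstream kfp expects
    "spec": {
        "selector": {"app.kubernetes.io/name": "minio"},
        "ports": [{"name": "minio", "port": 9000, "targetPort": 9000}],
    },
}

# The previous implementation produced a Service named just "minio",
# which upstream pipelines cannot find under the hardcoded name:
old_name = "minio"
assert expected_service["metadata"]["name"] != old_name
```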
Thank you @DnPlas for the PR. Deployed minio from latest/edge and the one built from the PR, and verified that it now creates a service named `minio-service` which is identical to the one we had (meaning that we do not change any other configuration). Note also that the charm still creates the `minio` service, thus we end up with 2 identical services (`minio` and `minio-service`). Since this doesn't create an issue though, I'll go ahead and approve.
Hey @orfeas-k, thanks for reviewing. You have made a good point: explicitly defining a Service in the charm code will create a second Service with a different name but identical in all other fields. I have changed the approach to avoid this and instead used the KubernetesServicePatcher to make sure the Service is configured as we want.
Force-pushed from fe949f9 to a4f0714.
After trying this approach, we noticed that it won't work, as the charm would have to be trusted. Since this is a podspec charm, that is not possible. Reverting back to the initial approach of defining a Service directly as a Kubernetes Resource to be applied. For now, having two Services should not be an issue as the Service data that is actually used and shared comes from the
This reverts commit b99aad8.
This adds to kfp-api a service called `minio-service` which points to the related object-storage's s3 service. This has been added to address a bug in upstream kfp, as explained [here](canonical/minio-operator#151). This service was originally added to the minio charm in [minio pr 151](canonical/minio-operator#151), but has been refactored so it is added here instead as described in [minio issue 153](canonical/minio-operator#153). (cherry picked from commit 12572ca)
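One plausible way for kfp-api to expose the related object-storage under the hardcoded name is an ExternalName Service aliasing the real service's cluster DNS name. This is a hedged sketch, not the charm's actual implementation; the target service and namespace names are illustrative assumptions:

```python
# Hedged sketch (not the charm's actual code): alias the related
# object-storage service under the name upstream kfp hardcodes.
def minio_alias_service(target_service: str, namespace: str) -> dict:
    """Build a Service manifest that aliases `target_service` via DNS."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "minio-service"},  # hardcoded in upstream kfp
        "spec": {
            "type": "ExternalName",
            # Standard in-cluster DNS name of the real service.
            "externalName": f"{target_service}.{namespace}.svc.cluster.local",
        },
    }

# Illustrative usage; "minio" and "kubeflow" are assumed names.
svc = minio_alias_service("minio", "kubeflow")
```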
Testing
To test this change, you can build and deploy the charm and look for the Service `minio-service`. It should have the following:
- name: `minio-service`
- a `minio` port with `targetPort: 9000`
- a `console` port with `targetPort: 9001`
- selector `app.kubernetes.io/name: minio`
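The checks above can be sketched as assertions you could run against the JSON output of `kubectl get svc minio-service -o json`. The `fetched` dict here is a stand-in for real cluster output, not actual data from a deployment:

```python
# Stand-in for `kubectl get svc minio-service -o json` output (assumed shape).
fetched = {
    "metadata": {"name": "minio-service"},
    "spec": {
        "selector": {"app.kubernetes.io/name": "minio"},
        "ports": [
            {"name": "minio", "port": 9000, "targetPort": 9000},
            {"name": "console", "port": 9001, "targetPort": 9001},
        ],
    },
}

def check_service(svc: dict) -> bool:
    """Verify the Service matches the properties listed above."""
    ports = {p["name"]: p["targetPort"] for p in svc["spec"]["ports"]}
    return (
        svc["metadata"]["name"] == "minio-service"
        and ports == {"minio": 9000, "console": 9001}
        and svc["spec"]["selector"] == {"app.kubernetes.io/name": "minio"}
    )

assert check_service(fetched)
```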
These are the logs we are trying to avoid with the changes in this PR:
Refer to kubeflow/pipelines#9689 for more information