Operator projects using the removed APIs in k8s 1.22 require changes. #214
Related: #190
@camilamacedo86 I had to use the following options to regen my CRD:
Note the controller-gen version: 0.3.0.
Hi @ricardozanini,
I am happy that you could solve it. Note that the suggestion above uses a different version of controller-gen, which might be why it did not work for you:
Anyway, as described above, it is hard to provide direct guidance, and we hope these steps help in most scenarios, or at least help you figure out how to move forward. Thank you for your attention and commitment to making this project supportable on 1.22/OCP 4.9+.
* Fix #214 Upgrade CRD to v1 and remove Legacy Ingress
* k8s 1.16
Signed-off-by: Ricardo Zanini <[email protected]>
Currently running OKD 4.9 with k8s v1.22 and hit by this problem.
Oh, I just need to open the PR to the community.
@ricardozanini thanks!
I already have a snapshot ready, running tests :)
Docker for Desktop now runs Kubernetes 1.22.4, FYI.
Opened PRs to upgrade the operator:
I believe that tomorrow the catalog will be updated with this new version. Thanks for your patience, guys.
The operator is now available on OperatorHub and works perfectly on OKD 4.9/k8s 1.22.
@LCaparelli FYI
Failed on docker-for-desktop 4.3.0. Pod logs included.
Had some challenges on OKD 4.8 (operator auto update) and a fresh install on OKD 4.9. Other than that, all seems well on both OKD versions. Thanks for working on this.
@kapetre, unfortunately, the official Nexus image requires a root user to run. We didn't want to change the image ourselves and keep a separate registry used only by our operator, so configuring the SCC is a must. There are instructions in the project's README. Glad that you made it work. :)
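As an illustration only (the SCC, service account, and namespace names here are placeholders; the project's README is the authoritative source), granting an SCC on OpenShift/OKD generally looks like this:

```shell
# Illustrative only: grant an SCC to the service account that runs the Nexus pod.
# Replace the SCC, service account, and namespace with the values from the README.
oc adm policy add-scc-to-user anyuid -z nexus-operator -n nexus
```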
@tibcoplord it seems to me to be a problem with permissions on your volume:
Make sure the user running the container has the necessary permissions on the /nexus-data directory. I don't have experience with docker-for-desktop. :(
Volume was created by the operator ... nothing I did here. |
I noticed that the nexus process is running as userid nexus -
and most of the permissions in /nexus-data got set correctly -
But etc is not accessible to the nexus process. With 0.5.0 I see etc is owned by user nexus. The only workaround I can think of is to add a second volume at /nexus-data/etc.
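A minimal sketch of that workaround as a pod spec fragment (container, volume, and claim names are placeholders, not taken from this operator):

```yaml
# Illustrative pod spec fragment: mount an emptyDir over /nexus-data/etc
# so the nexus user can write there; names are placeholders.
containers:
  - name: nexus-server
    volumeMounts:
      - name: nexus-data
        mountPath: /nexus-data
      - name: nexus-data-etc          # second volume covering only etc
        mountPath: /nexus-data/etc
volumes:
  - name: nexus-data
    persistentVolumeClaim:
      claimName: nexus-data           # placeholder claim name
  - name: nexus-data-etc
    emptyDir: {}
```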
If it matters, I used to use 0.5.0 from GitHub; 0.6.0 is so far only available on OperatorHub.
I'm going to upload the assets today. :) I need to take a look at the commits related to the |
@tibcoplord On OpenShift and Kubernetes, the user running in the container has the correct permissions to access this directory. Maybe it's something we could configure in Docker for Desktop?
Any news on the 0.6.0 release on GitHub? Many thanks.
I'll release it today |
Having the same issue with |
@slenky, I think this is a matter of EKS configuration. If I can do something on the operator side, let me know. Can you help investigate? I don't have much time lately to look into this. |
@tibcoplord I'm investigating the CM privileges issue. Are you using the RH or the community image? The CM should be mounted with nexus user permissions.
Hello @ricardozanini, we are currently creating an emptyDir volume mounted to
Ok, I'll add this volume by default then |
Problem Description
Kubernetes has been deprecating API(s) that will be removed and are no longer available in 1.22. Operator projects using these API versions will not work on Kubernetes 1.22 or on any cluster vendor using this Kubernetes version (1.22), such as OpenShift 4.9+. The following are the APIs most likely to affect your project:
Therefore, it looks like this project distributes solutions in the repository and does not contain any version compatible with k8s 1.22/OCP 4.9 (more info). Below are some findings from checking the published distributions:
NOTE: The above findings concern only the manifests shipped inside the distribution; they do not check the codebase.
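For context, the most visible change in shipped manifests is the CRD apiVersion bump sketched below. Note that apiextensions.k8s.io/v1 also requires a structural schema under spec.versions[*].schema, which is why regenerating with the tooling described below is the recommended route rather than editing apiVersion by hand:

```yaml
# Removed in Kubernetes 1.22:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
---
# Available since Kubernetes 1.16 and required on 1.22+:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
```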
How to solve
It would be very nice to see new distributions of this project that no longer use these APIs, so that they work on Kubernetes 1.22 and newer and can be published in the community-operators collection. OpenShift 4.9, for example, will no longer ship operators that still use the v1beta1 extension APIs.
Due to the number of options available to build Operators, it is hard to provide direct guidance on updating your operator to support Kubernetes 1.22. Recent versions of the OperatorSDK (greater than 1.0.0) and Kubebuilder (greater than 3.0.0) scaffold your project with the latest versions of these APIs (for everything that is generated by the tools). See the guides to upgrade your projects with OperatorSDK Golang, Ansible, or Helm, or the Kubebuilder one. For APIs other than the ones mentioned above, you will have to check your code for usage of removed API versions and upgrade to newer APIs. The details of this depend on your codebase.
If this project only needs to migrate the API for CRDs and it was built with an OperatorSDK version lower than 1.0.0, you may be able to solve it with an OperatorSDK version >= v0.18.x and < 1.0.0:
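For example, something along these lines should regenerate the CRD manifests as apiextensions.k8s.io/v1 (assuming a legacy, pre-1.0.0 project layout; run from the project root):

```shell
# Regenerate CRD manifests targeting apiextensions.k8s.io/v1
# (requires operator-sdk >= v0.18.x and < 1.0.0)
operator-sdk generate crds --crd-version=v1
```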
Alternatively, you can try to upgrade your manifests with controller-gen (version >= v0.4.1):
If this project does not use Webhooks:
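For instance, an invocation along these lines (the paths and output directory assume a standard Kubebuilder-style layout and are illustrative):

```shell
# Regenerate CRD and RBAC manifests as v1, without webhook manifests
# (controller-gen >= v0.4.1)
controller-gen crd:trivialVersions=true,preserveUnknownFields=false \
  rbac:roleName=manager-role \
  paths="./..." \
  output:crd:artifacts:config=config/crd/bases
```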
If this project is using Webhooks:
Add the markers sideEffects and admissionReviewVersions to your webhook (Example with sideEffects=None and admissionReviewVersions={v1,v1beta1}: memcached-operator/api/v1alpha1/memcached_webhook.go):
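For illustration, a kubebuilder webhook marker carrying both fields might look like this (the group, resource, path, and webhook name are placeholders, not taken from this project):

```go
// Illustrative marker above a webhook implementation; the sideEffects and
// admissionReviewVersions fields are required to generate v1 webhook manifests.
// +kubebuilder:webhook:path=/validate-example-com-v1alpha1-mykind,mutating=false,failurePolicy=fail,sideEffects=None,groups=example.com,resources=mykinds,verbs=create;update,versions=v1alpha1,name=vmykind.kb.io,admissionReviewVersions={v1,v1beta1}
```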
Run the command:
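Again as a sketch (controller-gen >= v0.4.1; paths assume a Kubebuilder-style layout):

```shell
# Regenerate CRD, RBAC, and webhook manifests as v1 in one pass
controller-gen crd:trivialVersions=true,preserveUnknownFields=false \
  rbac:roleName=manager-role \
  webhook \
  paths="./..." \
  output:crd:artifacts:config=config/crd/bases
```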
For further information and tips see the comment.