Failed to call webhook from API server with AWS EKS 1.19 #1224
Comments
I see a webhook problem as well; my Spark ConfigMap is not getting attached to the driver.

I'm getting this as well after upgrading to EKS 1.19.

I upgraded to the latest version of the Spark operator and the issue was resolved.

Not sure if this issue has been addressed already. I'm getting the error below with AWS EKS (Kubernetes 1.19, Helm v3.6.3, operator image tag v1beta2-1.2.3-3.1.1, chart version 1.1.8): all Spark applications fail to load volumes and ConfigMaps. Note that all configurations work as expected in our on-premises Kubernetes deployment.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
E0412 13:50:38.660172 1 dispatcher.go:171] failed calling webhook "webhook.sparkoperator.k8s.io": Post "https://spark-operator-webhook.spark-operator.svc:443/webhook?timeout=30s": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
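The error means the webhook's serving certificate identifies itself only via the legacy Common Name field. Go 1.15+ (which the API server is built with) requires the hostname to appear in the certificate's Subject Alternative Names, so the TLS handshake to the webhook service fails. As a rough sketch (not the operator's actual cert-generation code), this is how a SAN-bearing self-signed certificate for the webhook service DNS name could be produced and checked with openssl; the file names are illustrative, and `-addext` requires OpenSSL 1.1.1 or newer:

```shell
# Generate a self-signed cert whose SAN lists the webhook service DNS name,
# the form Go 1.15+ clients require; CN alone is no longer sufficient.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout webhook.key -out webhook.crt -days 365 \
  -subj "/CN=spark-operator-webhook.spark-operator.svc" \
  -addext "subjectAltName=DNS:spark-operator-webhook.spark-operator.svc"

# Confirm the SAN extension is present; certs that trigger the x509 error
# above will have no subjectAltName section here.
openssl x509 -in webhook.crt -noout -ext subjectAltName
```

Upgrading the operator (as noted in the comments above) resolves this because newer releases generate SAN-bearing webhook certificates; the `GODEBUG=x509ignoreCN=0` escape hatch mentioned in the error is only a temporary workaround and was removed in later Go releases.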