[k8s 1.21] BoundServiceAccountTokenVolume refresh token #6845
Related to #6804 (and fabric8io/kubernetes-client#2271 & fabric8io/kubernetes-client#2112)

If I understood it correctly, it should have been implemented as part of fabric8io/kubernetes-client#2271, in Fabric8 6.1.0. That should mean it is fixed in Strimzi 0.31.0 and newer. Can someone affected by this verify whether that is the case?
Hey Jakub @scholzj, is this also related to the changes in 1.24?
I am using Strimzi 0.26.1, and after an upgrade of the cluster (AKS) the service account for the operator no longer has a token mounted.
Does the operator support the TokenRequest subresource to obtain a token to access the API? I no longer see a Secret with a token.
I did not open this issue and never had any problems with this, so I do not know how it relates to the changes in Kubernetes 1.24. Sorry.
Discussed on the community call on 18.4.: this does not seem to be an issue anymore with current versions and can be closed. If not, we need a better explanation of what the problem is and what needs to be done.
Hi,
Kubernetes version 1.21 graduated BoundServiceAccountTokenVolume feature to beta and enabled it by default. This feature improves security of service account tokens by requiring a one hour expiry time, over the previous default of no expiration. This means that applications that do not refetch service account tokens periodically will receive an HTTP 401 unauthorized error response on requests to Kubernetes API server with expired tokens.
In our Kubernetes audit logs we see that system:serviceaccount:kafka-operator:strimzi-cluster-operator is still using stale tokens.
As I understand it, this should be fixed by upgrading the kafka-operator to the latest release (Java client v9.0.0 and later).
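The fix in the Kubernetes clients essentially amounts to re-reading the projected token file periodically instead of caching it for the lifetime of the client, so that tokens rotated by the kubelet are picked up before they go stale. A minimal sketch of that pattern (this is illustrative Python, not Strimzi or fabric8 code; `TokenSource` and `max_age_seconds` are hypothetical names, while the token path is the standard in-cluster location):

```python
import time

# Standard in-cluster path where the kubelet projects the bound, expiring token.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

class TokenSource:
    """Hypothetical helper: re-reads the projected token file when the
    cached copy is older than max_age_seconds, so kubelet rotations
    are picked up instead of reusing a stale token until it expires."""

    def __init__(self, path=TOKEN_PATH, max_age_seconds=60):
        self.path = path
        self.max_age = max_age_seconds
        self._token = None
        self._read_at = 0.0

    def get_token(self):
        now = time.monotonic()
        if self._token is None or now - self._read_at > self.max_age:
            with open(self.path) as f:
                self._token = f.read().strip()
            self._read_at = now
        return self._token
```

A client would then build the `Authorization: Bearer` header from `get_token()` on every request, so a rotated token is used within `max_age_seconds` and the API server stops logging the stale-token audit annotation.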
To Reproduce
Steps to reproduce the behavior:
install the strimzi-kafka-operator in a Kubernetes cluster with version >= 1.21 and check the kube-apiserver audit logs in CloudWatch for stale tokens:

```
filter @logStream like 'kube-apiserver-audit'
| filter ispresent(annotations.authentication.k8s.io/stale-token)
| parse annotations.authentication.k8s.io/stale-token "subject: *," as subject
| stats count(*) as staleCount by subject, user.username
| sort staleCount desc
```
Expected behavior
No stale-token audit entries for the Strimzi service account.
Environment (please complete the following information):
YAML files and logs
$ cat strimzi-operator.yml

```yaml
- name: "Clear for sure chart repo"
  kubernetes.core.helm_repository:
    name: strimzi
    repo_state: absent
  ignore_errors: True

- name: Add chart repo
  kubernetes.core.helm_repository:
    name: strimzi
    repo_url: "https://strimzi.io/charts/"

- name: Upgrade or install Helm chart
  kubernetes.core.helm:
    release_namespace: "kafka-operator"
    create_namespace: "yes"
    release_name: "strimzi"
    chart_version: "0.28.0"
    chart_ref: "strimzi/strimzi-kafka-operator"
    wait: true
    wait_timeout: "300s"
    values:
      watchAnyNamespace: true
```
| subject | user.username | staleCount |
| --- | --- | --- |
| system:serviceaccount:kafka-operator:strimzi-cluster-operator | system:serviceaccount:kafka-operator:strimzi-cluster-operator | 4988 |