Make helm charts consistent with how fields in spec are handled. (fleet-server only) #8285
Changes from 5 commits:
3f310ed, 9e0095a, e739f7d, 8217c26, 1261b81, b27130a, f07d51d
@@ -0,0 +1,19 @@

```yaml
version: 8.17.0-SNAPSHOT
deployment:
  replicas: 1
  podTemplate:
    spec:
      serviceAccountName: fleet-server
      automountServiceAccountToken: true
elasticsearchRefs:
  - name: eck-elasticsearch
    namespace: default
kibanaRef:
  name: eck-kibana
  namespace: default
http:
  service:
    spec:
      type: ClusterIP
serviceAccount:
  name: elastic-fleet-server
```
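For context, a values file like the one above would plausibly render to an Agent resource along these lines. This is a sketch only, not the chart's actual template output: the field names follow the ECK Agent CRD, but `mode`, `fleetServerEnabled`, and the exact mapping are assumptions here.

```yaml
# Hypothetical rendered manifest for the values above (illustrative, not the
# chart's verified output).
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
spec:
  version: 8.17.0-SNAPSHOT
  mode: fleet
  fleetServerEnabled: true
  deployment:
    replicas: 1
    podTemplate:
      spec:
        # The pod's own service account, from deployment.podTemplate:
        serviceAccountName: fleet-server
        automountServiceAccountToken: true
  elasticsearchRefs:
    - name: eck-elasticsearch
      namespace: default
  kibanaRef:
    name: eck-kibana
    namespace: default
  http:
    service:
      spec:
        type: ClusterIP
  # serviceAccount.name from the values would end up here, on the CR spec,
  # where it is used for ECK's cross-namespace association RBAC check:
  serviceAccountName: elastic-fleet-server
```

Note the two distinct `serviceAccountName` fields: one inside the pod template, one at the top level of the spec.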
Suggested change

for consistency with the other charts?
So I went back and looked at the original PR for the Agent+Fleet Helm charts to better understand this functionality, and we set the

I will try to take another look to understand better why we did that.

After looking through the git history I could not find a good argument for why we structured

In all the other charts

In the Agent/Fleet Server and Beats charts we actually need to generate a custom service account bound to a role with the use-case-dependent RBAC permissions for the application pods, so that the integrations the user wants to run work and can access the k8s API as needed. There are three problems with the approach taken:

Given that we have already released all three affected charts, I don't see how we can now stop supporting these attributes without potentially breaking customer installations.

tl;dr
There is a disconnect here between the name of this service account and the service account that is being generated.

We are also setting the generated service account name as the `.spec.serviceAccountName`, which is distinct from the service account in the pod spec. The latter is required for Agent to work correctly; the former is only relevant for the cross-namespace RBAC feature we have built into ECK ("Am I allowed to associate with an Elasticsearch in namespace x?"). I am not sure if it is a problem to combine the two into one service account. Curious to get @barkbay's perspective.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
At the very least the two service account names in this example have to be the same for the example to work.
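A minimal sketch of the values with the two names aligned, just to illustrate the point above (the name `fleet-server` is only an example):

```yaml
deployment:
  podTemplate:
    spec:
      # The pod's service account: needed for Agent itself to access the k8s API.
      serviceAccountName: fleet-server
serviceAccount:
  # Must match the pod's SA above for this example to work as written.
  name: fleet-server
```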
I've updated this to make them consistent. What I tested was this: https://github.com/naemono/cloud-on-k8s/blob/helm-chart-image-fix-fleet/deploy/eck-stack/examples/agent/fleet-agents.yaml, which worked without issues and only includes the SA name in the `podTemplate`.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I'm not sure that's a problem either. That being said, we made the choice to let the user specify a SA which can be different from the one used by the Pods (https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-restrict-cross-namespace-associations.html), and I'm not sure that is reflected in the chart. This comment suggests that `serviceAccount.name` is used by the Pods, while it is actually used by the cross-namespace restriction mechanism?
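To make the distinction concrete: per the linked docs, when the operator enforces cross-namespace association restrictions, the association is only allowed if the service account named in `spec.serviceAccountName` is permitted to access the referenced Elasticsearch. A hedged sketch of RBAC that would satisfy such a check (all names here are illustrative, not from this PR):

```yaml
# Illustrative Role/RoleBinding letting the SA from serviceAccount.name
# associate with an Elasticsearch in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: allow-es-association
  namespace: default
rules:
  - apiGroups: ["elasticsearch.k8s.elastic.co"]
    resources: ["elasticsearches"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-es-association
  namespace: default
subjects:
  - kind: ServiceAccount
    name: elastic-fleet-server   # the SA set via serviceAccount.name
    namespace: default
roleRef:
  kind: Role
  name: allow-es-association
  apiGroup: rbac.authorization.k8s.io
```

The pod-level service account in `podTemplate.spec.serviceAccountName` serves a different purpose and needs its own, integration-dependent permissions.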