
Create a Helm chart / provide detailed instructions for Kubernetes setup #13

Open
igor-vovk opened this issue Dec 28, 2024 · 3 comments

Comments

@igor-vovk

igor-vovk commented Dec 28, 2024

Hi and thank you for your work on the server!

What do you think about making it a little bit easier to set up xds in Kubernetes?

From the potential ideas, this could be done either by adding a Helm chart or by providing example manifests with detailed instructions in the README.

In both cases, the following resources could be added:

  • A ServiceAccount, ClusterRole, and ClusterRoleBinding to allow access to the Kubernetes APIs, so users won't need to figure them out themselves
  • A Deployment with gRPC health checks and ports 5000 and 9000 exposed (the documentation doesn't mention those ports very clearly ☺️), plus a Service resource in headless mode
  • Optionally, a preconfigured ConfigMap with a GRPC_XDS_BOOTSTRAP_CONFIG key holding the correct configuration, which clients can then attach to their services as an env map.

I'd be happy to work on a PR if you think it makes sense ☺️

@igor-vovk igor-vovk changed the title Creating a helm chart / provide detailed instructions for Kubernetes setup Create a Helm chart / provide detailed instructions for Kubernetes setup Dec 28, 2024
@whs
Member

whs commented Jan 2, 2025

Hi!

  1. For a deployment example, we'd welcome a simple deployment guide inline in the README. It should be simple and unopinionated - a good starting point.
  2. Internally we don't use Helm (we use Jsonnet with internal factory functions), so I'd suggest that Helm charts be maintained externally. If people want to link to their Helm repos from the README, we might be able to review those PRs as well. Those charts can be as opinionated (e.g. with a hardened PodSecurityContext) as their authors wish.

I'm not sure how GRPC_XDS_BOOTSTRAP_CONFIG should be provided, so currently I don't think it belongs in the README. In our use case we inject it using a Jsonnet factory function, and we deploy most things with that function (i.e. lib.Deployment({containers: [...], xds: true}) injects GRPC_XDS_BOOTSTRAP_CONFIG into every container), so that approach doesn't work externally.
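
For illustration only (not something the project ships): in plain Kubernetes terms, that injection is just an env entry on each client container. The pod name and image below are placeholders, and server_uri assumes a Service named xds-server in the same namespace.

apiVersion: v1
kind: Pod
metadata:
  name: xds-client-example                             # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/grpc-client:latest   # placeholder image
      env:
        # gRPC clients read the xDS bootstrap JSON directly from this variable
        - name: GRPC_XDS_BOOTSTRAP_CONFIG
          value: |
            {
              "xds_servers": [
                {
                  "server_uri": "xds-server:5000",
                  "channel_creds": [{"type": "insecure"}],
                  "server_features": ["xds_v3"]
                }
              ],
              "node": {"id": "anything", "locality": {"zone": "k8s"}}
            }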

@igor-vovk
Author

Thanks for your response! The reason I opened this issue is that it took me a bit more time than expected to figure things out, involving a few iterations of running the server and observing the errors in the logs. For the record, these are the Kubernetes manifests I came up with:

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: xds-service-account
  namespace: default

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: xds-role
rules:
  - apiGroups: [ "" ]
    resources:
      - "services"
      - "endpoints"
    verbs:
      - "list"
      - "watch"

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: xds-role-binding
roleRef:
  kind: ClusterRole
  name: xds-role
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: xds-service-account
    namespace: default

---

apiVersion: apps/v1
kind: Deployment

metadata:
  name: xds-server
  labels:
    name: xds-server

spec:
  replicas: 1
  selector:
    matchLabels:
      name: xds-server
  template:
    metadata:
      labels:
        name: xds-server
    spec:
      serviceAccountName: xds-service-account
      containers:
        - name: xds
          image: ghcr.io/wongnai/xds:master
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
              name: grpc
            - containerPort: 9000
              name: http
          livenessProbe:
            grpc:
              port: 5000
  strategy:
    type: Recreate

---

apiVersion: v1
kind: Service

metadata:
  name: xds-server

spec:
  selector:
    name: xds-server
  type: ClusterIP
  clusterIP: None
  ports:
    - name: grpc
      port: 5000
      targetPort: grpc
    - name: http
      port: 9000
      targetPort: http

And the optional ConfigMap that can be attached to other containers directly:

apiVersion: v1
kind: ConfigMap
metadata:
  name: xds-bootstrap-config

data:
  GRPC_XDS_BOOTSTRAP_CONFIG: |
    {
        "xds_servers": [
            {
                "server_uri": "xds-server:5000",
                "channel_creds": [{"type": "insecure"}],
                "server_features": ["xds_v3"]
            }
        ],
        "node": {
            "id": "anything",
            "locality": {
                "zone" : "k8s"
            }
        }
    }
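
For example, a client Deployment could attach it with envFrom (the workload name and image below are placeholders; only the configMapRef name matters):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-grpc-client                             # placeholder client workload
spec:
  replicas: 1
  selector:
    matchLabels:
      name: example-grpc-client
  template:
    metadata:
      labels:
        name: example-grpc-client
    spec:
      containers:
        - name: app
          image: registry.example.com/grpc-client:latest   # placeholder image
          envFrom:
            # Exposes GRPC_XDS_BOOTSTRAP_CONFIG from the ConfigMap as an environment variable
            - configMapRef:
                name: xds-bootstrap-config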

Now that I've put it all in a single file, I can say that most of the effort went into figuring out the needed permissions.

Maybe we could turn it into a file in the repo (perhaps without the ConfigMap, though I'd say it does no harm, so it could be included as well), make it the default installation method, and provide instructions in the Usage section like this:

kubectl apply -f ...gh_url_to_a_file...

So people can take it and customize it to their needs.

That seems to be how others do it; these are the projects that came to my mind first (not the simplest ones 😀):

They expose a default YAML manifest with everything that is needed. This would also mean there's no need to support any Helm charts at all.

@igor-vovk
Author

igor-vovk commented Jan 7, 2025

In the end, I published what I wanted to have in the https://github.com/igor-vovk/wongnai-xds-helm repo. Since we're using ArgoCD, it really eases the process of installing it.

I think this issue can be closed.
