Create a Helm chart / provide detailed instructions for Kubernetes setup #13
Comments
Hi!
I'm not sure how the …
Thanks for your response! The reason I opened this issue is that it took me a bit more time to figure things out, and it involved a few iterations of running the server and observing the errors in the logs. Just for the record, these are the Kubernetes manifests I've come up with:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: xds-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: xds-role
  namespace: default
rules:
  - apiGroups: [ "" ]
    resources:
      - "services"
      - "endpoints"
    verbs:
      - "list"
      - "watch"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: xds-role-binding
  namespace: default
roleRef:
  kind: ClusterRole
  name: xds-role
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: xds-service-account
    namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xds-server
  labels:
    name: xds-server
spec:
  replicas: 1
  selector:
    matchLabels:
      name: xds-server
  template:
    metadata:
      labels:
        name: xds-server
    spec:
      serviceAccountName: xds-service-account
      containers:
        - name: xds
          image: ghcr.io/wongnai/xds:master
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
              name: grpc
            - containerPort: 9000
              name: http
          livenessProbe:
            grpc:
              port: 5000
  strategy:
    type: Recreate
---
apiVersion: v1
kind: Service
metadata:
  name: xds-server
spec:
  selector:
    name: xds-server
  type: ClusterIP
  clusterIP: None
  ports:
    - name: grpc
      port: 5000
      targetPort: grpc
    - name: http
      port: 9000
      targetPort: http
```

And the optional `ConfigMap` that can be attached to other containers straight away:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: xds-bootstrap-config
data:
  GRPC_XDS_BOOTSTRAP_CONFIG: |
    {
      "xds_servers": [
        {
          "server_uri": "xds-server:5000",
          "channel_creds": [{"type": "insecure"}],
          "server_features": ["xds_v3"]
        }
      ],
      "node": {
        "id": "anything",
        "locality": {
          "zone": "k8s"
        }
      }
    }
```

Now that I've put it all in a single file, I can say that most of the effort was related to figuring out the needed permissions. Maybe we could convert this into a file in the repo (maybe without the `ConfigMap`, though I'd say it does no harm, so it can be included as well) as the default installation method, and provide instructions like `kubectl apply -f ...gh_url_to_a_file...`, so people can take it and customize it to their needs. That seems to be how others do it; these came to my mind first (not the simplest ones 😀): they expose a default YAML with everything that is needed. This would also mean no need to support any Helm charts whatsoever.
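As an illustration of attaching the `ConfigMap`, a client workload can pull the key in via `envFrom` (a minimal sketch; the `my-client` names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client                         # placeholder client workload
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-client
  template:
    metadata:
      labels:
        name: my-client
    spec:
      containers:
        - name: app
          image: example/my-client:latest   # placeholder image
          envFrom:
            # Injects GRPC_XDS_BOOTSTRAP_CONFIG from the ConfigMap above,
            # so the gRPC runtime finds the xDS bootstrap at startup.
            - configMapRef:
                name: xds-bootstrap-config
```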
In the end I've published what I wanted to have in the https://github.com/igor-vovk/wongnai-xds-helm repo. Since we're using ArgoCD, it really eases the process of installing it. I think this issue can be closed.
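For reference, installing the chart through ArgoCD can look roughly like this (a sketch, not the chart's documented install method; the `Application` name, branch, and chart path are assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: xds                       # placeholder Application name
  namespace: argocd               # assumes the default ArgoCD namespace
spec:
  project: default
  source:
    repoURL: https://github.com/igor-vovk/wongnai-xds-helm
    targetRevision: main          # assumed branch
    path: .                       # assumes the chart sits at the repo root
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}                 # optional: keep the app auto-synced
```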
Hi, and thank you for your work on the server!

What do you think about making it a little bit easier to set up `xds` in Kubernetes? From the potential ideas, what can be done is either:

- provide detailed setup instructions in the `README.md`, or
- fix the `deploy.yml` file that the `README.md` links to (it seems to be broken).

In both cases those resources can be added:

- `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` to allow access to the Kubernetes APIs, so customers won't need to figure it out themselves
- `Deployment` with gRPC health checks, having ports `5000` and `9000` exposed (the documentation doesn't mention those ports very clearly); see the probe sketch after this list
- `Service` resource in headless mode
- `ConfigMap` with a `GRPC_XDS_BOOTSTRAP_CONFIG` key holding the correct configuration, which clients can then attach to their services as an env map

I will be happy to work on a PR if you see sense in it ☺️
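For the health-check item, the kubelet's native gRPC probes can be used directly (beta since Kubernetes 1.24, stable since 1.27). A minimal sketch of the container-level stanza follows; the `readinessProbe` and its delay are my assumptions of what a packaged setup might add, since only the `livenessProbe` appears in the manifests in this thread:

```yaml
# Probe stanza for the xds container. The livenessProbe mirrors the
# manifests above; the readinessProbe is an assumed addition.
livenessProbe:
  grpc:
    port: 5000              # gRPC port the server listens on
readinessProbe:
  grpc:
    port: 5000
  initialDelaySeconds: 5    # assumed grace period before the first check
```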