Adds k8s files and CD workflow #11
Conversation
jobs:
  deploy:
    if: github.event_name == 'release' || github.ref == 'refs/heads/main' || contains(github.event.head_commit.message, '[staging]')
It will only be triggered under one of these conditions (a sketch of a matching `on:` block follows):
- a new release
- a merge to `main`
- `[staging]` in the commit message
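For context, a trigger block that would pair with this `if:` might look like the following sketch (the actual `on:` block isn't visible in this hunk, so the event names are assumptions):

```yaml
# Sketch (assumption): an `on:` block consistent with the deploy job's `if:`
# condition above. The actual trigger configuration is not shown in this diff.
on:
  push:
    branches: [main]
  release:
    types: [published]
```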
- name: Prepare
  id: prep
  run: |
    if [[ ${{ github.event_name }} == 'release' ]]; then
This `if` will later be used to select which environment the deploy targets, e.g. `staging` or `production`. Right now, we only have the `staging` environment.
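A sketch of how the full step could select the environment (only the first line of the step is visible in the diff; the output name and the `production` branch are assumptions):

```yaml
# Sketch: pick the deployment environment from the triggering event.
# Only the first line of this step appears in the diff; the rest is assumed.
- name: Prepare
  id: prep
  run: |
    if [[ ${{ github.event_name }} == 'release' ]]; then
      echo "deployment=production" >> "$GITHUB_OUTPUT"
    else
      echo "deployment=staging" >> "$GITHUB_OUTPUT"
    fi
```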
- name: Deploy to GKE
  working-directory: ./k8s
  run: |
    DEPLOYMENT=$DEPLOYMENT IMAGE_BASIN_WORKER=$CONTAINER_REGISTRY/${{ secrets.GCP_PROJECT }}/textile/basin/basin_worker:sha-$SHA_SHORT IMAGE_BASIN_EXPORTER=$CONTAINER_REGISTRY/${{ secrets.GCP_PROJECT }}/textile/basin/basin_exporter:sha-$SHA_SHORT make deploy
Calls `make deploy`, passing the images that were just built and the `DEPLOYMENT` kind.
- name: Configure Docker
  run: gcloud auth configure-docker $CONTAINER_REGISTRY

- name: Build and push worker
Builds and pushes the worker image.
    cache-to: type=local,dest=/tmp/.buildx-cache
    build-args: CRATE=basin_worker

- name: Build and push exporter
Builds and pushes the exporter image.
    cache-to: type=local,dest=/tmp/.buildx-cache
    build-args: CRATE=basin_exporter

- name: Get GKE Credentials
Connects to GKE so `kubectl` can reach the cluster.
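A sketch of what this step typically looks like with the official Google action (the action choice and the secret names are assumptions; the real step body is collapsed in this view):

```yaml
# Sketch: fetch cluster credentials so the later kubectl/make steps can run.
# Action and inputs are assumptions based on the step name.
- name: Get GKE Credentials
  uses: google-github-actions/get-gke-credentials@v1
  with:
    cluster_name: ${{ secrets.GKE_CLUSTER }}
    location: ${{ secrets.GKE_ZONE }}
```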
Question about where/how you're deploying the ingress controller, but otherwise looks great.
k8s/ingress-nginx-controller.yaml
In the past I've always set up the ingress controller in a different namespace, using the install guide: https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke. Looks like you're putting it in the app namespace, but I'm not sure.
It's inside `ingress-nginx`; if you look at the yaml, all resources have a `namespace: ingress-nginx` field.
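For illustration, an excerpt in that shape (not copied from the file):

```yaml
# Illustrative excerpt (assumed): every resource in
# k8s/ingress-nginx-controller.yaml pins its namespace like this.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
```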
I'm using the same guide, but I had to add a small tweak because the `basin-worker` is using port 3000.
Oh nice. If easier, we could just change the port number, but no strong pref
deploy:
	cd ${DEPLOYMENT} && $(KUSTOMIZE) edit set image textile/basin_worker=${IMAGE_BASIN_WORKER} textile/basin_exporter=${IMAGE_BASIN_EXPORTER}
	cd ${DEPLOYMENT} && $(KUSTOMIZE) build . | sed -e 's/\x27"3000"\x27/3000/g' | kubectl apply -f -
`sed -e 's/\x27"3000"\x27/3000/g'` is a workaround for a limitation of `kustomize build` when generating a ConfigMap from `configTcpServices.env`: kustomize does not understand integers as keys in YAML, so the port key comes out quoted. The downside is that if we change the value in `configTcpServices.env`, we'd have to change it here as well. I'll leave it this way while I think of a better approach; a sketch of the result is below.
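A sketch of the tcp-services ConfigMap after the rewrite (resource names and the target Service are assumptions pieced together from this thread; only the port key 3000 is confirmed):

```yaml
# Sketch (assumed names): the ConfigMap `make deploy` ends up applying.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3000: basin-staging/basin-worker:3000
# Before the sed rewrite, kustomize emits the key as '"3000"', so nginx would
# look up the literal string "3000" (quotes included) instead of port 3000.
```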
"--database-url", "$(DATABASE_URL)", | ||
"--export-credentials", "$(EXPORT_CREDENTIALS)", | ||
"--bind-health-address", "$(BIND_HEALTH_ADDRESS)", | ||
#"--export-schedule", "$(EXPORT_SCHEDULE)", |
This is commented out because I could not figure out a syntax in the env file that worked; it just uses the default.
@@ -0,0 +1,636 @@
apiVersion: v1
This file comes from https://kubernetes.github.io/ingress-nginx/deploy/. I needed to download it and make some tweaks specific to our setup. Essentially, this file creates an external Google Load Balancer and an nginx controller.
The tweaks (sketched below):
- we're opening port 3000
- we're specifying the static IP address that the DNS name points to
- a patch to this file is also used to build the final nginx configuration
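A sketch of the Service-level tweaks described above (the IP and the port entry are placeholders, not values from the actual file):

```yaml
# Sketch: the controller Service with the two tweaks applied. Real values
# live in k8s/ingress-nginx-controller.yaml.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder: static IP the DNS name points to
  ports:
    - name: proxied-tcp-3000     # tweak: open port 3000 for the worker
      port: 3000
      targetPort: 3000
      protocol: TCP
```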
@@ -0,0 +1,14 @@
apiVersion: networking.k8s.io/v1
This `Ingress` resource is only used in the phase of issuing a certificate. If we were using HTTP, it would route traffic based on hostname and paths, but right now nginx is proxying directly to the pods.
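A minimal sketch of such a certificate-only Ingress (host and secret names are placeholders; the real 14-line manifest is in the diff):

```yaml
# Sketch: an Ingress whose only real job today is getting the TLS certificate
# issued and stored "the k8s way". Names below are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basin
  namespace: basin-staging
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - basin.example.com      # placeholder hostname
      secretName: basin-tls      # the issued certificate lands here
  rules:
    - host: basin.example.com    # placeholder; not used for routing today
```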
@@ -0,0 +1,1609 @@
{{ $all := . }}
I needed to tweak the default nginx configuration to allow SSL termination of a TCP service on a different port and to act as a proxy. This is done by mounting this template file into the nginx pod. This solution came from kubernetes/ingress-nginx#636 (comment).
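A sketch of how the template could be mounted over the controller's default (the ConfigMap name is an assumption; `/etc/nginx/template` is the path ingress-nginx reads its template from):

```yaml
# Sketch: mount the tweaked nginx.tmpl over the controller's built-in one.
spec:
  template:
    spec:
      containers:
        - name: controller
          volumeMounts:
            - name: nginx-template
              mountPath: /etc/nginx/template
      volumes:
        - name: nginx-template
          configMap:
            name: nginx-template   # assumed name; holds the custom nginx.tmpl
```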
@@ -0,0 +1,12 @@
[
This patch, specific to the `staging` env, adds two more args to the nginx controller (sketched below):
- `--tcp-services-configmap`: tells nginx how to deal with TCP connections. Take a look at `configTcpServices.env`. This config means any TCP connection on port 3000 will be proxied to the `basin-staging/basin-worker` k8s Service at port 3000.
- `--default-ssl-certificate`: tells nginx where to find the TLS certificate.
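A sketch, in YAML form, of what that 12-line patch plausibly contains (the real file opens with `[`, i.e. JSON; the flag values are assumptions consistent with the comment above):

```yaml
# Sketch of a JSON 6902 patch appending the two controller args. Only the two
# flag names come from the comment; the values are assumptions.
- op: add
  path: /spec/template/spec/containers/0/args/-
  value: --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- op: add
  path: /spec/template/spec/containers/0/args/-
  value: --default-ssl-certificate=basin-staging/basin-tls
```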
@@ -0,0 +1,21 @@
apiVersion: networking.k8s.io/v1
This is a patch to the ingress that is not really used for routing, but is needed to issue the certificate.
@@ -0,0 +1,61 @@
apiVersion: apps/v1
kind: Deployment
Should we use a `CronJob` for the exporter instead of a `Deployment`?
Hmm, not sure; the `exporter` already has a cron-like mechanism inside it. We could move to a `CronJob`, but it would mean changing the current behavior of the `exporter`.
okay, makes sense to leave it as is then
Curious: if we use `kind: Deployment`, would it make the exporter pod run constantly even though it wouldn't be doing anything?
It would. Let's revisit the `CronJob` option in the next basin iteration... sounds like a nice setup, but would require code changes, as Bruno mentioned.
> Curious: if we use `kind: Deployment`, would it make the exporter pod run constantly even though it wouldn't be doing anything?

Good point!
Great work man! Cool to see this come together.
Summary
This PR adds yaml files for deploying both the `worker` and `exporter` in k8s. It makes use of kustomize for dealing with multiple environments. Right now, we only have one: `staging`, but it's all set up for adding another one. It also adds a CD pipeline.
Further details
Architecture explanation
This PR introduces the following architecture: an external Google Load Balancer forwards TCP traffic to the nginx ingress controller, which terminates TLS and proxies connections on port 3000 to the `basin-worker` service (see the diagram in the original PR).

Not sure how I feel about this architecture, but it was kind of forced by how we created the certificate and how we are storing it ("the k8s way").

A simpler architecture would be a Load Balancer with TLS termination (we'd have to upload the certificate to the LB) proxying traffic to the k8s Service directly. Not sure how easy it would be to configure all this TCP proxying, but it would be simpler.

The benefit of this nginx architecture is that if we end up with more TCP services, we don't have to keep adding load balancers; we simply open new ports and nginx routes each port to the correct service. But TBH, I'm not seeing that much benefit in capnp and TCP services, so moving to HTTP would probably just make things a lot easier.

CD pipeline
We currently have only one environment: `staging`. We can deploy to `staging` by:
- adding `[staging]` to the commit message
- merging to `main`