
Introduce Github Actions CI workflow #3339

Merged Sep 4, 2019 (4 commits)

Changes from all commits
.github/workflows/workflow.yml (246 additions, 0 deletions)
@@ -0,0 +1,246 @@
name: CI

on:
pull_request: {}
push:
branches:
- master
- refs/tags/edge-[0-9]?.[0-9]?.[0-9]?
- refs/tags/stable-2.[0-9]?.[0-9]?

jobs:
validate_go_deps:
name: Validate go deps
runs-on: ubuntu-18.04
steps:
- name: Checkout code
uses: actions/checkout@v1
# for debugging
- name: Dump env
run: |
env | sort
- name: Dump GitHub context
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- name: Dump job context
env:
JOB_CONTEXT: ${{ toJson(job) }}
run: echo "$JOB_CONTEXT"
- name: Validate go deps
run: |
. bin/_tag.sh
for f in $( grep -lR --include=Dockerfile\* go-deps: . ) ; do
validate_go_deps_tag $f
done

go_unit_tests:
name: Go unit tests
runs-on: ubuntu-18.04
container:
image: golang:1.12.9
steps:
- name: Checkout code
uses: actions/checkout@v1
- name: Go unit tests
env:
GITCOOKIE_SH: ${{ secrets.GITCOOKIE_SH }}
run: |
echo "$GITCOOKIE_SH" | bash
Member: We also had this in Travis, but I've never known what it is for...

Member Author: Good question! It's to mitigate rate-limiting when pulling Go dependencies:
golang/go#12933 (comment)
The good news is we should be able to remove this when Go 1.13 lands, as Google will run a module mirror:
https://proxy.golang.org/
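A minimal sketch of what that follow-up might look like once the mirror is available, assuming the step simply sets GOPROXY instead of piping GITCOOKIE_SH into bash (illustrative only, not part of this PR):

# Hypothetical follow-up once Go 1.13 is in use; not part of this PR.
- name: Go unit tests
  env:
    GOPROXY: https://proxy.golang.org
  run: |
    go test -cover -race -v -mod=readonly ./...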

# TODO: validate bin/protoc-go.sh does not dirty the repo
go test -cover -race -v -mod=readonly ./...

go_lint:
name: Go lint
runs-on: ubuntu-18.04
container:
image: golang:1.12.9
steps:
- name: Checkout code
uses: actions/checkout@v1
- name: Go lint
env:
GITCOOKIE_SH: ${{ secrets.GITCOOKIE_SH }}
# prevent OOM
GOGC: 20
run: |
echo "$GITCOOKIE_SH" | bash
bin/lint --verbose

js_unit_tests:
name: JS unit tests
runs-on: ubuntu-18.04
container:
image: node:10.16.0-stretch
steps:
- name: Checkout code
uses: actions/checkout@v1
- name: Yarn setup
Contributor: If we wanted to get rid of this step, would we need to use an image with yarn already installed? Build cache it? Copy files from a tarball?

Member Author: Yeah, any of those approaches would work. We run this command today in Travis, so I'm mostly looking to match that setup before optimizing.
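A hedged sketch of the build-cache option mentioned above, assuming the actions/cache action and yarn's default cache directory on Linux (illustrative only, not part of this PR):

# Hypothetical caching step; not part of this PR.
- name: Cache yarn dependencies
  uses: actions/cache@v1
  with:
    path: ~/.cache/yarn
    key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
    restore-keys: |
      ${{ runner.os }}-yarn-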

run: |
curl -o- -L https://yarnpkg.com/install.sh | bash -s -- --version 1.7.0
- name: JS unit tests
run: |
export PATH="$HOME/.yarn/bin:$PATH"
export NODE_ENV=test
bin/web
Contributor: I assume this just came over from the existing tooling, but should it be more explicit about what's being done? Maybe bin/web setup && bin/web test? I don't think a build is required, and it might speed things up some.

Member Author: Yeah, that's correct: this is cribbed from Travis. I'm totally down to optimize further, but for this PR I mostly want to get it into the pipeline as-is.
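A minimal sketch of the split suggested above, assuming bin/web supports setup and test subcommands as the comment implies (illustrative only, not what this PR ships):

# Hypothetical alternative; not part of this PR.
- name: JS setup
  run: |
    export PATH="$HOME/.yarn/bin:$PATH"
    bin/web setup
- name: JS unit tests
  run: |
    export PATH="$HOME/.yarn/bin:$PATH"
    export NODE_ENV=test
    bin/web test --reporters=jest-dot-reporter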

bin/web test --reporters=jest-dot-reporter

docker_build:
name: Docker build
runs-on: ubuntu-18.04
steps:
- name: Checkout code
uses: actions/checkout@v1
- name: Docker SSH setup
env:
DOCKER_ADDRESS: ${{ secrets.DOCKER_ADDRESS }}
DOCKER_HOST_PRIVATE_KEY: ${{ secrets.DOCKER_HOST_PRIVATE_KEY }}
run: |
mkdir -p ~/.ssh/
echo "$DOCKER_HOST_PRIVATE_KEY" > ~/.ssh/id_rsa
Contributor: This'll put the full key in the logs, right?

Contributor: Oh, I was using a Makefile and that auto-expanded this for reasons. Ignore me!

Contributor: FWIW, steps within a job share the same workspace filesystem. If we ever need to add -x, we can use the shell to write the secrets to files on the filesystem and refer to them (e.g. scp -i) in subsequent steps.
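A hedged illustration of that pattern, with hypothetical step names and file paths: the secret is written to the shared workspace in one step and only its path is referenced in later steps, so even a later -x would not echo the key itself.

# Hypothetical illustration; not part of this PR.
- name: Write SSH key to the workspace
  env:
    DOCKER_HOST_PRIVATE_KEY: ${{ secrets.DOCKER_HOST_PRIVATE_KEY }}
  run: |
    echo "$DOCKER_HOST_PRIVATE_KEY" > ./docker_host_key
    chmod 600 ./docker_host_key
- name: Copy files to the docker host
  env:
    DOCKER_ADDRESS: ${{ secrets.DOCKER_ADDRESS }}
  run: |
    set -x  # the key is referenced by path only, so it is not echoed
    scp -i ./docker_host_key some-file github@$DOCKER_ADDRESS:/tmp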

chmod 600 ~/.ssh/id_rsa
ssh-keyscan $DOCKER_ADDRESS >> ~/.ssh/known_hosts
- name: Docker build
env:
DOCKER_HOST: ssh://github@${{ secrets.DOCKER_ADDRESS }}
run: |
PATH="`pwd`/bin:$PATH" DOCKER_TRACE=1 bin/docker-build

kind_setup:
strategy:
matrix:
integration_test: [deep, upgrade, helm]
Member: 😎

name: Cluster setup (${{ matrix.integration_test }})
runs-on: ubuntu-18.04
steps:
- name: Checkout code
uses: actions/checkout@v1
- name: Docker SSH setup
env:
DOCKER_ADDRESS: ${{ secrets.DOCKER_ADDRESS }}
DOCKER_HOST_PRIVATE_KEY: ${{ secrets.DOCKER_HOST_PRIVATE_KEY }}
run: |
mkdir -p ~/.ssh/
echo "$DOCKER_HOST_PRIVATE_KEY" > ~/.ssh/id_rsa
Contributor: Same question as above.

chmod 600 ~/.ssh/id_rsa
ssh-keyscan $DOCKER_ADDRESS >> ~/.ssh/known_hosts
- name: Kind cluster setup
env:
DOCKER_ADDRESS: ${{ secrets.DOCKER_ADDRESS }}
DOCKER_HOST: ssh://github@${{ secrets.DOCKER_ADDRESS }}
run: |
TAG="$(CI_FORCE_CLEAN=1 bin/root-tag)"
export KIND_CLUSTER=github-$TAG-${{ matrix.integration_test }}
bin/kind create cluster --name=$KIND_CLUSTER --wait=1m
scp $(bin/kind get kubeconfig-path --name=$KIND_CLUSTER) github@$DOCKER_ADDRESS:/tmp

kind_integration:
strategy:
matrix:
integration_test: [deep, upgrade, helm]
needs: [docker_build, kind_setup]
name: Integration tests (${{ matrix.integration_test }})
runs-on: ubuntu-18.04
steps:
- name: Checkout code
uses: actions/checkout@v1
- name: Docker SSH setup
env:
DOCKER_ADDRESS: ${{ secrets.DOCKER_ADDRESS }}
DOCKER_HOST_PRIVATE_KEY: ${{ secrets.DOCKER_HOST_PRIVATE_KEY }}
run: |
Contributor: Is there a way to make this step more common? Is there a sandbox shared between all the steps?

Member Author: Yeah, it's unfortunate we're doing this Docker SSH setup step across 4 different jobs. Each job runs in a separate VM, so there's no great shared option. From a performance perspective, this step only takes 1 second. From a maintenance perspective, I agree it's less than ideal. We solved this in Travis using YAML aliases, but that's not supported in GitHub Actions.
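For context, a minimal sketch of the YAML anchor/alias pattern Travis permits, with hypothetical keys; GitHub Actions workflow files do not honor anchors, so this cannot be used here:

# Hypothetical Travis-style snippet; not valid in a GitHub Actions workflow.
_docker_ssh_setup: &docker_ssh_setup |
  mkdir -p ~/.ssh/
  echo "$DOCKER_HOST_PRIVATE_KEY" > ~/.ssh/id_rsa
  chmod 600 ~/.ssh/id_rsa
  ssh-keyscan $DOCKER_ADDRESS >> ~/.ssh/known_hosts

jobs:
  include:
    - name: docker-build
      before_script: *docker_ssh_setup
    - name: kind-integration
      before_script: *docker_ssh_setup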

mkdir -p ~/.ssh/
echo "$DOCKER_HOST_PRIVATE_KEY" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan $DOCKER_ADDRESS >> ~/.ssh/known_hosts
- name: Kind load docker images
env:
DOCKER_ADDRESS: ${{ secrets.DOCKER_ADDRESS }}
run: |
TAG="$(CI_FORCE_CLEAN=1 bin/root-tag)"
export KIND_CLUSTER=github-$TAG-${{ matrix.integration_test }}
ssh -T github@$DOCKER_ADDRESS &> /dev/null << EOF
for IMG in controller grafana proxy web ; do
# TODO: This is using the kind binary on the remote host.
kind load docker-image gcr.io/linkerd-io/\$IMG:$TAG --name=$KIND_CLUSTER
done
EOF
- name: Install linkerd CLI
env:
DOCKER_HOST: ssh://github@${{ secrets.DOCKER_ADDRESS }}
run: |
TAG="$(CI_FORCE_CLEAN=1 bin/root-tag)"
image="gcr.io/linkerd-io/cli-bin:$TAG"
id=$(bin/docker create $image)
mkdir -p ./target/cli/linux
bin/docker cp "$id:/out/linkerd-linux" "./target/cli/linux/linkerd"
# validate CLI version matches the repo
[[ "$TAG" == "$(bin/linkerd version --short --client)" ]]
echo "Installed Linkerd CLI version: $TAG"
- name: Run integration tests
env:
DOCKER_ADDRESS: ${{ secrets.DOCKER_ADDRESS }}
DOCKER_HOST: ssh://github@${{ secrets.DOCKER_ADDRESS }}
GITCOOKIE_SH: ${{ secrets.GITCOOKIE_SH }}
run: |
Contributor (@ihcsim, Aug 29, 2019): I'm curious about the error reporting in run. So if, e.g., the scp command fails, does run terminate immediately? And does the UI show which shell command failed? Or do we need -e? Setting -x is probably not a good idea here, because of the secret env var.

Member Author: Fail fast is on by default:
https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idstepsrunshell
I believe the UI highlights which command fails.
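If stricter or more explicit behavior were ever wanted, a hedged sketch; the PR itself relies on the default fail-fast shell invocation:

# Hypothetical hardening; not part of this PR.
- name: Run integration tests
  shell: bash
  run: |
    set -euo pipefail  # stop on the first failing command, including inside pipes
    # -x is deliberately omitted so secret env vars are not echoed to the logs
    # (the commands from the step below would follow here)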

echo "$GITCOOKIE_SH" | bash
# TODO: pin Go version
go version
export PATH="`pwd`/bin:$PATH"
TAG="$(CI_FORCE_CLEAN=1 bin/root-tag)"
export KIND_CLUSTER=github-$TAG-${{ matrix.integration_test }}
# Restore kubeconfig from remote docker host.
mkdir -p $HOME/.kube
scp github@$DOCKER_ADDRESS:/tmp/kind-config-$KIND_CLUSTER $HOME/.kube
export KUBECONFIG=$(bin/kind get kubeconfig-path --name=$KIND_CLUSTER)
# Start ssh tunnel to allow kubectl to connect via localhost.
export KIND_PORT=$(bin/kubectl config view -o jsonpath="{.clusters[?(@.name=='$KIND_CLUSTER')].cluster.server}" | cut -d':' -f3)
ssh -4 -N -L $KIND_PORT:localhost:$KIND_PORT github@$DOCKER_ADDRESS &
sleep 2 # Wait for ssh tunnel to come up.
bin/kubectl version --short # Test connection to kind cluster.
(
. bin/_test-run.sh
init_test_run `pwd`/bin/linkerd
${{ matrix.integration_test }}_integration_tests
)

kind_cleanup:
if: always()
strategy:
fail-fast: false # always attempt to cleanup all clusters
matrix:
integration_test: [deep, upgrade, helm]
needs: [kind_integration]
name: Cluster cleanup (${{ matrix.integration_test }})
runs-on: ubuntu-18.04
steps:
- name: Checkout code
uses: actions/checkout@v1
# for debugging
- name: Dump env
run: |
env | sort
- name: Dump GitHub context
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- name: Dump job context
env:
JOB_CONTEXT: ${{ toJson(job) }}
run: echo "$JOB_CONTEXT"
- name: Docker SSH setup
env:
DOCKER_ADDRESS: ${{ secrets.DOCKER_ADDRESS }}
DOCKER_HOST_PRIVATE_KEY: ${{ secrets.DOCKER_HOST_PRIVATE_KEY }}
run: |
mkdir -p ~/.ssh/
echo "$DOCKER_HOST_PRIVATE_KEY" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan $DOCKER_ADDRESS >> ~/.ssh/known_hosts
- name: Kind cluster cleanup
env:
DOCKER_HOST: ssh://github@${{ secrets.DOCKER_ADDRESS }}
run: |
TAG="$(CI_FORCE_CLEAN=1 bin/root-tag)"
export KIND_CLUSTER=github-$TAG-${{ matrix.integration_test }}
bin/kind delete cluster --name=$KIND_CLUSTER
Contributor: If the kind cluster doesn't exist, does this succeed? Or does it even matter?

Member Author: If the cluster doesn't exist, this command will fail. That's probably fine, because if we've gotten this far and the cluster does not exist, a previous job must have failed.
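If tolerant cleanup were ever wanted, a hedged sketch that skips the delete when the cluster is already gone (illustrative only, not what this PR does):

# Hypothetical tolerant cleanup; not part of this PR.
- name: Kind cluster cleanup
  env:
    DOCKER_HOST: ssh://github@${{ secrets.DOCKER_ADDRESS }}
  run: |
    TAG="$(CI_FORCE_CLEAN=1 bin/root-tag)"
    export KIND_CLUSTER=github-$TAG-${{ matrix.integration_test }}
    # Only delete the cluster if kind still knows about it.
    if bin/kind get clusters | grep -qx "$KIND_CLUSTER"; then
      bin/kind delete cluster --name=$KIND_CLUSTER
    fi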