Replace EKS test-infra with example #1192

Merged · 4 commits · Mar 12, 2021
Changes from all commits
2 changes: 1 addition & 1 deletion .go-version
@@ -1 +1 @@
1.16
1.16.0
63 changes: 32 additions & 31 deletions kubernetes/test-infra/eks/README.md
@@ -1,56 +1,57 @@
# Amazon EKS Clusters
# EKS test infrastructure

You will need the standard AWS environment variables to be set, e.g.
This directory contains files used for testing the Kubernetes provider in our internal CI system. If you're looking for example code, see the [examples](https://github.com/hashicorp/terraform-provider-kubernetes/tree/master/_examples/eks) directory instead.

To run this test infrastructure, you will need the following environment variables to be set:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`

See [AWS Provider docs](https://www.terraform.io/docs/providers/aws/index.html#configuration-reference) for more details about these variables
and alternatives, like `AWS_PROFILE`.
See [AWS Provider docs](https://www.terraform.io/docs/providers/aws/index.html#configuration-reference) for more details about these variables and alternatives, like `AWS_PROFILE`.
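For example, with placeholder values:

```
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
```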

## Versions
Ensure that `KUBE_CONFIG_PATH` and `KUBE_CONFIG_PATHS` environment variables are NOT set, as they will interfere with the cluster build.

You can set the desired version of Kubernetes via the `kubernetes_version` TF variable.
```
unset KUBE_CONFIG_PATH
unset KUBE_CONFIG_PATHS
```

See https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html for currently available versions.
To install the EKS cluster using default values, run `terraform init` and `terraform apply` from the directory containing this README.
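For example:

```
terraform init
terraform apply
```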

You can set the desired version of Kubernetes via the `kubernetes_version` TF variable, like this:
```
export TF_VAR_kubernetes_version="1.11"
terraform init
terraform apply
```
Alternatively, you can pass it to the `apply` command line, like below.

## Worker node count and instance type
## Kubeconfig for manual CLI access

You can control the amount of worker nodes in the cluster as well as their machine type, using the following variables:
The token contained in the kubeconfig expires after 15 minutes; it can be refreshed by running `terraform apply` again. Export `KUBECONFIG` to access the cluster manually:

- `TF_VAR_workers_count`
- `TF_VAR_workers_type`
```
terraform apply
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl get pods -n test
```

Export values for them or pass them to the apply command line.
## Optional variables

## Build the cluster
The Kubernetes version can be specified at apply time:

```
terraform init
terraform apply -var=kubernetes_version=1.11
terraform apply -var=kubernetes_version=1.18
```

## Exporting K8S variables
To access the cluster you need to export the `KUBECONFIG` variable pointing to the `kubeconfig` file for the current cluster.
```
export KUBECONFIG="$(terraform output kubeconfig_path)"
```
See https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html for currently available versions.

Now you can access the cluster via `kubectl` and you can run acceptance tests against it.

To run acceptance tests, run the following command in the root of the repository.
```
TESTARGS="-run '^TestAcc'" make testacc
```
### Worker node count and instance type

The number of worker nodes, and the instance type, can be specified at apply time:

To run only a specific set of tests, you can replace `^TestAcc` with any regular expression to filter tests by name.
For example, to run tests for Pod resources, you can do:
```
TESTARGS="-run '^TestAccKubernetesPod_'" make testacc
terraform apply -var=workers_count=4 -var=workers_type=m4.xlarge
```

## Additional configuration of EKS

To view all available configuration options for the EKS module used in this example, see [terraform-aws-modules/eks docs](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest).
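As an illustrative sketch only, additional module inputs would go in the `module "cluster"` block in `main.tf`; the input names below are examples taken from the module's documentation and should be checked against the pinned module version (14.0.0):

```
module "cluster" {
  # ... existing arguments ...

  # Example optional inputs (verify against the module version in use):
  cluster_enabled_log_types = ["api", "audit"]
  tags = {
    environment = "test-infra"
  }
}
```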
77 changes: 77 additions & 0 deletions kubernetes/test-infra/eks/kubernetes-config/main.tf
@@ -0,0 +1,77 @@
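# The aws-auth ConfigMap maps the worker nodes' IAM role into Kubernetes groups
# so the nodes can register with the cluster.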
resource "kubernetes_config_map" "name" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}

data = {
mapRoles = join(
"\n",
formatlist(local.mapped_role_format, var.k8s_node_role_arn),
)
}
}

# Optional: this kubeconfig file is only used for manual CLI access to the cluster.
resource "null_resource" "generate-kubeconfig" {
provisioner "local-exec" {
command = "aws eks update-kubeconfig --name ${var.cluster_name} --kubeconfig ${path.root}/kubeconfig"
}
}

resource "kubernetes_namespace" "test" {
metadata {
name = "test"
}
}

resource "kubernetes_deployment" "test" {
metadata {
name = "test"
namespace = kubernetes_namespace.test.metadata.0.name
}
spec {
replicas = 2
selector {
match_labels = {
app = "test"
}
}
template {
metadata {
labels = {
app = "test"
}
}
spec {
container {
image = "nginx:1.19.4"
name = "nginx"

resources {
limits = {
memory = "512M"
cpu = "1"
}
requests = {
memory = "256M"
cpu = "50m"
}
}
}
}
}
}
}

resource "helm_release" "nginx_ingress" {
name = "nginx-ingress-controller"

repository = "https://charts.bitnami.com/bitnami"
chart = "nginx-ingress-controller"

set {
name = "service.type"
value = "ClusterIP"
}
}
18 changes: 18 additions & 0 deletions kubernetes/test-infra/eks/kubernetes-config/variables.tf
@@ -0,0 +1,18 @@
variable "k8s_node_role_arn" {
type = string
}

variable "cluster_name" {
type = string
}

locals {
mapped_role_format = <<MAPPEDROLE
- rolearn: %s
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
MAPPEDROLE

}
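For reference, given a hypothetical node role ARN, the `formatlist`/`join` calls in `kubernetes-config/main.tf` render `mapRoles` to roughly the following YAML (the ARN below is a placeholder):

```
- rolearn: arn:aws:iam::111122223333:role/example-node-role
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
```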
73 changes: 60 additions & 13 deletions kubernetes/test-infra/eks/main.tf
@@ -1,8 +1,12 @@
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "1.13"
source = "hashicorp/kubernetes"
version = ">= 2.0.2"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.0.2"
}
aws = {
source = "hashicorp/aws"
@@ -11,6 +15,48 @@ terraform {
}
}

data "aws_eks_cluster" "default" {
name = module.cluster.cluster_id
}

# This configuration relies on a plugin binary to fetch the token to the EKS cluster.
# The main advantage is that the token will always be up-to-date, even when the `terraform apply` runs for
# a longer time than the token TTL. The downside of this approach is that the binary must be present
# on the system running terraform, either in $PATH as shown below, or in another location, which can be
# specified in the `command`.
# See the commented provider blocks below for alternative configuration options.
provider "kubernetes" {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["eks", "get-token", "--cluster-name", module.vpc.cluster_name]
command = "aws"
}
}

# This configuration is also valid, but the token may expire during long-running applies.
# data "aws_eks_cluster_auth" "default" {
# name = module.cluster.cluster_id
#}
#provider "kubernetes" {
# host = data.aws_eks_cluster.default.endpoint
# cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
# token = data.aws_eks_cluster_auth.default.token
#}

provider "helm" {
kubernetes {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["eks", "get-token", "--cluster-name", module.vpc.cluster_name]
command = "aws"
}
}
}

provider "aws" {
region = var.region
}
@@ -21,22 +67,26 @@ module "vpc" {

module "cluster" {
source = "terraform-aws-modules/eks/aws"
version = "v13.2.1"
version = "14.0.0"

vpc_id = module.vpc.vpc_id
subnets = module.vpc.subnets

cluster_name = module.vpc.cluster_name
cluster_version = var.kubernetes_version
manage_aws_auth = false
# This kubeconfig expires in 15 minutes, so we'll use another method.
manage_aws_auth = false # Managed in ./kubernetes-config/main.tf instead.
# This kubeconfig expires in 15 minutes, so we'll use an exec block instead.
# See ./kubernetes-config/main.tf provider block for details.
write_kubeconfig = false

workers_group_defaults = {
root_volume_type = "gp2"
**Member:** GP2 volumes are generally more expensive. Is there a technical need for nodes to run on GP2 volumes? We're likely not putting much workload on the nodes during our tests.

**Contributor (Author):** gp2 was the default in the EKS module, but they recently updated it to gp3. I put it back to the old default because gp3 isn't available in all regions.

**Contributor (Author):** This is the commit where it was added: terraform-aws-modules/terraform-aws-eks@76537d1

**Member:** Oh, if that's the case then disregard my comment. I must have based it on old information, and likely GPx volumes are now the only choice. Sorry for the confusion :)
}
worker_groups = [
{
instance_type = var.workers_type
asg_desired_capacity = var.workers_count
asg_max_size = "10"
asg_max_size = 4
},
]

@@ -45,11 +95,8 @@ module "cluster" {
}
}

module "node-config" {
source = "./node-config"
k8s_node_role_arn = list(module.cluster.worker_iam_role_arn)
cluster_ca = module.cluster.cluster_certificate_authority_data
cluster_name = module.cluster.cluster_id # creates dependency on cluster creation
cluster_endpoint = module.cluster.cluster_endpoint
cluster_oidc_issuer_url = module.cluster.cluster_oidc_issuer_url
module "kubernetes-config" {
cluster_name = module.cluster.cluster_id # creates dependency on cluster creation
source = "./kubernetes-config"
k8s_node_role_arn = module.cluster.worker_iam_role_arn
}