feat: Eliminated the use of temporary keys for tests with kubectl in privatelink-access pattern (#1806)

Co-authored-by: Bryant Biggs <[email protected]>
vchintal and bryantbiggs authored Oct 25, 2023
1 parent 91cf9fe commit 4856ec8
Showing 18 changed files with 185 additions and 106 deletions.
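If you prefer to review the change in a terminal rather than in the web view, the same summary and the full patch can be pulled up with plain `git`; this is a generic sketch, assuming the repository is already cloned and the commit is available locally.

```sh
# File-level summary (matches the "18 changed files" stats above)
git show --stat 4856ec8

# Full patch for just the rewritten pattern documentation
git show 4856ec8 -- patterns/privatelink-access/README.md
```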
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.4.0
+    rev: v4.5.0
     hooks:
       - id: trailing-whitespace
         args: ['--markdown-linebreak-ext=md']
@@ -10,7 +10,7 @@ repos:
       - id: detect-aws-credentials
         args: ['--allow-missing-credentials']
   - repo: https://github.com/antonbabenko/pre-commit-terraform
-    rev: v1.83.4
+    rev: v1.83.5
     hooks:
       - id: terraform_fmt
       - id: terraform_docs
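The two hunks above are routine hook-version bumps. The `trailing-whitespace` hook listed here is also the most likely source of the whitespace-only YAML changes further down. A minimal local run, assuming `pre-commit` is installed, looks like this:

```sh
# Bump hook revisions to their latest tags, then run every hook across the repo
pre-commit autoupdate
pre-commit run --all-files
```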
@@ -2,4 +2,4 @@ apiVersion: v2
 name: demo-application
 description: A Helm chart to deploy the demo-application
 type: application
-version: 1.0.0
+version: 1.0.0
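This is the chart metadata for the `demo-application` Helm chart; the visible content of the version line is unchanged (a whitespace-only fix). For context, a chart like this is typically rendered or installed with the standard Helm commands; the local chart path below is an assumption for illustration, not something defined in this commit.

```sh
# Render the chart locally to inspect the generated manifests
helm template demo-application ./demo-application

# Install it into its own namespace
helm install demo-application ./demo-application --namespace demo --create-namespace
```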
@@ -1,7 +1,7 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: backend
+  name: backend
   namespace: stars
 spec:
   replicas: 1
@@ -11,15 +11,15 @@ spec:
   template:
     metadata:
       labels:
-        role: backend
+        role: backend
     spec:
       containers:
-        - name: backend
+        - name: backend
           image: calico/star-probe:v0.1.0
           imagePullPolicy: Always
           command:
             - probe
             - --http-port=6379
             - --urls=http://frontend.stars:80/status,http://backend.stars:6379/status,http://client.client:9000/status
           ports:
-            - containerPort: 6379
+            - containerPort: 6379
@@ -1,11 +1,11 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: backend
+  name: backend
   namespace: stars
 spec:
   ports:
     - port: 6379
-      targetPort: 6379
+      targetPort: 6379
   selector:
-    role: backend
+    role: backend
@@ -1,7 +1,7 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: client
+  name: client
   namespace: client
 spec:
   replicas: 1
@@ -11,14 +11,14 @@ spec:
   template:
     metadata:
       labels:
-        role: client
+        role: client
     spec:
       containers:
-        - name: client
+        - name: client
           image: calico/star-probe:v0.1.0
           imagePullPolicy: Always
           command:
             - probe
             - --urls=http://frontend.stars:80/status,http://backend.stars:6379/status
           ports:
-            - containerPort: 9000
+            - containerPort: 9000
@@ -3,4 +3,4 @@ kind: Namespace
 metadata:
   name: client
   labels:
-    role: client
+    role: client
@@ -5,7 +5,7 @@ metadata:
   namespace: client
 spec:
   ports:
-    - port: 9000
+    - port: 9000
       targetPort: 9000
   selector:
-    role: client
+    role: client
@@ -1,7 +1,7 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: frontend
+  name: frontend
   namespace: stars
 spec:
   replicas: 1
@@ -11,15 +11,15 @@ spec:
   template:
     metadata:
       labels:
-        role: frontend
+        role: frontend
     spec:
       containers:
-        - name: frontend
+        - name: frontend
           image: calico/star-probe:v0.1.0
           imagePullPolicy: Always
           command:
             - probe
             - --http-port=80
             - --urls=http://frontend.stars:80/status,http://backend.stars:6379/status,http://client.client:9000/status
           ports:
-            - containerPort: 80
+            - containerPort: 80
@@ -1,11 +1,11 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: frontend
+  name: frontend
   namespace: stars
 spec:
   ports:
-    - port: 80
-      targetPort: 80
+    - port: 80
+      targetPort: 80
   selector:
-    role: frontend
+    role: frontend
@@ -1,8 +1,8 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: management-ui
-  namespace: management-ui
+  name: management-ui
+  namespace: management-ui
 spec:
   replicas: 1
   selector:
@@ -11,11 +11,11 @@ spec:
   template:
     metadata:
       labels:
-        role: management-ui
+        role: management-ui
     spec:
       containers:
-        - name: management-ui
+        - name: management-ui
           image: calico/star-collect:v0.1.0
           imagePullPolicy: Always
           ports:
-            - containerPort: 9001
+            - containerPort: 9001
@@ -1,6 +1,6 @@
 apiVersion: v1
 kind: Namespace
 metadata:
-  name: management-ui
+  name: management-ui
   labels:
-    role: management-ui
+    role: management-ui
@@ -1,12 +1,12 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: management-ui
-  namespace: management-ui
+  name: management-ui
+  namespace: management-ui
 spec:
   type: LoadBalancer
   ports:
-    - port: 80
+    - port: 80
       targetPort: 9001
   selector:
-    role: management-ui
+    role: management-ui
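The `management-ui` Service above is `type: LoadBalancer`, mapping service port 80 to the UI container's port 9001. If you want to reach the UI without provisioning a load balancer while experimenting, a local port-forward against the same Service is one option; this is a sketch, not something the commit prescribes.

```sh
# Forward local port 9001 to the Service's port 80 (which targets the pod's 9001),
# then browse to http://localhost:9001
kubectl port-forward --namespace management-ui svc/management-ui 9001:80
```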
@@ -1,4 +1,4 @@
 apiVersion: v1
 kind: Namespace
 metadata:
-  name: stars
+  name: stars
136 changes: 70 additions & 66 deletions patterns/privatelink-access/README.md
@@ -7,17 +7,37 @@ for further details on `AWS PrivateLink`.

 ## Deploy

-See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
+See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and follow the steps below to deploy this pattern.

-## Validate
+```sh
+terraform init
+terraform apply -target=module.eventbridge -target=module.nlb --auto-approve
+terraform apply --auto-approve
+```

-### Network Connectivity
+Once the pattern has successfully deployed, you will be provided with multiple
+output values.

-An output `ssm_test` has been provided to aid in quickly testing the
-connectivity from the client EC2 instance to the private EKS cluster via AWS
-PrivateLink. Copy the output value and paste it into your terminal to execute
-and check the connectivity. If configured correctly, the value returned should
-be `ok`.
+Review the output value for `cluster_endpoint_private`; it should look similar
+to the snippet below:
+
+```sh
+aws eks update-cluster-config \
+--region us-west-2 \
+--name privatelink-access \
+--resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
+```
+
+Copy the command and run it in a terminal session to make the cluster API
+endpoint private.
+
+## Test access to EKS Kubernetes API server endpoint
+
+Of the other output values, the value `ssm_test` is provided to aid in quickly
+testing the connectivity from the client EC2 instance to the private EKS cluster
+via AWS PrivateLink. Copy the output value, which looks like the snippet shown
+below (as an example) and paste it into your terminal to execute and check the
+connectivity. If configured correctly, the value returned should be `ok`.

 ```sh
 COMMAND="curl -ks https://9A85B21811733524E3ABCDFEA8714642.gr7.us-west-2.eks.amazonaws.com/readyz"
@@ -36,78 +56,62 @@ aws ssm get-command-invocation --region us-west-2 \
   --output text
 ```

-### Cluster Access

-To test access to the cluster, you will need to execute Kubernetes API calls
-from within the private network to access the cluster. An EC2 instance has been
-deployed into a "client" VPC to simulate this scenario. However, since the EKS
-cluster was created with your local IAM identity, the `aws-auth` ConfigMap will
-only have your local identity that is permitted to access the cluster. Since
-cluster's API endpoint is private, we cannot use Terraform to reach it to
-add additional entries to the ConfigMap; we can only access the cluster from
-within the private network of the cluster's VPC or from the client VPC using AWS
-PrivateLink access.

-!!! info
-    The "client" EC2 instance provided and copying of AWS credentials to
-    that instance are merely for demonstration purposes only. Please consider
-    alternate methods of network access such as AWS Client VPN to provide more
-    secure access.
+## Test access to EKS Kubernetes API with `kubectl`

 Perform the following steps to access the cluster with `kubectl` from the
-provided "client" EC2 instance.

-1. Execute the command below on your local machine to get temporary credentials
-that will be used on the "client" EC2 instance:
+provided Client EC2 instance.

-```sh
-aws sts get-session-token --duration-seconds 3600 --output yaml
-```
+### Log into the Client EC2 instance
+Start a new SSM session on the Client EC2 instance using the provided
+`ssm_start_session` output value. It should look similar to the snippet
+shown below. Copy the output value and paste it into your terminal to execute.
+Your terminal will now be connected to the Client EC2 instance.

-2. Start a new SSM session on the "client" EC2 instance using the provided
-`ssm_start_session` output value. Copy the output value and paste it into your
-terminal to execute. Your terminal will now be connected to the "client" EC2
-instance.
+```sh
+aws ssm start-session --region us-west-2 --target i-0280cf604085f4a44
+```

-```sh
-aws ssm start-session --region us-west-2 --target i-0280cf604085f4a44
-```
+### Update Kubeconfig
+On the Client EC2 instance, run the following command to update the local
+`~/.kube/config` file to enable access to the cluster:

-3. Once logged in, export the following environment variables from the output
-of step #1:
+```sh
+aws eks update-kubeconfig --region us-west-2 --name privatelink-access
+```

-!!! warning
-    The session credentials are only valid for 1 hour; you can
-    adjust the session duration in the command provided in step #1
+### Test complete access with `kubectl`
+Test access by listing the pods running on the cluster:

-```sh
-export AWS_ACCESS_KEY_ID=XXXX
-export AWS_SECRET_ACCESS_KEY=YYYY
-export AWS_SESSION_TOKEN=ZZZZ
-```
+```sh
+kubectl get pods -A
+```

-4. Run the following command to update the local `~/.kube/config` file to enable
-access to the cluster:
+```text
+NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
+kube-system   aws-node-4f8g8             1/1     Running   0          1m
+kube-system   coredns-6ff9c46cd8-59sqp   1/1     Running   0          1m
+kube-system   coredns-6ff9c46cd8-svnpb   1/1     Running   0          2m
+kube-system   kube-proxy-mm2zc           1/1     Running   0          1m
+```

-```sh
-aws eks update-kubeconfig --region us-west-2 --name privatelink-access
-```
+## Destroy

-5. Test access by listing the pods running on the cluster:
+Before destroying/tearing down all the resources created, we need to ensure
+that the cluster state is restored so that Terraform can perform a complete
+cleanup. This means making the cluster API endpoint public again.

-```sh
-kubectl get pods -A
-```
+Review the output value for `cluster_endpoint_public`; it should look similar
+to the snippet below:

-```text
-NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
-kube-system   aws-node-4f8g8             1/1     Running   0          1m
-kube-system   coredns-6ff9c46cd8-59sqp   1/1     Running   0          1m
-kube-system   coredns-6ff9c46cd8-svnpb   1/1     Running   0          2m
-kube-system   kube-proxy-mm2zc           1/1     Running   0          1m
-```
+```sh
+aws eks update-cluster-config \
+--region us-west-2 \
+--name privatelink-access \
+--resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true
+```

-## Destroy
+Copy the command and run it in a terminal session to make the cluster API
+endpoint public.

 {%
   include-markdown "../../docs/_partials/destroy.md"
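Stepping outside the diff: both the deploy and destroy flows above toggle the cluster's endpoint access with `aws eks update-cluster-config`. A quick way to confirm the current state of those flags, assuming the same region and cluster name (this check is not part of the pattern's README), is:

```sh
# Show whether the EKS API endpoint is currently public and/or private
aws eks describe-cluster \
  --region us-west-2 \
  --name privatelink-access \
  --query 'cluster.resourcesVpcConfig.{publicAccess: endpointPublicAccess, privateAccess: endpointPrivateAccess}'
```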