
Add the Pulumi Kubernetes Operator #141

Merged: 8 commits, Sep 21, 2023

Conversation

jkodroff
Contributor

Issue #, if available:

Description of changes:

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@jkodroff jkodroff marked this pull request as draft September 19, 2023 18:51
Contributor

@elamaran11 elamaran11 left a comment

@jkodroff Thank you for the quick work on this PR. A couple of important things:

  1. You are missing an ExternalSecret resource to reference your secret. Here is an example.
  2. We need a CronJob that runs on a schedule rather than a Job. Check this for an example.
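Taken together, the two requests above might look like the following sketch. Everything in it is illustrative: the API versions, secret store name, schedule, image, and script path are assumptions, not taken from this PR.

```yaml
# Hypothetical sketch of the two requested resources.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: pulumi-access-token
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: eksa-secret-store   # assumed store name
    kind: ClusterSecretStore
  target:
    name: pulumi-access-token # the Kubernetes Secret to create
  data:
    - secretKey: accessToken          # key inside the created Secret
      remoteRef:
        key: pulumi-access-token      # key name in the external store
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pulumi-tester
spec:
  schedule: "0 * * * *"   # hourly; the real schedule is a project decision
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: tester
              image: alpine/k8s:1.26.2          # assumed image tag
              command: ["/bin/sh", "-c", "./test.sh"]  # assumed entrypoint
```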

eks-anywhere-common/Testers/Pulumi/pulumi-tester-job.yaml (outdated)
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: pulumi-kubernetes-operator
Contributor

Consider naming this pulumi rather than pulumi-kubernetes-operator since, in concept, it could serve other charts too.

Contributor

I made this change.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pulumi-tester-role
Contributor

I think it's atypical for an object name to have its type as a suffix. In other words, I would name this pulumi-tester. Likewise for the binding.

Contributor Author

Good call. I do not like smurf naming.

apiVersion: v1
kind: ConfigMap
metadata:
  name: pulumi-tester-configmap


@jkodroff jkodroff marked this pull request as ready for review September 19, 2023 20:57
@jkodroff
Contributor Author

@elamaran11 Did my best to get the ExternalSecret right, but won't have time to test. It should require a minor tweak at worst.

@elamaran11
Contributor

@jkodroff Who will fix the above changes pointed out by @EronWright? Let me know when done and I can test it.

Contributor

@elamaran11 elamaran11 left a comment

@jkodroff One comment on the token reference.

valueFrom:
  secretKeyRef:
    name: pulumi-access-token
    key: value
Contributor

@jkodroff This doesn't seem to match your ExternalSecret Kubernetes resource. If the key is pulumi-access-token, it should match.

Contributor Author

I believe this is now fixed.

Contributor

I would recommend giving a meaningful name like accessToken for key.
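With a meaningful key, the CronJob's env reference must match the key the ExternalSecret writes into its target Secret. A sketch of how the two sides would line up (the env var name and Secret name are assumptions):

```yaml
# The ExternalSecret is assumed to write key `accessToken` into
# a Secret named `pulumi-access-token`.
env:
  - name: PULUMI_ACCESS_TOKEN
    valueFrom:
      secretKeyRef:
        name: pulumi-access-token  # the Secret the ExternalSecret targets
        key: accessToken           # must equal the ExternalSecret's secretKey
```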

Contributor

@elamaran11 elamaran11 left a comment

Minor feedback, but I can work on the testing tomorrow.

@@ -0,0 +1 @@
kubeconfig.yml
Contributor

Please remove this file.

Contributor

@EronWright EronWright Sep 20, 2023

Do you mean the .gitignore file? What is the rationale?

Contributor

Why do we need this .gitignore file? We have one at the root.

Contributor

Done

targetNamespace: pulumi
chart:
  spec:
    chart: pulumi-kubernetes-operator
Contributor

Should we specify a chart version, e.g. 0.3.0?

Contributor

Yes, that's mandatory. Nice catch.
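In a Flux HelmRelease, the version goes under chart.spec. A sketch, assuming the HelmRepository was renamed to pulumi as suggested earlier in the thread:

```yaml
chart:
  spec:
    chart: pulumi-kubernetes-operator
    version: "0.3.*"   # SemVer range; Flux resolves the newest matching release
    sourceRef:
      kind: HelmRepository
      name: pulumi     # assumed repository name
```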

Contributor

Did you test the Helm release? Did it work?

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I fixed this to use 0.3.*, does that seem right @elamaran11 ?

Contributor

Where is the change? I don't see the version.

Contributor

@elamaran11 elamaran11 left a comment

@jkodroff @EronWright Have some feedback on the PR which needs to be fixed.

  1. The ExternalSecret key name and secret reference are not correct, so the installation fails. Also, use a meaningful name for secretKey.
  2. The secret reference in the CronJob should have a meaningful name, aligned with comment 1.

data:
  - secretKey: pulumi-access-token # which key it's going to be stored under
    remoteRef:
      key: value # our secret name goes here
Contributor

I would rather have a specific name for the KVP than value

Contributor

secretKey should be something reasonable like accessToken, and key should be the key name in Secrets Manager, which is pulumi-access-token.
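Applied to the snippet under review, the suggested mapping would read along these lines (a sketch of the reviewer's suggestion, not the final code):

```yaml
data:
  - secretKey: accessToken       # key written into the target Kubernetes Secret
    remoteRef:
      key: pulumi-access-token   # key name in AWS Secrets Manager
```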

targetNamespace: pulumi
chart:
  spec:
    chart: pulumi-kubernetes-operator
Contributor

Did you test the Helm release? Did it work?

@@ -0,0 +1,7 @@
apiVersion: v1
Contributor

If this configmap is for the tester, it should be moved to the Testers/Pulumi folder.

@@ -0,0 +1,10 @@
---
apiVersion: rbac.authorization.k8s.io/v1
Contributor

If this configmap is for the tester, it should be moved to the Testers/Pulumi folder.

@@ -0,0 +1,13 @@
---
apiVersion: rbac.authorization.k8s.io/v1
Contributor

If this configmap is for the tester, it should be moved to the Testers/Pulumi folder.

@@ -0,0 +1,6 @@
---
Contributor

If this configmap is for the tester, it should be moved to the Testers/Pulumi folder.

@@ -0,0 +1 @@
kubeconfig.yml
Contributor

Remove this file.

Use alpine:k8s:1.26.2 image.
Use kubectl wait rather than sleep.
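Replacing sleeps with `kubectl wait` polls for a condition instead of guessing a duration. A sketch based on the Stack resource seen in the test logs; the condition name and timeouts are assumptions:

```shell
# Block until the operator reports the Stack condition, instead of sleeping.
kubectl wait stacks.pulumi.com/eks-pulumi-operator-test \
  --for=condition=Ready --timeout=300s

# After deletion, block until the object is actually gone.
kubectl wait stacks.pulumi.com/eks-pulumi-operator-test \
  --for=delete --timeout=120s
```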
Contributor

@elamaran11 elamaran11 left a comment

@jkodroff @EronWright I would recommend making all the changes listed in the PR and also testing locally once, along with sharing a screenshot of the test job working locally. Once you have it, I can proceed with testing. At this point the testing is blocked due to errors with the test job secrets.

targetNamespace: pulumi
chart:
  spec:
    chart: pulumi-kubernetes-operator
Contributor

Where is the change? I don't see the version.

@EronWright
Contributor

@elamaran11 Thanks for your patience; I encountered a small delay in pushing the changes due to a permissions issue on the fork. I believe I've addressed the feedback; of course, we can discuss further once you can see the changes.

Meanwhile, here's the local test output (tested using EKS):

pulumi-k8s-operator-test-28254296-wd2c4 script 
pulumi-k8s-operator-test-28254296-wd2c4 script Writing out test manifest file '/tmp/pulumi-test-stack-1695257760-31679.yaml'
pulumi-k8s-operator-test-28254296-wd2c4 script 
pulumi-k8s-operator-test-28254296-wd2c4 script Deploying sample stack and program.
pulumi-k8s-operator-test-28254296-wd2c4 script program.pulumi.com/eks-pulumi-operator-test unchanged
pulumi-k8s-operator-test-28254296-wd2c4 script stack.pulumi.com/eks-pulumi-operator-test created
pulumi-k8s-operator-test-28254296-wd2c4 script 
pulumi-k8s-operator-test-28254296-wd2c4 script Waiting for the operator to deploy the stack.
pulumi-k8s-operator-test-28254296-wd2c4 script stack.pulumi.com/eks-pulumi-operator-test condition met
pulumi-k8s-operator-test-28254296-wd2c4 script 
pulumi-k8s-operator-test-28254296-wd2c4 script Verifying that the stack exists.
pulumi-k8s-operator-test-28254296-wd2c4 script {"orgName":"eron-pulumi-corp","projectName":"eks-pulumi-operator-test","stackName":"test-1695257760-31679","activeUpdate":"687b45a4-2d70-47b6-8031-3073fb35590e","tags":{"gitHub:owner":"eron-pulumi-corp","pulumi:project":"eks-pulumi-operator-test","pulumi:runtime":"yaml","pulumi:secrets_provider":"service"},"version":1}
pulumi-k8s-operator-test-28254296-wd2c4 script 
pulumi-k8s-operator-test-28254296-wd2c4 script Destroying K8s Stack resource
pulumi-k8s-operator-test-28254296-wd2c4 script stack.pulumi.com "eks-pulumi-operator-test" deleted
pulumi-k8s-operator-test-28254296-wd2c4 script 
pulumi-k8s-operator-test-28254296-wd2c4 script Waiting for the operator to remove the stack
pulumi-k8s-operator-test-28254296-wd2c4 script 
pulumi-k8s-operator-test-28254296-wd2c4 script Verifying the stack no longer exists
pulumi-k8s-operator-test-28254296-wd2c4 script 
pulumi-k8s-operator-test-28254296-wd2c4 script Deleting test manifest file '/tmp/pulumi-test-stack-1695257760-31679.yaml'

@EronWright
Contributor

@elamaran11 the changes have been pushed, please take another look.

Contributor

@elamaran11 elamaran11 left a comment

@EronWright Code changes look good for both the Helm deployment and the test job. I will start testing this in my lab today. Thank you.

@elamaran11
Contributor

Hi @EronWright, I see the stack is created and destroyed, and the test job concluded successfully. This is good news. I can say my test is concluded in the EKS-A Bare Metal environment; I will continue my tests in other environments. Just curious though: I don't see the stack in the Pulumi console. Is it because of the instant create and delete?

❯ k logs pulumi-k8s-operator-test-002-df9kq -n pulumi

Writing out test manifest file '/tmp/pulumi-test-stack-1695303142-10787.yaml'

Deploying sample stack and program.
program.pulumi.com/eks-pulumi-operator-test unchanged
stack.pulumi.com/eks-pulumi-operator-test created

Waiting for the operator to deploy the stack.
stack.pulumi.com/eks-pulumi-operator-test condition met

Verifying that the stack exists.
{"orgName":"aws-partnership","projectName":"eks-pulumi-operator-test","stackName":"test-1695303142-10787","activeUpdate":"b9968d89-db2f-4634-b3c2-68a3da3f55a9","tags":{"gitHub:owner":"aws-partnership","pulumi:project":"eks-pulumi-operator-test","pulumi:runtime":"yaml","pulumi:secrets_provider":"service"},"version":1}

Destroying K8s Stack resource
stack.pulumi.com "eks-pulumi-operator-test" deleted

Waiting for the operator to remove the stack

Verifying the stack no longer exists

Deleting test manifest file '/tmp/pulumi-test-stack-1695303142-10787.yaml'

@elamaran11
Contributor

@EronWright I got to understand it's instant, and I also saw the same in the console for a quick few seconds. The product is validated across EKS-A Bare Metal, vSphere, and EKS Local clusters. EKS-A Snowball is still pending; I will complete that testing today or tomorrow when I get to the lab.

Contributor

@elamaran11 elamaran11 left a comment

Everything works fine. Congratulations and thank you @EronWright @jkodroff. Please continue to maintain your software for version upgrades via PRs.

@elamaran11 elamaran11 merged commit b6d522c into aws-samples:main Sep 21, 2023
@EronWright EronWright deleted the pulumi-k8s-operator branch September 22, 2023 17:04