Chore/service certs #113
Conversation
also tweak kube*.sh generation to avoid confusion between terraform template vars and bash vars
give limited access to the k8s cluster to a 'worker' user
* use the kube-aws CA
* deployments mount the appropriate cert and CA
* still need to wire up each service to use the cert, listen on https, connect over https, and register the CA with the trust store
* still need to test and document more
* tweak permissions in the role attached to the kube provisioner
* no_proxy for the AWS metadata service at 169.254.169.254
* encrypt backup
* no quotes around "~/" in .sh scripts ... that kind of thing
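The no_proxy tweak above can be sketched roughly as follows; the exact variable handling is an assumption, but 169.254.169.254 is the standard link-local address of the AWS metadata service:

```shell
# Sketch: make sure calls to the AWS metadata service bypass any HTTP
# proxy by appending its link-local address to no_proxy.
metadata_ip="169.254.169.254"
no_proxy="${no_proxy:+${no_proxy},}${metadata_ip}"
export no_proxy
echo "$no_proxy"
```

Without this, the aws cli and kube-aws would try to reach the metadata service through the cluster's proxy and fail to fetch temporary credentials.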
tf_files/aws/data.tf (outdated):

    }

    statement {
      actions = [ "ec2:*" ]
Is this thing above redundant then?
tf_files/aws/data.tf (outdated):

    statement {
      actions = [
        "rds:*",
        "cloudwatch:DescribeAlarms",
We already allowed cloudwatch:*?
tf_files/aws/data.tf (outdated):

    }

    statement {
      effect = "Allow"
Can we get rid of this? Or at least reduce the scope of it? This will allow you to do everything...?
Can you add port 443 to the -service.yamls in kube/services/ too, please?
Thanks, Zac!
provisioner permissions are still too broad - kube-aws has an open issue about specifying more precisely which permissions kube-aws actually needs: kubernetes-retired/kube-aws#90
services and deployments .yaml ready for TLS listeners
Hey Zac, I just pushed a patch:
The permissions on the provisioner are still very broad - basically 'admin', since the policy grants IAM* - so it's not really any better than what we were doing before (copying up an admin credentials.json), except that it gives us a path forward to trim down the permissions on the role: if we update the permissions, then terraform apply updates the inline policy in place. Getting the right set of permissions for kube-aws is a project in itself - the kube-aws project has an open issue for it: kubernetes-retired/kube-aws#90

Anyway - I'd like to leave the provisioner permissions the way they are, since it's no worse than what we had been doing, and create a separate issue to narrow down the permissions. Another option is to wire up 'kube-up.sh' to run 'aws iam delete-role-policy ...' after kube-aws up finishes - which would either trim the provisioner's permissions down to nothing, or put a more limited set of permissions in place after kube-aws has done its thing.

What do you think?
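The "delete the inline policy after kube-aws finishes" option could be sketched like this; the role and policy names below are hypothetical, not the actual names used by the repo, and the helper echoes the command rather than running it:

```shell
# Hypothetical sketch: after `kube-aws up` succeeds, strip the broad
# inline policy from the provisioner role. Echo the aws cli command
# instead of executing it so this sketch is a safe dry-run; drop the
# echo to actually delete the policy.
trim_provisioner_policy() {
  echo aws iam delete-role-policy \
    --role-name "$1" --policy-name "$2"
}

trim_provisioner_policy kube-provisioner kube-provisioner-inline
```

Running it after kube-aws up would leave the provisioner with no inline permissions at all; a variant could instead `put-role-policy` a narrower document.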
rename to kube-vars.sh.tpl to clarify it is a terraform template - not a .sh script - which should make codacy style-check happier :-)
resolve #105
terraform launches the kube provisioner with an AWS instance profile that maps to a role with less than admin permissions. kube-aws and the aws cli acquire temporary creds from the AWS metadata service associated with the kube-provisioner's EC2 instance. We no longer copy ~/.aws/credentials to the kube provisioner
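The temporary-credentials flow works because the metadata service serves role credentials at a well-known path; the helper and role name below are assumptions for illustration:

```shell
# Sketch: an instance profile exposes temporary credentials through the
# link-local metadata service. creds_url is a hypothetical helper that
# builds the endpoint for a given role name.
creds_url() {
  echo "http://169.254.169.254/latest/meta-data/iam/security-credentials/$1"
}

# the provisioner's tools would fetch creds with something like:
#   curl -s "$(creds_url kube-provisioner)"
creds_url kube-provisioner
```

kube-aws and the aws cli do this lookup themselves, which is why no ~/.aws/credentials file needs to be copied up.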
resolve #110
the tf_files/configs/kube-certs.sh script runs with kube-services.sh, and can also be run independently at any time to automatically create certificates (via the k8s CA configs in ~/VPC_NAME/credentials) and k8s secrets for k8s services discovered via grep'ing into the services/ folder.
This patch also updates the various kube/*/*-deployment.yaml files to mount the appropriate SSL secrets under /mnt/ssl. The kube/README.md has more details
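The discovery step described above could look roughly like this; the directory layout and helper name are assumptions based on the description, not the actual kube-certs.sh code:

```shell
# Sketch: derive service names by scanning a services/ directory for
# *-service.yaml files, the way kube-certs.sh is described as doing
# via grep. Each discovered name would then get a cert signed by the
# k8s CA and a matching k8s secret.
list_services() {
  for f in "$1"/*-service.yaml; do
    [ -e "$f" ] || continue        # skip the literal glob when no matches
    basename "$f" -service.yaml    # strip the suffix to get the name
  done
}
```

Driving cert creation off the files already in services/ means a newly added service picks up a cert and secret on the next kube-certs.sh run without extra wiring.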
resolve planx-misc-issue-tracker issue 15 - https://github.com/uc-cdis/planx-misc-issue-tracker/issues/15
the services/workspace/deploy_workspace.sh script creates
There's a workspace/README.md with an overview.