
RDS service controller #237

Closed
tabern opened this issue Aug 25, 2020 · 15 comments
Labels
kind/new-service Categorizes issue or PR as related to a new service. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@tabern
Contributor

tabern commented Aug 25, 2020

New ACK Service Controller

Support for Amazon RDS

List of API resources

List the API resources in order of importance to you:

@tabern tabern added the kind/new-service Categorizes issue or PR as related to a new service. label Aug 25, 2020
@tabern tabern changed the title [name] service controller RDS service controller Aug 25, 2020
@jaypipes jaypipes added the RDS label Sep 4, 2020
@rektide

rektide commented Sep 21, 2020

There was an interesting ask that crossed my feeds, about using cloud-native infrastructure to manage users & databases on a legacy/non-cloud Postgres.

That seems like a pretty wild use case, but I do think it's important to surface the question of how, if at all, legacy RDS users can migrate to ACK-managed RDS. I proposed that perhaps the RDS service controller might be able to seed a new database from an existing database's backup. That way there's a migration path into ACK-managed RDS.
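The seeding idea above maps onto an existing RDS API. A minimal, hypothetical Python sketch of what the controller could wrap (all resource names below are invented for illustration; only the boto3 call named in the comment is a real API):

```python
# Hypothetical sketch, not the controller's actual code: seed a new,
# ACK-managed instance from a snapshot of the legacy database via the
# RDS RestoreDBInstanceFromDBSnapshot API. Names are made up.
def restore_params(new_instance_id, snapshot_id, instance_class="db.t3.medium"):
    """Build the kwargs for rds.restore_db_instance_from_db_snapshot()."""
    return {
        "DBInstanceIdentifier": new_instance_id,
        "DBSnapshotIdentifier": snapshot_id,
        "DBInstanceClass": instance_class,
    }

params = restore_params("migrated-postgres", "legacy-postgres-final-snapshot")
# In a real reconciler this would be passed to boto3:
#   boto3.client("rds").restore_db_instance_from_db_snapshot(**params)
print(params["DBSnapshotIdentifier"])
```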

@jaypipes
Collaborator

> There was an interesting ask that crossed my feeds, about using cloud-native infrastructure to manage users & databases on a legacy/non-cloud postgres.
>
> That seems like a pretty wild use case, but I do think it's important to surface the question of how, if at all, legacy RDS users can migrate to ACK-managed RDS. I proposed that perhaps RDS service controller might be able to seed a new database from an existing database's backup. That way there's some migration into ACK-managed RDS available.

Hi @rektide! :) Here is a related GH issue around this: #41

@tabern tabern added this to the Phase 3 Dev Preview milestone Oct 9, 2020
@jaypipes jaypipes modified the milestones: Phase 3 Dev Preview, RDS developer preview Nov 10, 2020
@jaypipes jaypipes removed this from the RDS developer preview milestone Jan 8, 2021
jaypipes added a commit to jaypipes/ack-rds-controller that referenced this issue Mar 22, 2021
check in the initial RDS controller with support for only the
DBSubnetGroup resource for now. e2e test upcoming to the community
repository.

aws-controllers-k8s/community#237
jaypipes added a commit to jaypipes/aws-controllers-k8s that referenced this issue Mar 22, 2021
This also includes a quick fix to tests/e2e/run-tests.sh that was
referring to the wrong variable name.

```
[jaypipes@thelio community]$ TEST_HELM_CHARTS=false make kind-test SERVICE=rds AWS_ROLE_ARN=$ROLE_ARN
checking AWS credentials ... ok.
creating kind cluster ack-test-7444685b-c9f2a7d8 ... ok.
<snip>
loading the images into the cluster ... ok.
loading CRD manifests for rds into the cluster ... ok.
loading RBAC manifests for rds into the cluster ... ok.
loading service controller Deployment for rds into the cluster ...ok.
generating AWS temporary credentials and adding to env vars map ... ok.
======================================================================================================
To poke around your test cluster manually:
export KUBECONFIG=/home/jaypipes/go/src/github.com/aws-controllers-k8s/community/scripts/lib/../../build/tmp-ack-test-7444685b-c9f2a7d8/kubeconfig
kubectl get pods -A
======================================================================================================
running python tests in Docker...
running python tests locally...
INFO:root:Created VPC vpc-02d017b23f7443c8e
INFO:root:Created VPC Subnet subnet-0c2e1b19c1bd32133 in AZ us-west-2a
INFO:root:Created VPC Subnet subnet-0f5a98b367898b437 in AZ us-west-2b
INFO:root:Wrote bootstrap to /root/tests/rds/bootstrap.yaml
============================================================================================================ test session starts ============================================================================================================
platform linux -- Python 3.8.8, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /root/tests
plugins: forked-1.3.0, xdist-2.2.0
[gw0] Python 3.8.8 (default, Feb 19 2021, 18:07:06)  -- [GCC 8.3.0]
[gw1] Python 3.8.8 (default, Feb 19 2021, 18:07:06)  -- [GCC 8.3.0]
[gw2] Python 3.8.8 (default, Feb 19 2021, 18:07:06)  -- [GCC 8.3.0]
[gw3] Python 3.8.8 (default, Feb 19 2021, 18:07:06)  -- [GCC 8.3.0]
[gw6] Python 3.8.8 (default, Feb 19 2021, 18:07:06)  -- [GCC 8.3.0]
[gw5] Python 3.8.8 (default, Feb 19 2021, 18:07:06)  -- [GCC 8.3.0]
[gw4] Python 3.8.8 (default, Feb 19 2021, 18:07:06)  -- [GCC 8.3.0]
[gw7] Python 3.8.8 (default, Feb 19 2021, 18:07:06)  -- [GCC 8.3.0]
gw0 [1] / gw1 [1] / gw2 [1] / gw3 [1] / gw4 [1] / gw5 [1] / gw6 [1] / gw7 [1]
scheduling tests via LoadFileScheduling

rds/tests/test_db_subnet_group.py::TestDBSubnetgroup::test_create_delete_2az
[gw0] [100%] PASSED rds/tests/test_db_subnet_group.py::TestDBSubnetgroup::test_create_delete_2az

============================================================================================================ 1 passed in 37.52s =============================================================================================================
INFO:root:Deleted VPC Subnet subnet-0c2e1b19c1bd32133
INFO:root:Deleted VPC Subnet subnet-0f5a98b367898b437
INFO:root:Deleted VPC vpc-02d017b23f7443c8e
To resume test with the same cluster use: " TMP_DIR=/home/jaypipes/go/src/github.com/aws-controllers-k8s/community/scripts/lib/../../build/tmp-ack-test-7444685b-c9f2a7d8
    AWS_SERVICE_DOCKER_IMG=aws-controllers-k8s:rds-v0.0.2-78-ge78e3cd-dirty "
[jaypipes@thelio community]$
```

Issue aws-controllers-k8s#237
jaypipes added a commit to jaypipes/aws-controllers-k8s that referenced this issue Mar 23, 2021
jaypipes added a commit to jaypipes/ack-rds-controller that referenced this issue Mar 23, 2021
jaypipes added a commit to jaypipes/aws-controllers-k8s that referenced this issue Mar 23, 2021
Adds a super simple create, read, and delete test for a DB security
group in the RDS API.

Handling allow and revoke of an EC2 security group or IPRange member of
a DB Security Group will be handled in a followup patch after the
rds-controller's DBSecurityGroup resource manager gets custom code that
handles those attributes.

Issue aws-controllers-k8s#237
@stevehipwell

I'm interested in whether there have been any discussions around managing multiple databases and users on an RDS instance. We currently manage this via Terraform, but supporting this workflow in a controller would be really useful.

@PatTheSilent

@stevehipwell you could argue that that shouldn't really be a use case for an RDS controller, mostly because it would mean the controller managing multiple, completely different APIs. I use a custom Postgres operator to do that: users declare what database/user they want created on which Postgres host, and the operator manages it. I'd generally recommend a split like that; I've been very happy with it.
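For illustration, the declare-a-database/user pattern described above reduces to a small reconcile step. This is a hedged sketch with invented names, not any particular operator's code; actually executing the statements (e.g. via psycopg2) is left as a comment since it needs a live server:

```python
# Minimal sketch of a Postgres operator's reconcile step: render the SQL
# for one declared database/user pair. Function and resource names are
# invented for illustration.
def desired_state_sql(db: str, user: str, password: str) -> list[str]:
    """Statements a reconcile loop would (re)apply for one declared DB/user."""
    return [
        f"CREATE ROLE {user} WITH LOGIN PASSWORD '{password}'",
        f"CREATE DATABASE {db} OWNER {user}",
        f"GRANT ALL PRIVILEGES ON DATABASE {db} TO {user}",
    ]

stmts = desired_state_sql("appdb", "appuser", "s3cret")
# A real operator would run these against the host and treat "already
# exists" errors as success, so repeated reconciles are no-ops; it would
# also source the password from a Secret rather than inline it.
print(stmts[1])
```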

@stevehipwell

@PatTheSilent I don't disagree; which operator are you using?

@PatTheSilent

@stevehipwell I wrote one myself using the Ansible Operator Framework https://sdk.operatorframework.io/docs/building-operators/ansible/tutorial/ (not affiliated with them :) ). It wasn't really hard to throw something usable together in a day or two.

@stevehipwell

@PatTheSilent I suspected you might say that. I've been looking for a good community Postgres operator for a while now, and since Terraform works well enough I haven't been able to prioritise writing my own. Crossplane and its provider-sql look like promising leads.

@PatTheSilent

@stevehipwell oh, I didn't know about that. I use Crossplane to manage my AWS stuff, so that would be a perfect fit (and less maintenance in the long run). Thanks!

@ack-bot
Collaborator

ack-bot commented Sep 23, 2021

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale

@ack-bot ack-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 23, 2021
@vijtrip2
Contributor

/remove-lifecycle stale

@ack-bot ack-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 23, 2021
@vijtrip2
Contributor

/lifecycle frozen

@ack-bot ack-bot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Sep 23, 2021
@gazal-k

gazal-k commented Aug 18, 2022

The RDS resources are looking pretty great. Are there any milestones to achieve before it becomes GA?

@jkatz
Contributor

jkatz commented Aug 18, 2022

@gazal-k Thanks for the feedback! You can track the roadmap for the RDS controller here: https://github.com/orgs/aws-controllers-k8s/projects/4/views/5

We are in the process of prepping for GA, but please feel free to test out the RDS controller and give us feedback! To help you get started, there are a few tutorials for managing RDS resources in the ACK documentation:

https://aws-controllers-k8s.github.io/community/docs/tutorials/rds-example/
https://aws-controllers-k8s.github.io/community/docs/tutorials/aurora-serverless-v2/

There is also a blog that walks through an end-to-end example of deploying an application with the RDS controller:

https://aws.amazon.com/blogs/database/deploy-amazon-rds-databases-for-applications-in-kubernetes/

@gazal-k

gazal-k commented Aug 18, 2022

Looks like it's pretty close 🙂. If I'm reading that right, there's just 1 in-progress item and 1 to-do.

We are just waiting on this to be GA to use it in our applications. Right now, cloud infra like RDS and S3 are provisioned separately for our k8s applications.

@mikestef9
Collaborator

Closing as this service controller has graduated to GA. Separate issues can be opened to discuss specific follow on topics on the controller.
