Merge remote-tracking branch 'origin/master' into njlynch/rds-standardize

njlynch committed Sep 18, 2020
2 parents affa058 + 80a2ac9 commit eccf34a
Showing 21 changed files with 1,389 additions and 120 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/yarn-upgrade.yml
@@ -48,11 +48,11 @@ jobs:
           # Upgrade dependencies at repository root
           ncu --upgrade --filter=@types/node,@types/fs-extra --target=minor
           ncu --upgrade --filter=typescript --target=patch
-          ncu --upgrade --reject=@types/node,@types/fs-extra,typescript
+          ncu --upgrade --reject=@types/node,@types/fs-extra,typescript --target=minor
           # Upgrade all the packages
           lerna exec --parallel ncu -- --upgrade --filter=@types/node,@types/fs-extra --target=minor
           lerna exec --parallel ncu -- --upgrade --filter=typescript --target=patch
-          lerna exec --parallel ncu -- --upgrade --reject='@types/node,@types/fs-extra,typescript,${{ steps.list-packages.outputs.list }}'
+          lerna exec --parallel ncu -- --upgrade --reject='@types/node,@types/fs-extra,typescript,${{ steps.list-packages.outputs.list }}' --target=minor
           # This will create a brand new `yarn.lock` file (this is more efficient than `yarn install && yarn upgrade`)
       - name: Run "yarn install --force"
         run: yarn install --force
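For context on the `--target` values used above: `--target=patch` lets `ncu` propose only patch-level bumps, while `--target=minor` also allows minor bumps but never a new major. A minimal sketch with the `semver` package (illustrative only; not part of the workflow) shows how version jumps are classified:

```ts
// Illustrative sketch, not part of the workflow above: the `semver`
// package classifies the size of a version jump, which is the
// distinction that ncu's --target flag keys on.
import * as semver from 'semver';

console.log(semver.diff('4.2.1', '4.2.5')); // 'patch' -> allowed by --target=patch and --target=minor
console.log(semver.diff('4.2.1', '4.3.0')); // 'minor' -> allowed by --target=minor only
console.log(semver.diff('4.2.1', '5.0.0')); // 'major' -> proposed by neither
```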
2 changes: 1 addition & 1 deletion README.md
@@ -134,7 +134,7 @@ You may also find help on these community resources:
   and tag it with `aws-cdk`
 * Come join the AWS CDK community on [Gitter](https://gitter.im/awslabs/aws-cdk)
 * Talk in the CDK channel of the [AWS Developers Slack workspace](https://awsdevelopers.slack.com) (invite required)
-* Check out the [partitions.io board](https://partitions.io/cdk)
+* A community-driven Slack channel is also available, invite at [cdk.dev](https://cdk.dev)

 ### Roadmap

18 changes: 10 additions & 8 deletions packages/@aws-cdk/aws-rds/README.md
@@ -268,20 +268,22 @@ const cpuUtilization = cluster.metricCPUUtilization();
 const readLatency = instance.metric('ReadLatency', { statistic: 'Average', periodSec: 60 });
 ```

-### Enabling S3 integration to a cluster (non-serverless Aurora only)
+### Enabling S3 integration

-Data in S3 buckets can be imported to and exported from Aurora databases using SQL queries. To enable this
+Data in S3 buckets can be imported to and exported from certain database engines using SQL queries. To enable this
 functionality, set the `s3ImportBuckets` and `s3ExportBuckets` properties for import and export respectively. When
 configured, the CDK automatically creates and configures IAM roles as required.
 Additionally, the `s3ImportRole` and `s3ExportRole` properties can be used to set this role directly.

-For Aurora MySQL, read more about [loading data from
-S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html) and [saving
-data into S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html).
+You can read more about loading data to (or from) S3 here:

-For Aurora PostgreSQL, read more about [loading data from
-S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html) and [saving
-data into S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html).
+* Aurora MySQL - [import](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html)
+  and [export](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html).
+* Aurora PostgreSQL - [import](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html)
+  and [export](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html).
+* Microsoft SQL Server - [import & export](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html)
+* PostgreSQL - [import](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html)
+* Oracle - [import & export](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html)

 The following snippet sets up a database cluster with different S3 buckets where the data is imported and exported -

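The snippet itself is collapsed in this diff view. A minimal sketch of what such a setup looks like, assuming the v1-era `DatabaseCluster` props referenced in this change (`s3ImportBuckets`/`s3ExportBuckets`); the stack, bucket, and construct names are placeholders:

```ts
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as s3 from '@aws-cdk/aws-s3';
import * as rds from '@aws-cdk/aws-rds';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'RdsS3IntegrationStack');
const vpc = new ec2.Vpc(stack, 'Vpc');

// Placeholder buckets -- any existing IBucket works here.
const importBucket = new s3.Bucket(stack, 'ImportBucket');
const exportBucket = new s3.Bucket(stack, 'ExportBucket');

new rds.DatabaseCluster(stack, 'Database', {
  engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
  masterUser: { username: 'admin' },
  instanceProps: { vpc },
  // With these set, the construct creates IAM roles assumable by
  // rds.amazonaws.com and grants them read (import) and
  // read/write (export) access on the buckets.
  s3ImportBuckets: [importBucket],
  s3ExportBuckets: [exportBucket],
});
```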
36 changes: 2 additions & 34 deletions packages/@aws-cdk/aws-rds/lib/cluster.ts
Expand Up @@ -10,7 +10,7 @@ import { DatabaseClusterAttributes, IDatabaseCluster } from './cluster-ref';
import { DatabaseSecret } from './database-secret';
import { Endpoint } from './endpoint';
import { IParameterGroup } from './parameter-group';
import { applyRemovalPolicy, defaultDeletionProtection } from './private/util';
import { applyRemovalPolicy, defaultDeletionProtection, setupS3ImportExport } from './private/util';
import { BackupProps, InstanceProps, Login, PerformanceInsightRetention, RotationMultiUserOptions } from './props';
import { DatabaseProxy, DatabaseProxyOptions, ProxyTarget } from './proxy';
import { CfnDBCluster, CfnDBClusterProps, CfnDBInstance, CfnDBSubnetGroup } from './rds.generated';
@@ -306,7 +306,7 @@ abstract class DatabaseClusterNew extends DatabaseClusterBase {
       }),
     ];

-    let { s3ImportRole, s3ExportRole } = this.setupS3ImportExport(props);
+    let { s3ImportRole, s3ExportRole } = setupS3ImportExport(this, props);
     // bind the engine to the Cluster
     const clusterEngineBindConfig = props.engine.bindToCluster(this, {
       s3ImportRole,
@@ -344,38 +344,6 @@
       enableCloudwatchLogsExports: props.cloudwatchLogsExports,
     };
   }
-
-  private setupS3ImportExport(props: DatabaseClusterBaseProps): { s3ImportRole?: IRole, s3ExportRole?: IRole } {
-    let s3ImportRole = props.s3ImportRole;
-    if (props.s3ImportBuckets && props.s3ImportBuckets.length > 0) {
-      if (props.s3ImportRole) {
-        throw new Error('Only one of s3ImportRole or s3ImportBuckets must be specified, not both.');
-      }
-
-      s3ImportRole = new Role(this, 'S3ImportRole', {
-        assumedBy: new ServicePrincipal('rds.amazonaws.com'),
-      });
-      for (const bucket of props.s3ImportBuckets) {
-        bucket.grantRead(s3ImportRole);
-      }
-    }
-
-    let s3ExportRole = props.s3ExportRole;
-    if (props.s3ExportBuckets && props.s3ExportBuckets.length > 0) {
-      if (props.s3ExportRole) {
-        throw new Error('Only one of s3ExportRole or s3ExportBuckets must be specified, not both.');
-      }
-
-      s3ExportRole = new Role(this, 'S3ExportRole', {
-        assumedBy: new ServicePrincipal('rds.amazonaws.com'),
-      });
-      for (const bucket of props.s3ExportBuckets) {
-        bucket.grantReadWrite(s3ExportRole);
-      }
-    }
-
-    return { s3ImportRole, s3ExportRole };
-  }
 }

 /**
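The new contents of `private/util.ts` are not shown in this view. Given the removed method above and the new call site `setupS3ImportExport(this, props)`, the extracted helper plausibly looks like the sketch below; the `S3ImportExportProps` interface name is hypothetical (the real helper may type against the cluster/instance props directly):

```ts
// Plausible shape of the extracted helper -- a sketch inferred from the
// removed private method and the new call site, not the commit's actual code.
import { IRole, Role, ServicePrincipal } from '@aws-cdk/aws-iam';
import { IBucket } from '@aws-cdk/aws-s3';
import { Construct } from '@aws-cdk/core';

// Hypothetical prop subset: just the four S3-related properties the helper reads.
interface S3ImportExportProps {
  readonly s3ImportRole?: IRole;
  readonly s3ImportBuckets?: IBucket[];
  readonly s3ExportRole?: IRole;
  readonly s3ExportBuckets?: IBucket[];
}

export function setupS3ImportExport(
  scope: Construct,
  props: S3ImportExportProps,
): { s3ImportRole?: IRole, s3ExportRole?: IRole } {
  let s3ImportRole = props.s3ImportRole;
  if (props.s3ImportBuckets && props.s3ImportBuckets.length > 0) {
    if (props.s3ImportRole) {
      throw new Error('Only one of s3ImportRole or s3ImportBuckets must be specified, not both.');
    }
    // RDS assumes this role when reading import data from the buckets.
    s3ImportRole = new Role(scope, 'S3ImportRole', {
      assumedBy: new ServicePrincipal('rds.amazonaws.com'),
    });
    for (const bucket of props.s3ImportBuckets) {
      bucket.grantRead(s3ImportRole);
    }
  }

  let s3ExportRole = props.s3ExportRole;
  if (props.s3ExportBuckets && props.s3ExportBuckets.length > 0) {
    if (props.s3ExportRole) {
      throw new Error('Only one of s3ExportRole or s3ExportBuckets must be specified, not both.');
    }
    // Export needs read-write so the engine can write result objects.
    s3ExportRole = new Role(scope, 'S3ExportRole', {
      assumedBy: new ServicePrincipal('rds.amazonaws.com'),
    });
    for (const bucket of props.s3ExportBuckets) {
      bucket.grantReadWrite(s3ExportRole);
    }
  }

  return { s3ImportRole, s3ExportRole };
}
```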