Export existing AWS resources to Terraform style (tf, tfstate)
- Supported version
- Installation
- Prerequisites
- Usage
- Run as Docker container
- Development
- Contributing
- License
- Ruby 2.1 or higher is required
- Terraform v0.9.3 or higher is recommended
- Some resources (e.g. iam_instance_profile) use newer resource specifications
Add this line to your application's Gemfile:
gem 'terraforming'
And then execute:
$ bundle
Or install it yourself as:
$ gem install terraforming
You need to set AWS credentials.
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_REGION=xx-yyyy-0
You can also specify a credential profile in ~/.aws/credentials with the --profile option.
$ cat ~/.aws/credentials
[hoge]
aws_access_key_id = Hoge
aws_secret_access_key = FugaFuga
# Pass the profile name with the --profile option
$ terraforming s3 --profile hoge
You can assume a role by using the --assume option.
$ terraforming s3 --assume arn:aws:iam::123456789123:role/test-role
You can force the AWS SDK to use the CA certificate bundled with the SDK on systems where the default OpenSSL certificate is not installed (e.g. Windows) by passing the --use-bundled-cert option.
PS C:\> terraforming ec2 --use-bundled-cert
$ terraforming
Commands:
terraforming alb # ALB
terraforming asg # AutoScaling Group
terraforming cwa # CloudWatch Alarm
terraforming dbpg # Database Parameter Group
terraforming dbsg # Database Security Group
terraforming dbsn # Database Subnet Group
terraforming ec2 # EC2
terraforming ecc # ElastiCache Cluster
terraforming ecsn # ElastiCache Subnet Group
terraforming efs # EFS File System
terraforming eip # EIP
terraforming elb # ELB
terraforming help [COMMAND] # Describe available commands or one specific command
terraforming iamg # IAM Group
terraforming iamgm # IAM Group Membership
terraforming iamgp # IAM Group Policy
terraforming iamip # IAM Instance Profile
terraforming iamp # IAM Policy
terraforming iampa # IAM Policy Attachment
terraforming iamr # IAM Role
terraforming iamrp # IAM Role Policy
terraforming iamu # IAM User
terraforming iamup # IAM User Policy
terraforming igw # Internet Gateway
terraforming kmsa # KMS Key Alias
terraforming kmsk # KMS Key
terraforming lc # Launch Configuration
terraforming nacl # Network ACL
terraforming nat # NAT Gateway
terraforming nif # Network Interface
terraforming r53r # Route53 Record
terraforming r53z # Route53 Hosted Zone
terraforming rds # RDS
terraforming rs # Redshift
terraforming rt # Route Table
terraforming rta # Route Table Association
terraforming s3 # S3
terraforming sg # Security Group
terraforming sn # Subnet
terraforming snst # SNS Topic
terraforming snss # SNS Subscription
terraforming sqs # SQS
terraforming vgw # VPN Gateway
terraforming vpc # VPC
Options:
[--merge=MERGE] # tfstate file to merge
[--overwrite], [--no-overwrite] # Overwrite existing tfstate
[--tfstate], [--no-tfstate] # Generate tfstate
[--profile=PROFILE] # AWS credentials profile
[--region=REGION] # AWS region
[--use-bundled-cert], [--no-use-bundled-cert] # Use the bundled CA certificate from AWS SDK
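For example, to limit an export to a specific region without relying on the AWS_REGION environment variable, you can pass --region to any subcommand (the region name and output file below are only illustrative):
# region and file name are placeholders
$ terraforming sg --region us-east-1 > sg.tf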
$ terraforming <resource> [--profile PROFILE]
(e.g. S3 buckets):
$ terraforming s3
resource "aws_s3_bucket" "hoge" {
bucket = "hoge"
acl = "private"
}
resource "aws_s3_bucket" "fuga" {
bucket = "fuga"
acl = "private"
}
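The generated HCL is written to standard output, so you would normally redirect it to a .tf file of your choosing (the file name here is just an example):
$ terraforming s3 > s3.tf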
$ terraforming <resource> --tfstate [--merge TFSTATE_PATH] [--overwrite] [--profile PROFILE]
(e.g. S3 buckets):
$ terraforming s3 --tfstate
{
"version": 1,
"serial": 1,
"modules": {
"path": [
"root"
],
"outputs": {
},
"resources": {
"aws_s3_bucket.hoge": {
"type": "aws_s3_bucket",
"primary": {
"id": "hoge",
"attributes": {
"acl": "private",
"bucket": "hoge",
"id": "hoge"
}
}
},
"aws_s3_bucket.fuga": {
"type": "aws_s3_bucket",
"primary": {
"id": "fuga",
"attributes": {
"acl": "private",
"bucket": "fuga",
"id": "fuga"
}
}
}
}
}
]
}
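As with the tf output, the tfstate is printed to standard output, so you would typically redirect it to a file (file name illustrative):
$ terraforming s3 --tfstate > terraform.tfstate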
If you want to merge the exported tfstate into an existing terraform.tfstate, specify the --tfstate --merge=/path/to/terraform.tfstate options. You can overwrite the existing terraform.tfstate by also passing the --overwrite option.
Existing terraform.tfstate:
# /path/to/terraform.tfstate
{
"version": 1,
"serial": 88,
"remote": {
"type": "s3",
"config": {
"bucket": "terraforming-tfstate",
"key": "tf"
}
},
"modules": {
"path": [
"root"
],
"outputs": {
},
"resources": {
"aws_elb.hogehoge": {
"type": "aws_elb",
"primary": {
"id": "hogehoge",
"attributes": {
"availability_zones.#": "2",
"connection_draining": "true",
"connection_draining_timeout": "300",
"cross_zone_load_balancing": "true",
"dns_name": "hoge-12345678.ap-northeast-1.elb.amazonaws.com",
"health_check.#": "1",
"id": "hogehoge",
"idle_timeout": "60",
"instances.#": "1",
"listener.#": "1",
"name": "hoge",
"security_groups.#": "2",
"source_security_group": "default",
"subnets.#": "2"
}
}
}
}
}
]
}
To generate merged tfstate:
$ terraforming s3 --tfstate --merge=/path/to/tfstate
{
"version": 1,
"serial": 89,
"remote": {
"type": "s3",
"config": {
"bucket": "terraforming-tfstate",
"key": "tf"
}
},
"modules": {
"path": [
"root"
],
"outputs": {
},
"resources": {
"aws_elb.hogehoge": {
"type": "aws_elb",
"primary": {
"id": "hogehoge",
"attributes": {
"availability_zones.#": "2",
"connection_draining": "true",
"connection_draining_timeout": "300",
"cross_zone_load_balancing": "true",
"dns_name": "hoge-12345678.ap-northeast-1.elb.amazonaws.com",
"health_check.#": "1",
"id": "hogehoge",
"idle_timeout": "60",
"instances.#": "1",
"listener.#": "1",
"name": "hoge",
"security_groups.#": "2",
"source_security_group": "default",
"subnets.#": "2"
}
}
},
"aws_s3_bucket.hoge": {
"type": "aws_s3_bucket",
"primary": {
"id": "hoge",
"attributes": {
"acl": "private",
"bucket": "hoge",
"id": "hoge"
}
}
},
"aws_s3_bucket.fuga": {
"type": "aws_s3_bucket",
"primary": {
"id": "fuga",
"attributes": {
"acl": "private",
"bucket": "fuga",
"id": "fuga"
}
}
}
}
}
]
}
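To write the merged result back into the existing file instead of printing it, add the --overwrite option described above (a sketch; the path is illustrative):
$ terraforming s3 --tfstate --merge=/path/to/terraform.tfstate --overwrite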
After writing the exported tf and tfstate to files, execute terraform plan and check the result. There should be no diff.
$ terraform plan
No changes. Infrastructure is up-to-date. This means that Terraform
could not detect any differences between your configuration and
the real physical resources that exist. As a result, Terraform
doesn't need to do anything.
Example assuming you want to export everything from us-west-2 and you are using ~/.aws/credentials with a default profile:
export AWS_REGION=us-west-2
terraforming help | grep terraforming | grep -v help | awk '{print "terraforming", $2, "--profile", "default", ">", $2".tf";}' | bash
# find files that only have 1 empty line (likely nothing in AWS)
find . -type f -name '*.tf' | xargs wc -l | grep ' 1 .'
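The same bulk export can also be written as a plain loop, which may be easier to read or adapt; it makes the same assumptions as the one-liner above (a default profile and AWS_REGION already exported):
# iterate over every subcommand listed by "terraforming help", skipping the help command itself
for r in $(terraforming help | grep terraforming | grep -v help | awk '{print $2}'); do
  terraforming "$r" --profile default > "$r.tf"
done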
terraforming kmsk does not export keys with EXTERNAL origin, because Terraform does not support them.
Terraforming Docker Image is available at quay.io/dtan4/terraforming and developed at dtan4/dockerfile-terraforming.
Pull the Docker image:
$ docker pull quay.io/dtan4/terraforming:latest
And then run Terraforming as a Docker container:
$ docker run \
--rm \
--name terraforming \
-e AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX \
-e AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
-e AWS_REGION=xx-yyyy-0 \
quay.io/dtan4/terraforming:latest \
terraforming s3
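If you prefer not to pass credentials as environment variables, you could instead mount your credentials file into the container. This is only a sketch: it assumes the image runs as root, so the AWS SDK looks for credentials under /root/.aws (adjust the mount target if the image uses a different home directory):
$ docker run \
    --rm \
    --name terraforming \
    -v ~/.aws:/root/.aws:ro \
    -e AWS_REGION=xx-yyyy-0 \
    quay.io/dtan4/terraforming:latest \
    terraforming s3 --profile hoge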
After checking out the repo, run script/setup to install dependencies. Then, run script/console for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release to create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.
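In shell form, the development workflow described above is:
$ script/setup               # install dependencies
$ script/console             # interactive prompt for experimenting
$ bundle exec rake install   # install the gem onto your local machine
$ bundle exec rake release   # tag the version, push commits/tags, publish to rubygems.org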
Please read the Contribution Guide first.
- Fork it ( https://github.com/dtan4/terraforming/fork )
- Create your feature branch ( git checkout -b my-new-feature )
- Commit your changes ( git commit -am 'Add some feature' )
- Push to the branch ( git push origin my-new-feature )
- Create a new Pull Request