If you don't already have an AWS account, follow the Setup Your Environment getting started guide for a quick overview.
Before you start using AWS CloudFormation, you might need to know what IAM permissions you need, how to start logging AWS CloudFormation API calls, or what endpoints to use. Refer to this guide to get started using AWS CloudFormation.
Note: If you are just going to use the sample demo template you can skip this section.
The AWS Cloud Development Kit (CDK) is an open-source software development framework that lets you define your cloud infrastructure as code in one of its supported programming languages. It is intended for moderately to highly experienced AWS users. Refer to this guide to get started with AWS CDK.
A template is a JSON or YAML text file that contains the configuration information about the AWS resources you want to create in the stack. To learn more about how to work with CloudFormation templates, refer to the Working with templates guide.
You can either use the provided demo template and deploy it directly through the console, or customize the template’s resources before deployment using AWS CDK. Based on your decision, follow the respective section below.
If you use the sample JSON template provided under the `demo_templates` directory, you do not need to take any further action other than creating the stack by uploading it. You only need to make sure that your IoT device sends a payload similar to the one below once connected to the cloud:
```json
{
  "Location": "<string>",
  ... your other key-value pairs
}
```
For simplicity’s sake, sample code is provided that you can run on your device to send data to IoT Core. It simulates multiple devices sending their weather measurements. You can follow the guide under the `demo_templates` directory to learn how to get the sample code working. However, if you already have your own setup, you can simply use your own program; just make sure to send a JSON payload similar to the one above, and you are good to continue with the demo.
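As an illustration, here is a minimal Python sketch of building such a payload and publishing it to IoT Core, assuming boto3 credentials are configured; the topic name and measurement fields are hypothetical, not part of the demo:

```python
import json

def build_payload(location, **measurements):
    """Build a JSON payload in the shape the demo expects:
    a 'Location' string plus your other key-value pairs."""
    payload = {"Location": location}
    payload.update(measurements)
    return json.dumps(payload)

def publish_to_iot_core(topic, payload):
    """Publish the payload over MQTT via the IoT data plane."""
    # boto3 is imported lazily so build_payload stays usable offline.
    import boto3
    client = boto3.client("iot-data")
    client.publish(topic=topic, qos=1, payload=payload)

# Example: one weather measurement (field names are illustrative).
payload = build_payload("Seattle", temperature=21.5, humidity=48)
# publish_to_iot_core("weather/data", payload)  # hypothetical topic
```

The helper keeps payload construction separate from the publish call, so you can unit-test the JSON shape without touching AWS.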
Follow the steps below to create the CloudFormation stack using the sample template file.
- Sign in to the AWS Management Console and open the AWS CloudFormation console.
- If this is a new CloudFormation account, select Create New Stack. Otherwise, choose Create Stack and then select With new resources (standard).
- In the Template section, select Upload a template file and upload the JSON template file. Choose Next.
- In the Specify Details section, enter a stack name in the Name field.
- If you want, you can add tags to your stack. Otherwise, choose Next.
- Review the stack’s settings and then select Create.
- At this point, the status of your stack will be `CREATE_IN_PROGRESS`. Your stack might take several minutes to be created. See the next sections to learn about monitoring your stack creation.
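The console steps above can also be scripted. A minimal sketch assuming boto3, with hypothetical stack and file names:

```python
import json

def load_template_body(path):
    """Read the demo JSON template and fail fast if it is malformed."""
    with open(path) as f:
        body = f.read()
    json.loads(body)  # raises json.JSONDecodeError on malformed JSON
    return body

def create_stack(stack_name, template_path):
    """Create the stack and block until it leaves CREATE_IN_PROGRESS."""
    import boto3  # lazy import keeps load_template_body usable offline
    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=load_template_body(template_path),
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

# create_stack("timestream-demo", "demo_templates/template.json")  # hypothetical names
```

The waiter polls until the stack reaches `CREATE_COMPLETE`, or raises if creation fails.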
If you are interested in using the CloudFormation templates for more than just demo purposes, you need to customize the stack’s resources based on your specific use case. Follow the steps below to do so:
- Make sure that you have already set up your AWS CDK environment.
- Starting from your current directory, change to the `aws_cdk/TimestreamPattern` directory.
- To verify that everything is working correctly, list the stacks in your app by running the `cdk ls` command. If you don't see `TimestreamPatternStack`, make sure you are currently in the `TimestreamPattern` directory.
- The structure of the files inside `TimestreamPattern` is as follows:
  - `timestream_pattern_stack.py` is the main code of the stack. This is where the required resources are created.
  - `tests/unit/test_timestream_pattern_stack.py` is where the unit tests of the stack are written. The unit tests check:
    - correct creation of the resources and their properties
    - dependencies between the resources
    - correct error handling in case of input violations
  - `cdk.json` tells the CDK Toolkit how to execute your app. Context values are key-value pairs that can be associated with an app, stack, or construct. You can add context key-value pairs to this file or supply them on the command line before synthesizing the template.
  - `README.md` is where you can find detailed instructions on how to get started with the code, including how to synthesize the template, a set of useful commands, the stack’s context parameters, and details about the code.
  - `cdk.out` is where the synthesized template (in JSON format) is placed.
- Run `source .venv/bin/activate` to activate the app's Python virtual environment.
- Run `python -m pip install -r requirements.txt` to install the dependencies.
- Go through the `README.md` file to learn about the context parameters that need to be set prior to deployment.
- Set the context parameter values either by changing the `cdk.json` file or by using the command line.
  - To create a command line context variable, use the `--context` (`-c`) option, as shown in the following example: `cdk synth -c bucket_name=mybucket`
  - To specify the same context variable and value in the `cdk.json` file, use the following code: `{ "context": { "bucket_name": "mybucket" } }`
- Run `cdk synth` to emit the synthesized CloudFormation template.
- Run `python -m pytest` to run the unit tests. It is best practice to run the tests before deploying your template to the cloud.
- Run `cdk deploy` to deploy the stack to your default AWS account/region.
- Refer to the **Stack management** section below.
After deployment, you may need to monitor your created stack and its resources. To do this, your starting point should be the AWS CloudFormation console.
- Sign in to the AWS Management Console and open the AWS CloudFormation console.
- Choose the Stacks tab to view all the available stacks in your account.
- Find the stack that you just created and click on it.
- To verify that the stack was created successfully, check that its status is `CREATE_COMPLETE`. To learn more about what each status means, refer to stack status codes.
- You can view the stack’s general information, such as ID, status, policy, and rollback configuration, under the Stack info tab.
- If you click on the Events tab, each major step in the creation of the stack is displayed, sorted by the time of each event.
- You can also find the resources that are part of the stack under the Resources tab.
There is more information on viewing your CloudFormation stack information here.
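If you would rather poll the status from code, here is a small sketch assuming boto3; the stack name is hypothetical:

```python
def stack_ready(status):
    """Return True when a stack status means the last create/update
    finished successfully."""
    return status in ("CREATE_COMPLETE", "UPDATE_COMPLETE")

def current_status(stack_name):
    """Fetch the current status string for a stack."""
    import boto3  # lazy import keeps stack_ready usable offline
    cfn = boto3.client("cloudformation")
    stacks = cfn.describe_stacks(StackName=stack_name)["Stacks"]
    return stacks[0]["StackStatus"]

# if stack_ready(current_status("timestream-demo")):  # hypothetical name
#     print("stack is up")
```

Splitting the pure status check from the AWS call makes the readiness logic trivially testable.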
If you deploy and create the stack successfully, the following resources should be created under your stack. You can verify their creation by checking the Resources tab of your stack, as mentioned above.
| Resource | Type |
| --- | --- |
| CDKMetadata | AWS::CDK::Metadata |
| Timestream database | AWS::Timestream::Database |
| Timestream table | AWS::Timestream::Table |
| IAM role and policy that grants IoT access to Timestream | AWS::IAM::Role, AWS::IAM::Policy |
| IoT rule | AWS::IoT::TopicRule |
| CloudWatch log group to capture error logs | AWS::Logs::LogGroup |
| IAM role and policy that grants IoT access to CloudWatch | AWS::IAM::Role, AWS::IAM::Policy |
If CloudFormation fails to create, update, or delete your stack, you can go through the logs or error messages to learn more about the issue. There are some general methods for troubleshooting a CloudFormation issue. For example, you can follow the steps below to find the issue manually in the console.
- Check the status of your stack in the CloudFormation console.
- From the Events tab, you can see the set of events recorded while the last operation was being performed on your stack.
- Find the failure event from the set of events and then check the status reason of that event. The status reason usually gives a good understanding of the issue that caused the failure.
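The manual event inspection above can be sketched in Python, assuming boto3; the helper below simply filters DescribeStackEvents output for the oldest failed event, which is usually the root cause:

```python
def first_failure(events):
    """Given DescribeStackEvents entries (newest first), return the
    oldest failed event, or None if nothing failed."""
    failures = [
        e for e in events
        if e.get("ResourceStatus", "").endswith("FAILED")
    ]
    return failures[-1] if failures else None

def stack_failure(stack_name):
    """Fetch events for a stack and return its root-cause failure."""
    import boto3  # lazy import keeps first_failure usable offline
    cfn = boto3.client("cloudformation")
    events = cfn.describe_stack_events(StackName=stack_name)["StackEvents"]
    return first_failure(events)

# failure = stack_failure("timestream-demo")  # hypothetical name
# print(failure and failure.get("ResourceStatusReason"))
```

The `ResourceStatusReason` field of the returned event carries the same status reason you would read in the console.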
In case of failures in stack creation or updates, CloudFormation automatically performs a rollback. However, you can also add rollback triggers during stack creation or updating to further monitor the state of your application. With rollback triggers set up, if the application breaches the threshold of any alarm you've specified, CloudFormation rolls back the operation.
Finally, this troubleshooting guide is a helpful resource to refer to if there is an issue in your stack.
There is no additional charge for AWS CloudFormation. You pay for AWS resources created using CloudFormation as if you created them by hand. Refer to this guide to learn more about the stack cost estimation functionality.
Now that your stack and all the required resources are created and available, you can connect your device to the cloud and start sending your data.
- If you are new to AWS IoT Core, this guide is a great starting point to connect your device to the cloud.
- After connecting your device to IoT Core, you can use the MQTT test client to monitor the MQTT messages being passed in your AWS account.
- Go to the Rules page under the Message Routing section in the AWS IoT console. There you can verify the newly created topic rule and its Timestream rule action, which writes data received from your device to the Timestream database.
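You can also verify the rule from code. A small sketch assuming boto3's IoT client; the rule name is whatever your stack created:

```python
def timestream_actions(rule):
    """Pull the Timestream actions out of a GetTopicRule 'rule' body."""
    return [a["timestream"] for a in rule.get("actions", []) if "timestream" in a]

def rule_timestream_targets(rule_name):
    """Return the Timestream targets configured on a topic rule."""
    import boto3  # lazy import keeps timestream_actions usable offline
    iot = boto3.client("iot")
    return timestream_actions(iot.get_topic_rule(ruleName=rule_name)["rule"])

# rule_timestream_targets("TimestreamPatternRule")  # hypothetical rule name
```

Each returned action dict includes the `databaseName` and `tableName` the rule writes to, so you can confirm it matches the stack's Timestream resources.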
In the previous section, you verified that your device is connected to the cloud and is sending data to IoT Core. To view your data in the Timestream table, follow these steps:
- Open the Amazon Timestream console.
- From the navigation pane, choose Databases.
- Find the database that was just created by your stack and select it.
- Choose Tables, find the table that was created by your stack, and select it.
- Select Actions, then select Query table.
- In the query editor, run a query. For instance, to see the latest 10 rows in the table, run: `SELECT * FROM <database_name>.<table_name> ORDER BY time DESC LIMIT 10`
- Now you can see the result of your query in a table format.
- If you cannot see any data in the query editor, follow these steps:
- First, make sure that your device is connected to the cloud and is sending data by using the MQTT test client. More details about this are provided in the previous section.
- If your data is landing in IoT Core but Timestream is not receiving it, there might be an error occurring while the IoT rule attempts to send data from IoT Core to Timestream. To find the issue, you can use the CloudWatch log group that was created by the template earlier. To do so, open the CloudWatch console. From the navigation pane, select Logs > Log groups. Find the log group that was created by the stack and select it. Now you can view the error logs to find the issue.
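The same query can be issued programmatically. A minimal sketch assuming boto3's `timestream-query` client, with hypothetical database and table names:

```python
def rows_to_dicts(column_info, rows):
    """Flatten a Timestream query result into dicts keyed by column name."""
    names = [c["Name"] for c in column_info]
    return [
        {name: datum.get("ScalarValue") for name, datum in zip(names, row["Data"])}
        for row in rows
    ]

def latest_rows(database, table, limit=10):
    """Fetch the latest rows from a Timestream table."""
    import boto3  # lazy import keeps rows_to_dicts usable offline
    ts = boto3.client("timestream-query")
    result = ts.query(
        QueryString=f'SELECT * FROM "{database}"."{table}" '
                    f"ORDER BY time DESC LIMIT {limit}"
    )
    return rows_to_dicts(result["ColumnInfo"], result["Rows"])

# latest_rows("weather_db", "weather_table")  # hypothetical names
```

Note that Timestream returns every scalar as a string in `ScalarValue`; cast to float or int yourself if you need numeric types.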
In the previous section, you were able to see your device’s data in a table format under Timestream’s table query editor. You can take a further step to visualize your data and create dashboards. Here are several possible Timestream integrations with reporting dashboards:
Amazon Timestream provides direct integration with Amazon QuickSight. Amazon QuickSight is a fast business analytics service you can use to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. Amazon QuickSight is available in these regions.
To connect Amazon Timestream to QuickSight you need to follow these steps:
- Navigate to the AWS QuickSight console.
- If you have never used AWS QuickSight before, you will be asked to sign up. In this case, choose the Standard tier and the correct region during setup.
- During the signup phase, give QuickSight access to your Amazon Timestream.
- If you already have an account, give Amazon QuickSight access to your Timestream by choosing Admin > Manage QuickSight > Security & permissions. Under QuickSight access to AWS services, choose Add or remove, then select the check box next to Amazon Timestream and choose Update.
- From the admin Amazon QuickSight console page choose New Analysis and New data set.
- Choose Timestream as the source and enter a name for your data source.
- Choose your Timestream database and table to import, and then select Create data source.
- After your data source is created, you can start making visualizations in Amazon QuickSight.
You can follow this guide for a more detailed explanation of the above steps. Additionally, you can refer to this video tutorial to make QuickSight work with Timestream.
- If you have not already, install Grafana following these instructions.
- Grafana has default and custom configuration files. You can configure Grafana as explained here.
- Restart and sign in to Grafana following this guide.
- After signing into Grafana, in the side menu under the Configuration link, click on Data Sources.
- Click the **Add data source** button.
- Select Timestream in the Time series databases section.
You can follow this guide for more information on how to integrate Timestream with Grafana. Additionally, you can refer to this video tutorial to connect Timestream to Grafana.
With Amazon Managed Grafana, you can add Amazon Timestream as a data source by using the AWS data source configuration option in the Grafana workspace console. To get started, refer to Setting up to set up Amazon Managed Grafana, and then you can follow this guide to connect Timestream to your Amazon Managed Grafana workspace. Additionally, you can refer to the Using Amazon Managed Grafana to query and visualize data from Amazon Timestream video tutorial.
To clean up all the resources used in this demo, all you need to do is to delete the initial CloudFormation stack. To delete a stack and its resources, follow these steps:
- Open the AWS CloudFormation console.
- On the Stacks menu in the CloudFormation console, select the stack that you want to delete. (Note that the stack must be currently running.)
- In the stack details pane, choose Delete.
- Confirm the stack deletion when prompted.
After the stack is deleted, its status will be `DELETE_COMPLETE`. Stacks in the `DELETE_COMPLETE` state aren't displayed in the CloudFormation console by default. However, you can follow the instructions in Viewing deleted stacks on the AWS CloudFormation console to view them.
Finally, if the stack deletion failed, the stack will be in the `DELETE_FAILED` state. For solutions, see the Delete stack fails troubleshooting topic. In this case, make sure to refer to the Monitoring the generated resources section of this document to verify that all the resources were deleted successfully.
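If you prefer to script the cleanup, here is a minimal sketch assuming boto3; the stack name is hypothetical:

```python
def deletion_finished(status):
    """Return True once a stack delete has settled, successfully or not."""
    return status in ("DELETE_COMPLETE", "DELETE_FAILED")

def delete_stack(stack_name):
    """Delete the stack and block until deletion completes."""
    import boto3  # lazy import keeps deletion_finished usable offline
    cfn = boto3.client("cloudformation")
    cfn.delete_stack(StackName=stack_name)
    # The waiter raises if the stack ends up in DELETE_FAILED.
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)

# delete_stack("timestream-demo")  # hypothetical name
```

Because deleted stacks disappear from `describe_stacks` by name, pass the stack ID instead if you need to inspect a stack after deletion.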