diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/.gitignore b/cloud_templates/aws_cdk/IoTAnalyticsPattern/.gitignore
new file mode 100644
index 0000000..3037faa
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/.gitignore
@@ -0,0 +1,9 @@
+*.swp
+__pycache__
+.pytest_cache
+.venv
+*.egg-info
+
+# CDK asset staging directory
+.cdk.staging
+cdk.out
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/README.md b/cloud_templates/aws_cdk/IoTAnalyticsPattern/README.md
new file mode 100644
index 0000000..c7e132a
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/README.md
@@ -0,0 +1,121 @@
+
+# IoT Data Visualization with AWS IoT Analytics
+Welcome to your CDK project!
+
+The `cdk.json` file tells the CDK Toolkit how to execute your app.
+
+This project is set up like a standard Python project. The initialization
+process also creates a virtualenv within this project, stored under the `.venv`
+directory. To create the virtualenv it assumes that there is a `python3`
+(or `python` for Windows) executable in your path with access to the `venv`
+package. If for any reason the automatic creation of the virtualenv fails,
+you can create the virtualenv manually.
+
+To manually create a virtualenv on MacOS and Linux:
+
+```
+$ python3 -m venv .venv
+```
+
+After the init process completes and the virtualenv is created, you can use the following
+step to activate your virtualenv.
+
+```
+$ source .venv/bin/activate
+```
+
+If you are on a Windows platform, you can activate the virtualenv like this:
+
+```
+% .venv\Scripts\activate.bat
+```
+
+Once the virtualenv is activated, you can install the required dependencies.
+
+```
+$ pip install -r requirements.txt
+```
+
+You can now synthesize the CloudFormation template for this code.
+
+```
+$ cdk synth
+```
+
+To add additional dependencies, for example other CDK libraries, just add
+them to your `requirements.txt` file and rerun the `pip install -r requirements.txt`
+command.
+
+## Useful commands
+
+ * `cdk ls` list all stacks in the app
+ * `cdk synth` emits the synthesized CloudFormation template
+ * `cdk deploy` deploy this stack to your default AWS account/region
+ * `cdk diff` compare deployed stack with current state
+ * `cdk docs` open CDK documentation
+
+## Context parameters
+There are multiple context parameters that you need to set before synthesizing or deploying this CDK stack. You can specify a context variable either as part of an AWS CDK CLI command, or in `cdk.json`.
+To create a command line context variable, use the __--context (-c) option__, as shown in the following example.
+
+```
+$ cdk synth -c bucket_name=mybucket
+```
+
+To specify the same context variable and value in the cdk.json file, use the following code.
+
+```
+{
+ "context": {
+ "bucket_name": "mybucket"
+ }
+}
+```
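Conceptually, values passed with `--context` on the command line take precedence over entries in `cdk.json`. A minimal sketch of that precedence (the `resolve_context` helper is illustrative, not part of the CDK API):

```python
# Hypothetical sketch of CDK context resolution: values passed with
# `--context` (-c) on the command line override entries in cdk.json.
def resolve_context(cdk_json_context: dict, cli_context: dict) -> dict:
    merged = dict(cdk_json_context)  # start from the cdk.json "context" block
    merged.update(cli_context)       # CLI -c key=value pairs win
    return merged

file_ctx = {"bucket_name": "mybucket", "topic_sql": "SELECT * FROM 'iot/topic'"}
cli_ctx = {"bucket_name": "override-bucket"}  # e.g. cdk synth -c bucket_name=override-bucket

ctx = resolve_context(file_ctx, cli_ctx)
```

Inside the stack, each resolved value is then read with `self.node.try_get_context("bucket_name")`.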
+
+In this project, the following parameters need to be set:
+
+* `topic_sql`
+
+Required. The SQL statement used by the IoT Core rule to filter messages received on an MQTT topic and push the data elsewhere.
+
+__Format__: Enter an SQL statement of the form ```SELECT ... FROM ... WHERE ...```. For example: ```SELECT temperature FROM 'iot/topic' WHERE temperature > 50```. To learn more, see the AWS IoT SQL Reference.
+
+* `analytics_channel_name`
+
+The name of the IoT Analytics channel that will be connected to IoT Core to receive your data.
+
+__Format__: Choose a unique name that you can easily identify. The channel name must contain 1-128 characters. Valid characters are a-z, A-Z, 0-9, and _ (underscore).
+
+* `analytics_datastore_name`
+
+The name of the IoT Analytics datastore that will be connected to IoT Core to store your data.
+
+__Format__: A unique ID that identifies your data store. You can't change this ID after you create it. Valid characters: a-z, A-Z, 0-9, and _ (underscore).
+
+* `analytics_dataset_name`
+
+The name of the IoT Analytics SQL dataset that will be connected to IoT Core. A SQL dataset is a materialized view of a data store.
+
+__Format__: Choose a unique name that you can easily identify. The dataset name must contain 1-128 characters. Valid characters are a-z, A-Z, 0-9, and _ (underscore).
+
+* `analytics_pipeline_name`
+
+The name of the IoT Analytics pipeline that will read messages from the channel and write the processed data to the datastore.
+
+__Format__: Valid characters: a-z, A-Z, 0-9, and _ (underscore).
+
+* `analytics_iot_rule_name`
+
+The name of the IoT Core rule that is going to be created.
+
+__Format__: An alphanumeric string that can also contain underscore (_) characters, but no spaces.
+
+* `analytics_iot_role_name`
+
+An IAM role is created to grant AWS IoT access to your endpoint. This parameter sets the name of that role.
+
+__Format__: Enter a unique role name that contains alphanumeric characters, hyphens, and underscores. A role name can't contain any spaces.
+
+* `channel_storage_type`
+
+Where channel data is stored. You may choose either ```serviceManagedS3``` or ```customerManagedS3``` storage. If not specified, the default is ```serviceManagedS3```. This can't be changed after creation of the channel.
+
+__Format__: Include either ```"channel_storage_type": "service_managed"``` or ```"channel_storage_type": "customer_managed"``` in the cdk.json file or on the command line.
+
+* `datastore_storage_type`
+
+Where the data in a data store is stored. You can choose ```serviceManagedS3``` storage, ```customerManagedS3``` storage, or ```iotSiteWiseMultiLayerStorage``` storage. The default is ```serviceManagedS3```. You can't change the choice of Amazon S3 storage after your data store is created. This version of the project does not support ```iotSiteWiseMultiLayerStorage```.
+
+__Format__: Include either ```"datastore_storage_type": "service_managed"``` or ```"datastore_storage_type": "customer_managed"``` in the cdk.json file or on the command line.
+
+* `file_format_configuration`
+
+Contains the configuration information of file formats. IoT Analytics data stores support JSON and Parquet. The default file format is JSON. You can specify only one format, and you can't change the file format after you create the data store.
+
+__Format__: Include either ```"file_format_configuration": "json"``` or ```"file_format_configuration": "parquet"``` in the cdk.json file or on the command line.
+
+* `parquet_file_format_schema_columns`
+
+Contains the schema configuration of the Parquet format. This parameter is used only if you have set ```file_format_configuration``` to ```"parquet"```.
+
+__Format__: The input should be a list of dictionaries. Each dictionary represents a single column and should have the format ```{"name": name, "type": type}```.
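As a rough illustration (not part of the stack), the expected shape of this parameter can be checked like this:

```python
# Sketch: validate that parquet_file_format_schema_columns is a list of
# {"name": ..., "type": ...} dictionaries, as the stack expects.
def valid_schema_columns(columns) -> bool:
    if not isinstance(columns, list) or not columns:
        return False
    return all(
        isinstance(col, dict)
        and set(col) == {"name", "type"}
        and all(isinstance(v, str) for v in col.values())
        for col in columns
    )

ok = valid_schema_columns([{"name": "device_id", "type": "string"}])
bad = valid_schema_columns([{"name": "device_id"}])  # missing "type" key
```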
+
+Enjoy!
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/app.py b/cloud_templates/aws_cdk/IoTAnalyticsPattern/app.py
new file mode 100644
index 0000000..6b54549
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/app.py
@@ -0,0 +1,27 @@
+import os
+
+import aws_cdk as cdk
+
+from io_t_analytics_pattern.io_t_analytics_pattern_stack import IoTAnalyticsPatternStack
+
+
+app = cdk.App()
+IoTAnalyticsPatternStack(app, "IoTAnalyticsPatternStack",
+ # If you don't specify 'env', this stack will be environment-agnostic.
+ # Account/Region-dependent features and context lookups will not work,
+ # but a single synthesized template can be deployed anywhere.
+
+ # Uncomment the next line to specialize this stack for the AWS Account
+ # and Region that are implied by the current CLI configuration.
+
+ #env=cdk.Environment(account=os.getenv('CDK_DEFAULT_ACCOUNT'), region=os.getenv('CDK_DEFAULT_REGION')),
+
+ # Uncomment the next line if you know exactly what Account and Region you
+ # want to deploy the stack to. */
+
+ #env=cdk.Environment(account='123456789012', region='us-east-1'),
+
+ # For more information, see https://docs.aws.amazon.com/cdk/latest/guide/environments.html
+ )
+
+app.synth()
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/cdk.json b/cloud_templates/aws_cdk/IoTAnalyticsPattern/cdk.json
new file mode 100644
index 0000000..c895bed
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/cdk.json
@@ -0,0 +1,49 @@
+{
+ "app": "python3 app.py",
+ "watch": {
+ "include": [
+ "**"
+ ],
+ "exclude": [
+ "README.md",
+ "cdk*.json",
+ "requirements*.txt",
+ "source.bat",
+ "**/__init__.py",
+ "python/__pycache__",
+ "tests"
+ ]
+ },
+ "context": {
+ "@aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true,
+ "@aws-cdk/core:stackRelativeExports": true,
+ "@aws-cdk/aws-rds:lowercaseDbIdentifier": true,
+ "@aws-cdk/aws-lambda:recognizeVersionProps": true,
+ "@aws-cdk/aws-lambda:recognizeLayerVersion": true,
+ "@aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021": true,
+ "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
+ "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
+ "@aws-cdk/core:checkSecretUsage": true,
+ "@aws-cdk/aws-iam:minimizePolicies": true,
+ "@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true,
+ "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
+ "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
+ "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
+ "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
+ "@aws-cdk/core:target-partitions": [
+ "aws",
+ "aws-cn"
+ ],
+ "topic_sql": "SELECT *, parse_time(\"YYYY-MM-dd'T'hh:mm:ss\", timestamp()) as Time FROM 'IoT_Analytics_demo'",
+ "analytics_channel_name": "demo_iot_channel",
+ "analytics_datastore_name": "demo_iot_datastore",
+ "analytics_dataset_name": "demo_iot_dataset",
+ "analytics_pipeline_name": "demo_iot_pipeline",
+ "analytics_iot_role_name": "demo_iot_iotanalytics_role",
+ "analytics_iot_rule_name": "demo_to_iotanalytics_rule",
+ "channel_storage_type": "service_managed",
+ "datastore_storage_type": "service_managed",
+ "file_format_configuration": "json",
+ "parquet_file_format_schema_columns" : [{"name": "device_id", "type": "string"}]
+ }
+}
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/io_t_analytics_pattern/__init__.py b/cloud_templates/aws_cdk/IoTAnalyticsPattern/io_t_analytics_pattern/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/io_t_analytics_pattern/io_t_analytics_pattern_stack.py b/cloud_templates/aws_cdk/IoTAnalyticsPattern/io_t_analytics_pattern/io_t_analytics_pattern_stack.py
new file mode 100644
index 0000000..4eaedec
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/io_t_analytics_pattern/io_t_analytics_pattern_stack.py
@@ -0,0 +1,289 @@
+from aws_cdk import (
+ Stack,
+ aws_iam as iam,
+ aws_iot as iot,
+ aws_iotanalytics as iotanalytics,
+ aws_s3 as s3,
+ aws_logs as logs
+)
+from constructs import Construct
+import aws_cdk as cdk
+import re
+import sys
+from enum import Enum
+
+sys.path.append('../')
+from common.inputValidation import *
+
+class StorageType(Enum):
+ SERVICE_MANAGED = "service_managed"
+ CUSTOMER_MANAGED = "customer_managed"
+
+class FileFormat(Enum):
+ JSON = "json"
+ PARQUET = "parquet"
+
+class IoTAnalyticsPatternStack(Stack):
+
+ # Defining the class variables
+ topic_sql = ""
+ analytics_channel_name = ""
+ analytics_datastore_name = ""
+ analytics_dataset_name = ""
+ analytics_pipeline_name = ""
+ analytics_iot_role_name = ""
+ analytics_iot_rule_name = ""
+ # By default, IoT analytics resources use service_managed storage and Json file format
+ channel_storage_type = StorageType.SERVICE_MANAGED
+ datastore_storage_type = StorageType.SERVICE_MANAGED
+ file_format_configuration = FileFormat.JSON
+
+ def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
+ super().__init__(scope, construct_id, **kwargs)
+
+ # Getting the context parameters
+
+ # Required context parameters
+ self.topic_sql = self.node.try_get_context("topic_sql")
+
+ # Optional context parameters
+ self.analytics_channel_name = self.node.try_get_context("analytics_channel_name")
+ self.analytics_datastore_name = self.node.try_get_context("analytics_datastore_name")
+ self.analytics_dataset_name = self.node.try_get_context("analytics_dataset_name")
+ self.analytics_pipeline_name = self.node.try_get_context("analytics_pipeline_name")
+ self.analytics_iot_role_name = self.node.try_get_context("analytics_iot_role_name")
+ self.analytics_iot_rule_name = self.node.try_get_context("analytics_iot_rule_name")
+
+ # Perform input validation
+ self.performInputValidation()
+
+ # Checking for advanced settings
+ self.checkAdvSettings()
+
+ # Getting the IoT Analytics Channel
+ analytics_channel = self.createChannel()
+
+ # Creating an IoT Analytics Datastore
+ analytics_datastore = self.createDataStore()
+
+ # Creating an IoT Analytics Dataset
+ analytics_dataset = iotanalytics.CfnDataset(self, self.analytics_dataset_name, actions=[iotanalytics.CfnDataset.ActionProperty(
+ action_name="QueryDatastoreCDK",
+ query_action=iotanalytics.CfnDataset.QueryActionProperty(
+ sql_query= f'''SELECT * FROM {analytics_datastore.datastore_name}'''
+ )
+ )])
+ analytics_dataset.node.add_dependency(analytics_datastore)
+ analytics_dataset.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+
+        # Creating an IoT Analytics Pipeline
+ analytics_pipeline = iotanalytics.CfnPipeline(self, self.analytics_pipeline_name, pipeline_name=self.analytics_pipeline_name, pipeline_activities=[
+ iotanalytics.CfnPipeline.ActivityProperty(
+ channel=iotanalytics.CfnPipeline.ChannelProperty(
+ channel_name=analytics_channel.channel_name,
+ name=analytics_channel.channel_name,
+ next=analytics_datastore.datastore_name
+ ),
+ datastore=iotanalytics.CfnPipeline.DatastoreProperty(
+ datastore_name=analytics_datastore.datastore_name,
+ name=analytics_datastore.datastore_name
+ )
+ )])
+ analytics_pipeline.node.add_dependency(analytics_datastore)
+ analytics_pipeline.node.add_dependency(analytics_channel)
+ analytics_pipeline.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+
+
+ # Creating the role for the IoT-Analytics rule
+ channel_arn = f"arn:aws:iotanalytics:{self.region}:{self.account}:channel/{analytics_channel.channel_name}"
+ iot_analytics_role = iam.Role(self, self.analytics_iot_role_name, assumed_by=iam.ServicePrincipal("iot.amazonaws.com"))
+ iot_analytics_role.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, resources=[channel_arn], actions=["iotanalytics:BatchPutMessage"]))
+ iot_analytics_role.node.add_dependency(analytics_channel)
+ iot_analytics_role.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+
+ # Creating a cloudwatch log group for topic rule's error action
+ log_group = logs.LogGroup(self, "iot_to_analytics_log_group" , log_group_name="iot_to_analytics_log_group", removal_policy=cdk.RemovalPolicy.DESTROY)
+
+ iot_to_cloudwatch_logs_role = iam.Role(self, "iot_to_analytics_log_group_role", assumed_by=iam.ServicePrincipal("iot.amazonaws.com"))
+ iot_to_cloudwatch_logs_role.add_to_policy(iam.PolicyStatement(
+ effect=iam.Effect.ALLOW, resources=[log_group.log_group_arn],
+ actions=["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:PutMetricFilter", "logs:PutRetentionPolicy"]))
+ iot_to_cloudwatch_logs_role.node.add_dependency(log_group)
+ iot_to_cloudwatch_logs_role.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+
+
+ # Creating the IoT Core Rule
+ topic_rule = iot.CfnTopicRule(self, self.analytics_iot_rule_name, topic_rule_payload=iot.CfnTopicRule.TopicRulePayloadProperty(
+ actions=[iot.CfnTopicRule.ActionProperty( iot_analytics=iot.CfnTopicRule.IotAnalyticsActionProperty(
+ channel_name=analytics_channel.channel_name,
+ role_arn=iot_analytics_role.role_arn,
+ )
+ )],
+
+ sql=self.topic_sql,
+ aws_iot_sql_version = '2016-03-23',
+ error_action= iot.CfnTopicRule.ActionProperty(
+ cloudwatch_logs=iot.CfnTopicRule.CloudwatchLogsActionProperty(
+ log_group_name=log_group.log_group_name,
+ role_arn=iot_to_cloudwatch_logs_role.role_arn
+ )
+ )))
+
+ topic_rule.node.add_dependency(analytics_channel)
+ topic_rule.node.add_dependency(iot_analytics_role)
+ topic_rule.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+
+
+ def performInputValidation(self):
+ self.validateTopicSQL(self.topic_sql)
+ self.validateAnalyticsChannelName(self.analytics_channel_name)
+ self.validateAnalyticsDatasetName(self.analytics_dataset_name)
+ self.validateAnalyticsDatastoreName(self.analytics_datastore_name)
+ self.validateAnalyticsPipelineName(self.analytics_pipeline_name)
+ self.validateRoleName(self.analytics_iot_role_name)
+ self.validateIoTRuleName(self.analytics_iot_rule_name)
+
+ def validateTopicSQL(self, sqlStatement):
+ if not sqlStatement:
+ raise NoSQL
+ elif type(sqlStatement) != str:
+ raise WrongFormattedInput("The input sql statement does not have a right format. Please refer to README.md for more information.")
+ return
+
+ def validateAnalyticsChannelName(self, channelName):
+ if not channelName:
+ self.analytics_channel_name = "demo_iot_channel"
+ else:
+ checkInputLength(self, 1, 128, channelName, "channel")
+ checkInputPattern(self, r'^[a-zA-Z0-9_]+$', channelName, "channel")
+
+ def validateAnalyticsDatasetName(self, datasetName):
+ if not datasetName:
+ self.analytics_dataset_name = "demo_iot_dataset"
+ else:
+ checkInputLength(self, 1, 128, datasetName, "dataset")
+ checkInputPattern(self, r'^[a-zA-Z0-9_]+$', datasetName, "dataset")
+
+ def validateAnalyticsDatastoreName(self, datastoreName):
+ if not datastoreName:
+ self.analytics_datastore_name = "demo_iot_datastore"
+ else:
+ checkInputPattern(self, r'^[a-zA-Z0-9_]+$', datastoreName, "datastore")
+
+ def validateAnalyticsPipelineName(self, pipelineName):
+ if not pipelineName:
+ self.analytics_pipeline_name = "demo_iot_pipeline"
+ else:
+ checkInputPattern(self, r'^[a-zA-Z0-9_]+$', pipelineName, "pipeline")
+
+ def validateRoleName(self, roleName):
+ if not roleName:
+ self.analytics_iot_role_name = "demo_iot_iotanalytics_role"
+ elif type(roleName) != str:
+ raise WrongFormattedInput("The provided input for the IAM role name is not of type string")
+ else:
+ checkInputLength(self, 1, 64, roleName, "IAM role")
+            checkInputPattern(self, r'^[a-zA-Z0-9+=,.@_-]+$', roleName, "IAM role")
+
+ def validateIoTRuleName(self, ruleName):
+ if not ruleName:
+ self.analytics_iot_rule_name = "demo_to_iotanalytics_rule"
+ elif type(ruleName) != str:
+ raise WrongFormattedInput("The provided input for topic rule name is not of type string")
+ else:
+ checkInputPattern(self, r'^[a-zA-Z0-9_]+$', ruleName, "IoT Rule")
+
+ def checkAdvSettings(self):
+ channel_storage_type = self.node.try_get_context("channel_storage_type")
+ datastore_storage_type = self.node.try_get_context("datastore_storage_type")
+ file_format = self.node.try_get_context("file_format_configuration")
+
+ if channel_storage_type == StorageType.CUSTOMER_MANAGED.value:
+ self.channel_storage_type = StorageType.CUSTOMER_MANAGED
+
+ if datastore_storage_type == StorageType.CUSTOMER_MANAGED.value:
+ self.datastore_storage_type = StorageType.CUSTOMER_MANAGED
+
+ if file_format == FileFormat.PARQUET.value:
+ self.file_format_configuration = FileFormat.PARQUET
+
+ def createChannel(self):
+
+        analytics_channel = None
+
+ if self.channel_storage_type == StorageType.SERVICE_MANAGED:
+ analytics_channel = iotanalytics.CfnChannel(self, self.analytics_channel_name, channel_name=self.analytics_channel_name)
+ analytics_channel.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+ elif self.channel_storage_type == StorageType.CUSTOMER_MANAGED:
+ # Creating a bucket for channel storage
+ channel_bucket = s3.Bucket(self, "iot-analytics-channel-storage", versioned=True, removal_policy=cdk.RemovalPolicy.DESTROY, auto_delete_objects=True)
+
+ # Creating an IAM Role to give iotanalytics access to the bucket
+ channel_storage_bucket_role = iam.Role(self, "iot_analytics_channel_storage_bucket_role", assumed_by=iam.ServicePrincipal("iotanalytics.amazonaws.com"))
+ channel_storage_bucket_role.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, resources=[channel_bucket.bucket_arn, channel_bucket.bucket_arn + "/*"],
+ actions=["s3:GetBucketLocation","s3:GetObject", "s3:ListBucket", "s3:PutObject"]))
+ channel_storage_bucket_role.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+
+ analytics_channel = iotanalytics.CfnChannel(self, self.analytics_channel_name, channel_name=self.analytics_channel_name,
+ channel_storage = iotanalytics.CfnChannel.ChannelStorageProperty(
+ customer_managed_s3 = iotanalytics.CfnChannel.CustomerManagedS3Property(
+ bucket = channel_bucket.bucket_name,
+ role_arn= channel_storage_bucket_role.role_arn
+ )
+ ))
+ analytics_channel.node.add_dependency(channel_bucket)
+ analytics_channel.node.add_dependency(channel_storage_bucket_role)
+ analytics_channel.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+ else:
+            raise Exception("An error occurred while getting the channel's storage type.")
+
+ return analytics_channel
+
+ def createDataStore(self):
+        analytics_datastore = None
+
+ if self.file_format_configuration == FileFormat.JSON:
+ file_format_config = iotanalytics.CfnDatastore.FileFormatConfigurationProperty(
+ json_configuration={}
+ )
+
+ if self.file_format_configuration == FileFormat.PARQUET:
+ file_format_config = iotanalytics.CfnDatastore.FileFormatConfigurationProperty(
+ parquet_configuration=iotanalytics.CfnDatastore.ParquetConfigurationProperty(
+ schema_definition=iotanalytics.CfnDatastore.SchemaDefinitionProperty(
+ columns=[iotanalytics.CfnDatastore.ColumnProperty(name=column["name"],type=column["type"]) for column in self.node.try_get_context("parquet_file_format_schema_columns")]
+ )
+ ))
+
+ if self.datastore_storage_type == StorageType.SERVICE_MANAGED:
+            analytics_datastore = iotanalytics.CfnDatastore(self, self.analytics_datastore_name, datastore_name=self.analytics_datastore_name,
+ datastore_storage=iotanalytics.CfnDatastore.DatastoreStorageProperty(
+ service_managed_s3={}
+ ))
+ analytics_datastore.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+
+ elif self.datastore_storage_type == StorageType.CUSTOMER_MANAGED:
+ # Creating a bucket for channel storage
+ datastore_bucket = s3.Bucket(self, "iot-analytics-datastore-storage", versioned=True, removal_policy=cdk.RemovalPolicy.DESTROY, auto_delete_objects=True)
+
+ # Creating an IAM Role to give iotanalytics access to the bucket
+ datastore_storage_bucket_role = iam.Role(self, "iot_analytics_datastore_storage_bucket_role", assumed_by=iam.ServicePrincipal("iotanalytics.amazonaws.com"))
+ datastore_storage_bucket_role.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, resources=[datastore_bucket.bucket_arn, datastore_bucket.bucket_arn + "/*"],
+ actions=["s3:GetBucketLocation","s3:GetObject", "s3:ListBucket", "s3:PutObject"]))
+ datastore_storage_bucket_role.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+
+ analytics_datastore = iotanalytics.CfnDatastore(self, self.analytics_datastore_name, datastore_name=self.analytics_datastore_name,
+ datastore_storage = iotanalytics.CfnDatastore.DatastoreStorageProperty(
+ customer_managed_s3=iotanalytics.CfnDatastore.CustomerManagedS3Property(
+ bucket=datastore_bucket.bucket_name,
+ role_arn=datastore_storage_bucket_role.role_arn
+ )),
+ file_format_configuration=file_format_config
+ )
+ analytics_datastore.apply_removal_policy(policy=cdk.RemovalPolicy.DESTROY)
+ analytics_datastore.node.add_dependency(datastore_bucket)
+ analytics_datastore.node.add_dependency(datastore_storage_bucket_role)
+ else:
+            raise Exception("An error occurred while getting the datastore's storage type.")
+
+ return analytics_datastore
\ No newline at end of file
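The validation helpers imported from `common.inputValidation` are outside this diff. A minimal sketch of what the stack assumes they do (the names match the imports; the bodies and exact messages are illustrative):

```python
import re

# Illustrative stand-ins for the helpers this stack imports from
# common.inputValidation; the real module is not part of this diff.
class WrongFormattedInput(Exception):
    pass

def checkInputLength(stack, min_len, max_len, value, resource_name):
    # Reject names whose length falls outside [min_len, max_len].
    if not (min_len <= len(value) <= max_len):
        raise WrongFormattedInput(
            f"Invalid input length for the {resource_name} name: "
            f"expected {min_len}-{max_len} characters, got {len(value)}.")

def checkInputPattern(stack, pattern, value, resource_name):
    # Reject names that do not match the allowed character pattern.
    if not re.match(pattern, value):
        raise WrongFormattedInput(
            f"Invalid input pattern for the {resource_name} name.")

# Valid defaults from cdk.json pass both checks without raising.
checkInputLength(None, 1, 128, "demo_iot_channel", "channel")
checkInputPattern(None, r'^[a-zA-Z0-9_]+$', "demo_iot_channel", "channel")
```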
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/requirements-dev.txt b/cloud_templates/aws_cdk/IoTAnalyticsPattern/requirements-dev.txt
new file mode 100644
index 0000000..9270945
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/requirements-dev.txt
@@ -0,0 +1 @@
+pytest==6.2.5
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/requirements.txt b/cloud_templates/aws_cdk/IoTAnalyticsPattern/requirements.txt
new file mode 100644
index 0000000..0822bbe
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/requirements.txt
@@ -0,0 +1,2 @@
+aws-cdk-lib==2.37.1
+constructs>=10.0.0,<11.0.0
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/source.bat b/cloud_templates/aws_cdk/IoTAnalyticsPattern/source.bat
new file mode 100644
index 0000000..9e1a834
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/source.bat
@@ -0,0 +1,13 @@
+@echo off
+
+rem The sole purpose of this script is to make the command
+rem
+rem source .venv/bin/activate
+rem
+rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows.
+rem On Windows, this command just runs this batch file (the argument is ignored).
+rem
+rem Now we don't need to document a Windows command for activating a virtualenv.
+
+echo Executing .venv\Scripts\activate.bat for you
+.venv\Scripts\activate.bat
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/tests/__init__.py b/cloud_templates/aws_cdk/IoTAnalyticsPattern/tests/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/tests/unit/__init__.py b/cloud_templates/aws_cdk/IoTAnalyticsPattern/tests/unit/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/cloud_templates/aws_cdk/IoTAnalyticsPattern/tests/unit/test_io_t_analytics_pattern_stack.py b/cloud_templates/aws_cdk/IoTAnalyticsPattern/tests/unit/test_io_t_analytics_pattern_stack.py
new file mode 100644
index 0000000..ee57912
--- /dev/null
+++ b/cloud_templates/aws_cdk/IoTAnalyticsPattern/tests/unit/test_io_t_analytics_pattern_stack.py
@@ -0,0 +1,344 @@
+import aws_cdk as core
+import aws_cdk.assertions as assertions
+from aws_cdk.assertions import Match
+import pytest
+
+from io_t_analytics_pattern.io_t_analytics_pattern_stack import IoTAnalyticsPatternStack
+
+app = core.App(context= {"topic_sql": "SELECT temperature, pressure, humidity FROM 'EL-analytics-test'",
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"})
+
+stack = IoTAnalyticsPatternStack(app, "io-t-analytics-pattern")
+template = assertions.Template.from_stack(stack)
+
+# Defining Capture objects for obtaining values in tests
+iam_policy_ref = assertions.Capture()
+iam_role_ref = assertions.Capture()
+iot_datastore_ref = assertions.Capture()
+iot_channel_ref = assertions.Capture()
+
+# Testing the resources' creation and properties
+
+def test_analytics_channel_created():
+ template.has_resource("AWS::IoTAnalytics::Channel", {"DeletionPolicy":"Delete", "UpdateReplacePolicy":"Delete"})
+ template.resource_count_is("AWS::IoTAnalytics::Channel", 1)
+
+def test_analytics_channel_properties():
+ template.has_resource_properties("AWS::IoTAnalytics::Channel", {
+ "ChannelName": app.node.try_get_context("analytics_channel_name")
+ })
+
+def test_analytics_datastore_created():
+ template.has_resource("AWS::IoTAnalytics::Datastore", {"DeletionPolicy":"Delete", "UpdateReplacePolicy":"Delete"})
+ template.resource_count_is("AWS::IoTAnalytics::Datastore", 1)
+
+def test_analytics_datastore_properties():
+ template.has_resource_properties("AWS::IoTAnalytics::Datastore", {
+ "DatastoreName": app.node.try_get_context("analytics_datastore_name"),
+ "DatastoreStorage": {"ServiceManagedS3": {}}
+ })
+
+def test_analytics_dataset_created():
+ template.has_resource("AWS::IoTAnalytics::Dataset", {"DeletionPolicy":"Delete", "UpdateReplacePolicy":"Delete"})
+ template.resource_count_is("AWS::IoTAnalytics::Dataset", 1)
+
+def test_analytics_dataset_properties():
+ template.has_resource_properties("AWS::IoTAnalytics::Dataset", {
+ "Actions": [{
+ "ActionName": "QueryDatastoreCDK",
+ "QueryAction": {
+ "SqlQuery": f'SELECT * FROM {app.node.try_get_context("analytics_datastore_name")}'
+ }
+ }]
+ })
+
+def test_analytics_pipeline_created():
+ template.has_resource("AWS::IoTAnalytics::Pipeline", {"DeletionPolicy":"Delete", "UpdateReplacePolicy":"Delete"})
+ template.resource_count_is("AWS::IoTAnalytics::Pipeline", 1)
+
+def test_analytics_pipeline_properties():
+ template.has_resource_properties("AWS::IoTAnalytics::Pipeline", {
+ "PipelineActivities": [{
+ "Channel": {
+ "ChannelName": app.node.try_get_context("analytics_channel_name"),
+ "Name": app.node.try_get_context("analytics_channel_name"),
+ "Next": app.node.try_get_context("analytics_datastore_name")
+ },
+ "Datastore": {
+ "DatastoreName": app.node.try_get_context("analytics_datastore_name"),
+ "Name": app.node.try_get_context("analytics_datastore_name")
+ }
+ }],
+ "PipelineName": app.node.try_get_context("analytics_pipeline_name")
+ })
+
+def test_iam_role_properties():
+ template.has_resource_properties("AWS::IAM::Role", {
+ "AssumeRolePolicyDocument": {
+ "Statement": [{
+ "Action": "sts:AssumeRole",
+ "Effect": "Allow",
+ "Principal": {
+ "Service": "iot.amazonaws.com"
+ }
+ }],
+ "Version": Match.any_value()
+ }
+ })
+
+def test_iam_policy_properties():
+ template.has_resource_properties("AWS::IAM::Policy", {
+ "PolicyDocument": {
+ "Statement": [
+ {
+ "Action": "iotanalytics:BatchPutMessage",
+ "Effect": "Allow",
+ "Resource": {
+ "Fn::Join": [
+ "",
+ [
+ "arn:aws:iotanalytics:",
+ {
+ "Ref": "AWS::Region"
+ },
+ ":",
+ {
+ "Ref": "AWS::AccountId"
+ },
+ f":channel/{app.node.try_get_context('analytics_channel_name')}"
+ ]
+ ]
+ }
+ }
+ ],
+ "Version": Match.any_value()
+ },
+ "PolicyName": iam_policy_ref,
+ "Roles": [
+ {
+ "Ref": iam_role_ref
+ }
+ ]
+ })
+
+def test_iot_topic_rule_created():
+ template.has_resource("AWS::IoT::TopicRule", {"DeletionPolicy":"Delete", "UpdateReplacePolicy":"Delete"})
+ template.resource_count_is("AWS::IoT::TopicRule", 1)
+
+def test_iot_topic_rule_properties():
+ template.has_resource_properties("AWS::IoT::TopicRule", {
+ "TopicRulePayload": {
+ "Actions": [{
+ "IotAnalytics": {
+ "ChannelName": app.node.try_get_context("analytics_channel_name"),
+ "RoleArn": {
+ "Fn::GetAtt": [
+ iam_role_ref.as_string(),
+ "Arn"
+ ]
+ }
+ }
+ }],
+ "Sql": app.node.try_get_context("topic_sql")
+ }
+ })
+
+# Testing dependencies between the resources
+
+def test_dataset_dependencies():
+ template.has_resource("AWS::IoTAnalytics::Dataset", {
+ "DependsOn": [
+ iot_datastore_ref
+ ]
+ })
+
+def test_pipeline_dependencies():
+ template.has_resource("AWS::IoTAnalytics::Pipeline", {
+ "DependsOn": [
+ iot_channel_ref,
+ iot_datastore_ref.as_string()
+ ]
+ })
+
+def test_iam_role_dependencies():
+ template.has_resource("AWS::IAM::Role", {
+ "DependsOn": [
+ iot_channel_ref.as_string()
+ ]
+ })
+
+def test_iam_policy_dependencies():
+ template.has_resource("AWS::IAM::Policy", {
+ "DependsOn": [
+ iot_channel_ref.as_string()
+ ]
+ })
+
+def test_topic_rule_dependencies():
+ template.has_resource("AWS::IoT::TopicRule", {
+ "DependsOn": [
+ iam_policy_ref.as_string(),
+ iam_role_ref.as_string(),
+ iot_channel_ref.as_string()
+ ]
+ })
+
+# Testing input validations
+
+def test_no_sql():
+ test_app = core.App(context= {
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"No sql statemtnt .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_sql_format():
+ test_app = core.App(context= {
+ "topic_sql": ["SELECT * FROM 'topic"],
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"The input sql statement does not have a right format.*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_format_channel_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "cdk-iot-channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input pattern .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_length_channel_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "x" * 129,
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input length .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_format_dataset_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk-iot-dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input pattern .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_length_dataset_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "x" * 129,
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input length .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_format_datastore_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk-iot-datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input pattern .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_format_pipeline_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk-iot-pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input pattern .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_format_role_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_!analytics_role",
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input pattern .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_length_role_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "x" * 65,
+ "analytics_iot_rule_name": "cdk_to_analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input length .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
+
+def test_wrong_format_rule_name():
+ test_app = core.App(context= {
+ "topic_sql": "SELECT * FROM 'topic'",
+ "analytics_channel_name": "cdk_iot_channel",
+ "analytics_datastore_name": "cdk_iot_datastore",
+ "analytics_dataset_name": "cdk_iot_dataset",
+ "analytics_pipeline_name": "cdk_iot_pipeline",
+ "analytics_iot_role_name": "cdk_iot_analytics_role",
+ "analytics_iot_rule_name": "cdk to analytics_rule"
+ })
+ with pytest.raises(Exception, match=r"Invalid input pattern .*"):
+ stack = IoTAnalyticsPatternStack(test_app, "io-t-analytics-pattern")
+ template = assertions.Template.from_stack(stack)
\ No newline at end of file
diff --git a/cloud_templates/demo/demo_templates/iotanalytics_pattern.json b/cloud_templates/demo/demo_templates/iotanalytics_pattern.json
new file mode 100644
index 0000000..70ddcf7
--- /dev/null
+++ b/cloud_templates/demo/demo_templates/iotanalytics_pattern.json
@@ -0,0 +1,504 @@
+{
+ "Resources": {
+ "demoiotchannel": {
+ "Type": "AWS::IoTAnalytics::Channel",
+ "Properties": {
+ "ChannelName": "demo_iot_channel"
+ },
+ "UpdateReplacePolicy": "Delete",
+ "DeletionPolicy": "Delete",
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/demo_iot_channel"
+ }
+ },
+ "demoiotdatastore": {
+ "Type": "AWS::IoTAnalytics::Datastore",
+ "Properties": {
+ "DatastoreName": "demo_iot_datastore",
+ "DatastoreStorage": {
+ "ServiceManagedS3": {}
+ }
+ },
+ "UpdateReplacePolicy": "Delete",
+ "DeletionPolicy": "Delete",
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/demo_iot_datastore"
+ }
+ },
+ "demoiotdataset": {
+ "Type": "AWS::IoTAnalytics::Dataset",
+ "Properties": {
+ "Actions": [
+ {
+ "ActionName": "QueryDatastoreCDK",
+ "QueryAction": {
+ "SqlQuery": "SELECT * FROM demo_iot_datastore"
+ }
+ }
+ ]
+ },
+ "DependsOn": [
+ "demoiotdatastore"
+ ],
+ "UpdateReplacePolicy": "Delete",
+ "DeletionPolicy": "Delete",
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/demo_iot_dataset"
+ }
+ },
+ "demoiotpipeline": {
+ "Type": "AWS::IoTAnalytics::Pipeline",
+ "Properties": {
+ "PipelineActivities": [
+ {
+ "Channel": {
+ "ChannelName": "demo_iot_channel",
+ "Name": "demo_iot_channel",
+ "Next": "demo_iot_datastore"
+ },
+ "Datastore": {
+ "DatastoreName": "demo_iot_datastore",
+ "Name": "demo_iot_datastore"
+ }
+ }
+ ],
+ "PipelineName": "demo_iot_pipeline"
+ },
+ "DependsOn": [
+ "demoiotchannel",
+ "demoiotdatastore"
+ ],
+ "UpdateReplacePolicy": "Delete",
+ "DeletionPolicy": "Delete",
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/demo_iot_pipeline"
+ }
+ },
+ "demoiotiotanalyticsroleB60804E3": {
+ "Type": "AWS::IAM::Role",
+ "Properties": {
+ "AssumeRolePolicyDocument": {
+ "Statement": [
+ {
+ "Action": "sts:AssumeRole",
+ "Effect": "Allow",
+ "Principal": {
+ "Service": "iot.amazonaws.com"
+ }
+ }
+ ],
+ "Version": "2012-10-17"
+ }
+ },
+ "DependsOn": [
+ "demoiotchannel"
+ ],
+ "UpdateReplacePolicy": "Delete",
+ "DeletionPolicy": "Delete",
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/demo_iot_iotanalytics_role/Resource"
+ }
+ },
+ "demoiotiotanalyticsroleDefaultPolicy5B5688F1": {
+ "Type": "AWS::IAM::Policy",
+ "Properties": {
+ "PolicyDocument": {
+ "Statement": [
+ {
+ "Action": "iotanalytics:BatchPutMessage",
+ "Effect": "Allow",
+ "Resource": {
+ "Fn::Join": [
+ "",
+ [
+ "arn:aws:iotanalytics:",
+ {
+ "Ref": "AWS::Region"
+ },
+ ":",
+ {
+ "Ref": "AWS::AccountId"
+ },
+ ":channel/demo_iot_channel"
+ ]
+ ]
+ }
+ }
+ ],
+ "Version": "2012-10-17"
+ },
+ "PolicyName": "demoiotiotanalyticsroleDefaultPolicy5B5688F1",
+ "Roles": [
+ {
+ "Ref": "demoiotiotanalyticsroleB60804E3"
+ }
+ ]
+ },
+ "DependsOn": [
+ "demoiotchannel"
+ ],
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/demo_iot_iotanalytics_role/DefaultPolicy/Resource"
+ }
+ },
+ "iottoanalyticsloggroup24354EF0": {
+ "Type": "AWS::Logs::LogGroup",
+ "Properties": {
+ "LogGroupName": "iot_to_analytics_log_group",
+ "RetentionInDays": 731
+ },
+ "UpdateReplacePolicy": "Delete",
+ "DeletionPolicy": "Delete",
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/iot_to_analytics_log_group/Resource"
+ }
+ },
+ "iottoanalyticsloggrouprole85868C50": {
+ "Type": "AWS::IAM::Role",
+ "Properties": {
+ "AssumeRolePolicyDocument": {
+ "Statement": [
+ {
+ "Action": "sts:AssumeRole",
+ "Effect": "Allow",
+ "Principal": {
+ "Service": "iot.amazonaws.com"
+ }
+ }
+ ],
+ "Version": "2012-10-17"
+ }
+ },
+ "DependsOn": [
+ "iottoanalyticsloggroup24354EF0"
+ ],
+ "UpdateReplacePolicy": "Delete",
+ "DeletionPolicy": "Delete",
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/iot_to_analytics_log_group_role/Resource"
+ }
+ },
+ "iottoanalyticsloggrouproleDefaultPolicy9740758A": {
+ "Type": "AWS::IAM::Policy",
+ "Properties": {
+ "PolicyDocument": {
+ "Statement": [
+ {
+ "Action": [
+ "logs:CreateLogGroup",
+ "logs:CreateLogStream",
+ "logs:PutLogEvents",
+ "logs:PutMetricFilter",
+ "logs:PutRetentionPolicy"
+ ],
+ "Effect": "Allow",
+ "Resource": {
+ "Fn::GetAtt": [
+ "iottoanalyticsloggroup24354EF0",
+ "Arn"
+ ]
+ }
+ }
+ ],
+ "Version": "2012-10-17"
+ },
+ "PolicyName": "iottoanalyticsloggrouproleDefaultPolicy9740758A",
+ "Roles": [
+ {
+ "Ref": "iottoanalyticsloggrouprole85868C50"
+ }
+ ]
+ },
+ "DependsOn": [
+ "iottoanalyticsloggroup24354EF0"
+ ],
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/iot_to_analytics_log_group_role/DefaultPolicy/Resource"
+ }
+ },
+ "demotoiotanalyticsrule": {
+ "Type": "AWS::IoT::TopicRule",
+ "Properties": {
+ "TopicRulePayload": {
+ "Actions": [
+ {
+ "IotAnalytics": {
+ "ChannelName": "demo_iot_channel",
+ "RoleArn": {
+ "Fn::GetAtt": [
+ "demoiotiotanalyticsroleB60804E3",
+ "Arn"
+ ]
+ }
+ }
+ }
+ ],
+ "AwsIotSqlVersion": "2016-03-23",
+ "ErrorAction": {
+ "CloudwatchLogs": {
+ "LogGroupName": {
+ "Ref": "iottoanalyticsloggroup24354EF0"
+ },
+ "RoleArn": {
+ "Fn::GetAtt": [
+ "iottoanalyticsloggrouprole85868C50",
+ "Arn"
+ ]
+ }
+ }
+ },
+ "Sql": "SELECT *, parse_time(\"YYYY-MM-dd'T'hh:mm:ss\", timestamp()) as Time FROM 'IoT_Analytics_demo'"
+ }
+ },
+ "DependsOn": [
+ "demoiotchannel",
+ "demoiotiotanalyticsroleDefaultPolicy5B5688F1",
+ "demoiotiotanalyticsroleB60804E3"
+ ],
+ "UpdateReplacePolicy": "Delete",
+ "DeletionPolicy": "Delete",
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/demo_to_iotanalytics_rule"
+ }
+ },
+ "CDKMetadata": {
+ "Type": "AWS::CDK::Metadata",
+ "Properties": {
+ "Analytics": "v2:deflate64:H4sIAAAAAAAA/z2MwQ6CMAyGn4X7qCgHHwATLx4IejdzTKiMlrASQwjv7pjRU7/+/fsdIM8hS/Tbp6buUocPWK6iTadCdF+QRZN2s6DxsBRPKlpNZJ0KeNKivfBo/4uVDUscrEOyq0Ldw1Kxi404S3Zo5tiKtCrHTTBfuDmPPA3b5cfhnwVCcOMBTTW5YNw81vM0mugsmGoUZFpVOUvLtMvhCPsseXnEdJxIsLdQfecHMGzhi+sAAAA="
+ },
+ "Metadata": {
+ "aws:cdk:path": "IoTAnalyticsPatternStack/CDKMetadata/Default"
+ },
+ "Condition": "CDKMetadataAvailable"
+ }
+ },
+ "Conditions": {
+ "CDKMetadataAvailable": {
+ "Fn::Or": [
+ {
+ "Fn::Or": [
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "af-south-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "ap-east-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "ap-northeast-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "ap-northeast-2"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "ap-south-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "ap-southeast-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "ap-southeast-2"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "ca-central-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "cn-north-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "cn-northwest-1"
+ ]
+ }
+ ]
+ },
+ {
+ "Fn::Or": [
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "eu-central-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "eu-north-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "eu-south-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "eu-west-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "eu-west-2"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "eu-west-3"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "me-south-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "sa-east-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "us-east-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "us-east-2"
+ ]
+ }
+ ]
+ },
+ {
+ "Fn::Or": [
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "us-west-1"
+ ]
+ },
+ {
+ "Fn::Equals": [
+ {
+ "Ref": "AWS::Region"
+ },
+ "us-west-2"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ },
+ "Parameters": {
+ "BootstrapVersion": {
+ "Type": "AWS::SSM::Parameter::Value",
+ "Default": "/cdk-bootstrap/hnb659fds/version",
+ "Description": "Version of the CDK Bootstrap resources in this environment, automatically retrieved from SSM Parameter Store. [cdk:skip]"
+ }
+ },
+ "Rules": {
+ "CheckBootstrapVersion": {
+ "Assertions": [
+ {
+ "Assert": {
+ "Fn::Not": [
+ {
+ "Fn::Contains": [
+ [
+ "1",
+ "2",
+ "3",
+ "4",
+ "5"
+ ],
+ {
+ "Ref": "BootstrapVersion"
+ }
+ ]
+ }
+ ]
+ },
+ "AssertDescription": "CDK bootstrap stack version 6 required. Please run 'cdk bootstrap' with a recent version of the CDK CLI."
+ }
+ ]
+ }
+ }
+}
+
diff --git a/cloud_templates/user_guides/iotanalytics_guide.md b/cloud_templates/user_guides/iotanalytics_guide.md
new file mode 100644
index 0000000..e1ae46c
--- /dev/null
+++ b/cloud_templates/user_guides/iotanalytics_guide.md
@@ -0,0 +1,221 @@
+# Getting started with IoT Analytics template guide
+
+## Setting up and prerequisites
+___
+
+### AWS Account
+
+If you don't already have an AWS account follow the [Setup Your Environment](https://aws.amazon.com/getting-started/guides/setup-environment/) getting started guide for a quick overview.
+
+___
+
+### AWS CloudFormation
+
+Before you start using AWS CloudFormation, you might need to know what IAM permissions you need, how to start logging AWS CloudFormation API calls, or what endpoints to use. Refer to this [guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/settingup.html) to get started using AWS CloudFormation.
+
+___
+
+### AWS CDK
+
+**Note**: If you are just going to use the sample demo template you can skip this section.
+
+The AWS Cloud Development Kit (CDK) is an open-source software development framework that lets you define your cloud infrastructure as code in one of its supported programming languages. It is intended for moderately to highly experienced AWS users. Refer to this [guide](https://aws.amazon.com/getting-started/guides/setup-cdk/?pg=gs&sec=gtkaws) to get started with AWS CDK.
+
+___
+
+## Template deployment and CloudFormation stack creation
+
+A template is a JSON or YAML text file that contains the configuration information about the AWS resources you want to create in the [stack](https://docs.aws.amazon.com/cdk/v2/guide/stacks.html). To learn more about how to work with CloudFormation templates refer to the [Working with templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html) guide.
+
+You can either use the provided demo template and deploy it directly to the console or customize the template’s resources before deployment using AWS CDK. Based on your decision follow the respective section below.
+
+___
+
+### Sample demo template
+
+By using the sample JSON template that is provided, you do not need to take any further action beyond creating the stack by uploading the template file. For simplicity’s sake, sample code is provided that you can run on your device. It is an example of multiple devices sending their weather measurements to the cloud through ExpressLink. You can find the code, along with a guide to get it working, under the `demo/demo_weather_station_code` directory.
+
+Follow the steps below to create the CloudFormation stack using the sample template file.
+
+1. Sign in to the AWS Management Console and open the [AWS CloudFormation console.](https://console.aws.amazon.com/cloudformation)
+2. If this is a new CloudFormation account, choose **Create New Stack**. Otherwise, select **Create Stack** and then select **with new resources**.
+3. In the **Template** section, select **Upload a template file** and upload the JSON template file. Choose **Next**.
+4. In the **Specify Details** section, enter a stack name in the **Name** field.
+5. If you want you can add tags to your stack. Otherwise, choose **Next**.
+6. Review the stack’s settings and then select **Create.**
+7. At this point, the status of your stack will be `CREATE_IN_PROGRESS`. Creation might take several minutes. See the next sections to learn about monitoring your stack creation.
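
If you prefer to script the console steps above, here is a minimal boto3 sketch. The stack name and template filename are illustrative, and boto3 plus configured AWS credentials are assumed:

```python
def build_create_stack_args(stack_name: str, template_body: str) -> dict:
    """Build the kwargs for CloudFormation's CreateStack call.

    The demo template creates IAM roles and policies, so the IAM
    capability must be acknowledged explicitly.
    """
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Capabilities": ["CAPABILITY_IAM"],
    }

# Usage (not executed here; requires boto3 and configured credentials):
#   import boto3
#   cfn = boto3.client("cloudformation")
#   with open("iotanalytics_pattern.json") as f:
#       cfn.create_stack(**build_create_stack_args("iot-analytics-demo", f.read()))
#   # Block until the stack reaches CREATE_COMPLETE (raises on failure):
#   cfn.get_waiter("stack_create_complete").wait(StackName="iot-analytics-demo")
```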
+
+___
+
+### Custom template
+
+If you are interested in using the CloudFormation templates more than just for demo purposes, you need to customize the stack’s resources based on your specific use case. Follow the steps below to do so:
+
+1. Make sure that you already [set up your AWS CDK](https://aws.amazon.com/getting-started/guides/setup-cdk/?pg=gs&sec=gtkaws) environment.
+2. Change your directory to the `aws_cdk/IoTAnalyticsPattern` directory.
+3. Just to verify everything is working correctly, list the stacks in your app by running `cdk ls`. If you don't see `IoTAnalyticsPatternStack`, make sure you are currently in the `IoTAnalyticsPattern` directory.
+4. The structure of the files inside `IoTAnalyticsPattern` is as below:
+
+
+
+* `io_t_analytics_pattern_stack.py` is the main code of the stack. It is here where the required resources are created.
+* `tests/unit/test_io_t_analytics_pattern_stack.py` is where the unit tests of the stack are written. The unit tests check:
+    * Correct creation of the resources and their properties
+    * Dependencies between the resources
+    * Correct error handling in case of input violations
+* `cdk.json` tells the CDK Toolkit how to execute your app. Context values are key-value pairs that can be associated with an app, stack, or construct. You can add the context key-values to this file or in the command line before synthesizing the template.
+* `README.md` is where you can find detailed instructions on how to get started with the code including how to synthesize the template, a set of useful commands, the stack’s context parameters, and details about the code.
+* `cdk.out` is where the synthesized template (in JSON format) is placed.
+
+5. Run `source .venv/bin/activate` to activate the app's Python virtual environment.
+6. Run `python -m pip install -r requirements.txt` and `python -m pip install -r requirements-dev.txt` to install the dependencies.
+7. Go through the `README.md` file to learn about the context parameters that need to be set by you prior to deployment.
+8. Set the context parameter values either by changing the `cdk.json` file or by using the command line.
+    1. To set a context variable on the command line, use the **`--context (-c)`** option, as shown in the following example: `$ cdk synth -c bucket_name=mybucket`
+    2. To specify the same context variable and value in the `cdk.json` file, use the following entry: `{ "context": { "bucket_name": "mybucket" } }`
+9. Run `cdk synth` to emit the synthesized CloudFormation template.
+10. Run `python -m pytest` to run the unit tests. It is best practice to run the tests before deploying your template to the cloud.
+11. Run `cdk deploy` to deploy the stack to your default AWS account/region.
+12. Use the instructions in the ***Stack management*** section below to manage your stack creation.
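
The unit tests you run before deployment verify, among other things, that invalid context values are rejected. The following is a minimal sketch of that kind of name check; the pattern and limits are illustrative, inferred from the tests (hyphens, spaces, and `!` are rejected; a 129-character channel name and a 65-character role name fail):

```python
import re

# Illustrative rules inferred from the unit tests: names may contain only
# letters, digits, and underscores; channel/datastore/dataset/pipeline
# names fail at 129 characters, and the role name fails at 65.
NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_]+$")

def validate_name(name: str, max_length: int = 128) -> str:
    """Raise if a context name breaks the pattern or length rules."""
    if len(name) > max_length:
        raise ValueError(f"Invalid input length for {name!r}")
    if not NAME_PATTERN.match(name):
        raise ValueError(f"Invalid input pattern for {name!r}")
    return name

validate_name("cdk_iot_channel")                # accepted
# validate_name("cdk-iot-channel")              # raises: hyphen rejected
# validate_name("x" * 129)                      # raises: too long
# validate_name("cdk_iot_role", max_length=64)  # role names use a lower limit
```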
+
+## Stack management
+
+___
+
+### Viewing CloudFormation stack data and resources
+
+After deployment, you may need to monitor your created stack and its resources. To do this, your starting point should be AWS CloudFormation.
+
+1. Sign in to the AWS Management Console and open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation).
+2. Select the **Stacks** tab to view all the available stacks in your account.
+3. Find the stack that you just created and select it.
+4. To verify that the stack’s creation was done successfully, check if its status is `CREATE_COMPLETE`. To learn more about what each status means refer to [stack status codes](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-stack-data-resources.html#cfn-console-view-stack-data-resources-status-codes).
+5. You can view the stack’s general information, such as its ID, status, policy, and rollback configuration, under the **Stack info** tab.
+6. The **Events** tab displays each major step in the creation of the stack, sorted by event time with the latest events on top.
+7. You can also find the resources that are part of the stack under the **Resources** tab.
+
+There is more information on viewing your CloudFormation stack information [here](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-stack-data-resources.html#cfn-console-view-stack-data-resources-view-info).
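
The same status check can be scripted. A small sketch (the stack name is illustrative; boto3 and configured credentials are assumed):

```python
def stack_status(response: dict) -> list:
    """Pull (name, status) pairs out of a DescribeStacks response."""
    return [(s["StackName"], s["StackStatus"]) for s in response["Stacks"]]

# Usage (not executed here; requires boto3 and configured credentials):
#   import boto3
#   resp = boto3.client("cloudformation").describe_stacks(
#       StackName="iot-analytics-demo")
#   print(stack_status(resp))  # e.g. [("iot-analytics-demo", "CREATE_COMPLETE")]
```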
+
+___
+
+### Monitoring the generated resources
+
+If the stack is deployed successfully, the following resources are created under your stack. You can verify their creation by checking the **Resources** tab of your stack.
+
+|Resource |Type |
+|--- |--- |
+|CDKMetadata |[AWS::CDK::Metadata](https://docs.aws.amazon.com/cdk/api/v1/docs/constructs.ConstructMetadata.html) |
+|IoT Analytics Channel |[AWS::IoTAnalytics::Channel](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotanalytics-channel.html) |
+|IoT Analytics Datastore |[AWS::IoTAnalytics::Datastore](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotanalytics-datastore.html) |
+|IoT Analytics Dataset |[AWS::IoTAnalytics::Dataset](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotanalytics-dataset.html) |
+|IoT Analytics Pipeline |[AWS::IoTAnalytics::Pipeline](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotanalytics-pipeline.html) |
+|IAM role and policy that grants IoT access to IoT Analytics |[AWS::IAM::Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html) [AWS::IAM::Policy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html) |
+|IoT Rule |[AWS::IoT::TopicRule](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iot-topicrule.html) |
+|CloudWatch log group to capture error logs |[AWS::Logs::LogGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html) |
+|IAM role and policy that grants IoT access to CloudWatch |[AWS::IAM::Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html) [AWS::IAM::Policy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html) |
+
+___
+
+### Handling stack failures
+
+If CloudFormation fails to create, update, or delete your stack, you will be able to go through the logs or error messages to learn about the issue. There are some general methods for troubleshooting a CloudFormation issue. For example, you can follow the steps below to find the issue manually in the console.
+
+* Check the status of your stack in the [CloudFormation console](https://console.aws.amazon.com/cloudformation/).
+* From the **Events** tab, review the events that were recorded while the last operation was being performed on your stack.
+* Find the failure event from the set of events and then check the status reason of that event. The status reason usually gives a good understanding of the issue that caused the failure.
+
+
+In case of failures during stack creation or updates, CloudFormation automatically performs a rollback. However, you can also [add rollback triggers during stack creation or updating](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-rollback-triggers.html#using-cfn-rollback-triggers-create) to further monitor the state of your application: if the application breaches the threshold of any alarm you have specified, CloudFormation rolls the operation back.
+
+Finally, this [troubleshooting guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#basic-ts-guide) is a helpful resource to refer to if there is an issue in your stack.
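
The manual event inspection described above can also be scripted. The sketch below pulls failed events and their status reasons out of a DescribeStackEvents response (the stack name is illustrative; boto3 and configured credentials are assumed):

```python
def failure_reasons(events: list) -> list:
    """Pick failed events and their status reasons out of a
    CloudFormation DescribeStackEvents response (newest first)."""
    return [
        (e["LogicalResourceId"], e.get("ResourceStatusReason", ""))
        for e in events
        if e.get("ResourceStatus", "").endswith("_FAILED")
    ]

# Usage (not executed here; requires boto3 and configured credentials):
#   import boto3
#   resp = boto3.client("cloudformation").describe_stack_events(
#       StackName="IoTAnalyticsPatternStack")
#   for resource, reason in failure_reasons(resp["StackEvents"]):
#       print(resource, "->", reason)
```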
+
+___
+
+### Estimating the cost of the stack
+
+There is no additional charge for AWS CloudFormation. You pay for AWS resources created using CloudFormation as if you created them by hand. Refer to this [guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-paying.html) to learn more about the stack cost estimation functionality.
+
+## Ingesting and visualizing your IoT data with the constructed resources
+
+___
+
+### Sending data to the cloud from your device
+
+Now that your stack and all the required resources are created and available, you can get started by connecting your device to the cloud and sending your data to IoT Core.
+
+* If you are new to AWS IoT Core, this [guide](https://docs.aws.amazon.com/iot/latest/developerguide/connect-to-iot.html) is a great starting point to connect your device to the cloud.
+* After connecting your device to IoT Core, you can use the [MQTT test client](https://docs.aws.amazon.com/iot/latest/developerguide/view-mqtt-messages.html) to monitor the MQTT messages being passed in your AWS account.
+* Move to the **Rules** tab under the **Message Routing** section in the [AWS IoT console](https://console.aws.amazon.com/iot/home). There you can verify the creation of the newly created topic rule and its [iotanalytics rule action](https://docs.aws.amazon.com/iot/latest/developerguide/iotanalytics-rule-action.html) which sends data received from your device to the IoT Analytics channel.
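
As a quick connectivity check, you can also publish a test message yourself. The demo template's rule listens on the `IoT_Analytics_demo` topic; the payload field names below are illustrative, so substitute whatever your device actually publishes:

```python
import json

def weather_payload(temperature: float, humidity: float) -> bytes:
    """Encode a sample weather measurement like the demo station sends.

    The field names are illustrative; use whatever your device publishes.
    """
    return json.dumps({"temperature": temperature, "humidity": humidity}).encode()

# Usage (not executed here; requires boto3 and configured credentials):
#   import boto3
#   boto3.client("iot-data").publish(
#       topic="IoT_Analytics_demo",  # topic from the demo rule's SQL
#       qos=1,
#       payload=weather_payload(22.5, 40.0),
#   )
```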
+
+___
+
+### Query data in the IoT Analytics console
+
+In the previous section, you verified that your device is connected to the cloud and is sending data to IoT Core. To query the data, use your IoT Analytics dataset. A dataset contains the SQL query that you use to query the data store, along with an optional schedule that repeats the query at a day and time you choose. To get started, follow these steps:
+
+* Open the [AWS IoT Analytics Console](https://console.aws.amazon.com/iotanalytics).
+* From the navigation pane, choose **Datasets.**
+* Find the dataset that was created by your stack and select it.
+* Select **Run now**.
+* Select the **Content** tab.
+* The result of the query is your dataset content, stored as a file, in CSV format. You also have the option to download the CSV file.
+* If you cannot see any data in the dataset content, follow these steps:
+    * First, make sure that your device is connected to the cloud and is sending data by using the [MQTT test client](https://docs.aws.amazon.com/iot/latest/developerguide/view-mqtt-messages.html). More details about this are provided in the previous section.
+    * If your data is reaching IoT Core but IoT Analytics is not receiving it, an error might be occurring while the IoT rule attempts to forward data from IoT Core to IoT Analytics. To investigate, use the CloudWatch log group created earlier by the template: open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch), select **Logs > Log groups** from the navigation pane, then find and select the log group created by the stack to view the error logs.
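
The "Run now" and content-download steps can also be scripted. The helper below extracts the pre-signed CSV links from a GetDatasetContent response; the dataset name in the usage comment is illustrative (it comes from your stack's context), and boto3 plus credentials are assumed:

```python
def content_uris(response: dict) -> list:
    """Extract the pre-signed CSV download links from an
    IoT Analytics GetDatasetContent response."""
    return [entry["dataURI"] for entry in response.get("entries", [])]

# Usage (not executed here; requires boto3 and configured credentials):
#   import boto3
#   ia = boto3.client("iotanalytics")
#   ia.create_dataset_content(datasetName="demo_iot_dataset")  # like "Run now"
#   resp = ia.get_dataset_content(datasetName="demo_iot_dataset",
#                                 versionId="$LATEST")
#   for uri in content_uris(resp):
#       print(uri)
```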
+
+___
+
+### Optional steps
+
+* You can schedule the query runs. You create the optional schedules using expressions similar to [Amazon CloudWatch schedule expressions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html).
+* You can add more activities to your IoT Analytics pipeline based on your use case. The simplest functional pipeline connects a channel to a data store, which is what this demo uses. You can achieve more powerful message processing by adding additional activities to your pipeline. Refer to [pipeline activities](https://docs.aws.amazon.com/iotanalytics/latest/userguide/pipeline-activities.html) to learn more about the possible pipeline activities.
+
+___
+
+### Integrating with dashboards to visualize data
+
+In the previous section, you were able to see your device’s data in a table format as the IoT Analytics dataset content. You can take an additional step to visualize your data and create dashboards. Here are two possible IoT Analytics integrations with reporting dashboards:
+
+#### Amazon QuickSight
+
+AWS IoT Analytics provides direct integration with [Amazon QuickSight](https://aws.amazon.com/quicksight/). Amazon QuickSight is a fast business analytics service you can use to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. Amazon QuickSight is available in [these regions](https://docs.aws.amazon.com/general/latest/gr/quicksight.html).
+
+To connect AWS IoT Analytics to QuickSight you need to follow these steps:
+
+1. Navigate to the Amazon QuickSight console.
+2. If you have never used Amazon QuickSight before, you will be asked to sign up. In this case, choose the **Standard** edition and select your region during setup.
+3. During the signup phase, give QuickSight access to your Amazon S3 buckets and AWS IoT Analytics.
+4. If you already have an account, give Amazon QuickSight access to your AWS IoT Analytics by choosing **Admin >** **Manage QuickSight > Security & permissions.** Under QuickSight access to AWS services, choose **Add or remove**, then select the check box next to AWS IoT Analytics and choose **Update**.
+5. From the admin Amazon QuickSight console page choose **New Analysis** and **New data set.**
+6. Choose AWS IoT Analytics as the source and enter a name for your data source.
+7. Choose your IoT Analytics dataset to import, and then select **Create data source**.
+8. After your data source is created, you can start making visualizations in Amazon QuickSight.
+
+___
+
+#### Jupyter Notebook
+
+AWS IoT Analytics datasets can also be consumed directly by Jupyter Notebook in order to perform advanced analytics and data exploration. Jupyter Notebook is an open-source solution; you can download and install it from [http://jupyter.org/install.html](https://jupyter.org/install.html). Additional integration with Amazon SageMaker, an Amazon-hosted notebook solution, is also available.
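
For example, once you have fetched a dataset content file (a pre-signed CSV link, as in the earlier section), a notebook cell can load it with nothing more than the standard library; the column names below are illustrative:

```python
import csv
import io

def parse_dataset_csv(text: str) -> list:
    """Parse dataset content (CSV text) into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

# In a notebook you would typically fetch the pre-signed dataURI first:
#   import urllib.request
#   with urllib.request.urlopen(data_uri) as resp:
#       rows = parse_dataset_csv(resp.read().decode())

# Parsing a small inline sample (illustrative column names):
sample = "temperature,humidity\n22.5,40\n23.1,38\n"
rows = parse_dataset_csv(sample)
```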
+
+## Cleaning up the stack
+
+To clean up all the resources used in this demo, all you need to do is delete the initial CloudFormation stack. To delete a stack and its resources, follow these steps:
+
+1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).
+2. On the Stacks page in the CloudFormation console, select the stack that you want to delete. Note that the stack must be currently running.
+3. In the stack details pane, choose **Delete**.
+4. Confirm the stack deletion when prompted.
+
+After the stack is deleted, the stack’s status will be `DELETE_COMPLETE`. Stacks in the `DELETE_COMPLETE` state aren't displayed in the CloudFormation console by default. However, you can follow the instructions in [Viewing deleted stacks on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-deleted-stacks.html) to be able to view them.
+
+Finally, if the stack deletion failed, the stack will be in the `DELETE_FAILED` state. For solutions, see [Delete stack fails](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-delete-stack-fails). In this case, refer to the ***Monitoring the generated resources*** section of this document to verify that all the resources were deleted successfully.
+
+___
+
+## Useful resources
+
+* [CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/index.html)
+* [IoT Analytics User Guide](https://docs.aws.amazon.com/iotanalytics/latest/userguide/index.html)
+* [IoT Core User Guide](https://docs.aws.amazon.com/iot/latest/developerguide/index.html)
+* [AWS CDK (v2) User Guide](https://docs.aws.amazon.com/cdk/v2/guide/index.html)