From 565977417dccb04d164ab953821fc2462d2c42a2 Mon Sep 17 00:00:00 2001
From: neelamkoshiya
Date: Tue, 9 Aug 2022 15:33:50 -0700
Subject: [PATCH] Delete TS-Workshop-Advanced.ipynb

---
 .../TS-Workshop-Advanced.ipynb | 329 ------------------
 1 file changed, 329 deletions(-)
 delete mode 100644 sagemaker-datawrangler/timeseries-dataflow/TS-Workshop-Advanced.ipynb

diff --git a/sagemaker-datawrangler/timeseries-dataflow/TS-Workshop-Advanced.ipynb b/sagemaker-datawrangler/timeseries-dataflow/TS-Workshop-Advanced.ipynb
deleted file mode 100644
index c8868c5b31..0000000000
--- a/sagemaker-datawrangler/timeseries-dataflow/TS-Workshop-Advanced.ipynb
+++ /dev/null
@@ -1,329 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "# Amazon SageMaker Data Wrangler time series advanced transformations\n",
-    "This notebook must be run after you have finished the first part of the Data Wrangler time series transformation lab in the [`TS-Workshop.ipynb`](./TS-Workshop.ipynb) notebook."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "%store -r bucket_name\n",
-    "%store -r data_uploaded\n",
-    "%store -r region\n",
-    "\n",
-    "try:\n",
-    "    bucket_name\n",
-    "    data_uploaded\n",
-    "except NameError:\n",
-    "    print(\"++++++++++++++++++++++++++++++++++++++++++++++\")\n",
-    "    print(\"[ERROR] YOU HAVE TO RUN TS-Workshop notebook \")\n",
-    "    print(\"++++++++++++++++++++++++++++++++++++++++++++++\")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Advanced time series dataset preparation\n",
-    "As we mentioned previously, the current dataset is suitable for managed services like [Amazon SageMaker Canvas](https://aws.amazon.com/sagemaker/canvas/), [Amazon Forecast](https://aws.amazon.com/forecast/), and the [DeepAR algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html) in SageMaker.\n",
-    "\n",
-    "All of them can automatically add multiple features, such as a weekend flag, day-of-week number, lag features, etc. If you are planning to use the dataset with other tools and algorithms, it is better to add a few more transformations."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "### Adding missing timestamp - locationID combinations\n",
-    "The current dataset might be missing some timestamp - locationID combinations. Let's add them with 0 as the value for all features. There is no built-in transformation for this, so we will create a new custom one.\n",
-    "To create a custom transformation you have to:\n",
-    "1. Click the plus sign next to a collection of transformation elements and choose Add transform.\n",
-    "![addMissingCombinations](./pictures/addMissingCombinations.png)\n",
-    "1. Click the orange \"+ Add step\" button in the TRANSFORMS menu.\n",
-    "![AddStep](./pictures/AddStep.png)\n",
-    "1. Choose Custom Transform. \\\n",
-    "![CustomTransform](./pictures/CustomTransform.png)\n",
-    "1. In the drop-down menu select Python (PySpark) and use the code below. This code creates a new dataframe with all possible combinations of timestamps and location IDs and then joins it with the existing dataframe. All missing values are replaced by 0.\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from pyspark.sql.types import StructType, StructField, StringType, IntegerType, TimestampType\n",
-    "from pyspark.sql import SparkSession\n",
-    "from pyspark.sql.functions import *\n",
-    "spark = SparkSession.builder.getOrCreate()\n",
-    "data = []\n",
-    "for location in range(1, 264):\n",
-    "    for time in range(0, 10200):\n",
-    "        data.append((location, 1546300800 + time*3600))\n",
-    "schema = StructType([\n",
-    "    StructField(\"PULocationID\", StringType(), True),\n",
-    "    StructField(\"pickup_time_temp\", StringType(), True)\n",
-    "])\n",
-    "df_temp = spark.createDataFrame(data=data, schema=schema)\n",
-    "df_temp = df_temp.withColumn(\"timestamp\", from_unixtime(\"pickup_time_temp\"))\n",
-    "df_temp = df_temp.withColumn(\"pickup_time\", col(\"timestamp\").cast(TimestampType()))\n",
-    "df_temp = df_temp.drop(\"pickup_time_temp\", \"timestamp\")\n",
-    "df = df_temp.join(df, on=['pickup_time', 'PULocationID'], how='left')\n",
-    "df = df.na.fill(value=0)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "![combinationsCode](./pictures/combinationsCode.png)\n",
-    "1. Choose Preview\n",
-    "1. Choose Add to save the step.\n",
-    "\n",
-    "When the transformation is applied to the sampled data, you should see all current steps and a preview of the resulting dataset with a new column `pickup_time` and without the column `tpep_pickup_datetime`. Don't worry about the many zeros; this happens because the sampled data is only about 100 MB of the 6 GB dataset.\n",
-    "![CombinationsResult](./pictures/CombinationsResult.png)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "If you take a look at the data flow graph, you will notice that we have a new branch with this transformation. This happens because we can have several destination nodes to Amazon S3.\n",
-    "![newDAG](./pictures/newDAG.png)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "### Featurize datetime\n",
-    "The \"Featurize datetime\" time series transformation adds the month, day of the month, day of the year, week of the year, hour, and quarter features to our dataset. Because we’re providing the date/time components as separate features, we enable ML algorithms to detect signals and patterns that improve prediction accuracy.\n",
-    "\n",
-    "To create this transformation you have to:\n",
-    "1. Click the plus sign next to a collection of transformation elements and choose Add transform.\n",
-    "![addDateFeature](./pictures/addDateFeature.png)\n",
-    "1. Click the orange \"+ Add step\" button in the TRANSFORMS menu.\n",
-    "![AddStep](./pictures/AddStep.png)\n",
-    "1. Choose Time Series. \\\n",
-    "![SelectTimeSeries](./pictures/SelectTimeSeries.png)\n",
-    "    1. For \"Transform\" choose \"Featurize date/time\"\n",
-    "    1. For \"Input Column\" choose \"pickup_time\"\n",
-    "    1. For \"Output Column\" enter \"date\"\n",
-    "    1. For \"Output mode\" choose \"Ordinal\"\n",
-    "    1. For \"Output format\" choose \"Columns\"\n",
-    "    1. For date/time features to extract, select Year, Month, Day, Hour, Week of year, Day of year, and Quarter.\n",
-    "![dataFeatureConfig](./pictures/dataFeatureConfig.png)\n",
-    "1. Choose Preview\n",
-    "1. Choose Add to save the step.\n",
-    "\n",
-    "When the transformation is applied to the sampled data, you should see all current steps and a preview of the resulting dataset.\n",
-    "![dateFeatureResult](./pictures/dateFeatureResult.png)"
-   ]
-  },
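-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "*(Optional)* The next cell is a minimal PySpark sketch of the same idea, shown only for illustration and not generated by Data Wrangler. It assumes a Spark DataFrame `df` with the `pickup_time` column, like the one used in the custom transform above; the output column names here are made up and will not match the names Data Wrangler produces."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Sketch only: derive ordinal date/time features with plain PySpark functions\n",
-    "from pyspark.sql.functions import year, month, dayofmonth, hour, weekofyear, dayofyear, quarter\n",
-    "\n",
-    "df_dt = (df\n",
-    "    .withColumn(\"date_year\", year(\"pickup_time\"))\n",
-    "    .withColumn(\"date_month\", month(\"pickup_time\"))\n",
-    "    .withColumn(\"date_day\", dayofmonth(\"pickup_time\"))\n",
-    "    .withColumn(\"date_hour\", hour(\"pickup_time\"))\n",
-    "    .withColumn(\"date_week_of_year\", weekofyear(\"pickup_time\"))\n",
-    "    .withColumn(\"date_day_of_year\", dayofyear(\"pickup_time\"))\n",
-    "    .withColumn(\"date_quarter\", quarter(\"pickup_time\")))\n",
-    "df_dt.select(\"pickup_time\", \"date_year\", \"date_month\", \"date_day\", \"date_hour\").show(5)"
-   ]
-  },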
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "### Lag feature\n",
-    "Let’s create lag features for the target column `count`. Lag features in time series analysis are values at prior timestamps that are considered helpful in inferring future values. They also help identify autocorrelation, also known as serial correlation, patterns in the residual series by quantifying the relationship of the observation with observations at previous time steps. Autocorrelation is similar to regular correlation, but between the values in a series and its past values. It forms the basis for the autoregressive forecasting models in the ARIMA family.\n",
-    "\n",
-    "With the Data Wrangler Lag feature transform, you can easily create lag features `n` periods apart. Additionally, we often want to create multiple lag features at different lags and let the model decide the most meaningful features. For such a scenario, the **Lag features** transform helps create multiple lag columns over a specified window size.\n",
-    "\n",
-    "To create this transformation you have to:\n",
-    "1. Click the plus sign next to a collection of transformation elements and choose Add transform.\n",
-    "![addLag](./pictures/addLag.png)\n",
-    "1. Click the orange \"+ Add step\" button in the TRANSFORMS menu.\n",
-    "![AddStep](./pictures/AddStep.png)\n",
-    "1. Choose Time Series. \\\n",
-    "![SelectTimeSeries](./pictures/SelectTimeSeries.png)\n",
-    "    1. For \"Transform\" choose \"Lag features\"\n",
-    "    1. For \"Generate lag features for this column\" choose \"count\"\n",
-    "    1. For \"ID column\" enter \"PULocationID\"\n",
-    "    1. For \"Timestamp Column\" choose \"pickup_time\"\n",
-    "    1. For \"Lag\", enter 8. You could try different values; maybe 24 hours makes more sense in our case.\n",
-    "    1. Because we’re interested in observing up to the previous 8 lag values, let’s select Include the entire lag window.\n",
-    "    1. To create a new column for each lag value, select Flatten the output.\n",
-    "![lagConfig](./pictures/lagConfig.png)\n",
-    "1. Choose Preview\n",
-    "1. Choose Add to save the step.\n",
-    "\n",
-    "When the transformation is applied to the sampled data, you should see all current steps and a preview of the resulting dataset.\n",
-    "![lagResult](./pictures/lagResult.png)"
-   ]
-  },
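-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "*(Optional)* The next cell is a minimal PySpark sketch of lag features, shown only for illustration and not generated by Data Wrangler. It assumes a Spark DataFrame `df` with the `count`, `PULocationID`, and `pickup_time` columns; the new column names are made up. The last lines reuse the same windowed logic to compute simple rolling statistics, a rough hand-rolled analogue of (not a replacement for) the tsfresh-based Rolling window features transform described in the next section."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Sketch only: lag_1 ... lag_8 of `count` per location, ordered by pickup time\n",
-    "from pyspark.sql import Window\n",
-    "from pyspark.sql.functions import lag, avg, stddev, col\n",
-    "\n",
-    "w = Window.partitionBy(\"PULocationID\").orderBy(\"pickup_time\")\n",
-    "df_lag = df\n",
-    "for i in range(1, 9):\n",
-    "    df_lag = df_lag.withColumn(f\"count_lag_{i}\", lag(col(\"count\"), i).over(w))\n",
-    "\n",
-    "# Simple rolling statistics over the last 8 observations (current row included)\n",
-    "w8 = w.rowsBetween(-7, 0)\n",
-    "df_lag = df_lag.withColumn(\"count_roll_mean_8\", avg(\"count\").over(w8))\n",
-    "df_lag = df_lag.withColumn(\"count_roll_std_8\", stddev(\"count\").over(w8))\n",
-    "df_lag.select(\"PULocationID\", \"pickup_time\", \"count\", \"count_lag_1\", \"count_roll_mean_8\").show(5)"
-   ]
-  },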
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "### Rolling window features\n",
-    "We can also calculate meaningful statistical summaries across a range of values and include them as input features. Let’s extract common statistical time series features.\n",
-    "\n",
-    "Data Wrangler implements automatic time series feature extraction capabilities using the open source `tsfresh` package. With the time series feature extraction transforms, you can automate the feature extraction process. This eliminates the time and effort otherwise spent manually implementing signal processing libraries. We will extract features using the **Rolling window features** transform. This method computes statistical properties across a set of observations defined by the window size.\n",
-    "\n",
-    "To create this transformation you have to:\n",
-    "1. Click the plus sign next to a collection of transformation elements and choose Add transform.\n",
-    "![addRolling](./pictures/addRolling.png)\n",
-    "1. Click the orange \"+ Add step\" button in the TRANSFORMS menu.\n",
-    "![AddStep](./pictures/AddStep.png)\n",
-    "1. Choose Time Series. \\\n",
-    "![SelectTimeSeries](./pictures/SelectTimeSeries.png)\n",
-    "    1. For \"Transform\" choose \"Rolling window features\"\n",
-    "    1. For \"Generate rolling window features for this column\" choose \"count\"\n",
-    "    1. For \"Timestamp Column\" choose \"pickup_time\"\n",
-    "    1. For \"ID column\" enter \"PULocationID\"\n",
-    "    1. For \"Window size\", enter 8. You could try different values; maybe 24 hours makes more sense in our case.\n",
-    "    1. Select Flatten to create a new column for each computed feature.\n",
-    "    1. For \"Strategy\", choose \"Minimal subset\". This strategy extracts eight features that are useful in downstream analyses. Other strategies include Efficient subset, Custom subset, and All features. \\\n",
-    "![rollingConfig](./pictures/rollingConfig.png)\n",
-    "1. Choose Preview\n",
-    "1. Choose Add to save the step.\n",
-    "\n",
-    "When the transformation is applied to the sampled data, you should see all current steps and a preview of the resulting dataset.\n",
-    "![rollingResult](./pictures/rollingResult.png)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "### Dataset export\n",
-    "We have transformed the time series dataset and are ready to use the transformed dataset as input for a custom forecasting algorithm. The last step is to export the transformed dataset to Amazon S3. We are going to repeat the steps from the \"Dataset export\" section of the [`TS-Workshop.ipynb`](./TS-Workshop.ipynb) notebook.\n",
-    "\n",
-    "To do that you have to:\n",
-    "1. Click the plus sign next to a collection of transformation elements and choose \"Add destination\"->\"Amazon S3\".\n",
-    "![addNewExport](./pictures/addNewExport.png)\n",
-    "1. Provide parameters for the S3 destination:\n",
-    "    1. \"Dataset name\" - a name for the new dataset; I used \"NYC_export_advanced\"\n",
-    "    1. \"File type\" - CSV\n",
-    "    1. \"Delimiter\" - Comma\n",
-    "    1. \"Compression\" - None\n",
-    "    1. \"Amazon S3 location\" - I use the bucket we created at the beginning with an additional path in it, like \"s3://979894173312-us-east-1-datawranglertimeseries-6596/NYC_export_advanced/\"\n",
-    "1. Click the orange \"Add destination\" button \\\n",
-    "![newDestinationConfig](./pictures/newDestinationConfig.png)\n",
-    "1. Now your data flow has a new final step, and you can click the orange \"Create job\" button.\n",
-    "![flowCompletedNew](./pictures/flowCompletedNew.png)\n",
-    "1. Provide a \"Job name\" (you can keep the autogenerated one) and select a destination. We have two (the previous one and the new one); let's use only the new one, \"S3: NYC_export_advanced\", but you are allowed to select both. Leave the \"KMS key ARN\" field empty and click the orange \"Next\" button.\n",
-    "![newJob1](./pictures/newJob1.png)\n",
-    "1. Now you have to configure the compute capacity for the job. You can keep all default values:\n",
-    "    1. For \"Instance type\" use \"ml.m5.4xlarge\"\n",
-    "    1. For \"Instance count\" use \"2\"\n",
-    "    1. You can explore \"Additional configuration\", but leave it unchanged.\n",
-    "    1. Click the orange \"Run\" button \\\n",
-    "![Job2](./pictures/Job2.png)\n",
-    "1. Now your job is started, and it will take about 3 hours to process 6 GB of data according to our workflow. The cost for this job will be around 6 USD, as an \"ml.m5.4xlarge\" costs 0.922 USD per hour and we are using two of them (2 instances × 0.922 USD/hour × ~3 hours ≈ 5.5 USD)."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Run the following cell and click on the link to go to the SageMaker console to see the processing jobs."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from IPython.core.display import display, HTML\n",
-    "\n",
-    "display(\n",
-    "    HTML(\n",
-    "        '<a target=\"top\" href=\"https://{}.console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs\">Open Processing Jobs</a>'.format(\n",
-    "            region, region\n",
-    "        )\n",
-    "    )\n",
-    ")"
-   ]
-  },
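-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "*(Optional)* Once the processing job has finished, you can run a quick sanity check on the export. The next cell is a sketch, not part of the original lab: it assumes the job wrote its output under the \"NYC_export_advanced/\" prefix in the workshop bucket (`bucket_name`, restored at the top of this notebook) and simply lists the objects there; the exact file names are generated by the job."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Sketch only: list the files the Data Wrangler processing job wrote to the export prefix\n",
-    "import boto3\n",
-    "\n",
-    "s3 = boto3.client(\"s3\", region_name=region)\n",
-    "response = s3.list_objects_v2(Bucket=bucket_name, Prefix=\"NYC_export_advanced/\")\n",
-    "for obj in response.get(\"Contents\", []):\n",
-    "    print(obj[\"Key\"], obj[\"Size\"])"
-   ]
-  },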
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "<div class=\"alert alert-info\"> 💡 <strong>Congratulations!</strong>\n",
-    "You reached the end of the Data Wrangler time series advanced lab! Now you know how to use Amazon SageMaker Data Wrangler for advanced time series transformations!\n",
-    "</div>"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Clean up\n",
-    "Please move to the cleanup notebook [`TS-Workshop-Cleanup.ipynb`](./TS-Workshop-Cleanup.ipynb) to remove resources that incur charges."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "# Release resources\n",
-    "The following code will stop the kernel in this notebook."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "%%html\n",
-    "\n",
-    "<p><b>Shutting down your kernel for this notebook to release resources.</b></p>\n",
-    "\n",
-    "<script>Jupyter.notebook.session.delete();</script>\n",
-    ""
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "Python 3.8.2 64-bit",
-   "language": "python",
-   "name": "python3"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.8.2"
-  },
-  "vscode": {
-   "interpreter": {
-    "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
-   }
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}