Showing 39 changed files with 3,483 additions and 1 deletion.

@@ -1 +1,41 @@

![Amazon SageMaker Data Wrangler](https://github.com/aws/amazon-sagemaker-examples/raw/main/_static/sagemaker-banner.png)

# Amazon SageMaker Data Wrangler Examples

Example flows that demonstrate how to aggregate and prepare data for Machine Learning using Amazon SageMaker Data Wrangler.

## :books: Background

[Amazon SageMaker Data Wrangler](https://aws.amazon.com/sagemaker/data-wrangler/) reduces the time it takes to aggregate and prepare data for ML. From a single interface in SageMaker Studio, you can import data from Amazon S3, Amazon Athena, Amazon Redshift, AWS Lake Formation, and Amazon SageMaker Feature Store, and in just a few clicks SageMaker Data Wrangler will automatically load, aggregate, and display the raw data. It will then make conversion recommendations based on the source data, transform the data into new features, validate the features, and provide visualizations with recommendations on how to remove common sources of error such as incorrect labels. Once your data is prepared, you can build fully automated ML workflows with Amazon SageMaker Pipelines or import that data into Amazon SageMaker Feature Store.
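
Once a Data Wrangler flow has produced a prepared dataset, one way to reuse it is to ingest it into Amazon SageMaker Feature Store with the SageMaker Python SDK. The snippet below is only a minimal sketch, not code from these examples; the S3 paths, feature group name, and identifier/event-time columns are hypothetical placeholders.

```python
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()

# Hypothetical CSV exported by a Data Wrangler flow.
df = pd.read_csv("s3://<bucket>/data-wrangler-output/prepared.csv")

feature_group = FeatureGroup(name="prepared-features", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=df)  # infer feature types from the DataFrame

feature_group.create(
    s3_uri="s3://<bucket>/feature-store/",
    record_identifier_name="record_id",    # placeholder: your unique ID column
    event_time_feature_name="event_time",  # placeholder: your event-time column
    role_arn=sagemaker.get_execution_role(),
    enable_online_store=True,
)
# create() is asynchronous; in practice, wait until the feature group reaches
# the "Created" status before ingesting records.

feature_group.ingest(data_frame=df, max_workers=4, wait=True)
```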

The [SageMaker example notebooks](https://sagemaker-examples.readthedocs.io/en/latest/) are Jupyter notebooks that demonstrate the usage of Amazon SageMaker.

## :hammer_and_wrench: Setup

Amazon SageMaker Data Wrangler is a feature in Amazon SageMaker Studio. Use this section to learn how to access and get started using Data Wrangler. Do the following:

* Complete each step in [Prerequisites](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-getting-started.html#data-wrangler-getting-started-prerequisite).

* Follow the procedure in [Access Data Wrangler](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-getting-started.html#data-wrangler-getting-started-access) to start using Data Wrangler.
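
If you prefer to check the prerequisites programmatically, the snippet below is a small sketch (assuming boto3 credentials and a default region are already configured) that lists existing SageMaker Studio domains before you try to open Data Wrangler.

```python
import boto3

# List SageMaker Studio domains in the configured region; Data Wrangler is
# accessed from Studio, so at least one domain should be "InService".
sm = boto3.client("sagemaker")
domains = sm.list_domains()["Domains"]

if not domains:
    print("No Studio domain found - complete the prerequisites first.")
else:
    for domain in domains:
        print(domain["DomainId"], domain["Status"])
```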

## :notebook: Examples

### **[Tabular DataFlow](tabular-dataflow/README.md)**

This example provides a quick walkthrough of how to aggregate and prepare data for Machine Learning using Amazon SageMaker Data Wrangler with a tabular dataset.

### **[Timeseries DataFlow](timeseries-dataflow/readme.md)**

This example provides a quick walkthrough of how to aggregate and prepare data for Machine Learning using Amazon SageMaker Data Wrangler with a time series dataset.

### **[Joined DataFlow](joined-dataflow/readme.md)**

This example provides a quick walkthrough of how to aggregate and prepare data for Machine Learning using Amazon SageMaker Data Wrangler with a joined dataset.

@@ -0,0 +1,11 @@

.venv/ | ||
.DS_Store | ||
data/MyMNIST | ||
pt.grpc.local/data/* | ||
pt.grpc.local/__pycache__ | ||
pt.grpc.local/profile | ||
tf.data.service.sagemaker/data | ||
tf.data.service.sagemaker/code/__pycache__ | ||
tf.data.service.local/data | ||
pt.grpc.sagemaker/data | ||
tf.data.service.sagemaker/__pycache__

@@ -0,0 +1,38 @@

# Heterogeneous Clusters

SageMaker Training Heterogeneous Clusters allows you to run one training job that includes instances of different types, for example a GPU instance like ml.p4d.24xlarge and a CPU instance like ml.c5.18xlarge.

One primary use case is offloading CPU-intensive tasks like image pre-processing (data augmentation) from the GPU instance to a dedicated CPU instance, so you can fully utilize the expensive GPUs and improve training time and cost.

You'll find TensorFlow (tf.data.service) and PyTorch (a custom gRPC-based distributed data loader) examples that show how to use heterogeneous clusters in your training jobs. You can reuse these examples when enabling your own training workload to use heterogeneous clusters.
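
As a rough illustration of how such a job is launched, the sketch below uses the SageMaker Python SDK's instance groups support (available in recent SDK versions). It is not taken from the notebooks in this repository; the entry point, framework versions, and S3 path are placeholders.

```python
import sagemaker
from sagemaker.instance_group import InstanceGroup
from sagemaker.tensorflow import TensorFlow

# Two instance groups: CPU instances for data loading/augmentation and a GPU
# instance for the neural network itself.
data_group = InstanceGroup("data_group", "ml.c5.18xlarge", 2)
dnn_group = InstanceGroup("dnn_group", "ml.p4d.24xlarge", 1)

estimator = TensorFlow(
    entry_point="train.py",                   # placeholder training script
    role=sagemaker.get_execution_role(),
    framework_version="2.9",                  # placeholder versions
    py_version="py39",
    instance_groups=[data_group, dnn_group],  # replaces instance_type/instance_count
)

estimator.fit("s3://<bucket>/training-data/")  # placeholder dataset location
```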

![Hetero job diagram](tf.data.service.sagemaker/images/basic-heterogeneous-job.png)

## Examples

### TensorFlow examples

- [**TensorFlow's tf.data.service running locally**](tf.data.service.local/README.md):
This example runs tf.data.service locally on your machine (not on SageMaker). It's helpful for getting familiar with tf.data.service and for quick, small-scale experimentation; a minimal local sketch follows this list.

- [**TensorFlow's tf.data.service with Amazon SageMaker Training Heterogeneous Clusters**](tf.data.service.sagemaker/hetero-tensorflow-restnet50.ipynb):
This TensorFlow example runs a homogeneous training job and compares its results with a Heterogeneous Clusters SageMaker training job that runs with two instance groups:
  - `data_group` - this group has two ml.c5.18xlarge instances to which data augmentation is offloaded.
  - `dnn_group` - this group runs one ml.p4d.24xlarge instance (8 GPUs) in a Horovod/MPI distribution.
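
For the local example above, the following is a minimal, hedged sketch of what running tf.data.service on a single machine can look like (assuming TensorFlow 2.x; the dataset is an arbitrary illustration, not the code used in the example).

```python
import tensorflow as tf

# Start an in-process dispatcher and one worker for tf.data.service.
dispatcher = tf.data.experimental.service.DispatchServer()
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        # dispatcher.target looks like "grpc://localhost:<port>"; the worker
        # config expects just "host:port".
        dispatcher_address=dispatcher.target.split("://")[1]
    )
)

# Any tf.data pipeline can be handed off to the service.
dataset = tf.data.Dataset.range(100).map(lambda x: x * 2)
dataset = dataset.apply(
    tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs", service=dispatcher.target
    )
)

for batch in dataset.batch(10).take(2):
    print(batch.numpy())
```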

### PyTorch examples

- [**PyTorch with gRPC distributed dataloader running locally**](pt.grpc.local/README.md):
This PyTorch example runs a training job split into two processes locally on your machine (not on SageMaker). It's helpful for getting familiar with the gRPC distributed data loader and for quick, small-scale experimentation.

- [**PyTorch with gRPC distributed dataloader Heterogeneous Clusters training job example**](pt.grpc.sagemaker/hetero-pytorch-mnist.ipynb):
This PyTorch example runs a heterogeneous SageMaker training job that uses gRPC to offload data augmentation to a CPU-based server.

### Hello world example

- [**Hetero Training Job - Hello world**](hello.world.sagemaker/README.md):
This basic example runs a heterogeneous training job consisting of two instance groups, each with a different instance type. Each instance prints its instance group information and exits; a minimal sketch of that idea follows this item.
Note: This example only shows how to orchestrate a training job with multiple instance types. For actual code that helps with a distributed data loader, see the TensorFlow or PyTorch examples above.
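
As a rough sketch of the "hello world" idea, each instance can read its group from SageMaker's resource configuration file. This is only an illustration; the file location and field names below (current_group_name, current_instance_type, instance_groups) are based on the heterogeneous clusters documentation and should be verified against the actual example code.

```python
import json

# SageMaker writes the cluster topology for the running training job here.
with open("/opt/ml/input/config/resourceconfig.json") as f:
    resource_config = json.load(f)

# Field names assumed from the heterogeneous clusters documentation.
print("Current group:", resource_config.get("current_group_name"))
print("Current instance type:", resource_config.get("current_instance_type"))
print("All instance groups:", json.dumps(resource_config.get("instance_groups"), indent=2))
```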