From 06093eb0316c24d563a92d5df2e4ed5f9e56f26a Mon Sep 17 00:00:00 2001
From: awstools <awstools@amazon.com>

 * <p>To host your model, you create an endpoint configuration with the
 *    <code>CreateEndpointConfig</code> API, and then create an endpoint with the
 *    <code>CreateEndpoint</code> API. SageMaker then deploys all of the containers that you
 *    defined for the model in the hosting environment.</p>
- * <p>For an example that calls this method when deploying a model to SageMaker hosting services,
- *    see Create a Model (Amazon Web Services SDK for Python (Boto 3)).</p>
 * <p>To run a batch transform using your model, you start a job with the
 *    <code>CreateTransformJob</code> API. SageMaker uses your model and your dataset to get
 *    inferences which are then saved to a specified S3 location.</p>
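The host-then-deploy flow described above maps onto three SDK calls. A minimal sketch with the v3 client; the model name, image URI, artifact path, role ARN, and instance settings are placeholder assumptions:

```ts
import {
  SageMakerClient,
  CreateModelCommand,
  CreateEndpointConfigCommand,
  CreateEndpointCommand,
} from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" });

// 1. Name the model and describe its primary container.
await client.send(new CreateModelCommand({
  ModelName: "my-model", // placeholder
  PrimaryContainer: {
    Image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest", // placeholder
    ModelDataUrl: "s3://my-bucket/model.tar.gz", // placeholder
  },
  ExecutionRoleArn: "arn:aws:iam::123456789012:role/SageMakerExecutionRole", // placeholder
}));

// 2. Create an endpoint configuration that references the model.
await client.send(new CreateEndpointConfigCommand({
  EndpointConfigName: "my-endpoint-config",
  ProductionVariants: [{
    VariantName: "primary",
    ModelName: "my-model",
    InstanceType: "ml.m5.large",
    InitialInstanceCount: 1,
  }],
}));

// 3. Create the endpoint; SageMaker deploys the containers defined for the model.
await client.send(new CreateEndpointCommand({
  EndpointName: "my-endpoint",
  EndpointConfigName: "my-endpoint-config",
}));
```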
 *    <code>TtlDuration</code> exists from using the <code>PutRecord</code> API, the record
 *    level <code>TtlDuration</code> applies to that record instead of the default
- *    <code>TtlDuration</code>.</p>
+ *    <code>TtlDuration</code>. To remove the default <code>TtlDuration</code> from an
+ *    existing feature group, use the <code>UpdateFeatureGroup</code> API and set the
+ *    <code>TtlDuration</code>
+ *    <code>Unit</code> and <code>Value</code> to <code>null</code>.</p>
* @example
* Use a bare-bones client and the command you need to make an API call.
* ```javascript
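// The doc above also describes clearing a default TtlDuration. A minimal sketch,
// assuming a hypothetical feature group name; the generated types are narrower
// than null, so passing null here follows the documented behavior rather than
// the TypeScript signatures.
import { SageMakerClient, UpdateFeatureGroupCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({});
await client.send(new UpdateFeatureGroupCommand({
  FeatureGroupName: "my-feature-group", // placeholder
  OnlineStoreConfig: {
    TtlDuration: {
      Unit: null,  // documented: set Unit and Value to null
      Value: null, // to remove the default TtlDuration
    },
  },
}));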
diff --git a/clients/client-sagemaker/src/commands/UpdateModelPackageCommand.ts b/clients/client-sagemaker/src/commands/UpdateModelPackageCommand.ts
index 4e1ef4eacdf3..0c09f31fb36d 100644
--- a/clients/client-sagemaker/src/commands/UpdateModelPackageCommand.ts
+++ b/clients/client-sagemaker/src/commands/UpdateModelPackageCommand.ts
@@ -55,6 +55,16 @@ export interface UpdateModelPackageCommandOutput extends UpdateModelPackageOutpu
* Image: "STRING_VALUE", // required
* ImageDigest: "STRING_VALUE",
* ModelDataUrl: "STRING_VALUE",
+ * ModelDataSource: { // ModelDataSource
+ * S3DataSource: { // S3ModelDataSource
+ * S3Uri: "STRING_VALUE", // required
+ * S3DataType: "S3Prefix" || "S3Object", // required
+ * CompressionType: "None" || "Gzip", // required
+ * ModelAccessConfig: { // ModelAccessConfig
+ * AcceptEula: true || false, // required
+ * },
+ * },
+ * },
* ProductId: "STRING_VALUE",
* Environment: { // EnvironmentMap
* "The access configuration file to control access to the ML model. You can explicitly accept the model
+ * end-user license agreement (EULA) within the ModelAccessConfig
.
If you are a Jumpstart user, see the End-user license agreements section for more details on accepting the EULA.
+ *If you are an AutoML user, see the Optional Parameters section of + * Create an AutoML job to fine-tune text generation models using the + * API for details on How to set the EULA acceptance when fine-tuning a model using the AutoML + * API.
+ */
+export interface ModelAccessConfig {
+  /**
+   * @public
+   * <p>Specifies agreement to the model end-user license agreement (EULA). The
+   *    <code>AcceptEula</code> value must be explicitly defined as <code>True</code> in order
+   *    to accept the EULA that this model requires. You are responsible for reviewing and
+   *    complying with any applicable license terms and making sure they are acceptable for your
+   *    use case before downloading or using a model.</p>
+   */
+  AcceptEula: boolean | undefined;
+}
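Putting the pieces of the UpdateModelPackageCommand hunk together, a sketch of model data that points at gated artifacts and accepts their EULA; the bucket and prefix are placeholder assumptions:

```ts
import type { ModelDataSource } from "@aws-sdk/client-sagemaker";

// Uncompressed model data under a key name prefix (note the trailing slash),
// with the EULA explicitly accepted via ModelAccessConfig.
const modelDataSource: ModelDataSource = {
  S3DataSource: {
    S3Uri: "s3://my-bucket/llm-weights/", // placeholder prefix
    S3DataType: "S3Prefix",
    CompressionType: "None",
    ModelAccessConfig: { AcceptEula: true },
  },
};
```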
+
+/**
+ * @public
+ * <p>Specifies the S3 location of ML model data to deploy.</p>
+ */
+export interface S3ModelDataSource {
+  /**
+   * @public
+   * <p>Specifies the S3 path of ML model data to deploy.</p>
+   */
+  S3Uri: string | undefined;
+
+  /**
+   * @public
+   * <p>Specifies the type of ML model data to deploy.</p>
+   * <p>If you choose <code>S3Prefix</code>, <code>S3Uri</code> identifies a key name prefix.
+   *    SageMaker uses all objects that match the specified key name prefix as part of the ML model
+   *    data to deploy. A valid key name prefix identified by <code>S3Uri</code> always ends
+   *    with a forward slash (/).</p>
+   * <p>If you choose <code>S3Object</code>, <code>S3Uri</code> identifies an object that is
+   *    the ML model data to deploy.</p>
+   */
+  S3DataType: S3ModelDataType | string | undefined;
+
+  /**
+   * @public
+   * <p>Specifies how the ML model data is prepared.</p>
+   * <p>If you choose <code>Gzip</code> and choose <code>S3Object</code> as the value of
+   *    <code>S3DataType</code>, <code>S3Uri</code> identifies an object that is a
+   *    gzip-compressed TAR archive. SageMaker will attempt to decompress and untar the object during
+   *    model deployment.</p>
+   * <p>If you choose <code>None</code> and choose <code>S3Object</code> as the value of
+   *    <code>S3DataType</code>, <code>S3Uri</code> identifies an object that represents an
+   *    uncompressed ML model to deploy.</p>
+   * <p>If you choose <code>None</code> and choose <code>S3Prefix</code> as the value of
+   *    <code>S3DataType</code>, <code>S3Uri</code> identifies a key name prefix, under
+   *    which all objects represent the uncompressed ML model to deploy.</p>
+   * <p>If you choose <code>None</code>, then SageMaker will follow the rules below when creating model data files
+   *    under the /opt/ml/model directory for use by your inference code:</p>
+   * <ul>
+   *    <li>
+   *       <p>If you choose <code>S3Object</code> as the value of <code>S3DataType</code>,
+   *       then SageMaker will split the key of the S3 object referenced by <code>S3Uri</code>
+   *       by slash (/), and use the last part as the filename of the file holding the
+   *       content of the S3 object.</p>
+   *    </li>
+   *    <li>
+   *       <p>If you choose <code>S3Prefix</code> as the value of <code>S3DataType</code>,
+   *       then for each S3 object under the key name prefix referenced by
+   *       <code>S3Uri</code>, SageMaker will trim its key by the prefix, and use the
+   *       remainder as the path (relative to <code>/opt/ml/model</code>) of the file
+   *       holding the content of the S3 object. SageMaker will split the remainder by slash
+   *       (/), using intermediate parts as directory names and the last part as the filename
+   *       of the file holding the content of the S3 object.</p>
+   *    </li>
+   *    <li>
+   *       <p>Do not use any of the following as file names or directory names:</p>
+   *       <ul>
+   *          <li><p>An empty or blank string</p></li>
+   *          <li><p>A string which contains null bytes</p></li>
+   *          <li><p>A string longer than 255 bytes</p></li>
+   *          <li><p>A single dot (<code>.</code>)</p></li>
+   *          <li><p>A double dot (<code>..</code>)</p></li>
+   *       </ul>
+   *    </li>
+   *    <li>
+   *       <p>Ambiguous file names will result in model deployment failure. For example, if
+   *       your uncompressed ML model consists of two S3 objects
+   *       <code>s3://mybucket/model/weights</code> and
+   *       <code>s3://mybucket/model/weights/part1</code> and you specify
+   *       <code>s3://mybucket/model/</code> as the value of <code>S3Uri</code> and
+   *       <code>S3Prefix</code> as the value of <code>S3DataType</code>, then it will
+   *       result in a name clash between <code>/opt/ml/model/weights</code> (a regular file)
+   *       and <code>/opt/ml/model/weights/</code> (a directory).</p>
+   *    </li>
+   *    <li>
+   *       <p>Do not organize the model artifacts in the S3 console using
+   *       folders. When you create a folder in the S3 console, S3 creates a 0-byte
+   *       object with a key set to the folder name you provide. The key of the 0-byte
+   *       object ends with a slash (/), which violates SageMaker restrictions on model artifact
+   *       file names, leading to model deployment failure.</p>
+   *    </li>
+   * </ul>
+   */
+  CompressionType: ModelCompressionType | string | undefined;
+
+  /**
+   * @public
+   * <p>Specifies the access configuration file for the ML model. You can explicitly accept the
+   *    model end-user license agreement (EULA) within the <code>ModelAccessConfig</code>. You are
+   *    responsible for reviewing and complying with any applicable license terms and making sure
+   *    they are acceptable for your use case before downloading or using a model.</p>
+   */
+  ModelAccessConfig?: ModelAccessConfig;
+}
+
+/**
+ * @public
+ * <p>Specifies the location of ML model data to deploy. If specified, you must specify one
+ *    and only one of the available data sources.</p>
+ */
+export interface ModelDataSource {
+  /**
+   * @public
+   * <p>Specifies the S3 location of ML model data to deploy.</p>
+   */
+  S3DataSource?: S3ModelDataSource;
+}
+
 /**
  * @public
  * <p>Input object for the model.</p>
@@ -328,6 +502,12 @@ export interface ModelPackageContainerDefinition {
    */
   ModelDataUrl?: string;
 
+  /**
+   * @public
+   * <p>Specifies the location of ML model data to deploy during endpoint creation.</p>
+   */
+  ModelDataSource?: ModelDataSource;
+
   /**
    * @public
    * <p>The Amazon Web Services Marketplace product ID of the model package.</p>
@@ -4119,7 +4299,7 @@ export interface FileSystemConfig {
   /**
    * @public
-   * <p>The configuration for the file system and kernels in a SageMaker image running as a JupyterLab app.</p>
+   * <p>The configuration for the file system and kernels in a SageMaker image running as a JupyterLab app. The <code>FileSystemConfig</code> object is not supported.</p>
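The S3Prefix rules in the hunks above reduce to trimming the key name prefix and re-rooting the remainder under /opt/ml/model. An illustration of that documented mapping (not SDK code; the bucket and keys are hypothetical):

```ts
// How an S3 object key under an S3Prefix maps to a path under /opt/ml/model,
// per the rules documented above: trim the prefix, keep the remainder, and
// treat intermediate slash-separated parts as directories.
function modelFilePath(s3Uri: string, objectKey: string): string {
  const prefix = s3Uri.replace(/^s3:\/\/[^/]+\//, ""); // "s3://mybucket/model/" -> "model/"
  return `/opt/ml/model/${objectKey.slice(prefix.length)}`;
}

// "model/weights/part1" -> "/opt/ml/model/weights/part1"
console.log(modelFilePath("s3://mybucket/model/", "model/weights/part1"));
```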
 * <p>List of available metrics:</p>
 * <ul>
 *    <li>
- *       <p>Regression: <code>InferenceLatency</code>, <code>MAE</code>,
+ *       <p>Regression: <code>MAE</code>,
 *          <code>MSE</code>, <code>R2</code>, <code>RMSE</code></p>
 *    </li>
 *    <li>
 *       <p>Binary classification: <code>Accuracy</code>, <code>AUC</code>,
 *          <code>BalancedAccuracy</code>, <code>F1</code>,
- *          <code>InferenceLatency</code>, <code>LogLoss</code>,
 *          <code>Precision</code>, <code>Recall</code></p>
 *    </li>
 *    <li>
 *       <p>Multiclass classification: <code>Accuracy</code>,
 *          <code>BalancedAccuracy</code>, <code>F1macro</code>,
- *          <code>InferenceLatency</code>, <code>LogLoss</code>,
 *          <code>PrecisionMacro</code>, <code>RecallMacro</code></p>
 *    </li>
- * <p>The access configuration file to control access to the ML model. You can explicitly accept the model
- *    end-user license agreement (EULA) within the <code>ModelAccessConfig</code>.</p>
- * <ul>
- *    <li>
- *       <p>If you are a Jumpstart user, see the End-user license agreements section for more details on accepting the EULA.</p>
- *    </li>
- *    <li>
- *       <p>If you are an AutoML user, see the Optional Parameters section of
- *       Create an AutoML job to fine-tune text generation models using the
- *       API for details on How to set the EULA acceptance when fine-tuning a model using the AutoML
- *       API.</p>
- *    </li>
- * </ul>
- */
-export interface ModelAccessConfig {
-  /**
-   * @public
-   * <p>Specifies agreement to the model end-user license agreement (EULA). The
-   *    <code>AcceptEula</code> value must be explicitly defined as <code>True</code> in order
-   *    to accept the EULA that this model requires. You are responsible for reviewing and
-   *    complying with any applicable license terms and making sure they are acceptable for your
-   *    use case before downloading or using a model.</p>
-   */
-  AcceptEula: boolean | undefined;
-}
 * <p>The collection of settings used by an AutoML job V2 for the text generation problem
@@ -9468,152 +9618,6 @@
  */
 export type ContainerMode = (typeof ContainerMode)[keyof typeof ContainerMode];
 
-/**
- * @public
- * @enum
- */
-export const ModelCompressionType = {
-  Gzip: "Gzip",
-  None: "None",
-} as const;
-
-/**
- * @public
- */
-export type ModelCompressionType = (typeof ModelCompressionType)[keyof typeof ModelCompressionType];
-
-/**
- * @public
- * @enum
- */
-export const S3ModelDataType = {
-  S3Object: "S3Object",
-  S3Prefix: "S3Prefix",
-} as const;
-
-/**
- * @public
- */
-export type S3ModelDataType = (typeof S3ModelDataType)[keyof typeof S3ModelDataType];
-
-/**
- * @public
- * <p>Specifies the S3 location of ML model data to deploy.</p>
- */
-export interface S3ModelDataSource {
-  /**
-   * @public
-   * <p>Specifies the S3 path of ML model data to deploy.</p>
-   */
-  S3Uri: string | undefined;
-
-  /**
-   * @public
-   * <p>Specifies the type of ML model data to deploy.</p>
-   * <p>If you choose <code>S3Prefix</code>, <code>S3Uri</code> identifies a key name prefix.
-   *    SageMaker uses all objects that match the specified key name prefix as part of the ML model
-   *    data to deploy. A valid key name prefix identified by <code>S3Uri</code> always ends
-   *    with a forward slash (/).</p>
-   * <p>If you choose <code>S3Object</code>, <code>S3Uri</code> identifies an object that is
-   *    the ML model data to deploy.</p>
-   */
-  S3DataType: S3ModelDataType | string | undefined;
-
-  /**
-   * @public
-   * <p>Specifies how the ML model data is prepared.</p>
-   * <p>If you choose <code>Gzip</code> and choose <code>S3Object</code> as the value of
-   *    <code>S3DataType</code>, <code>S3Uri</code> identifies an object that is a
-   *    gzip-compressed TAR archive. SageMaker will attempt to decompress and untar the object during
-   *    model deployment.</p>
-   * <p>If you choose <code>None</code> and chooose <code>S3Object</code> as the value of
-   *    <code>S3DataType</code>, <code>S3Uri</code> identifies an object that represents an
-   *    uncompressed ML model to deploy.</p>
-   * <p>If you choose None and choose <code>S3Prefix</code> as the value of
-   *    <code>S3DataType</code>, <code>S3Uri</code> identifies a key name prefix, under
-   *    which all objects represents the uncompressed ML model to deploy.</p>
-   * <p>If you choose None, then SageMaker will follow rules below when creating model data files
-   *    under /opt/ml/model directory for use by your inference code:</p>
-   * <ul>
-   *    <li>
-   *       <p>If you choose <code>S3Object</code> as the value of <code>S3DataType</code>,
-   *       then SageMaker will split the key of the S3 object referenced by <code>S3Uri</code>
-   *       by slash (/), and use the last part as the filename of the file holding the
-   *       content of the S3 object.</p>
-   *    </li>
-   *    <li>
-   *       <p>If you choose <code>S3Prefix</code> as the value of <code>S3DataType</code>,
-   *       then for each S3 object under the key name pefix referenced by
-   *       <code>S3Uri</code>, SageMaker will trim its key by the prefix, and use the
-   *       remainder as the path (relative to <code>/opt/ml/model</code>) of the file
-   *       holding the content of the S3 object. SageMaker will split the remainder by slash
-   *       (/), using intermediate parts as directory names and the last part as filename
-   *       of the file holding the content of the S3 object.</p>
-   *    </li>
-   *    <li>
-   *       <p>Do not use any of the following as file names or directory names:</p>
-   *       <ul>
-   *          <li><p>An empty or blank string</p></li>
-   *          <li><p>A string which contains null bytes</p></li>
-   *          <li><p>A string longer than 255 bytes</p></li>
-   *          <li><p>A single dot (<code>.</code>)</p></li>
-   *          <li><p>A double dot (<code>..</code>)</p></li>
-   *       </ul>
-   *    </li>
-   *    <li>
-   *       <p>Ambiguous file names will result in model deployment failure. For example, if
-   *       your uncompressed ML model consists of two S3 objects
-   *       <code>s3://mybucket/model/weights</code> and
-   *       <code>s3://mybucket/model/weights/part1</code> and you specify
-   *       <code>s3://mybucket/model/</code> as the value of <code>S3Uri</code> and
-   *       <code>S3Prefix</code> as the value of <code>S3DataType</code>, then it will
-   *       result in name clash between <code>/opt/ml/model/weights</code> (a regular file)
-   *       and <code>/opt/ml/model/weights/</code> (a directory).</p>
-   *    </li>
-   *    <li>
-   *       <p>Do not organize the model artifacts in S3 console using
-   *       folders. When you create a folder in S3 console, S3 creates a 0-byte
-   *       object with a key set to the folder name you provide. They key of the 0-byte
-   *       object ends with a slash (/) which violates SageMaker restrictions on model artifact
-   *       file names, leading to model deployment failure.</p>
-   *    </li>
-   * </ul>
-   */
-  CompressionType: ModelCompressionType | string | undefined;
-
-  /**
-   * @public
-   * <p>Specifies the access configuration file for the ML model. You can explicitly accept the
-   *    model end-user license agreement (EULA) within the <code>ModelAccessConfig</code>. You are
-   *    responsible for reviewing and complying with any applicable license terms and making sure
-   *    they are acceptable for your use case before downloading or using a model.</p>
-   */
-  ModelAccessConfig?: ModelAccessConfig;
-}
-
-/**
- * @public
- * <p>Specifies the location of ML model data to deploy. If specified, you must specify one
- *    and only one of the available data sources.</p>
- */
-export interface ModelDataSource {
-  /**
-   * @public
-   * <p>Specifies the S3 location of ML model data to deploy.</p>
-   */
-  S3DataSource?: S3ModelDataSource;
-}
-
 /**
  * @public
  * @enum
diff --git a/clients/client-sagemaker/src/models/models_1.ts b/clients/client-sagemaker/src/models/models_1.ts
index c203f57a0ffc..0c0d802dd234 100644
--- a/clients/client-sagemaker/src/models/models_1.ts
+++ b/clients/client-sagemaker/src/models/models_1.ts
@@ -46,6 +46,7 @@ import {
   MetricDefinition,
   MetricsSource,
   ModelApprovalStatus,
+  ModelDataSource,
   NeoVpcConfig,
   OutputConfig,
   OutputDataConfig,
@@ -9428,7 +9429,7 @@ export interface ModelMetrics {
   /**
    * @public
-   * <p>Metrics that measure bais in a model.</p>
+   * <p>Metrics that measure bias in a model.</p>
    */
   Bias?: Bias;
@@ -9471,6 +9472,12 @@ export interface SourceAlgorithm {
    */
   ModelDataUrl?: string;
 
+  /**
+   * @public
+   * <p>Specifies the location of ML model data to deploy during endpoint creation.</p>
+   */
+  ModelDataSource?: ModelDataSource;
+
   /**
    * @public
    * <p>The name of an algorithm that was used to create the model package. The algorithm must
@@ -9561,8 +9568,8 @@ export interface CreateModelPackageInput {
   /**
    * @public
-   * <p>Specifies details about inference jobs that can be run with models based on this model
-   *    package, including the following:</p>
+   * <p>Specifies details about inference jobs that you can run with models based on this model
+   *    package, including the following information:</p>
    * <ul>
    *    <li>
    *       <p>The Amazon ECR paths of containers that contain the inference code and model
@@ -9695,6 +9702,14 @@ export interface CreateModelPackageInput {
    * <p>Indicates if you want to skip model validation.</p>
    */
   SkipModelValidation?: SkipModelValidation;
+
+  /**
+   * @public
+   * <p>The URI of the source for the model package. If you want to clone a model package,
+   *    set it to the model package Amazon Resource Name (ARN). If you want to register a model,
+   *    set it to the model ARN.</p>
+   */
+  SourceUri?: string;
 }
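A minimal sketch of the new SourceUri path for cloning a model package; the group name and source ARN are placeholders, and registering a model instead would pass the model ARN:

```ts
import { SageMakerClient, CreateModelPackageCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({});

// Clone an existing model package by pointing SourceUri at its ARN.
await client.send(new CreateModelPackageCommand({
  ModelPackageGroupName: "my-model-group", // placeholder
  SourceUri:
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/my-model-group/1", // placeholder
}));
```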
+   * <p>The creation time of the application.</p>
+   * <note>
+   *    <p>After an application has been shut down for 24 hours, SageMaker deletes all metadata for the application. To be considered an update and retain application metadata, applications must be restarted within 24 hours after the previous application has been shut down. After this time window, creation of an application is considered a new application rather than an update of the previous application.</p>
+   * </note>
    */
   CreationTime?: Date;
 
   /**
    * @public
-   * <p>Details about inference jobs that can be run with models based on this model
-   *    package.</p>
+   * <p>Details about inference jobs that you can run with models based on this model
+   *    package.</p>
    */
   InferenceSpecification?: InferenceSpecification;
@@ -8747,6 +8750,12 @@ export interface DescribeModelPackageOutput {
    * <p>Indicates if you want to skip model validation.</p>
    */
   SkipModelValidation?: SkipModelValidation;
+
+  /**
+   * @public
+   * <p>The URI of the source for the model package.</p>
+   */
+  SourceUri?: string;
 }
diff --git a/clients/client-sagemaker/src/models/models_4.ts b/clients/client-sagemaker/src/models/models_4.ts
index e3a7ab0ab7ac..928e3408e1b9 100644
--- a/clients/client-sagemaker/src/models/models_4.ts
+++ b/clients/client-sagemaker/src/models/models_4.ts
@@ -3399,6 +3399,12 @@ export interface ModelPackage {
    */
   AdditionalInferenceSpecifications?: AdditionalInferenceSpecificationDefinition[];
 
+  /**
+   * @public
+   * <p>The URI of the source for the model package.</p>
+   */
+  SourceUri?: string;
+
   /**
    * @public
    */
   AdditionalInferenceSpecificationsToAdd?: AdditionalInferenceSpecificationDefinition[];
+
+  /**
+   * @public
+   * <p>Specifies details about inference jobs that you can run with models based on this model
+   *    package, including the following information:</p>
+   * <ul>
+   *    <li>
+   *       <p>The Amazon ECR paths of containers that contain the inference code and model
+   *       artifacts.</p>
+   *    </li>
+   *    <li>
+   *       <p>The instance types that the model package supports for transform jobs and
+   *       real-time endpoints used for inference.</p>
+   *    </li>
+   *    <li>
+   *       <p>The input and output content formats that the model package supports for
+   *       inference.</p>
+   *    </li>
+   * </ul>
+   */
+  InferenceSpecification?: InferenceSpecification;
+
+  /**
+   * @public
+   * <p>The URI of the source for the model package.</p>
+ */ + SourceUri?: string; } /** diff --git a/clients/client-sagemaker/src/protocols/Aws_json1_1.ts b/clients/client-sagemaker/src/protocols/Aws_json1_1.ts index b7f94bd3a0e8..b695e48b421b 100644 --- a/clients/client-sagemaker/src/protocols/Aws_json1_1.ts +++ b/clients/client-sagemaker/src/protocols/Aws_json1_1.ts @@ -13025,6 +13025,7 @@ const se_CreateModelPackageInput = (input: CreateModelPackageInput, context: __S SamplePayloadUrl: [], SkipModelValidation: [], SourceAlgorithmSpecification: _json, + SourceUri: [], Tags: _json, Task: [], ValidationSpecification: _json, @@ -19677,6 +19678,7 @@ const de_DescribeModelPackageOutput = (output: any, context: __SerdeContext): De SamplePayloadUrl: __expectString, SkipModelValidation: __expectString, SourceAlgorithmSpecification: (_: any) => de_SourceAlgorithmSpecification(_, context), + SourceUri: __expectString, Task: __expectString, ValidationSpecification: (_: any) => de_ModelPackageValidationSpecification(_, context), }) as any; @@ -24248,6 +24250,7 @@ const de_ModelPackage = (output: any, context: __SerdeContext): ModelPackage => SamplePayloadUrl: __expectString, SkipModelValidation: __expectString, SourceAlgorithmSpecification: (_: any) => de_SourceAlgorithmSpecification(_, context), + SourceUri: __expectString, Tags: (_: any) => de_TagList(_, context), Task: __expectString, ValidationSpecification: (_: any) => de_ModelPackageValidationSpecification(_, context), @@ -24266,6 +24269,7 @@ const de_ModelPackageContainerDefinition = (output: any, context: __SerdeContext FrameworkVersion: __expectString, Image: __expectString, ImageDigest: __expectString, + ModelDataSource: (_: any) => de_ModelDataSource(_, context), ModelDataUrl: __expectString, ModelInput: (_: any) => de_ModelInput(_, context), NearestModelName: __expectString, @@ -27016,6 +27020,7 @@ const de_ShuffleConfig = (output: any, context: __SerdeContext): ShuffleConfig = const de_SourceAlgorithm = (output: any, context: __SerdeContext): SourceAlgorithm => { return take(output, { AlgorithmName: __expectString, + ModelDataSource: (_: any) => de_ModelDataSource(_, context), ModelDataUrl: __expectString, }) as any; }; diff --git a/codegen/sdk-codegen/aws-models/sagemaker.json b/codegen/sdk-codegen/aws-models/sagemaker.json index 33152c132628..2a49b050623f 100644 --- a/codegen/sdk-codegen/aws-models/sagemaker.json +++ b/codegen/sdk-codegen/aws-models/sagemaker.json @@ -2883,7 +2883,7 @@ "target": "com.amazonaws.sagemaker#AutoMLMetricEnum", "traits": { "smithy.api#clientOptional": {}, - "smithy.api#documentation": "The name of the objective metric used to measure the predictive quality of a machine\n learning system. During training, the model's parameters are updated iteratively to\n optimize its performance based on the feedback provided by the objective metric when\n evaluating the model on the validation dataset.
 <p>The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.</p> <ul> <li> <p>For tabular problem types:</p> <ul> <li> <p>List of available metrics:</p> <ul> <li> <p>Regression: <code>InferenceLatency</code>, <code>MAE</code>, <code>MSE</code>, <code>R2</code>, <code>RMSE</code></p> </li> <li> <p>Binary classification: <code>Accuracy</code>, <code>AUC</code>, <code>BalancedAccuracy</code>, <code>F1</code>, <code>InferenceLatency</code>, <code>LogLoss</code>, <code>Precision</code>, <code>Recall</code></p> </li> <li> <p>Multiclass classification: <code>Accuracy</code>, <code>BalancedAccuracy</code>, <code>F1macro</code>, <code>InferenceLatency</code>, <code>LogLoss</code>, <code>PrecisionMacro</code>, <code>RecallMacro</code></p> </li> </ul> <p>For a description of each metric, see Autopilot metrics for classification and regression.</p> </li> <li> <p>Default objective metrics:</p> <ul> <li> <p>Regression: <code>MSE</code>.</p> </li> <li> <p>Binary classification: <code>F1</code>.</p> </li> <li> <p>Multiclass classification: <code>Accuracy</code>.</p> </li> </ul> </li> </ul> </li> <li> <p>For image or text classification problem types:</p> <ul> <li> <p>List of available metrics: <code>Accuracy</code></p> <p>For a description of each metric, see Autopilot metrics for text and image classification.</p> </li> <li> <p>Default objective metrics: <code>Accuracy</code></p> </li> </ul> </li> <li> <p>For time-series forecasting problem types:</p> <ul> <li> <p>List of available metrics: <code>RMSE</code>, <code>wQL</code>, <code>Average wQL</code>, <code>MASE</code>, <code>MAPE</code>, <code>WAPE</code></p> <p>For a description of each metric, see Autopilot metrics for time-series forecasting.</p> </li> <li> <p>Default objective metrics: <code>AverageWeightedQuantileLoss</code></p> </li> </ul> </li> <li> <p>For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the <code>AutoMLJobObjective</code> field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see Metrics for fine-tuning LLMs in Autopilot.</p> </li> </ul>",
+          "smithy.api#documentation": "<p>The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.</p> <p>The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.</p> <ul> <li> <p>For tabular problem types:</p> <ul> <li> <p>List of available metrics:</p> <ul> <li> <p>Regression: <code>MAE</code>, <code>MSE</code>, <code>R2</code>, <code>RMSE</code></p> </li> <li> <p>Binary classification: <code>Accuracy</code>, <code>AUC</code>, <code>BalancedAccuracy</code>, <code>F1</code>, <code>Precision</code>, <code>Recall</code></p> </li> <li> <p>Multiclass classification: <code>Accuracy</code>, <code>BalancedAccuracy</code>, <code>F1macro</code>, <code>PrecisionMacro</code>, <code>RecallMacro</code></p> </li> </ul> <p>For a description of each metric, see Autopilot metrics for classification and regression.</p> </li> <li> <p>Default objective metrics:</p> <ul> <li> <p>Regression: <code>MSE</code>.</p> </li> <li> <p>Binary classification: <code>F1</code>.</p> </li> <li> <p>Multiclass classification: <code>Accuracy</code>.</p> </li> </ul> </li> </ul> </li> <li> <p>For image or text classification problem types:</p> <ul> <li> <p>List of available metrics: <code>Accuracy</code></p> <p>For a description of each metric, see Autopilot metrics for text and image classification.</p> </li> <li> <p>Default objective metrics: <code>Accuracy</code></p> </li> </ul> </li> <li> <p>For time-series forecasting problem types:</p> <ul> <li> <p>List of available metrics: <code>RMSE</code>, <code>wQL</code>, <code>Average wQL</code>, <code>MASE</code>, <code>MAPE</code>, <code>WAPE</code></p> <p>For a description of each metric, see Autopilot metrics for time-series forecasting.</p> </li> <li> <p>Default objective metrics: <code>AverageWeightedQuantileLoss</code></p> </li> </ul> </li> <li> <p>For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the <code>AutoMLJobObjective</code> field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see Metrics for fine-tuning LLMs in Autopilot.</p> </li> </ul>"
-        "smithy.api#documentation": "<p>Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions.</p> <p>Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job.</p> <p>To host your model, you create an endpoint configuration with the <code>CreateEndpointConfig</code> API, and then create an endpoint with the <code>CreateEndpoint</code> API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment.</p> <p>For an example that calls this method when deploying a model to SageMaker hosting services, see Create a Model (Amazon Web Services SDK for Python (Boto 3)).</p> <p>To run a batch transform using your model, you start a job with the <code>CreateTransformJob</code> API. SageMaker uses your model and your dataset to get inferences which are then saved to a specified S3 location.</p> <p>In the request, you also provide an IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code access any other Amazon Web Services resources, you grant necessary permissions via this role.</p>",
" + "smithy.api#documentation": "Creates a model in SageMaker. In the request, you name the model and describe a primary\n container. For the primary container, you specify the Docker image that\n contains inference code, artifacts (from prior training), and a custom environment map\n that the inference code uses when you deploy the model for predictions.
\nUse this API to create a model if you want to use SageMaker hosting services or run a batch\n transform job.
\nTo host your model, you create an endpoint configuration with the\n CreateEndpointConfig
API, and then create an endpoint with the\n CreateEndpoint
API. SageMaker then deploys all of the containers that you\n defined for the model in the hosting environment.
To run a batch transform using your model, you start a job with the\n CreateTransformJob
API. SageMaker uses your model and your dataset to get\n inferences which are then saved to a specified S3 location.
In the request, you also provide an IAM role that SageMaker can assume to access model\n artifacts and docker image for deployment on ML compute hosting instances or for batch\n transform jobs. In addition, you also use the IAM role to manage permissions the\n inference code needs. For example, if the inference code access any other Amazon Web Services resources, you grant necessary permissions via this role.
" } }, "com.amazonaws.sagemaker#CreateModelBiasJobDefinition": { @@ -10982,7 +10982,7 @@ "InferenceSpecification": { "target": "com.amazonaws.sagemaker#InferenceSpecification", "traits": { - "smithy.api#documentation": "Specifies details about inference jobs that can be run with models based on this model\n package, including the following:
\nThe Amazon ECR paths of containers that contain the inference code and model\n artifacts.
\nThe instance types that the model package supports for transform jobs and\n real-time endpoints used for inference.
\nThe input and output content formats that the model package supports for\n inference.
+            "smithy.api#documentation": "<p>Specifies details about inference jobs that you can run with models based on this model package, including the following information:</p> <ul> <li> <p>The Amazon ECR paths of containers that contain the inference code and model artifacts.</p> </li> <li> <p>The instance types that the model package supports for transform jobs and real-time endpoints used for inference.</p> </li> <li> <p>The input and output content formats that the model package supports for inference.</p> </li> </ul>"
          }
        },
            "smithy.api#documentation": "<p>Indicates if you want to skip model validation.</p>
" } + }, + "SourceUri": { + "target": "com.amazonaws.sagemaker#ModelPackageSourceUri", + "traits": { + "smithy.api#documentation": "The URI of the source for the model package. If you want to clone a model package,\n set it to the model package Amazon Resource Name (ARN). If you want to register a model,\n set it to the model ARN.
" + } } }, "traits": { @@ -16241,7 +16247,7 @@ "CreationTime": { "target": "com.amazonaws.sagemaker#Timestamp", "traits": { - "smithy.api#documentation": "The creation time.
" + "smithy.api#documentation": "The creation time of the application.
\nAfter an application has been shut down for 24 hours, SageMaker deletes all metadata for the application. To be considered an update and retain application metadata, applications must be restarted within 24 hours after the previous application has been shut down. After this time window, creation of an application is considered a new application rather than an update of the previous application.
\nDetails about inference jobs that can be run with models based on this model\n package.
" + "smithy.api#documentation": "Details about inference jobs that you can run with models based on this model\n package.
" } }, "SourceAlgorithmSpecification": { @@ -21211,6 +21217,12 @@ "traits": { "smithy.api#documentation": "Indicates if you want to skip model validation.
" } + }, + "SourceUri": { + "target": "com.amazonaws.sagemaker#ModelPackageSourceUri", + "traits": { + "smithy.api#documentation": "The URI of the source for the model package.
" + } } }, "traits": { @@ -26101,7 +26113,7 @@ "traits": { "smithy.api#length": { "min": 0, - "max": 16 + "max": 100 } } }, @@ -32684,7 +32696,7 @@ } }, "traits": { - "smithy.api#documentation": "The configuration for the file system and kernels in a SageMaker image running as a JupyterLab app.
" + "smithy.api#documentation": "The configuration for the file system and kernels in a SageMaker image running as a JupyterLab app. The FileSystemConfig
object is not supported.
Metrics that measure bais in a model.
" + "smithy.api#documentation": "Metrics that measure bias in a model.
" } }, "Explainability": { @@ -43196,6 +43208,12 @@ "smithy.api#documentation": "An array of additional Inference Specification objects.
" } }, + "SourceUri": { + "target": "com.amazonaws.sagemaker#ModelPackageSourceUri", + "traits": { + "smithy.api#documentation": "The URI of the source for the model package.
" + } + }, "Tags": { "target": "com.amazonaws.sagemaker#TagList", "traits": { @@ -43276,6 +43294,12 @@ "smithy.api#documentation": "The Amazon S3 path where the model artifacts, which result from model training, are stored.\n This path must point to a single gzip
compressed tar archive (<code>.tar.gz</code> suffix).</p> <note> <p>The model artifacts must be in an S3 bucket that is in the same region as the model package.</p> </note>",
\nSpecifies the location of ML model data to deploy during endpoint creation.
" + } + }, "ProductId": { "target": "com.amazonaws.sagemaker#ProductId", "traits": { @@ -43529,6 +43553,16 @@ } } }, + "com.amazonaws.sagemaker#ModelPackageSourceUri": { + "type": "string", + "traits": { + "smithy.api#length": { + "min": 0, + "max": 1024 + }, + "smithy.api#pattern": "^[\\p{L}\\p{M}\\p{Z}\\p{N}\\p{P}]{0,1024}$" + } + }, "com.amazonaws.sagemaker#ModelPackageStatus": { "type": "enum", "members": { @@ -56715,6 +56749,12 @@ "smithy.api#documentation": "The Amazon S3 path where the model artifacts, which result from model training, are stored.\n This path must point to a single gzip
compressed tar archive (<code>.tar.gz</code> suffix).</p> <note> <p>The model artifacts must be in an S3 bucket that is in the same Amazon Web Services region as the algorithm.</p> </note>",
\nSpecifies the location of ML model data to deploy during endpoint creation.
" + } + }, "AlgorithmName": { "target": "com.amazonaws.sagemaker#ArnOrName", "traits": { @@ -63287,7 +63327,7 @@ } ], "traits": { - "smithy.api#documentation": "Updates the feature group by either adding features or updating the online store\n configuration. Use one of the following request parameters at a time while using the\n UpdateFeatureGroup
API.</p> <p>You can add features for your feature group using the <code>FeatureAdditions</code> request parameter. Features cannot be removed from a feature group.</p> <p>You can update the online store configuration by using the <code>OnlineStoreConfig</code> request parameter. If a <code>TtlDuration</code> is specified, the default <code>TtlDuration</code> applies for all records added to the feature group after the feature group is updated. If a record level <code>TtlDuration</code> exists from using the <code>PutRecord</code> API, the record level <code>TtlDuration</code> applies to that record instead of the default <code>TtlDuration</code>.</p>",
Updates the feature group by either adding features or updating the online store\n configuration. Use one of the following request parameters at a time while using the\n UpdateFeatureGroup
API.
You can add features for your feature group using the FeatureAdditions
\n request parameter. Features cannot be removed from a feature group.
You can update the online store configuration by using the\n OnlineStoreConfig
request parameter. If a TtlDuration
is\n specified, the default TtlDuration
applies for all records added to the\n feature group after the feature group is updated. If a record level\n TtlDuration
exists from using the PutRecord
API, the record\n level TtlDuration
applies to that record instead of the default\n TtlDuration
. To remove the default TtlDuration
from an\n existing feature group, use the UpdateFeatureGroup
API and set the\n TtlDuration
\n Unit
and Value
to null
.
An array of additional Inference Specification objects to be added to the \n existing array additional Inference Specification. Total number of additional \n Inference Specifications can not exceed 15. Each additional Inference Specification \n specifies artifacts based on this model package that can be used on inference endpoints. \n Generally used with SageMaker Neo to store the compiled artifacts.
" } + }, + "InferenceSpecification": { + "target": "com.amazonaws.sagemaker#InferenceSpecification", + "traits": { + "smithy.api#documentation": "Specifies details about inference jobs that you can run with models based on this model\n package, including the following information:
 <ul> <li> <p>The Amazon ECR paths of containers that contain the inference code and model artifacts.</p> </li> <li> <p>The instance types that the model package supports for transform jobs and real-time endpoints used for inference.</p> </li> <li> <p>The input and output content formats that the model package supports for inference.</p> </li> </ul>"
          }
+        },
+        "SourceUri": {
+          "target": "com.amazonaws.sagemaker#ModelPackageSourceUri",
+          "traits": {
+            "smithy.api#documentation": "<p>The URI of the source for the model package.</p>
" + } } }, "traits": { @@ -65573,7 +65625,7 @@ "Key": { "target": "com.amazonaws.sagemaker#VisibilityConditionsKey", "traits": { - "smithy.api#documentation": "The key that specifies the tag that you're using to filter the search results. It must be in the following format: Tags.
.
The key that specifies the tag that you're using to filter the search results. It must be in the following format: Tags.
.
The list of key-value pairs that you specify for your resources.
" + "smithy.api#documentation": "The list of key-value pairs used to filter your search results. If a search result contains a key from your list, it is included in the final search response if the value associated with the key in the result matches the value you specified. \n If the value doesn't match, the result is excluded from the search response. Any resources that don't have a key from the list that you've provided will also be included in the search response.
" } }, "com.amazonaws.sagemaker#VisibilityConditionsKey": {