Support all NeptuneML API command parameters in neptune_ml magics, accept unified JSON blob for parameter input #202
Issue #, if available: #65, #103, #187
Description of changes:
JSON parameter blob usage: pass the unified JSON blob to the %%neptune_ml cell magic command using the cell variable injection syntax. Re-use it with as many steps as needed.

New %neptune_ml arguments:

%neptune_ml dataprocessing:

--prev-job-id - The job ID of a completed data processing job run on an earlier version of the data.
--instance-type - The type of ML instance used during data processing.
--instance-volume-size-in-gb - The disk volume size of the processing instance.
--timeout-in-seconds - Timeout in seconds for the data processing job.
--model-type - Heterogeneous graph model (heterogeneous) or knowledge graph model (kge).

%neptune_ml training:

--max-hpo-number - Maximum total number of training jobs to start for the hyperparameter tuning job.
--max-hpo-parallel - Maximum number of parallel training jobs to start for the hyperparameter tuning job.
--prev-job-id - The job ID of a completed model-training job that you want to update incrementally based on updated data.
--model-name - The model type for training. If not specified, the model-training job will use the same modelType used in the data processing step.
--base-processing-instance-type - The type of ML instance used in preparing and managing training of ML models.
--instance-volume-size-in-gb - The disk volume size of the training instance.
--timeout-in-seconds - Timeout in seconds for the training job.

%neptune_ml modeltransform:
--job-id - A unique identifier for the new job.
--s3-output-uri - The URI of the S3 bucket/location to store your transform result.
--data-processing-job-id - The job ID of a completed data-processing job.
--model-training-job-id - The job ID of a completed model-training job.
--training-job-name - The name of a completed SageMaker training job.
--base-processing-instance-type - The type of ML instance used in preparing and managing training of ML models.
--base-processing-instance-volume-size-in-gb - The disk volume size of the new training instance.

Note that you must now specify either:
a) --data-processing-job-id AND --model-training-job-id
b) --training-job-name
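The either/or requirement above could be enforced client-side along these lines. This is a minimal illustrative sketch, not the actual graph-notebook implementation; the function and parameter names are assumptions:

```python
def validate_modeltransform_args(data_processing_job_id=None,
                                 model_training_job_id=None,
                                 training_job_name=None):
    """Illustrative check of the modeltransform either/or rule:
    (a) both --data-processing-job-id and --model-training-job-id, or
    (b) --training-job-name, but not both forms and not neither."""
    has_job_ids = (data_processing_job_id is not None
                   and model_training_job_id is not None)
    has_job_name = training_job_name is not None
    if has_job_ids == has_job_name:  # neither form, or both forms, supplied
        raise ValueError("Specify either --data-processing-job-id AND "
                         "--model-training-job-id, or --training-job-name.")
    return "job-ids" if has_job_ids else "training-job-name"
```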
%neptune_ml endpoint:

--model-training-job-id - The job ID of a completed model-training job.
--model-transform-job-id - The job ID of a completed model-transform job.
--update - Indicates that this is an update request.
--model-name - Model type that was used for training.
--instance-type - The type of ML instance used.
--instance-count - The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction.
--neptune-iam-role-arn, --volume-encryption-kms-key - See "Shared Security Parameters" below.

Note that you must now specify either:
a) --model-training-job-id
b) --model-transform-job-id
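For illustration, a unified blob covering several of the steps above might be built once and injected into each cell. This is a hedged sketch: the key names simply mirror the flag names listed above, and the exact schema accepted by the magic is defined by this PR, not by this example:

```python
import json

# Hypothetical unified parameter blob; keys mirror the CLI flags above.
# Instance types and values are placeholders, not recommendations.
params = {
    "dataprocessing": {
        "instance_type": "ml.r5.xlarge",
        "timeout_in_seconds": 86400,
        "model_type": "heterogeneous",
    },
    "training": {
        "max_hpo_number": 2,
        "max_hpo_parallel": 2,
        "instance_volume_size_in_gb": 50,
    },
    "endpoint": {
        "instance_type": "ml.m5.xlarge",
        "instance_count": 1,
    },
}

# Serialized once, the same blob can be re-used by as many
# %%neptune_ml cells as needed via cell variable injection.
blob = json.dumps(params)
```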
Shared security parameters for dataprocessing, training, and modeltransform:

--sagemaker-iam-role-arn - The ARN of an IAM role for SageMaker execution.
--neptune-iam-role-arn - The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources.
--subnets - The IDs of the subnets in the Neptune VPC.
--security-group-ids - The VPC security group IDs.
--volume-encryption-kms-key - The Key Management Service (KMS) key used by SageMaker to encrypt data on the storage volume attached to the ML compute instances that run the job.
--s3-output-encryption-kms-key - The KMS key that SageMaker uses to encrypt the output of the job.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.