
Prerequisites

Ensure you have the latest Azure CLI and Azure Machine Learning CLI installed

Follow the instructions at this link: https://docs.microsoft.com/en-us/azure/machine-learning/service/reference-azure-machine-learning-cli
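The Machine Learning CLI commands used below come from the azure-cli-ml extension. If you already have the Azure CLI installed, a typical way to add it is:

az extension add -n azure-cli-ml

You can confirm the commands are available by running az ml -h.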

Setup Your Development Environment

Clone this repo into a folder on your development machine and open the folder in your favorite Python IDE, such as Visual Studio Code. If using Visual Studio Code and Anaconda, activate your Python environment in a new terminal window. Be sure to select the same Anaconda Python environment in Visual Studio Code by opening the Command Palette (Ctrl-Shift-P) and typing "Python: Select Interpreter" to select your desired Python environment (whether using Anaconda or any other Python distribution).
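For example, assuming a conda environment named myenv (a hypothetical name):

conda activate myenv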

We strongly recommend reading through all of the instructions before proceeding.


Train and Deploy from Scratch using Iris Dataset and Remote Compute

This walkthrough tutorial shows the happy path for using the AzureML CLI to train and deploy a model.

Create a new workspace associated with a pre-existing resource group

az ml workspace create -n <workspace name> -g <resource group> -l <azure region>
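For example, with hypothetical names:

az ml workspace create -n myworkspace -g myresourcegroup -l eastus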

Attach current working directory to that workspace

az ml folder attach -w <workspace name> -g <resource group>
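Using the same hypothetical names:

az ml folder attach -w myworkspace -g myresourcegroup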

Create a low-priority AzureML compute target

az ml computetarget create amlcompute ^
--max-nodes 1 ^
--name <desired name of compute target> ^
--vm-size STANDARD_D2_V2 ^
--resource-group <name of your resource group> ^
--vm-priority lowpriority ^
--workspace <name of your workspace>
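For example, with the hypothetical names used above and a hypothetical compute target name of cpu-cluster:

az ml computetarget create amlcompute --max-nodes 1 --name cpu-cluster --vm-size STANDARD_D2_V2 --vm-priority lowpriority -g myresourcegroup -w myworkspace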

After running this command, you should see the following output:

Provisioning compute resources...
Resource creation submitted successfully

You can check that it has been created by running the following command:

az ml computetarget list -w <workspace name> -g <resource group name>

Note: You will need to edit aml_compute.runconfig in the .azureml folder, setting the "target" parameter to the name of your compute target:

# The name of the compute target to use for this run.
target: <Insert your compute target name here (case sensitive)> 
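For example, if your compute target is named cpu-cluster (the hypothetical name used above), the line would read:

target: cpu-cluster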

Create a Python script run

First, we will need to write a script that we want to execute on the remote compute target that we just created. You can write your own script using Visual Studio Code or your IDE of choice and save it in the root directory from which you are running your CLI commands. You can also just use the train.py script included in this repo.
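For orientation, here is a minimal sketch of what such a training script might look like. It is not the exact train.py from this repo, but it follows the same pattern: train a scikit-learn classifier on the iris dataset and write model.pkl to the outputs folder, which AzureML captures as a run artifact.

import os
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Load the iris dataset and fit a simple classifier.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Anything written to ./outputs is uploaded with the run by AzureML.
os.makedirs("outputs", exist_ok=True)
joblib.dump(model, "outputs/model.pkl")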

When you attached the current working directory to a workspace, a .azureml folder should have been autogenerated. In this folder, you should find a conda_dependencies.yml file. You can add dependencies to this as needed. In this folder, you should also find two .runconfig files. These should serve you as examples for creating your own .runconfig file that specifies the remote compute target that you created earlier. Feel free to use the aml_compute.runconfig file in this repo.
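As a rough sketch, a conda_dependencies.yml for the iris example might contain something like the following. The autogenerated file in .azureml is the authoritative starting point; scikit-learn here is an assumption based on the training script above.

dependencies:
  - python=3.6
  - scikit-learn
  - pip:
    - azureml-defaults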

Now, run the following command in the CLI to kick off the script run on your remote compute target:

az ml run submit-script -c <name of your .runconfig file*> -e <name of experiment**> <path to the script that you want to run>

*Note: when passing the .runconfig parameter, do not include the file extension.
**Note: if you provide an experiment name that does not already exist in the workspace, a new experiment will be created.

For example, if using the runconfig file provided, your CLI command would look like this:

az ml run submit-script -c aml_compute -e ...
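A fully filled-in invocation, using the train.py script from this repo and a hypothetical experiment name, might look like this:

az ml run submit-script -c aml_compute -e myexperiment train.py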

Navigate to the portal to see your experiment running

You can now see your experiment running at ms.portal.azure.com under your workspace. Once the run is complete, you should see a model.pkl file in the outputs folder (if you used the provided train.py script).

Register your model

Register the model from the script run in the model registry for your workspace

az ml model register --name <name_you_want_to_give_model> --experiment-name <name of your experiment> --run-id <run_id_from_portal> --asset-path <path to your model in the experiment*>

*Note: in this case, --asset-path should be outputs/model.pkl (if you used the train.py script provided)
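For example, with hypothetical names (the run ID still comes from the portal):

az ml model register --name iris-model --experiment-name myexperiment --run-id <run_id_from_portal> --asset-path outputs/model.pkl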

Check to see if the model was registered correctly

You can check in the portal under the "Models" tab in your workspace, or by using the following CLI command:

az ml model list

Deploy the registered model

Before you can deploy the model as a webservice, you need to create:

  • A scoring script:
    • This script must have an init() function that loads the model and a run() function that takes in JSON data as a parameter and returns the prediction in JSON format.
    • You can create your own scoring script or use the score.py file provided in this repo (a minimal sketch of a scoring script appears after this list).
      • Note: if you use the score.py script provided, update line 13 with the name of your registered model.
  • An inferenceconfig.json file:
    • This specifies the name of your scoring script and any dependencies (a sketch also appears after this list).
    • You can create your own inferenceconfig.json file or use the inferenceconfig.json file provided in this repo.
  • A deploymentconfig.json file:
    • This specifies the metadata of your deployment.
    • You can create your own deploymentconfig.json file or use the deploy.json file provided in this repo.
  • An AKS cluster to use for deployment:
az ml computetarget create aks ^
--name <name for your aks cluster> ^
--location <location you want it to reside in> ^
--resource-group <name of your resource group> ^
--workspace-name <the workspace you are working in>

Make sure to update line 3 in .azureml/deploy.json with the name of your newly created compute target.
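For reference, here is a minimal sketch of a scoring script. It is not the exact score.py from this repo, but it shows the required init()/run() shape ("iris-model" is the hypothetical registered model name used above):

import json
import joblib
import numpy as np
from azureml.core.model import Model

def init():
    # Runs once when the service starts: locate and load the registered model.
    global model
    model_path = Model.get_model_path("iris-model")  # hypothetical model name
    model = joblib.load(model_path)

def run(raw_data):
    # Runs per request: parse the incoming JSON, predict, and return JSON.
    data = np.array(json.loads(raw_data)["data"])
    predictions = model.predict(data)
    return json.dumps(predictions.tolist())

And a sketch of an inferenceconfig.json, using field names from the documented --inference-config-file schema (the conda file path is an assumption based on this repo's layout):

{
    "entryScript": "score.py",
    "runtime": "python",
    "condaFile": ".azureml/conda_dependencies.yml"
}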

Once you have all of these things, run this CLI command:

az ml model deploy  ^
--name <name that you want to give your webservice> ^
--model <name of registered model>:<model version number> ^
--inference-config-file <path to inferenceconfig.json> ^
--deploy-config-file <path to deploymentconfig.json> ^
--compute-target <name of your aks cluster>
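A fully filled-in invocation with the hypothetical names used above might look like this:

az ml model deploy --name iris-svc --model iris-model:1 --inference-config-file inferenceconfig.json --deploy-config-file .azureml/deploy.json --compute-target myakscluster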

The output from this command includes the scoring URI of your deployed webservice.

Once you have your scoring URI, you can validate your deployed service via a POST request (a minimal request sketch appears after the list below).

  • To get keys for sending requests, run the following command:
    az ml service get-keys --name <name of service>
  • Use the following data for a test (if you used the iris data set):
    {"data": [[5.1, 3.5, 1.4, 0.2],
              [4.9, 3.0, 1.4, 0.2],
              [4.7, 3.2, 1.3, 0.2], 
              [6.5, 3.0, 5.2, 2.0],
              [6.2, 3.4, 5.4, 2.3]]
    }
    And expect to get the following output:
    [
        0,
        0,
        0,
        2,
        2
    ]
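Putting this together, here is a minimal request sketch using the Python requests library. The scoring URI and key placeholders must be replaced with your own values:

import requests

scoring_uri = "<your scoring URI>"   # from the deployment output
key = "<your service key>"           # from az ml service get-keys

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + key,
}
payload = {"data": [[5.1, 3.5, 1.4, 0.2],
                    [6.2, 3.4, 5.4, 2.3]]}

response = requests.post(scoring_uri, json=payload, headers=headers)
print(response.json())  # e.g. [0, 2] for these two iris samples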
    

You can also check the status of your deployed model with the following CLI command:

az ml service show --name <name of service>