.. toctree::
  :maxdepth: 2
  :caption:     Contents
  :hidden:

  index

Xi IoT - Quick Start for AI Inference

Xi IoT Overview

The Nutanix Xi IoT platform delivers local compute and AI for IoT edge devices, converging the edge and cloud into one seamless data processing platform. The Xi IoT platform eliminates complexity, accelerates deployments, and elevates developers to focus on the business logic powering IoT applications and services. Now developers can use a low-code development platform to create application software via APIs instead of arduous programming methods.

Xi IoT Trial

This Xi IoT Quick Start leverages the Xi IoT Trial. The trial is a limited-time, ready-to-deploy implementation of the Xi IoT edge computing platform. The Xi IoT Trial provides pre-built applications and data connectors hosted on its own infrastructure. This instant architecture demonstrates how to quickly develop and test IoT applications in the cloud for seamless deployment to the edge.

Nutanix has already created the basic infrastructure you need to get started.

What's In the Xi IoT Trial?

  1. Xi IoT management console, which provides the base for your Xi IoT trial.
  2. A Starter project that includes:
    • You (the project user).
    • Xi Edge stack, connected and ready to go: no cluster or bare-metal resources required on your part.
    • YouTube-8M application, just waiting for your YouTube-8M video URL.
    • Xi IoT Sensor smartphone app, if you want to use your own video instead of YouTube-8M.

What Can I Do with the Xi IoT Trial?

  • Stream a YouTube-8M video or video from your smartphone to the Xi Cloud edge.
  • Automatically run containerized apps at the edge to perform object recognition on your video.
  • Stream the results back to the Xi IoT console or your smartphone, with recognized objects highlighted in your video.

Signing Up For the Xi IoT Trial

Do one of the following to sign up for the Xi IoT Trial.

  1. Click Start Trial at https://www.nutanix.com/products/iot/ or https://iot.nutanix.com.
  2. Sign up now for a My Nutanix account at https://my.nutanix.com.
  3. If you already have an account, log on to https://my.nutanix.com with your existing account credentials and click Learn More in the Xi IoT panel.

Support for and Learning More About Xi IoT

Support for the Xi IoT Trial is available primarily through the Nutanix Next Xi IoT trial forum. Nutanix asks that you share your experiences and lessons learned with your fellow users.

You can also visit the following pages for more information about Xi IoT.

Getting Started With the Xi IoT Trial

  1. Log on to the Xi IoT management console.
  2. Have a YouTube-8M URL handy or create and upload a video from your smartphone.
  3. On your smartphone, download the Xi IoT Sensor app (available from the Google Play Store or Apple App Store).
Logging On to the Xi IoT Console

Before you begin:

Supported web browsers include the current and two previous versions of Google Chrome. You'll need your My Nutanix credentials for this step.

  1. Open https://iot.nutanix.com/ in a web browser, click Log in with My Nutanix and log on with your My Nutanix credentials.
  2. If you are logging on for the first time, click to read the Terms and Conditions, then click to Accept and Continue.
  3. Take a few moments to read about Xi IoT, then click Get Started.

Your web browser displays the Xi IoT dashboard and the Xi IoT Quick Start Menu.

Xi IoT Quick Start Menu

The Xi IoT management console includes a Quick Start menu next to your user name. You can click Quick Start, then click the links to:

  1. See object detection in action by using a YouTube-8M video.
  2. Try object detection on your phone.
  3. Invite your colleagues to try out Xi IoT.
  4. Edit a data pipeline.
  5. Create an application.

This tutorial covers the same features as the Quick Start menu, but does not follow its steps exactly. Continue with the steps below.

Using the Xi IoT Sensor App to Detect Objects in Your Smartphone Video

About this task

Connect your Android- or iOS-based phone as a data source to stream video and perform object detection in near real time using Xi IoT. Output can be viewed on your phone and from an HTTP Live Stream (HLS) in your browser.

  1. If you are not logged on, open https://iot.nutanix.com/ in a web browser and log in.

  2. Connect your phone through the Quick Start menu.

    1. Click Quick Start, then click Pair your smartphone now to connect a phone.

       .. figure:: contents/images/image5.png

          Figure 1: Quick Start: Phone as Data Source

    2. Open the Google Play Store or App Store on your phone, search for Xi IoT Sensor, and install the app on your phone.
    3. After downloading and installing the Xi IoT Sensor app, scan the QR code to authenticate (Android users), or log in with your My Nutanix credentials (iPhone).
    4. Enter a name for your phone, then click Next.
  3. From the Xi IoT management portal, click :fa:`bars` > Apps and Data > Data Pipelines.

    The phone-object-detection data pipeline tile should show Status: Healthy.

  4. If it shows Status: Stopped, click Actions, then Start.

  5. Open the Xi IoT Sensor app on your phone, click Capture Video, and wait for up to 30 seconds for the inference engine to start.

  6. Switch to the phone-object-detection tab to view the results. Point your phone's camera around the room to identify objects in near real time!

  7. From the Xi IoT management portal, click View Http Live Stream on the phone-object-detection data pipeline tile. This opens HLS output for viewing the results in your browser.

  8. Click :fa:`remove` to close the HLS page.

  9. From the Xi IoT Sensor app, press :fa:`stop` to stop capturing video.

  10. From the Xi IoT management portal, click Actions, then Stop, then Stop again on the phone-object-detection data pipeline tile.

Using the Xi IoT App Library to Detect Objects in a YouTube-8M Video

About this task

Use a YouTube-8M video to demonstrate object recognition in Xi IoT. We recommend a short video showing city scenes, drone footage, or a sporting event.

  1. If you are not logged on, open https://iot.nutanix.com/ in a web browser and log in.

  2. Click :fa:`bars` > Apps and Data > Kubernetes Apps.

  3. On the youtube-8m-object-detection-app application tile, click Actions, then Start, then Start again.

  4. Click View App UI.

    Note

    It may take a few moments for the application to initialize. If the App UI window launches to a "This site can’t be reached" error, wait a few more moments and refresh the page.

  5. Copy and paste the following YouTube-8M URL in the field, then press play.

    https://www.youtube.com/watch?v=HqqsJkonXsA

    Note

    It may take a few moments for the video stream to initialize. As the video plays in the video panel, the object detection software highlights the objects it has detected.

    .. figure:: contents/images/image6.png

       Figure 2: App UI - Detecting Objects

  6. Click :fa:`stop` beside the URL, then copy and paste another video URL to try it again!

  7. Close the App UI tab.

  8. Back on the youtube-8m-object-detection-app application tile, click Actions, then click Stop, then click Stop again.

Using Xi IoT Data Pipelines to Detect Objects in a YouTube-8M Video

About this task

Use data pipelines and two YouTube-8M videos to demonstrate object detection using only Python code in Xi IoT.

  1. If you are not logged on, open https://iot.nutanix.com/ in a web browser and log in.

  2. Click :fa:`bars` > Apps and Data > Data Pipelines.

  3. On the youtube-8m-object-detection data pipeline tile, click Actions, then Start, then Start again.

  4. Now click View Http Live Stream to view object detection via HLS output.

  5. After viewing the output, click :fa:`remove` to close the HLS page.

  6. On the same youtube-8m-object-detection data pipeline tile, click Actions, then Edit.

    Note

    The pipeline's input data source is selected using a category named youtube-8m with a value of channel1. Like all Xi IoT data pipeline inputs, YouTube-8M data sources are selected dynamically using categories. This dynamic nature makes adding and changing data sources at scale much easier than if they were static. It also means that data sources can be easily changed without changes to transformation code.

    .. figure:: contents/images/image7.png

       Figure 3: Data Pipeline: Input by channel1 category

    Note

    The pipeline executes a transformation function named objdetect_func-python. This is a sample Python function that uses a TensorFlow-based SSD Inception v2 model, trained on the COCO dataset, that is embedded in the Xi IoT TensorFlow Python runtime. (A minimal sketch of what such a function looks like follows this procedure.)

  7. Click :fa:`remove` to close the data pipeline without making any changes.
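
The transformation contract itself is small. The following is a minimal sketch of what a function like objdetect_func-python might look like, assuming the conventional Xi IoT Python runtime entry point in which the platform calls main(ctx, msg) once per message and ctx.send() forwards the result to the pipeline output; the logging and placeholder processing are illustrative only, not taken from the sample function::

    import logging

    def main(ctx, msg):
        # msg is the raw payload delivered by the pipeline input,
        # for example a single video frame from the youtube-8m data source.
        frame = msg
        logging.info("received %d bytes", len(frame))

        # ... run detection and annotate the frame here ...

        # Forward the (possibly annotated) frame to the pipeline output,
        # for example the HTTP Live Stream shown earlier.
        ctx.send(frame)
        return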

In Xi IoT, categories let you assign attributes to edges and data sources, which can then be used to query and select them when creating Data Pipelines or deploying Kubernetes Apps.

An example of a category could be “City” with values in [San Francisco, San Jose, San Diego] or “State” with values in [California, Washington, Oregon] and so on. It can be anything meaningful to your environment.

In the next steps, you'll add a new category for assignment to a new YouTube-8M channel, add the channel to the YouTube-8M data source, and modify the data pipeline to use this new channel.

  1. Click :fa:`bars` > Administration > Categories.

  2. Click the :fa:`check` box beside the youtube-8m Category, then click Edit.

  3. Click :fa:`plus` Add Value, enter channel2, click the round, blue :fa:`check`, then click Update.

  4. Click :fa:`bars` > Infrastructure > Data Sources.

  5. Click the :fa:`check` box beside the youtube-8m data source, then click Edit.

  6. Click Add New URL, enter youtube-8m-2 in the Name field, copy and paste https://www.youtube.com/watch?v=PYbrTRE1bZg into the URL field, click the round, blue :fa:`check`, then click Next.

    The URL Extraction page should look like the figure below.

    .. figure:: contents/images/image8.png

       Figure 4: Two YouTube-8M URLs

  7. On the Category Assignment page, click inside the Select Fields dropdown and choose Select Fields....

  8. From the Select Fields dialog, click the :fa:`check` box beside the youtube-8m-1 field to select it, then click OK.

    .. figure:: contents/images/image9.png

       Figure 5: Category Assignment: Select youtube-8m-1 field

  9. Click Add... to add a category assignment for the youtube-8m-2 field just created.

  10. Click inside the Select Fields dropdown of the newly added category assignment and choose Select Fields....

  11. From the Select Fields dialog, click the :fa:`check` box beside the youtube-8m-2 field to select it and click OK.

  12. Click inside the first (left) Attribute dropdown of the newly added category assignment and choose youtube-8m.

  13. Click inside the second (right) Attribute dropdown of the newly added category assignment and choose channel2.

    .. figure:: contents/images/image10.png

       Figure 6: Category Assignment: YouTube-8M categories assigned

  14. Click Update to update the data source with the new channel.

  15. Click :fa:`bars` > Apps and Data > Data Pipelines.

  16. On the youtube-8m-object-detection data pipeline tile, click Actions, then Edit.

  17. In the Input section, click inside the second (right) Select by Categories dropdown and change channel1 to channel2.

    .. figure:: contents/images/image11.png

       Figure 7: Data Pipeline: Input by channel2 category

  18. Click Update to update the data pipeline with the new input data source category.

    The data pipeline will automatically update to use the YouTube-8M stream created and assigned the channel2 category as the new data source.

  19. To verify the new stream is being used, click View Http Live Stream on the youtube-8m-object-detection data pipeline tile to view object detection via HLS output.

  20. Click :fa:`remove` to close the HLS page.

  21. On the youtube-8m-object-detection data pipeline tile, click Actions, then Stop, then Stop again to stop the data pipeline.

Using Xi IoT Input and Output Connectors for Kubernetes Apps

About this task

Learn more about using your phone or a YouTube-8M video as a data source, and an HTTP Live Stream as output, when writing your own applications for Xi IoT by exploring the echoapp sample application provided in the Application Library.

  1. If you are not logged on, open https://iot.nutanix.com/ in a web browser and log in.

  2. Click :fa:`bars` > Apps and Data > Kubernetes Apps.

  3. On the echoapp application tile, click Actions, then Edit.

    The General Information page displays information about the application, such as its Name, Description, the Project it's assigned to, and the edges on which it's assigned to run.

  4. Click Next.

    The Yaml Configuration page lists the application pod's specification YAML in Kubernetes format.

  5. Click Next.

    The Input and Output page provides the option to use a YouTube-8M video or the Xi IoT Sensor phone app as input and an HTTP Live Stream (HLS) as output for applications. Simply check the appropriate boxes and install a NATS client within your application. The selected input is available on the NATS topic name stored in the NATS_SRC_TOPIC environment variable; subscribe to it using the NATS server name stored in the NATS_ENDPOINT environment variable. Application output in JPEG format sent to the topic name stored in NATS_DST_TOPIC is available via the application's HTTP Live Stream. (A minimal NATS sketch follows this procedure.)

  6. Use one of the YouTube-8M sample (or your own) videos, or the Xi IoT Sensor phone app to demonstrate. Choose phone or youtube-8m in the Type of Input dropdown, and the channel as appropriate in the Field dropdown.

  7. Click Update.

  8. Click Actions, then Start, then Start again on the echoapp application tile to start the application.

  9. Click View Http Live Stream on the echoapp application tile to view the application's HLS output.

  10. Click :fa:`remove` to close the HLS page.

  11. Click Actions, then Stop, then Stop again on the echoapp application tile to stop the application.
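
As a rough illustration of the input and output contract described above, the sketch below subscribes to the input topic and republishes each frame to the output topic. It assumes the nats-py client (any NATS client library should work), treats every message payload as a single JPEG frame, and assumes the endpoint can be used directly as a server address; none of these specifics come from the echoapp sample itself::

    import asyncio
    import os

    import nats  # nats-py client, installed inside the application container

    async def run():
        endpoint = os.environ["NATS_ENDPOINT"]     # NATS server name
        src_topic = os.environ["NATS_SRC_TOPIC"]   # selected input (phone or youtube-8m)
        dst_topic = os.environ["NATS_DST_TOPIC"]   # feeds the app's HTTP Live Stream

        nc = await nats.connect("nats://" + endpoint)

        async def handle_frame(msg):
            jpeg_frame = msg.data
            # ... transform or annotate the JPEG frame here ...
            await nc.publish(dst_topic, jpeg_frame)

        await nc.subscribe(src_topic, cb=handle_frame)

        # Keep the application running so it continues to receive frames.
        while True:
            await asyncio.sleep(1)

    if __name__ == "__main__":
        asyncio.run(run())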

Using the Xi IoT AI Inferencing Service to Detect Objects

About this task

Use data pipelines and a YouTube-8M video to demonstrate object detection using the Xi IoT AI Inferencing Service.

  1. If you are not logged on, open https://iot.nutanix.com/ in a web browser and log in.

  2. Click :fa:`bars` > Apps and Data > Data Pipelines.

  3. On the ai-inference-service-demo data pipeline tile, click Actions, then Edit.

    This data pipeline will look very similar to the youtube-8m-object-detection pipeline used in the earlier exercise. However, there's one major difference. Take notice of the transformation function used in this pipeline. It's named ml_objectdetect_func-python.

    To better understand how the AI Inferencing Service works, first take a look at the objdetect_func-python used in the youtube-8m-object-detection data pipeline in the earlier exercise, then compare it to the ml_objectdetect_func-python function.

  4. Click :fa:`remove` to close the data pipeline without making any changes.

  5. Click :fa:`bars` > Apps and Data > Functions.

  6. Click the :fa:`check` box beside the objdetect_func-python function, then click Edit.

    Notice that the function is written in Python and uses the Xi IoT TensorFlow Python runtime.

  7. Click Next.

    The function's Python code is now displayed. Take notice of lines 16-19, excerpted below:

    BASE_PATH = "/mllib/objectdetection"
    
    # ssd_inception_v2_coco   latency - 42ms
    PATH_TO_CKPT = BASE_PATH + '/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb'

    As mentioned in the earlier exercise, the SSD Inception v2 model is embedded in the Xi IoT TensorFlow Python runtime. This is fine for an example, but not suitable for production deployments; for example, the model cannot be updated.

    Now compare this Python code to the ml_objectdetect_func-python function used in the ai-inference-service-demo data pipeline.

  8. Click :fa:`remove` to close the function without making any changes.

  9. Uncheck the :fa:`check` box beside the objdetect_func-python function, click the :fa:`check` box beside the ml_objdetect_func-python function, then click Edit.

    Notice that this function is also written in Python and uses the Xi IoT TensorFlow Python runtime.

  10. Click Next.

    The function's Python code is now displayed. This time, first take notice of line 23:

    ai_inference_endpoint = os.environ['AI_INFERENCE_ENDPOINT']

    Object detection, or inference, will now be performed by the Inferencing Service, so the function must know the service endpoint to submit requests to at the edge. As you can see, it's automatically passed to the runtime as an environment variable.

    Now take notice of lines 153-170 excerpted below:

    def detect(image):
       image_np = np.asarray(image, dtype="int32")
       image_np_expanded = np.expand_dims(image_np, axis=0)
       data = json.dumps({"signature_name": "serving_default",
                         "instances": image_np_expanded.tolist()})
       headers = {"content-type": "application/json"}
       model_name = "objectdetect"
       model_version = 1
       url = "http://%s/v1/models/%s/versions/%d:infer" % (
          ai_inference_endpoint, model_name, model_version)
       response = requests.post(url, data=data, headers=headers)
       if response.status_code != 200:
          logging.error(response.json())
          return None
       text = response.text
       inference_payload = json.loads(text)
       predictions = inference_payload['predictions']
       return predictions[0]

    This excerpt shows the detect function utilizing the Inferencing Service. Of particular note are lines 159-161, where model_name is set to objectdetect, model_version is set to 1, and the connection url is built with the ai_inference_endpoint (remember, this was passed automatically).

    For this example, the Inferencing Service has already been pre-loaded with the same SSD Inception v2 model, but this time trained using the Open Images dataset.

  11. Click :fa:`remove` to close the function without making any changes.

  12. View how this model and others can be managed by clicking :fa:`bars` > Apps and Data > ML Models.

  13. Click the :fa:`check` box beside the objectdetect model, then click Edit.

    Example model version 1 is listed along with the Tensorflow Framework Type. A new version of the model could be uploaded by clicking :fa:`plus` Add new. This version could then be referenced in functions similarly to line 160 in the example code above (see the short sketch at the end of this section).

    Note

    One additional benefit of using the Xi IoT Inferencing Service is that multiple functions and pipelines can share edge hardware resources, such as GPUs, when running machine inference.

  14. Click :fa:`remove` to close the ML model without making any changes.

If you'd like to view the data pipeline and Inferencing Service in action, simply navigate back to the ai-inference-service-demo data pipeline tile, start the pipeline, then click to view the Http Live Stream.
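
For reference, pointing a function at a newly uploaded model version would only require changing the version number used to build the request URL. This is a hypothetical tweak to the detect excerpt shown earlier, not code from the sample function; it assumes a version 2 of the objectdetect model has already been added under ML Models::

    model_name = "objectdetect"
    model_version = 2   # was 1 in the sample function
    url = "http://%s/v1/models/%s/versions/%d:infer" % (
        ai_inference_endpoint, model_name, model_version)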

Takeaways

What are the key takeaways and other things you should know about Nutanix Xi IoT?

  • Get started with AI Inference in minutes with Xi Cloud-based edges.
  • Use the Xi IoT Sensor app as an instant video data source.
  • Use HTTP Live Stream output to instantly view application and data pipeline results.
  • A single platform that can run AI-based apps, containers, and functions.
  • Easy to deploy applications at scale with a SaaS control plane.
  • Reduced time to set up and configure edge intelligence (i.e., Kubernetes and the analytics platform).
  • Operate edge locations offline or with limited internet connectivity.
  • Choose cloud connectivity without heavy lifting via native public cloud APIs.
  • Supports development languages like Python, Node.js, and Go, and integrates into existing CI/CD pipelines.
  • Developer APIs and a pluggable architecture enable "bring your own framework and functions" for simplified integration without having to rewrite your code.