UMD-AIMAR/mycroft_aimar

Mycroft skill that enables AIMAR voice control and interaction.


Files

This repository is a Mycroft "Skill". It is loaded when Mycroft starts up.

Here are the important files to know:

.
├── dialog/en-us            # Response texts (currently removed; responses live in the source code)
├── vocab/en-us             # Voice command trigger texts
├── __init__.py             # Central file for managing voice commands
├── aimar_arm.py            # uArm control code
├── aimar_move.py           # TurtleBot movement code
├── aimar_skin.py           # Skin diagnosis code
└── aimar_patient.py        # Patient database code

TL;DR (What do I work on?)

Read the Files section.

Linguini

  • Voice commands
    • See vocab/en-us and __init__.py

Robo

  • Arm movements
    • See aimar_arm.py
    • Google stuff
  • Robot movements
    • See aimar_move.py

Code Workflow

Consider an existing command: "AIMAR, what's on my skin?"

  1. Detect "what's on my skin" as a voice command.

    • vocab/en-us/skin.intent contains a list of phrases.
    • @intent_file_handler('skin.intent') means any one of those phrases will trigger the function below it: handle_skin_intent(...) (see the sketch after these steps).
  2. Run the skin diagnosis function.

    • aimar_skin.capture_photo_and_diagnose() means: "inside the script aimar_skin.py, run the function capture_photo_and_diagnose()".
    • capture_photo_and_diagnose() captures a photo and sends it to desk-server - more about this in step 3.
  3. Trigger skin diagnosis on the central server.

    • requests.post(...) sends a web request to a specified address.
    • We run desk-server on our central computer. It has an endpoint at /api/skin that receives these requests. The actual diagnosis happens there, and the result is passed back to the robot.
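
Putting the three steps together, here is a minimal sketch of the round trip. The class name, server address, and camera helper are assumptions; skin.intent, handle_skin_intent(), capture_photo_and_diagnose(), and /api/skin come from the steps above.

```python
# __init__.py (sketch): step 1 - map the voice command to a handler.
from mycroft import MycroftSkill, intent_file_handler

from . import aimar_skin  # import style may differ with how Mycroft loads skills


class AimarSkill(MycroftSkill):

    @intent_file_handler('skin.intent')
    def handle_skin_intent(self, message):
        # Step 2: delegate the real work to aimar_skin.py, then speak the result.
        diagnosis = aimar_skin.capture_photo_and_diagnose()
        self.speak(diagnosis)


def create_skill():
    return AimarSkill()
```

```python
# aimar_skin.py (sketch): steps 2-3 - capture a photo and POST it to
# desk-server. DESK_SERVER_URL and the payload format are assumptions.
import requests

DESK_SERVER_URL = "http://localhost:5000"


def capture_photo():
    # Placeholder for the real camera capture.
    return b""


def capture_photo_and_diagnose():
    image_bytes = capture_photo()
    response = requests.post(DESK_SERVER_URL + "/api/skin", data=image_bytes)
    return response.text  # diagnosis text passed back to the robot
```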

Why is everything in a different file?

Specifically: why doesn't __init__.py contain any of the "actual" code for uArm movement, skin diagnosis, etc.? It seems unnecessarily complicated.

The purpose is to separate different kinds of code into different files. Notice that __init__.py only manages voice commands (e.g. "arm test" triggers test() inside aimar_arm.py).

This lets Linguini and Robo people work in different files, and it keeps the system modular: someone working on voice commands doesn't need to know how the arm is moved (see the aimar_arm.py sketch below).
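
For the "arm test" example above, the aimar_arm.py side could look something like this sketch. The coordinates are arbitrary, and SwiftAPI comes from the uArm-Python-SDK installed under Setup.

```python
# aimar_arm.py (sketch): all uArm logic lives here, so __init__.py
# only ever calls aimar_arm.test().
from uarm.wrapper import SwiftAPI


def test():
    swift = SwiftAPI()     # connect to the uArm over USB
    swift.waiting_ready()  # block until the arm reports ready
    # Move to an arbitrary reachable pose to confirm the arm responds.
    swift.set_position(x=200, y=0, z=100, wait=True)
    swift.disconnect()
```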

Integration Diagram

AIMAR's workflow uses two repositories in UMD-AIMAR: mycroft_aimar (this repo) and desk-server.

[AIMAR integration diagram]

desk-server?

In theory, we have several AIMAR bots communicating with one central computer. desk-server runs on this computer; it manages the patient database and skin diagnosis functions.

How do voice commands trigger those functions? Read Code Workflow.
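
For illustration only, the server side of such an endpoint could look like the sketch below. desk-server's real implementation lives in its own repository; Flask and every name here are assumptions, made only to show the request/response shape.

```python
# Hypothetical desk-server endpoint (sketch): receives the photo that
# aimar_skin.py POSTs to /api/skin and returns the diagnosis as text.
from flask import Flask, request

app = Flask(__name__)


@app.route("/api/skin", methods=["POST"])
def skin():
    image_bytes = request.get_data()   # photo sent by the robot
    return run_diagnosis(image_bytes)  # diagnosis text goes back to the robot


def run_diagnosis(image_bytes):
    # Placeholder for the real skin-diagnosis model.
    return "no concerns detected"


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```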

Setup

  1. Set up Mycroft.

  2. Download the repository. Inside your Linux VM, open a terminal (Konsole) and run: `git clone https://github.com/UMD-AIMAR/mycroft_aimar.git`

  3. For uArm support (optional): download the uArm-Python-SDK: `git clone https://github.com/uArm-Developer/uArm-Python-SDK.git`

    • Copy the `uArm-Python-SDK/uarm` folder into the `mycroft_aimar` folder.
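
As a quick sanity check (an assumption about how you'd verify the copy, not a required step), the copied SDK should now be importable from inside the mycroft_aimar folder:

```python
# Run from inside the mycroft_aimar folder after copying the SDK.
from uarm.wrapper import SwiftAPI  # should import without errors
print("uArm SDK import OK")
```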
