This repository is a Mycroft "Skill". It is loaded when Mycroft starts up.
Here are the important files to know:
```
.
├── dialog/en-us        # Response texts (currently removed; responses are listed in the source code)
├── vocab/en-us         # Voice command trigger texts
├── __init__.py         # Central file for managing voice commands
├── aimar_arm.py        # uArm control code
├── aimar_move.py       # TurtleBot movement code
├── aimar_skin.py       # Skin diagnosis code
└── aimar_patient.py    # Patient database code
```
Read the Files section above, then find your team's tasks below.

### Linguini
- Voice commands
  - See `__init__.py`, `vocab/..`, and `dialog/..`
  - Read the Mycroft Intent Tutorial
- Patient database
  - See `aimar_patient.py` and desk-server
  - Look up Python sqlite3 tutorials (e.g. docs.python.org); a minimal sketch follows this list
- User Interface
  - Set up Mycroft on your personal machine
  - See the Mycroft GUI blog post and its GitHub repo
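For the patient database task, the snippet below is a minimal sqlite3 sketch to get oriented. The `patients.db` filename and table schema are illustrative assumptions, not the actual schema; see `aimar_patient.py` and desk-server for the real code.

```python
# Minimal sqlite3 sketch (hypothetical schema, not the actual patient database).
import sqlite3

conn = sqlite3.connect("patients.db")   # creates the file if it doesn't exist
cur = conn.cursor()

# Create an example table and insert one row.
cur.execute("CREATE TABLE IF NOT EXISTS patients (id INTEGER PRIMARY KEY, name TEXT, notes TEXT)")
cur.execute("INSERT INTO patients (name, notes) VALUES (?, ?)", ("Jane Doe", "first visit"))
conn.commit()

# Look a patient up by name; '?' placeholders avoid SQL injection.
cur.execute("SELECT id, name, notes FROM patients WHERE name = ?", ("Jane Doe",))
print(cur.fetchone())
conn.close()
```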
### Robo

- Arm movements
  - See `aimar_arm.py`
  - Search online for uArm-Python-SDK usage examples
- Robot movements
  - See `aimar_move.py`
  - Read the TurtleBot3 manual and ROS tutorials; a minimal movement sketch follows this list
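For robot movements, driving a TurtleBot3 generally comes down to publishing velocity messages. Below is a minimal sketch (not the actual `aimar_move.py`), assuming a standard ROS 1 setup where the robot subscribes to the `/cmd_vel` topic; the node name, speed, and duration are illustrative.

```python
# Sketch: drive a TurtleBot3 forward by publishing Twist messages on /cmd_vel.
import rospy
from geometry_msgs.msg import Twist

def drive_forward(speed=0.1, duration=2.0):
    """Publish a forward velocity for `duration` seconds, then stop."""
    rospy.init_node('aimar_move_sketch')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)          # publish at 10 Hz
    msg = Twist()
    msg.linear.x = speed           # meters per second, straight ahead
    end_time = rospy.Time.now() + rospy.Duration(duration)
    while rospy.Time.now() < end_time and not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()
    pub.publish(Twist())           # an all-zero Twist stops the robot

if __name__ == '__main__':
    drive_forward()
```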
### Code Workflow

Consider an existing command: "AIMAR, what's on my skin?"

1. **Detect "what's on my skin" as a voice command.** `vocab/en-us/skin.intent` contains a list of phrases. `@intent_file_handler('skin.intent')` means any one of those phrases will trigger the function below it: `handle_skin_intent(...)`.
2. **Run the skin diagnosis function.** `aimar_skin.capture_photo_and_diagnose()` means: "inside the script `aimar_skin.py`, run the function `capture_photo_and_diagnose()`". `capture_photo_and_diagnose()` captures a photo and sends it to desk-server (more about this in step 3).
3. **Trigger skin diagnosis on the central server.** `requests.post(...)` sends a web request to a specified address. We are running desk-server on our central computer; an endpoint there receives requests to `/api/skin`. This is where we do the actual diagnosis, and the result is passed back to the robot.
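Putting the three steps together, the skill side might look like the sketches below. Only `skin.intent`, `@intent_file_handler`, `handle_skin_intent`, `capture_photo_and_diagnose()`, `requests.post`, and the `/api/skin` endpoint come from the description above; the example phrases, server address, upload field name, and `capture_photo()` helper are assumptions for illustration. A `.intent` file is just a plain-text list of trigger phrases, one per line:

```
what's on my skin
check my skin
diagnose my skin
```

And the Python side:

```python
# __init__.py (sketch): step 1 - any phrase in skin.intent triggers the handler.
from mycroft import MycroftSkill, intent_file_handler

import aimar_skin  # sibling module in this skill folder (import style may vary by Mycroft version)

class AimarSkill(MycroftSkill):
    @intent_file_handler('skin.intent')
    def handle_skin_intent(self, message):
        result = aimar_skin.capture_photo_and_diagnose()  # step 2
        self.speak(result)  # read the diagnosis back to the user

def create_skill():
    return AimarSkill()  # Mycroft calls this to load the skill
```

```python
# aimar_skin.py (sketch): steps 2-3 - capture a photo and POST it to desk-server.
import requests

DESK_SERVER = "http://<central-computer-ip>:5000"  # assumed desk-server address

def capture_photo():
    # Placeholder: the real code would grab a frame from the robot's camera.
    with open("skin_photo.jpg", "rb") as f:
        return f.read()

def capture_photo_and_diagnose():
    photo = capture_photo()
    # Step 3: send the photo to desk-server's /api/skin endpoint.
    resp = requests.post(DESK_SERVER + "/api/skin", files={"image": photo})
    return resp.text  # diagnosis computed on the central server
```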
Specifically, you're asking why `__init__.py` doesn't contain any of the "actual" code for uArm movement, skin diagnosis, etc., which seems unnecessarily complicated.

The purpose is to separate different code into different files. Notice how `__init__.py` only manages the different voice commands (e.g. "arm test" triggers `test()` inside `aimar_arm.py`).

This lets Linguini and Robo people work on different files. The system is also modular: someone working on voice commands doesn't need to know how the arm is being moved.
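As a concrete example of this split, `aimar_arm.test()` might look like the sketch below, using the `SwiftAPI` wrapper from uArm-Python-SDK (the `uarm` folder copied in during setup). The coordinates and speed are illustrative assumptions, not the real test values.

```python
# aimar_arm.py (sketch): a simple motion test for the uArm.
# SwiftAPI comes from the uArm-Python-SDK 'uarm' folder copied into this repo;
# the pose below is illustrative, not the actual test motion.
from uarm.wrapper import SwiftAPI

def test():
    swift = SwiftAPI()       # connects to the uArm over its serial port
    swift.waiting_ready()    # block until the arm reports it is ready
    swift.set_position(x=150, y=0, z=100, speed=10000, wait=True)  # move to a pose
    swift.reset()            # return to the home position
    swift.disconnect()
```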
AIMAR's workflow uses two repositories in UMD-AIMAR: mycroft-aimar (this repo) and desk-server. In theory, we have several AIMAR bots communicating with one central computer. `desk-server` runs on this computer; it manages the patient database and skin diagnosis functions. How do voice commands trigger those functions? Read the Code Workflow section above.
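For context, the desk-server side of `/api/skin` could look something like the hypothetical sketch below. The use of Flask, the `image` field name, and the `diagnose()` helper are all assumptions for illustration; see the desk-server repo for the real implementation.

```python
# desk-server side (hypothetical sketch): receive a photo, return a diagnosis.
from flask import Flask, request

app = Flask(__name__)

def diagnose(photo_bytes):
    # Placeholder for the real skin-diagnosis logic running on the server.
    return "diagnosis placeholder"

@app.route('/api/skin', methods=['POST'])
def skin():
    photo_bytes = request.files['image'].read()  # photo uploaded by the robot
    return diagnose(photo_bytes)                 # response text goes back to the robot

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```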
- Download the repository. Inside your Linux VM, open a terminal (Konsole) and run:

  ```
  git clone https://github.com/UMD-AIMAR/mycroft_aimar.git
  ```

- For uArm (optional): download the uArm-Python-SDK:

  ```
  git clone https://github.com/uArm-Developer/uArm-Python-SDK.git
  ```

  Then copy the `uArm-Python-SDK/uarm` folder into the `mycroft-aimar` folder.