This project was created at MakeHarvard 2018, where it won first place for the design and practicality of the device. It was built to assist individuals who are blind or have low vision. Using LIDAR and ultrasonic sensors, haptic vibrating discs, an Emic 2 text-to-speech module, microcontrollers (Arduino and mbed), and an ARM-based Raspberry Pi, we developed a device that gives the user both object avoidance/detection feedback and object/scene recognition information.
Arduino Uno Board
MBED (LPC 1768)
Raspberry Pi 3
VL53L0X - Time-of-Flight Sensor (LIDAR)
HC-SR04 - Ultrasonic Sensor
DRV2605 - Haptic Controller Breakout
Vibrating Motor Discs
Raspberry Pi Camera
Emic 2 - Text-to-Speech Module
VMA410 - Logic Level Converter (3.3 V to 5 V)
MBED (Head LIDAR/Ultrasonic with Haptic Feedback) - C++ (mapping logic sketched below)
Arduino (Modular LIDAR with Haptic Feedback, as shown on the ankle) - C++ (mapping logic sketched below)
Google Cloud Vision - Python (sketched below)
Serial Interface with Emic 2 - Python (sketched below)
Microsoft Azure Computer Vision (Experimental) - Python (sketched below)
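The MBED and Arduino modules share the same core behavior: read a range from the VL53L0X or HC-SR04 and drive the vibrating discs harder as an obstacle gets closer. The firmware itself is C++; the Python sketch below only illustrates that distance-to-intensity mapping, and the thresholds and function names are assumptions rather than values from the project.

```python
def distance_to_intensity(distance_cm, min_cm=20.0, max_cm=200.0):
    """Map a range reading to a haptic intensity in [0.0, 1.0].

    Closer obstacles produce stronger vibration; readings beyond
    max_cm produce none. Thresholds here are illustrative only.
    """
    if distance_cm <= min_cm:
        return 1.0
    if distance_cm >= max_cm:
        return 0.0
    # Linear falloff between the near and far thresholds.
    return (max_cm - distance_cm) / (max_cm - min_cm)

if __name__ == "__main__":
    for d in (10, 50, 120, 250):
        print(f"{d} cm -> intensity {distance_to_intensity(d):.2f}")
```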
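On the recognition side, a minimal sketch of calling Google Cloud Vision from the Raspberry Pi, assuming the google-cloud-vision client library (2.x API) and credentials set via GOOGLE_APPLICATION_CREDENTIALS; the file path and function name are illustrative, not taken from the project code.

```python
from google.cloud import vision

def label_image(path):
    """Return label descriptions for an image, best-scoring first."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]

if __name__ == "__main__":
    print(label_image("frame.jpg"))  # path is illustrative
```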
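The Emic 2 speaks whatever follows an "S" command on its serial line and prompts with ":" when it is ready for the next command, so the serial interface reduces to a short pyserial exchange. The port name below is an assumption (on a Raspberry Pi it is often /dev/ttyAMA0 or /dev/ttyUSB0).

```python
import serial

def speak(text, port="/dev/ttyUSB0"):
    """Send text to an Emic 2 module and block until speech finishes."""
    with serial.Serial(port, 9600, timeout=5) as emic:
        emic.write(b"\n")       # wake the module
        emic.read_until(b":")   # wait for the ready prompt
        emic.write(b"S" + text.encode("ascii") + b"\n")
        emic.read_until(b":")   # prompt returns once speech is done

if __name__ == "__main__":
    speak("Obstacle ahead")
```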
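The experimental Azure path can be sketched against the Computer Vision REST API's describe endpoint (v2.0, current at the time); the environment variable names and file path below are assumptions for illustration.

```python
import os
import requests

def describe_image(path):
    """Return Azure-generated captions for an image, most confident first."""
    endpoint = os.environ["AZURE_CV_ENDPOINT"]  # e.g. https://<region>.api.cognitive.microsoft.com
    headers = {
        "Ocp-Apim-Subscription-Key": os.environ["AZURE_CV_KEY"],
        "Content-Type": "application/octet-stream",
    }
    with open(path, "rb") as f:
        response = requests.post(endpoint + "/vision/v2.0/describe",
                                 headers=headers, data=f.read())
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    return [c["text"] for c in captions]

if __name__ == "__main__":
    print(describe_image("frame.jpg"))  # path is illustrative
```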