BlindEyes - First Place Winner at MakeHarvard 2018

This project was created at MakeHarvard 2018, where it won first place for the design and practicality of the device. It was built to help individuals who are blind or have low vision. Using lidar and ultrasonic sensors, haptic vibrating discs, an Emic 2 text-to-speech module, microcontrollers (Arduino and mbed), and an ARM-based Raspberry Pi, we developed a device that gives the user both object avoidance/detection feedback and object/scene recognition information.

Hardware Components

Arduino Uno Board
MBED (LPC 1768)
Raspberry Pi 3

Object Avoidance:

VL53L0X - Time of Flight Sensor (LIDAR)
HC-SR04 - Ultrasonic Sensor

Haptic Feedback:

DRV2605 - Haptic Controller Breakout
Vibrating Motor Discs

Object Recognition and Audio:

Raspberry Pi Camera
Emic2 - Text-to-Speech Module
VMA410 - Logic Level Converter, 3.3 V to 5 V

Software Components

MBED (Head LIDAR/Ultrasonic with Haptic Feedback) - C++
Arduino (Modular LIDAR with Haptic Feedback, as shown on the ankle) - C++
Google Cloud Vision - Python (see the sketch after this list)
Serial Interface with Emic2 - Python
Microsoft Azure Computer Vision (Experimental) - Python
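
To illustrate how the recognition/audio path fits together, here is a minimal Python sketch that labels a captured frame with Google Cloud Vision and speaks the result through the Emic2 over serial. This is not the original hackathon code: the serial port name, image path, and number of labels are assumptions for illustration, and it presumes the google-cloud-vision and pyserial packages are installed with Cloud Vision credentials configured.

```python
# Minimal sketch (not the original hackathon code): label a captured frame with
# Google Cloud Vision, then speak the result through the Emic2 over serial.
# The port name, baud rate, and image path below are assumptions.
import serial                      # pyserial
from google.cloud import vision    # google-cloud-vision client

EMIC2_PORT = "/dev/serial0"        # assumed Pi UART, wired through the level converter
EMIC2_BAUD = 9600                  # Emic2 default baud rate


def describe_image(image_path):
    """Return the top label descriptions Cloud Vision finds in the image."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations][:3]


def speak(text):
    """Send an 'S' (say) command to the Emic2, terminated with a newline."""
    with serial.Serial(EMIC2_PORT, EMIC2_BAUD, timeout=1) as port:
        port.write(b"S" + text.encode("ascii", "ignore") + b"\n")


if __name__ == "__main__":
    labels = describe_image("frame.jpg")   # placeholder for a Pi Camera capture
    speak("I see " + ", ".join(labels))
```

In the actual device this loop would run on the Raspberry Pi, with the camera capture feeding the recognition step and the Emic2 reading the result aloud to the user.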

About

Using lidar/sonar and a deep neural network to help the visually impaired navigate ("visualize") their surroundings.
