Indoor Navigation using ArUco Markers

Description

In this project, we use a TurtleBot for autonomous indoor navigation based on marker-based localization with ArUco markers. The markers act as fixed reference points for the robot's autonomous movement through unfamiliar indoor spaces. The approach is cost-effective, requiring little more than printing and placing the marker patterns and protecting them from wear. We built the project on ROS, OpenCV, and Gazebo simulation for navigation, control, and data processing, with potential applications such as warehouse management.

Methodology

  • Turtlesim

We performed simple tasks with the turtlesim node to gain practical experience with ROS: writing ROS code and implementing different publisher and subscriber nodes.

(Demo GIFs: Circle, Spiral, Go To Goal, Follower)
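
As a flavour of what these turtlesim exercises involve, here is a minimal rospy publisher that drives the turtle in a circle. The topic and message follow the standard turtlesim interface; this is a sketch, not the repo's actual script:

```python
#!/usr/bin/env python3
import rospy
from geometry_msgs.msg import Twist

def drive_circle():
    rospy.init_node('circle_driver')
    pub = rospy.Publisher('/turtle1/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)       # publish at 10 Hz
    msg = Twist()
    msg.linear.x = 1.0          # constant forward speed
    msg.angular.z = 1.0         # constant turn rate -> circular path
    while not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        drive_circle()
    except rospy.ROSInterruptException:
        pass
```
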
  • SLAM - Hector SLAM and Gmapping

Hector SLAM and Gmapping are both algorithms for mapping and localization in robotics. Hector SLAM is a laser-scan-matching algorithm that needs no odometry but is practical only indoors, since the maps it can build are limited in size; Gmapping, a particle-filter approach that fuses laser scans with odometry, can be used both indoors and outdoors. Hector SLAM draws maps more efficiently, but Gmapping has no map-size limit and is the more versatile of the two.

(Figures: Hector Mapping, GMapping)
  • Camera Calibration

Camera calibration is the process of estimating the parameters of a camera. It recovers two kinds of parameters:

  1. Internal parameters of the camera/lens system, e.g. the focal length, optical center, and radial distortion coefficients of the lens.
  2. External parameters: the orientation (rotation and translation) of the camera with respect to some world coordinate system.

We used a checkerboard for calibration because its pattern is easy to detect in an image, and its corners have a sharp gradient in two directions, which makes them easy to locate precisely.

(Figure: Calibration)
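
A typical OpenCV checkerboard calibration loop looks like the sketch below; it produces the camera matrix and distortion coefficients that the later steps consume. The board size, image folder, and the key names used when saving MultiMatrix.npz are assumptions, so match them to your own setup:

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per row/column (assumed board size)
SQUARE_SIZE = 0.025     # square edge length in metres (assumed)

# 3-D coordinates of the board corners in the board's own frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob('calib_images/*.png'):    # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

ret, cam_mat, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Save intrinsics for the detection nodes (key names are assumptions).
np.savez('MultiMatrix.npz', camMatrix=cam_mat, distCoef=dist,
         rVector=rvecs, tVector=tvecs)
```
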

  • ArUco Detection and Pose Estimation

We used OpenCV's aruco module to detect ArUco markers and estimate their poses.

(Figure: ArUco Detection)
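
The sketch below uses the legacy cv2.aruco functions that ship with OpenCV 4.2 (the Melodic side of this project); OpenCV 4.7+ replaced them with the cv2.aruco.ArucoDetector class, so adjust accordingly. The dictionary, marker size, input image, and .npz key names are assumptions:

```python
import cv2
import numpy as np

MARKER_SIZE = 0.10  # marker edge length in metres (assumed)

# Intrinsics produced by the calibration step (key names are assumptions).
calib = np.load('MultiMatrix.npz')
cam_mat, dist = calib['camMatrix'], calib['distCoef']

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread('frame.png')             # any BGR frame (hypothetical file)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)

if ids is not None:
    # One rotation/translation vector per detected marker.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE, cam_mat, dist)
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        distance = np.linalg.norm(tvec)     # metres from camera to marker
        print(f'marker {marker_id}: {distance:.2f} m')
```
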

  • Gazebo Simulation

We used two nodes for navigation. The camera node detects the markers and publishes information about them, such as their IDs and distances. The nav node subscribes to this data and publishes velocity commands to the TurtleBot. We used TurtleBot3's 'waffle_pi' model for simulation because it comes with a camera; to receive camera frames, we created a subscriber in the camera node, sketched below.
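
A minimal camera node along these lines might look as follows. The image topic matches the TurtleBot3 waffle_pi Gazebo model; the output topic name and the [id, distance] message layout are assumptions, since the actual node may use a different message:

```python
#!/usr/bin/env python3
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import Float32MultiArray

DICT = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
PARAMS = cv2.aruco.DetectorParameters_create()
calib = np.load('MultiMatrix.npz')              # key names are assumptions
CAM_MAT, DIST = calib['camMatrix'], calib['distCoef']
MARKER_SIZE = 0.10                              # assumed edge length (m)

bridge = CvBridge()

def image_cb(msg, pub):
    # Convert the ROS image to OpenCV, then detect markers in it.
    gray = cv2.cvtColor(bridge.imgmsg_to_cv2(msg, 'bgr8'), cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICT, parameters=PARAMS)
    if ids is None:
        return
    _, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE, CAM_MAT, DIST)
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        # Publish one [id, distance] pair per detected marker.
        pub.publish(Float32MultiArray(
            data=[float(marker_id), float(np.linalg.norm(tvec))]))

if __name__ == '__main__':
    rospy.init_node('cam_node')
    pub = rospy.Publisher('/marker_info', Float32MultiArray, queue_size=10)
    rospy.Subscriber('/camera/rgb/image_raw', Image, image_cb, callback_args=pub)
    rospy.spin()
```
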

Working: the nav node publishes velocity commands so the TurtleBot moves in a square loop until it finds a marker. After finding one, the bot performs the task assigned to that ID. We assigned each ID a simple task: the bot drives toward the marker until it is within a certain distance, then resumes searching for the next marker, as in the sketch below.
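
The nav node's behaviour can be sketched as a small state machine; the topic names, speeds, and stopping distance below are assumptions, and the square-loop timing is simplified away:

```python
#!/usr/bin/env python3
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32MultiArray

STOP_DIST = 0.5          # stop this far from a marker (assumed, metres)
state = {'id': None, 'dist': None}

def marker_cb(msg):
    # Latest [id, distance] pair from the camera node.
    state['id'], state['dist'] = msg.data[0], msg.data[1]

def main():
    rospy.init_node('nav_node')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rospy.Subscriber('/marker_info', Float32MultiArray, marker_cb)
    rate = rospy.Rate(10)
    cmd = Twist()
    while not rospy.is_shutdown():
        if state['dist'] is None:
            # Search: trace a square by alternating straight legs and
            # 90-degree turns (timing omitted; simplified to driving forward).
            cmd.linear.x, cmd.angular.z = 0.2, 0.0
        elif state['dist'] > STOP_DIST:
            # Approach the marker until it is STOP_DIST away.
            cmd.linear.x, cmd.angular.z = 0.2, 0.0
        else:
            # Close enough: do the ID's task, then resume searching.
            state['id'], state['dist'] = None, None
            cmd.linear.x, cmd.angular.z = 0.0, 0.0
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    main()
```
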


  • Implementation on Hardware

Our physical TurtleBot model was the 'burger', which requires ROS Melodic, so we used Docker to create a ROS Melodic workspace. We also had to change the camera node's code so that it takes video input from a webcam instead of a ROS subscriber, as sketched below.
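
Switching from a ROS image subscriber to a webcam amounts to reading frames with OpenCV directly, as in this sketch (device index 0 is an assumption):

```python
import cv2

# Read frames from the first attached webcam instead of a ROS topic.
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # frame is a BGR image, so the same ArUco detection code from the
    # camera node applies here unchanged.
cap.release()
```
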

Requirements

Software:

  • Ubuntu 20.04
  • Python3
  • ROS-Noetic
  • Docker Image for ROS-Melodic
  • OpenCV 4.8.x (Noetic)
  • OpenCV 4.2.x (Melodic)
  • Gazebo simulation package 

Hardware:

  • TurtleBot Burger Model [Kobuki]
  • Webcam
  • YdLidar

How to use the Project

For Simulation

  1. Create a catkin package:
    1. Go to your workspace source folder: cd ~/catkin_ws/src
    2. Run catkin_create_pkg indoor_nav rospy turtlesim geometry_msgs sensor_msgs std_msgs
    3. Go back: cd ~/catkin_ws
    4. Run catkin_make
  2. Put camFeed.py, velcmds.py, and markers.launch (or any Gazebo world) in your catkin package and make them executable.
  3. In your terminal, run the following commands:
    1. roslaunch indoor_nav markers.launch
    2. rosrun indoor_nav cam.py
    3. rosrun indoor_nav navi.py

For Hardware

  1. Create a catkin package in both your Docker container (ROS Melodic) and your main device (ROS Noetic):
    1. Go to your workspace source folder: cd ~/catkin_ws/src
    2. Run catkin_create_pkg indoor_nav rospy turtlesim geometry_msgs sensor_msgs std_msgs
    3. Go back: cd ~/catkin_ws
    4. Run catkin_make
  2. Put lidar_data.py in your ROS Noetic package, and camera.py, MultiMatrix.npz, and navi.py, along with a lidar_data.txt file, in your ROS Melodic package; make them all executable.
  3. Make sure to change calib_data_path in camera.py to the path of MultiMatrix.npz. Also change lidar_data_path in lidar_data.py to the path of lidar_data.txt relative to your ROS Noetic terminal, and in navi.py to its path relative to your container.
  4. Connect your TurtleBot to your device and run the following commands in your container:
    1. bash .devcontainer/post_create_commands.sh
    2. roslaunch turtlebot_bringup minimal.launch
  5. In your ROS Noetic terminal, run the following commands:
    1. roscore
    2. rosrun indoor_nav lidar_data.py
  6. In your container, run the following commands:
    1. roscore
    2. rosrun indoor_nav camera.py
    3. rosrun indoor_nav navi.py

Results

To be added :/
