
# Aruco-Marker-Based-Navigation

## Visual Navigation

The use of fiducial markers in visual navigation is a prominent area of research. ArUco markers have proved very promising compared with other fiducial markers such as AprilTag and QR codes. ArUco markers are 2D binary patterns, each associated with a unique ID; in the case of MAYA, each unique ID can be mapped to a different activity or task.

An ArUco marker has a wide black border and an inner binary matrix that determines its unique ID. The black border facilitates fast detection in the image. The marker size determines the size of the internal matrix; for instance, a 4x4 marker is composed of 16 bits.

## Usage and Requirements

Developed, tested, and deployed on:

- NVIDIA Jetson Nano with Ubuntu 18.04
- OpenCV (4.1.x+) with the ArUco library (ships with opencv-contrib)
- Python 3.x
- ROS Melodic

1. For ArUco-based navigation you will need the Kinematics repository cloned.
2. Clone this repository.
3. There should be 2 packages in the ROS workspace.
4. Follow the steps given in the Kinematics repository to get the wheel-base kinematics and Docker started.
5. Then run the following scripts with rosrun:

```
rosrun package_name 
rosrun package_name 
rosrun package_name 
```
## Aruco Markers


ArUco markers are used for navigation: multiple markers can be placed at different locations in a building to localise the robot and to compute the parameters needed to navigate towards any marker, for example in an automatic docking system for charging. We use an Intel RealSense D455 for visual and depth perception.

## Marker Search Algorithm


### 1. Robot’s camera plane and marker’s plane are parallel (ideal case)

*(Figure: marker and robot pose representation)*

When the two planes are parallel, which holds only in the ideal case, the depth of the marker can be read from the RealSense depth camera, along with the angle between the robot’s x-axis and the ray through the frame centroid and the marker. Let `a` be the height and `b` the breadth of the detected marker, `d` the depth of the marker’s centroid from the robot, and `d_c` the depth of the frame’s centroid. Let `(x, y)` be the frame centroid and `(x1, y1)` the centroid of the detected marker. To find the angle theta, sample a new depth `d_n` at the point `(x1, y)` and apply basic trigonometry:

`theta = acos(d / d_n)`

With the distance and theta calculated as above, the robot can move towards any marker in the frame.
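As a quick numeric check of the relation above (the depth values are made up for illustration):

```python
import math

def heading_to_marker(d, d_n):
    """Angle (radians) between the robot's forward axis and the marker,
    given the marker-centroid depth d and the depth d_n sampled at the
    point (x1, y), per theta = acos(d / d_n)."""
    return math.acos(d / d_n)

# Illustrative values: a marker 30 degrees off-axis at 1 m depth
# implies d_n = d / cos(30 deg).
d = 1.0
d_n = d / math.cos(math.radians(30.0))
theta = heading_to_marker(d, d_n)
print(round(math.degrees(theta), 1))  # 30.0
```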

*(Figure: depth calculation for the marker)*

### 2. Robot’s camera plane and marker’s plane are not parallel

When the two planes are not parallel to each other, the calculations are more complex than in the ideal case because of parallax effects. As represented in the figure, the angle theta and the depth required to reach the marker in the frame can still be found.

*(Figure: marker and robot pose representation, non-parallel case)*

## Marker Detection and Pose Estimation

*(Figure: marker detection and pose estimation)*
