\documentclass[]{aiaa-tc}% insert '[draft]' option to show overfull boxes
\usepackage{amssymb}
\usepackage[intlimits,cmex10]{amsmath}
%\usepackage{txfonts}
%\usepackage{overpic}
\usepackage{float}
\usepackage{subfig}% subcaptions for subfigures
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{psfrag}
%\usepackage[nolists,nomarkers]{endfloat}
\usepackage{afterpage}
\usepackage[colorlinks]{hyperref}% hyperlinks [must be loaded after dropping]
\usepackage{cite}
\usepackage{url}
\usepackage{breakurl}
\hypersetup{breaklinks=true}
%\usepackage{refcheck}
%NEW COMMANDS & ENVIRONMENTS
%L1 Math and Proofs
\newcommand{\Lone}{\mathcal{L}_1}
\newcommand{\Linf}{\mathcal{L}_{\infty}}
\newcommand{\Lonenorm}[1]{\ensuremath{\left\Vert{#1}\right\Vert_{\mathcal{L}_1}}}
\newcommand{\Linfnorm}[1]{\ensuremath{\left\Vert{#1}\right\Vert_{\mathcal{L}_{\infty}}}}
\newcommand{\Linfnormt}[2]{\ensuremath{\left\Vert{#1}_{#2}\right\Vert_{\mathcal{L}_{\infty}}}}
\newcommand{\lonenorm}[1]{\ensuremath{\Vert{#1}\Vert_{\mathcal{L}_1}}}
\newcommand{\linfnorm}[1]{\ensuremath{\Vert{#1}\Vert_{\mathcal{L}_{\infty}}}}
\newcommand{\linfnormt}[2]{\ensuremath{\Vert{#1}_{#2}\Vert_{\mathcal{L}_{\infty}}}}
\newcommand{\Infnorm}[1]{\ensuremath{\left\Vert{#1}\right\Vert_{\infty}}}
\newcommand{\Twonorm}[1]{\ensuremath{\left\Vert{#1}\right\Vert_{2}}}
\newcommand{\infnorm}[1]{\ensuremath{\Vert{#1}\Vert_{\infty}}}
\newcommand{\twonorm}[1]{\ensuremath{\Vert{#1}\Vert_{2}}}
\newcommand{\trace}[1]{\ensuremath{{\rm tr}\left({#1}\right)}}
\newcommand{\Proj}[2]{\ensuremath{{\rm Proj}\left({#1},{#2}\right)}}
%\newcommand{\Proj}[2]{\ensuremath{{\rm Proj}({#1},{#2})}}
\newcommand{\sech}{\ensuremath{\textrm{sech}}}
\newcommand{\CLRS}{closed-loop reference system~}
\newcommand{\IR}{\mathbb{R}}
\newcommand{\II}{\mathbb{I}}
\newcommand{\thetahat}{\hat{\theta}}
\newcommand{\sigmahat}{\hat{\sigma}}
\newcommand{\omegahat}{\hat{\omega}}
\newcommand{\thetatilde}{\tilde{\theta}}
\newcommand{\sigmatilde}{\tilde{\sigma}}
\newcommand{\omegatilde}{\tilde{\omega}}
\newcommand{\rf}{\mathrm{ref}}
\newcommand{\id}{\mathrm{id}}
\newcommand{\xhat}{\hat{x}}
\newcommand{\xtilde}{\tilde{x}}
\newcommand{\yhat}{\hat{y}}
\newcommand{\ytilde}{\tilde{y}}
%TUDelft
\newcommand{\degr}{^\circ}
\newcommand{\citep}[1]{\citeleft\citen{#1}\citeright}
%\newcommand{\citep}[1]{Ref~\citeleft\citen{#1}\citeright}
\newtheorem{thm}{Theorem}
\newtheorem{pro}{Property}
\newtheorem{lem}{Lemma}
\newtheorem{defn}{Definition}
\newtheorem{cor}{Corollary}
\newtheorem{asm}{Assumption}
\newtheorem{rem}{Remark}
\newenvironment{pf}{\noindent {\bf Proof.}\mbox{}}{\em}
\graphicspath{{./figures/}}
\title{Monocular Collision Avoidance and Detection System for a Ground Robot}
\author{
Thiago Marinho\thanks{Doctoral Student, Dept.\ of Mechanical Science and Engineering, Student Member AIAA, [email protected].}~,
Arun Lakshmanan\thanks{Master's Student, Dept.\ of Aerospace Engineering, Student Member AIAA, [email protected].}~,
Robert Mitchell Jones\thanks{Master's Student, Dept.\ of Mechanical Science and Engineering, Student Member AIAA, [email protected].}~,\\
Venanzio Cichella\thanks{Doctoral Student, Dept.\ of Mechanical Science and Engineering, Student Member AIAA, [email protected].}~,
Naira Hovakimyan\thanks{Professor, Dept.\ of Mechanical Science and Engineering, Associate Fellow AIAA.}\\
{\normalsize\itshape University of Illinois at Urbana-Champaign, Urbana, IL 61801}
}
% Data used by 'handcarry' option if invoked
\AIAApapernumber{YEAR-NUMBER}
\AIAAconference{Guidance, Navigation, and Control, 2013, Boston, Massachusetts, USA}
\AIAAcopyright{\AIAAcopyrightD{2013}}
\begin{document}
\maketitle
%==============================================================================
\begin{abstract}
%==============================================================================%
This paper presents a unified approach to the problem of collision detection and avoidance for low-cost unmanned vehicles that rely solely on monocular vision for sensing. The proposed method relies on the line-of-sight angle measurement, which can be obtained from an inertial measurement unit and a gimbaled camera mounted onboard the vehicle. Under given conditions on the estimated line-of-sight rate, the algorithm is capable of predicting whether a collision is likely to occur and of performing a collision avoidance maneuver, thus ensuring that the autonomous vehicle can avert collisions with both cooperative and uncooperative obstacles. This work presents the complete implementation of the collision avoidance system on a low-cost ground robot, including the computer vision algorithm for obstacle detection, the path-following algorithm, and the detection and avoidance strategy. Simulations and experiments are carried out to validate the proposed solution.
\end{abstract}
%%==============================================================================
\section{Introduction}
\label{sec:Introduction}
Research and development in autonomous systems has seen unprecedented growth in the past decade. Examples range from autonomous drones to self-driving cars, which can make intelligent decisions based on environmental factors without explicit human control.
In recent years, there has been growing concern about accidents involving autonomous vehicles. Regulatory agencies such as the Federal Aviation Administration (FAA) and the National Highway Traffic Safety Administration (NHTSA) have proposed policies, such as the Unmanned Aerial Systems (UAS) roadmap \cite{FAA_UAS} and the Policy on Automated Vehicle Development \cite{autodev2014}, respectively, to mitigate the accidents that take place every year \cite{FAAaccd2010}. The first UAS Roadmap published by the FAA describes the necessity of UAS integration into the National Airspace System (NAS) without compromising safety or increasing the risk to people and property.
A fundamental component of any autonomous vehicle is its ability to perceive the environment around itself and to perform maneuvers that prevent impending collisions with obstacles, be they oncoming vehicles, people, or other hazards. A collision avoidance system should be able to predict and mitigate such collisions with both cooperative and uncooperative obstacles.
The collision avoidance problem has received a lot of attention in recent years (see, for example, \cite{khatib96,chakravarthy98,carbone08,smith08,sti10,shankaran2011collision,rodriguez2011collision,mikaCA,gusrialdi2008coverage,laventall2008coverage} and references therein) and is still an active field of research. The lack of an adequate collision avoidance solution remains a limiting factor for the envisioned self-driving cars and autonomous drones. Nonetheless, the effectiveness of any collision avoidance method is constrained by the detection and avoidance capabilities of the available hardware.
%Maybe remove this.
A complete collision avoidance system should contain three elements: (i) sensing, which is the data acquisition capability of the hardware; (ii) detection, a decision-making tool capable of predicting whether a collision is expected; and (iii) avoidance, which selects and enforces the proper evasive maneuver.
The goal of this work is to choose and design an integrated collision avoidance strategy by {\em detecting}\/ probable collisions under restricted {\em sensing}\/ capabilities. Once this detection strategy is in place, an adequate avoidance method must be chosen to perform the evasive maneuver. Although most applications of a collision avoidance system involve autonomous vehicles, in the case of non-autonomous, remotely operated drones or semi-autonomous cars a detection scheme could serve as an alarm to alert the operator. This might be useful in cases where the operator has no knowledge of the obstacle due to a limited visibility angle (as in the case of a ground operator commanding an unmanned vehicle by vision feedback only, where natural obstacles such as trees and buildings could impair the operator's visibility), or simply to reduce the operator's workload.
One of the main limitations of most detection and avoidance solutions is the assumption that the vehicles have the ability to measure the position and velocity of the obstacles. Usually, this limitation is circumvented by using a combination of multiple sensors and range-based detection methods (such as, for example, stereo cameras \cite{BarryT14}\footnote{\url{http://www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html}}). For instance, in Google's self-driving car \cite{rathod2013autonomous}\footnote{\url{http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works}}, collision avoidance is implemented through ultrasonic sensors, LIDAR, GPS, a video camera, and a 64-beam laser sensor that generates a 3D map of the surroundings.
Although these systems provide very reliable and accurate information about the environment, they are often cumbersome and expensive, require precise camera calibration, and are computationally intensive, thus demanding powerful microprocessors. In contrast, economical solutions do not provide reliable estimates of the environment and the obstacles therein. However, these low-cost sensor systems are ubiquitous in most vehicles, and better detection algorithms can be designed to maximize the information extracted about the obstacles and any imminent collisions.
The objective of this work is to propose a collision detection solution for the case when monocular vision is the only sensing information available. Most commercially available low-cost aerial platforms carry only a single camera onboard (e.g., DJI Phantom, Lily, Parrot AR Drone), and developing a detection method for these platforms is the first step toward true autonomous deconfliction for the low-cost market. Present FAA rules require the operator to keep the drone within the visible region at all times, but with an obstacle detection algorithm the operator would gain the capability to safely maneuver the drone in vision-occluded areas.
With this goal in mind, this paper proposes a solution to the collision detection problem that is implementable in real time and does not rely on velocity or position information of the obstacle, or on estimates thereof. Instead, we present a collision detection procedure based on the line-of-sight (LOS) rate estimate and the expansion rate of the obstacle's image. The LOS rate can be estimated from the LOS angle obtained using a dedicated pan-tilt gimbaled camera, which can be built from low-cost commercial off-the-shelf components. In a typical scenario, while the unmanned vehicle is moving autonomously, the pan-tilt camera searches for obstacles. Once an obstacle is identified and locked in the camera frame, an image processing algorithm \cite{PerceptiVU} provides the position of the obstacle in the image frame to an integrated UAV-gimbal control algorithm. The latter steers the gimbal to keep the obstacle in the center of the camera frame (i.e., it drives the position of the obstacle in the image frame to zero) \cite{dobrokhodov2008vision}. The measurement of the pan angle of the gimbal is then used by the estimator to compute the LOS rate estimate.
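As an illustration of this tracking loop, a minimal proportional pan-rate law (the gains and symbols here are ours, introduced for illustration, not the controller of \cite{dobrokhodov2008vision}) takes the form
\begin{equation*}
u_g(t) = -k\, x_{\mathrm{im}}(t), \qquad k > 0,
\end{equation*}
where $u_g$ is the commanded pan rate of the gimbal and $x_{\mathrm{im}}$ is the horizontal offset of the obstacle from the center of the image frame; under this law the gimbal pan angle evolves so that the obstacle remains centered.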
It has been shown that a necessary condition for collision between the vehicle and an obstacle is that the LOS rate be zero \cite{kochenderfer2008hazard}. A common approach is to declare an existing threat of collision whenever the LOS rate falls below a certain (small) threshold value. However, this method produces a large set of false-positive scenarios. One example is an obstacle moving away from the autonomous vehicle along a path on which the LOS is constant. To resolve this issue, recent methods rely on performing self-maneuvers \cite{kochenderfer2008hazard} to estimate the distance to the obstacle and make a decision based on the relative position and velocity estimates. The main drawback of these methods is that, when acquiring the LOS rate from a camera, many objects may appear in the field of view, causing excessive deviations from the mission to perform these identification maneuvers. % In addition, in the case of a time-critical coordinated mission every maneuver affects the outcome of the mission and is undesired, unless there is a critical need to avoid an object.
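To make the threshold condition concrete, let $(x_r, y_r)$ and $(x_o, y_o)$ denote the planar positions of the vehicle and the obstacle (notation introduced here for illustration). Writing $\lambda = \operatorname{atan2}(y_o - y_r,\, x_o - x_r)$ and differentiating gives the standard kinematic relation
\begin{equation*}
\dot{\lambda} = \frac{(\dot{y}_o - \dot{y}_r)(x_o - x_r) - (\dot{x}_o - \dot{x}_r)(y_o - y_r)}{(x_o - x_r)^2 + (y_o - y_r)^2}.
\end{equation*}
The numerator vanishes whenever the relative velocity is parallel to the LOS, regardless of its sign; hence a head-on approach and a pure recession along the LOS both yield $\dot{\lambda} = 0$, which is precisely the false-positive scenario described above.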
This work proposes a novel method for reducing false positives in collision detection, thereby providing an implementable collision detection method that does not rely on path deviations and identification maneuvers. Our algorithm is based on acquiring additional information already available in the video imagery. The works in \cite{lee1976theory,lee1980optic} describe an expansion rate of the obstacle's image, which can be computed in real time to identify whether the obstacle is approaching or moving away from the vehicle. Since collision is only possible when the obstacle is approaching the vehicle, this information is taken into account together with the LOS rate threshold condition to select probable collision candidates.
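To sketch how the expansion-rate cue enters the decision (the exact thresholds used in the implementation may differ), let $\theta(t)$ denote the angular size of the obstacle in the image. For an obstacle of fixed physical width $w$, the small-angle approximation $\theta(t) \approx w/d(t)$, with $d(t)$ the distance to the obstacle, gives
\begin{equation*}
\frac{\dot{\theta}(t)}{\theta(t)} \approx -\frac{\dot{d}(t)}{d(t)},
\end{equation*}
so that $\dot{\theta} > 0$ indicates a closing geometry ($\dot{d} < 0$), and $\tau = \theta/\dot{\theta}$ approximates the time to contact \cite{lee1976theory}. A probable collision candidate is then declared only when both cues agree, e.g., when $|\dot{\lambda}(t)| \le \epsilon$ and $\dot{\theta}(t) > 0$ for a small design threshold $\epsilon > 0$.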
The detection method is validated using Monte Carlo simulations over an extensive set of collision scenarios, vehicle trajectories, and initial conditions. Furthermore, validation and real-time evaluation are carried out on mobile ground robots equipped with low-resolution video cameras.
\section{Problem Formulation}
In this section we present the objective of the autonomous ground robot and the different subsystems that form the integrated approach. Consider a ground robot performing an autonomous task, represented by the tracking of a given trajectory. A monocular camera mounted on a pan gimbal continuously searches for obstacles.
\begin{asm} \label{detRange}
Let $d(t) \in \mathbb{R}$ denote the distance from the robot to the obstacle at time $t$. It is assumed that the monocular vision camera only detects objects when $d(t) < R$, where $R$ is the detection radius. It is also assumed that the initial distance is larger than the safety distance, i.e., $d(t_0) > d_{\mathrm{safe}}$.
\end{asm}
Once the obstacle enters the detection radius $R$, a computer vision system immediately starts tracking the position of the obstacle in the camera's field of view. A pan gimbal device uses the $x$ coordinate of the obstacle in the camera image to track the obstacle and keep it centered in the video frame. With the gimbal's angle and the video feed, we are able to measure $\lambda$, the LOS angle from the robot to the obstacle.
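For illustration, when the gimbal keeps the obstacle centered in the frame, the LOS angle decomposes as $\lambda = \psi + \lambda_g$, where $\psi$ is the vehicle heading, available from the inertial measurement unit, and $\lambda_g$ is the pan angle of the gimbal, read from its encoder (symbols introduced here for clarity). Any residual horizontal offset $x$ of the obstacle from the image center can be compensated by the angular correction $\arctan(x/f)$, with $f$ the focal length of the camera in pixel units.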
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/ProblemForm.eps}
\caption{The collision avoidance problem.}
\label{fig:ProbForm}
\end{figure}
An obstacle in the detection range of the robot is not necessarily a collision threat. Therefore, the collision detection problem is solved by analyzing the LOS rate $\dot{\lambda}$, either measured by a gyroscope on the gimbal or estimated from the LOS measurement \cite{1014675}. If the collision detection system declares a danger of collision, an evasive maneuver is initiated in order to avoid the possible collision.
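One simple way to obtain $\dot{\lambda}$ from the measured $\lambda$ alone (a sketch in our notation, not necessarily the estimator of \cite{1014675}) is a first-order washout filter, written in the Laplace domain as
\begin{equation*}
\hat{\dot{\lambda}}(s) = \frac{s}{\tau_f s + 1}\,\lambda(s),
\end{equation*}
where the time constant $\tau_f > 0$ trades noise attenuation against estimation lag.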
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/motivation.eps}
\caption{A typical scenario. Abruzzo, Italy. Map. \emph{Google Maps}, Google. Accessed 9 March 2015.}
\label{fig:motivation}
\end{figure}
\subsection{Trajectory Tracking}
\subsection{Vision and Obstacle Tracking}
\subsection{Detection}
The analysis and prediction of a collision is carried out in the relative frame. Consider the two-dimensional, relative-velocity geometric formulation of collision avoidance shown in Figure~X. The relative velocity between the robot and the obstacle determines whether the two are on a collision course:
the necessary condition for collision is that the relative velocity vector lie inside the collision cone, as formalized below.
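For completeness, a standard statement of the collision cone condition \cite{chakravarthy98}, in notation introduced here: let $r = p_o - p_r \in \IR^2$ be the LOS vector from the robot to the obstacle and $v = v_r - v_o$ the velocity of the robot relative to the obstacle. A violation of the safety circle of radius $d_{\mathrm{safe}}$ centered at the obstacle is possible only if $v$ points into the cone of tangent lines to that circle, i.e.,
\begin{equation*}
\frac{v^{\top} r}{\twonorm{v}\,\twonorm{r}} \;\ge\; \sqrt{1 - \left(\frac{d_{\mathrm{safe}}}{d(t)}\right)^{2}},
\end{equation*}
which simultaneously enforces a closing geometry ($v^{\top} r > 0$) and an approach direction within $\arcsin\bigl(d_{\mathrm{safe}}/d(t)\bigr)$ of the LOS.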
\section{Collision Avoidance}
\subsection{Simulation Results}
\section{Experimental Results}
\subsection{Experimental Setup}
\subsection{Results}
\section{Conclusion}
\section{Acknowledgments}
\label{sec:Acknowledgments}
This work has been supported in part by AFOSR and NASA.
\bibliographystyle{unsrt}
\bibliography{CAS_bib_V2}
\end{document}