Provable Detection Of Moving Targets With Reliable Sensors

by

Hussain Al-Helal
A Thesis Submitted to the Faculty of the
Department of Electrical and Computer Engineering
In Partial Fulfillment of the Requirements
For the Degree of
Master of Science
In the Graduate College
The University of Arizona
December, 2011
Statement by Author

This thesis has been submitted in partial fulfillment of requirements for an advanced degree at The University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library. Brief quotations from this thesis are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his or her judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author.
Signed:
Approval by Thesis Director

This thesis has been approved on the date shown below:
Dr. Jonathan Sprinkle Assistant Professor
Date
Table of Contents

List of Figures
List of Tables
Abstract
Chapter 1. Introduction
Chapter 2. Background
  2.1. Optimal Search
    2.1.1. Maximum Probability Search
  2.2. Quadrotor Helicopter
  2.3. Trajectories
    2.3.1. Simulation Of A Search Scenario
    2.3.2. Tight Formation Flying
    2.3.3. Bounded Speed Task Assignment
    2.3.4. Predicting Target Motion
    2.3.5. Search Patterns
    2.3.6. Imprecise Sensor Models
    2.3.7. Searching Arbitrary Maps
  2.4. Optimality of Spiral Search Patterns
  2.5. Realistic Camera Sensors
    2.5.1. Articulated Camera
    2.5.2. Sensors for Detection
Chapter 3. Approach
  3.1. Assumptions
  3.2. Moving Targets
  3.3. Target Acquisition
  3.4. Detection Model
  3.5. Sensor Model
  3.6. Cases Covered
  3.7. Case 0
  3.8. Case 1
  3.9. Case 2
  3.10. Case 3
  3.11. Decision Controller
Chapter 4. Results
  4.1. Case 0
  4.2. Case 1
  4.3. Case 2
  4.4. Case 3
  4.5. Decision Controller
  4.6. Simulations
  4.7. Competitive Analysis
    4.7.1. Search Ratio: Case 0
    4.7.2. Search Ratio: Case 1
    4.7.3. Search Ratio: Case 2
    4.7.4. Search Ratio: Case 3
    4.7.5. Search Ratio: Example Cases
Chapter 5. Conclusions
Chapter 6. Future Work
  6.1. Coordination of Multiple UAVs
  6.2. Visual Obstructions
  6.3. Effects of Sensor Sweep for Non-Articulated Camera
References
List of Figures

Figure 2.1. STARMAC quadrotor helicopter developed at Stanford University [19]
Figure 2.2. Example of search patterns, from [17]
Figure 2.3. Side view of sensor sweep at h = 300m with overlap multiplier n = 2.01
Figure 2.4. Bird's eye view of sensor sweep at h = 300m with overlap multiplier n = 2.01
Figure 2.5. Bird's eye view of a trajectory with no overlap of sensor sweep
Figure 2.6. Depiction of relative camera angles on a horizontal UAV
Figure 3.1. Sensor field of view of an articulated sensor, and an arbitrary radius of escape. The circle described by r̂ is not fully encapsulated by the sensor field of view.
Figure 3.2. Pragmatic sensor models with cut-off height
Figure 3.3. P is defined as a box of width/height ρ̂, τ̂
Figure 3.4. Case 0: Sensor field of view encapsulates all possible locations of ground target at h₁ ≤ h̃.
Figure 3.5. Case 1: Sensor field of view encapsulates all possible locations of ground target after climbing to h₂ ≤ h̃.
Figure 3.6. (a) Angle of view of UAV camera. (b) Field of view of UAV camera. (c) Field of view of UAV camera with absolute guarantee of locating ground vehicle. (d) Varying the height inherently varies the field of view of the UAV, hence altering the probability of visualizing the ground vehicle.
Figure 4.1. Case 0: Simulation result showing concept.
Figure 4.2. Case 1: Simulation result showing concept.
Figure 4.3. Sensor overlap as search vehicle follows a spiral trajectory, generated with fixed overlap n = 1.05.
Figure 4.4. Sensor overlap n*, between sensor sweep at time t_θ1 and t_θ2.
Figure 4.5. Case 2: Bird's eye view simulation result showing concept where detection occurs.
Figure 4.6. Case 2: Side view simulation result showing concept.
Figure 4.7. Case 3: Side view simulation result.
Figure 4.8. Search vehicle start location and coordinated mission.
Figure 4.9. Search vehicle start location and coordinated mission with profile of altitude initial conditions.
Figure 4.10. Case 0: Simulation used for Case 0 Search Ratio calculation.
Figure 4.11. Case 1: Simulation used for Case 1 Search Ratio calculation.
Figure 4.12. Case 2: Simulation used for Case 2 Search Ratio calculation.
Figure 4.13. Case 3: Simulation used for Case 3 Search Ratio calculation.
List of Tables

Table 4.1. Competitive Analysis examples.
Abstract

Aerial search is coordinated based on the performance and sensing capabilities of the searching vehicles, as well as the performance and goals of the targets being sought. The current state of the art for aerial search employs human-piloted vehicles that are unmanned, i.e., remotely piloted vehicles. However, these unmanned vehicles are limited in autonomy and thus may not be able to determine optimal search trajectories. Further issues arise in piloting the unmanned vehicles due to latency in communication, since the human pilot both determines the search trajectory and follows it. This thesis proposes a new scheme for semi-autonomous search missions for a moving target, without a human pilot, using aerial vehicles with a reliable sensor of limited range. The range limitations, as well as differences in searcher and target performance, are used to formulate a scheme that reliably (with p = 1) detects the moving target, or announces that capture cannot be guaranteed. Analytical methods are introduced to describe the probability of success of a mission given certain parameters. With an objective measure of mission success, the human operator understands the probability of mission success when deploying aerial assets, and can focus effort on coordinating high-level missions rather than low-level navigation. This approach gives a new method to characterize the rate of success of any search and rescue mission with reliable sensors and uncluttered terrain, allowing human operators to rapidly explore other strategies in the case that a mission is deemed unlikely to succeed.
Chapter 1
Introduction

The current state-of-the-art for aerial search employs at least one human operator per vehicle. The human operator is responsible for determining and following the search trajectory, as well as tracking the target once it is detected [4]. Controlling aerial vehicles is an expensive task, with pilots prone to error given the latency associated with visual feedback [27]. In situations where a human operator is unable to perform tactical control of an unmanned aerial vehicle (UAV), it may be necessary to have the UAV make or suggest tactical decisions, e.g., when communication is lost with the UAV. The interaction of the UAV system with the human-in-the-loop requires that choices for the human decision maker be easy to interpret and intuitive to implement or approve [30].

Many unmanned vehicle systems are semi-autonomous, because they require interaction with a human operator in order to make high-level decisions, or in order to approve candidate decisions prior to their implementation. These so-called embedded human systems are a rich area for exploration, because the decisions of the human may, or may not, improve decisions that the vehicle would make on its own [32]. When a vehicle's semi-autonomous behavior includes flight control, robust automation systems are required to ensure that the vehicle stays within its flight envelope. The level of human interaction varies across these systems: some vehicles, such as the Predator UAV, are remotely piloted, and human pilots make all tactical decisions [29]. Other vehicles, notably the Global Hawk, are capable of waypoint tracking and loitering based on high-level commands [21]. However, for each of these systems a significant ground presence is used to validate tactical decisions
prior to their implementation, and this ground station is frequently required to translate high-level decisions such as "search for X near point (Lat,Lon)" into low-level decisions, such as "let vehicle Y fly to point (Lat,Lon)".

Problem statement: Given a search vehicle at location x0, and a ground target with maximum velocity ṽ, within a radius r0 at time t0, find:

• A search trajectory to detect the location of the target in bounded time (if it exists).
• The maximum area that can be searched by this vehicle, and the time necessary to search that area, if detection cannot be guaranteed.

In the following chapters, we discuss the automation of one particular kind of search decision, which takes into account various known, and unknown, parameters of the system. We examine the decision to perform aerial search for a ground target, based on the known location(s) of a given set of aerial vehicles, and a probability cloud for a potential ground target. These search parameters are then translated into a trajectory and velocity that produce the altitude at which the vehicle(s) should fly to search for the ground target, as well as the sensor overlap needed to guarantee or maximize the probability of detection of the ground target. Finally, the probability of success of the coordinated mission, as well as the search parameters, will be calculated and provided to the human-in-the-loop. Certain constraints, such as the time by which the ground target should be detected, can either be used to determine the altitude, or produced as computational results if an altitude is selected by the human controller.

This approach gives a new method to characterize the rate of success of any search and rescue mission with reliable sensors and uncluttered terrain. This method allows the human operator to explore other strategies in the case that a mission is deemed unlikely to succeed.
Chapter 2
Background

2.1 Optimal Search
This section discusses the basic concepts of optimal search necessary for the discussion of the contribution in this thesis, and is adapted from [31]. Optimal search assumes an object of interest, the target, and a searcher with a desire to find the target. It is assumed that the target is located either in a ball of radius r̂ in Euclidean n-space, B_r̂(t) ⊂ Rⁿ, or in one of a possibly infinite collection of cells, J ⊂ Zⁿ. The search space B_r̂(t) is called continuous and the space J discrete. Some approaches assume a probability distribution, p, since the exact location of the target is not known to the searcher [28] [23] [22]. For the work carried out in this thesis, however, we assume the target's position at time t, p(t), is known. For most of the search problems considered in [31] the target is assumed to be stationary, whereas this thesis assumes a moving target.

Stone introduces a detection function, b, relating effort applied in a region or cell to the probability of detecting the target. Furthermore, a constraint on effort is considered. Simple search problems involve finding an allocation of effort in space that maximizes the probability of detecting the target given a particular constraint on effort. Optimal search for moving targets is also investigated: the target may move randomly, and we consider the evolution of the set of possible locations of the target accordingly. Cases are provided that yield the maximum probability of detection based on known parameters of the ground target and search vehicle [31].
2.1.1 Maximum Probability Search

Assume that b is a detection function on J such that b′(j, ·) is continuous and decreasing for j ∈ J. Let c(j, z) = z for z ≥ 0, j ∈ J, and suppose that ϕ is a search plan that always searches in the cell or cells having the highest posterior probability of containing the target; let ϕ(·, t) be the allocation of search effort resulting from following plan ϕ up to time t. The result of such a search is that the posterior probabilities are equal in all cells in which search effort has been placed, that is,

  p(i)[1 − b(i, ϕ(i, t))] / (1 − P[ϕ(·, t)]) = p(j)[1 − b(j, ϕ(j, t))] / (1 − P[ϕ(·, t)])  if ϕ(j, t) > 0 and ϕ(i, t) > 0
  p(j)[1 − b(j, ϕ(j, t))] / (1 − P[ϕ(·, t)]) ≤ p(i)[1 − b(i, ϕ(i, t))] / (1 − P[ϕ(·, t)])  if ϕ(j, t) = 0 and ϕ(i, t) > 0

where P[ϕ(·, t)] denotes the probability of detecting the target given the search plan ϕ(·, t). Since the denominators above are identical, these relations can be written exclusively in terms of the numerators. Thus a search plan ϕ is called a maximum probability search plan on J if for each t ≥ 0 there exists a λ̂(t) such that

  p(j)[1 − b(j, ϕ(j, t))] = λ̂(t)  if ϕ(j, t) > 0
  p(j)[1 − b(j, ϕ(j, t))] ≤ λ̂(t)  if ϕ(j, t) = 0

Suppose f* = ϕ(·, t) is optimal for cost C[f*] and 0 < C[f*] < ∞. Since b′(j, ·) is continuous and decreasing and c(j, z) = z for z ≥ 0, j ∈ J, it follows from Corollary 2.1.6 in [31] that there exists a Lagrange multiplier λ ≥ 0 such that λ and f* satisfy

  p(j) b′(j, f*(j)) = λ  if f*(j) > 0
  p(j) b′(j, f*(j)) ≤ λ  if f*(j) = 0

Let ϕ be a maximum probability search plan such that 0 < C[ϕ(·, t)] < ∞ and ϕ is optimal for cost C[ϕ(·, t)] for t > 0. It follows that there exists a function k : [0, ∞) → [0, ∞) such that for t > 0

  b′(j, ϕ(j, t)) = k(t)[1 − b(j, ϕ(j, t))]  for ϕ(j, t) > 0   (2.1)

Furthermore, let us define M as the cumulative effort function, which is assumed to be always increasing, and define Φ(M) as the class of search plans ϕ that satisfy

  ∫ ϕ(x, t) dx = M(t)  for t ≥ 0   (2.2)

Then ϕ* ∈ Φ(M) is called uniformly optimal within Φ(M) if

  P[ϕ*(·, t)] = max{P[ϕ(·, t)] : ϕ ∈ Φ(M)}  for t ≥ 0   (2.3)

The following theorem is given in [31] regarding uniform optimality.

Theorem 1. Suppose that J has at least two cells and that b is a detection function on J such that b′(j, ·) is continuous and decreasing and b(j, 0) = 0 for j ∈ J. Let c(j, z) = z for z ≥ 0, j ∈ J, and let M be a cumulative effort function that takes on all finite values between 0 and ∞. Each maximum probability search plan for each target distribution on J is uniformly optimal within Φ(M) if and only if b is a homogeneous exponential detection function.

Proof. From [31]: suppose that the maximum probability search plan for each target distribution is uniformly optimal in Φ(M). Define

  k̂_j(z) = b′(j, z) / [1 − b(j, z)]  for j ∈ J, z ≥ 0   (2.4)

Then k̂_j is continuous, and

  b(j, z) = 1 − exp[−∫₀ᶻ k̂_j(y) dy]  for j ∈ J   (2.5)

Let z₁ and z₂ be positive. Let the target distribution be such that p(j) = 0 for j ≠ 1, 2 and

  p(1)[1 − b(1, z₁)] = p(2)[1 − b(2, z₂)]   (2.6)

Let t be such that M(t) = z₁ + z₂. Then there is a maximum probability search plan ϕ such that ϕ(1, t) = z₁ and ϕ(2, t) = z₂. By (2.1),

  b′(1, z₁) = k(t)[1 − b(1, z₁)]
  b′(2, z₂) = k(t)[1 − b(2, z₂)]   (2.7)

Differentiating (2.5), one finds that k̂₁(z₁) = k̂₂(z₂) = k(t). Since z₁ and z₂ are arbitrary positive numbers, it follows that there exists α ≥ 0 such that α = k̂₁(z) = k̂₂(z) for z ≥ 0. Extending this argument to all j ∈ J, one has

  b(j, z) = 1 − e^(−αz)  for z ≥ 0   (2.8)

This proves one direction of the theorem. For the converse, suppose b is a homogeneous exponential detection function. Then there is an α > 0 such that b(j, z) = 1 − e^(−αz) for z ≥ 0, j ∈ J. Let ϕ be a maximum probability search. Since α[1 − b(j, z)] = b′(j, z) for z ≥ 0, for each t ≥ 0 there exists a λ ≥ 0, namely λ = αλ̂(t), such that

  p(j) b′(j, ϕ(j, t)) = λ  if ϕ(j, t) > 0
  p(j) b′(j, ϕ(j, t)) ≤ λ  if ϕ(j, t) = 0   (2.9)

Letting f = ϕ(·, t), one finds that (λ, f) maximizes the pointwise Lagrangian and, by Theorem 2.1.2 from [31], is optimal for cost C[ϕ(·, t)] = M(t). Thus ϕ is uniformly optimal within Φ(M) and the theorem is proved [31].
Maximum probability search applies to the work in this thesis, as we aim to provide a decision controller that coordinates a search plan maximizing each aerial asset's probability of detecting the ground target. The state of each aerial asset is analyzed on a case-by-case basis, and our decision controller deploys the aerial asset with the maximum probability of detecting the ground target. Ideally, the aerial asset requiring minimal effort will be deployed.
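The maximum probability condition can be made concrete for the homogeneous exponential detection function of Theorem 1. The sketch below is not from the thesis: the function name, the water-filling form of the solution, and the bisection search for the level c are our own, assuming b(j, z) = 1 − e^(−αz). It allocates a total effort M over discrete cells so that the unnormalized posteriors p(j)[1 − b(j, ϕ(j))] are equalized over all searched cells, which is exactly the maximum probability search condition.

```python
import math

def max_prob_allocation(p, alpha, M, iters=200):
    """Allocate total search effort M over cells with prior probabilities p,
    assuming the homogeneous exponential detection function
    b(j, z) = 1 - exp(-alpha * z).  The optimal effort has the water-filling
    form phi_j = max(0, (ln p_j - c) / alpha); we find the level c by
    bisection so that the efforts sum to M.  (Illustrative sketch; not the
    thesis's implementation.)"""
    # Invariant: total effort at `lo` is >= M, at `hi` it is <= M.
    lo = min(math.log(q) for q in p) - alpha * M
    hi = max(math.log(q) for q in p)
    for _ in range(iters):
        c = 0.5 * (lo + hi)
        total = sum(max(0.0, (math.log(q) - c) / alpha) for q in p)
        if total > M:
            lo = c
        else:
            hi = c
    c = 0.5 * (lo + hi)
    return [max(0.0, (math.log(q) - c) / alpha) for q in p]
```

With this allocation the overall detection probability is Σ p(j)(1 − e^(−α ϕ(j))), and every searched cell ends with the same unnormalized posterior p(j) e^(−α ϕ(j)).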
2.2 Quadrotor Helicopter
Simulated scenarios have been developed investigating the use of quadrotor helicopters for search and rescue missions. Quadrotor helicopters are remarkably stable compared to traditional single rotorcraft. As a historical note, the four rotor helicopter was first built in 1907 by the Breguet Brothers. The simulated UAV that motivated the work in this thesis was based on Stanford’s STARMAC (Fig. 2.1) rotorcraft [19].
Figure 2.1: STARMAC quadrotor helicopter developed at Stanford University [19] The quadrotor helicopter is a nonlinear system, and the simulator used in [18] takes into account complex dynamical effects such as the moments of the vehicle, as well as the blade distortion due to high-rpm motion. As such, a controller can use this dynamical simulator as the plant, as its behavior is representative of the actual vehicle. For this thesis, the quadrotor is a useful candidate platform for search, due to its ability to follow the family of trajectories generated, which may not be smooth.
2.3 Trajectories

2.3.1 Simulation Of A Search Scenario
The search patterns discussed in this work are implemented by trajectory following, and part of the contribution of this thesis is the generation of optimal search trajectories that can be followed. When implemented naively, a trajectory-following controller can chatter, or oscillate, resulting in non-smooth or periodic error in tracking the desired trajectory. To address these issues, a sliding-mode controller can be used [35]. Sliding-mode control implements high-frequency switching control; control switches from one smooth condition to another are inherent in its structure. This switching mitigates oscillation and results in convergence on smooth trajectories.

Previous work utilizes a single quadrotor helicopter at a height of 150m [18], under command from a central ground station. A spiral search pattern is initiated, which the quadrotor follows until the target is acquired within the quadrotor camera's field of view. Once the quadrotor 'sees' the target, a target-tracking controller is used, which allows the quadrotor helicopter to follow the target at a safe distance. Empirical results show that detection is not guaranteed for the current system, for several reasons. Many simulations provide evidence that target acquisition can be a slow procedure, often taking an order of magnitude longer than a human operator would to locate the target. Many limitations existed in the approach used in [18]; in particular, using a fixed height for searching does not necessarily mean the probability of detection is maximized: the ground target might evade the search vehicle if the fixed height does not provide a large enough sensor sweep. Furthermore, commanding the search vehicle to an unnecessarily large height might increase the time taken to acquire the ground target, which is ineffective if we want to minimize the time taken for target acquisition.
Instead, we should determine a search height which maximizes the probability of detection of a target over a certain field of view.
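As a minimal illustration of this idea, the following sketch selects the lowest altitude whose sensor footprint covers a required sweep width, capped at a reliable-detection threshold. It assumes the footprint relation τ̂ = h tan(τ) from the camera model of Section 2.5; the function name and interface are hypothetical, not the thesis's API.

```python
import math

def search_height(required_sweep, tau, h_tilde):
    """Lowest altitude whose ground footprint h * tan(tau) covers the
    required sweep width, capped at the reliable-detection threshold
    h_tilde.  Returns None when no admissible altitude exists, i.e.,
    reliable detection cannot be guaranteed.  (Illustrative sketch.)"""
    h = required_sweep / math.tan(tau)
    return h if h <= h_tilde else None
```

A None result corresponds to the case where the decision controller should announce that detection cannot be guaranteed rather than command an altitude above h̃.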
2.3.2 Tight Formation Flying
Barnes et al. discuss improved sensor coverage utilizing tight formation flying in [5]. Formations are a function of artificial potential fields, where the vehicles are controlled as a group rather than individually along a trajectory. Sensor sweep could utilize these results in changing the sensor coverage topology dynamically with more vehicles. As presented, our work assumes a single vehicle; however, with an ability for centralized control of a swarm, our controller is easily generalizable to this strategy. Thus, we present the algorithm as a function of the sensor image (either a single camera, or multiple cameras), and the control could be either of a single vehicle, or a swarm.

2.3.3 Bounded Speed Task Assignment
In [25] the authors investigate the task assignment process of a search algorithm (of search tasks to UAVs) as a bounded speed task assignment problem (BSTAP). This work took into account the necessary condition that vehicles maintain forward airspeed while classifying, attacking, and verifying target "kill" in addition to search. Our particular vehicle is a rotorcraft and does not need forward airspeed for flight stability, though our approach generally favors maximum velocity flight.

2.3.4 Predicting Target Motion
In [9] the authors discuss the coordination of several mobile sensing platforms to search for a (non-evading) target. The approach uses a Bayesian filter to predict target motion from some initial probability density function (PDF). The prediction of the target state is obtained using a Chapman-Kolmogorov equation in conjunction with a probabilistic Markov motion model. Advantages of this approach are the ability for rescue vehicles to contribute to the refinement of the filter values while leaving or joining the search. A major appeal of the approach is that nodal computation costs are kept constant regardless of their number, thus offering a high potential for scalability. The effectiveness of the framework is demonstrated for a team of airborne search vehicles looking for a drifting target lost in a storm at sea.

2.3.5 Search Patterns
The authors of [17] provide a method in which the decision of spatial decomposition of a search space can use intrinsic advantages of a vehicle's characteristics to optimize that decomposition. The paper utilizes several default search routines over that search space once it is determined, including the square spiral, smooth spiral, and lawnmower strategies, shown in Fig. 2.2. While other, more time-optimal, search patterns may exist, these patterns support the ability of a human to resume control and continue to follow a trajectory, or to evaluate what area has already been searched. The goal of the work in this thesis is to optimize trajectories such as these, in order to dynamically modify the trajectory to reduce redundant search.
Figure 2.2: Example of search patterns, from [17].
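As a concrete illustration, the lawnmower strategy can be sketched as a waypoint generator. The rectangular search area, the function name, and the pass spacing equal to one sensor sweep are our assumptions for the sketch, not details from [17].

```python
def lawnmower_waypoints(width, height, sweep):
    """Boustrophedon ('lawnmower') waypoints covering a width x height
    rectangle, with pass centerlines spaced one sensor sweep apart so the
    full area is swept.  (Illustrative sketch, not the thesis's API.)"""
    waypoints = []
    x, i = sweep / 2.0, 0
    while x < width:
        # Alternate the sweep direction on each pass.
        y0, y1 = (0.0, height) if i % 2 == 0 else (height, 0.0)
        waypoints += [(x, y0), (x, y1)]
        x += sweep
        i += 1
    return waypoints
```

Such a generator makes the human-resumable property above concrete: the searched area at any time is simply the union of completed passes.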
2.3.6 Imprecise Sensor Models
In [3] the authors propose flight formations that increase the sensor detection area, improving the probability of detection. The authors of [6] go further in discussing how, with imprecise sensors, multiple "looks" may be required to confidently conclude that an area does not contain a target. Such models use, again, a PDF to model the sensor and environment. This work is extended in [7], where the target is dynamic and maps become uncertain, and in [8], where the uncertainty in target motion is considered. Note that in these papers a fully autonomous approach is required, due to the complexity of evaluating the PDF after some time of search.

2.3.7 Searching Arbitrary Maps

For single-vehicle search of constrained topologies, the authors of [26] discuss how a vehicle can inspect and monitor the boundaries of a river. Our work in this thesis involves searching an arbitrary map for ground vehicle traffic; with additional information (such as a road network), such search patterns could improve our time to locate a target in future work, providing tighter upper bounds on search time. The presented work is therefore a conservative approach, as all ground trajectories are permitted.
2.4 Optimality of Spiral Search Patterns
A major contribution of this thesis is generating trajectories with the assumption that a sensor can follow the generated trajectory. Much work has been done in the past proving the usability of different search patterns, such as breadth-first search (BFS) and depth-first search (DFS). However, although effective in theory, both BFS and DFS have several drawbacks that render them less useful for practical autonomous robotic search [14]. Burlington et al. also describe the drawbacks of BFS and DFS as search strategies for robotic platforms in [10], and go on to suggest an efficient method of iteration based on spiral search, concluding that spiral search is indeed an efficient technique for mobile robots. However, before we can discuss the optimality of spiral search patterns, we must recall the meaning of optimal search as described in Section 2.1.
Adams et al. define optimality in terms of two conditions: completeness and efficiency. Completeness means the search algorithm covers the entire search space; efficiency implies the ground target location is acquired 'quickly'. With this, Adams et al. suggest that the optimal search pattern is a 'spiral that begins at the center of the distribution and spirals out' [1], where the center of the distribution refers to the last known location of the ground target. The authors conclude that when a long period of time has elapsed since the last known location of the ground target, and the ground target moves randomly, spiral search is the optimal search pattern.

In [20] Langetepe proves the optimality of spiral search as a strategy for search games. Langetepe proposes two theorems: first, that 'The optimal spiral for searching a point in the plane achieves a competitive ratio of 17.289' (where the competitive ratio is the ratio between online and offline algorithm performance; more specifically, the length traversed by the search vehicle in the worst case over the length traversed in the best case), and second, that 'Spiral search is optimal'. Although Langetepe proves spiral search strategies are optimal for search games, some spirals are impractical for realistic sensors, as discussed next.

Langetepe uses a logarithmic spiral and assumes that the sensor is a string of unbounded length attached to the origin, which 'trips' if the target crosses it. The optimal evasive maneuvers by the target considered in our work do not apply under this model, since our sensor sweep has a finite, fixed width, as shown in Fig. 2.3 and Fig. 2.4, rather than expanding without bound. If we used a logarithmic spiral as Langetepe does, the spiral separation would grow large very quickly, and for a fixed sensor sweep width the sensor sweep overlap would fall to zero, as shown in Fig. 2.5. This behavior would allow the target vehicle to loiter within the region that has no sensor sweep overlap and evade our search vehicle. For this reason we have adopted our spiral based on previous work conducted in [13].
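The fixed-sweep alternative can be sketched as a constant-pitch (Archimedean) spiral, whose turns stay a constant distance apart. The parameterization below is our own illustration of the idea, not the spiral of [13]: the overlap multiplier sets the turn separation to sweep_width / n_overlap, so any n_overlap > 1 guarantees adjacent sensor sweeps overlap, unlike a logarithmic spiral whose turn separation grows without bound.

```python
import math

def spiral_trajectory(sweep_width, n_overlap, turns, pts_per_turn=90):
    """Archimedean spiral r = a * theta whose successive turns are separated
    by sweep_width / n_overlap, sampled at pts_per_turn points per turn.
    With n_overlap > 1 the sensor sweeps of adjacent turns overlap.
    (Illustrative sketch; parameter names are our own.)"""
    # Radial separation between successive turns of r = a*theta is 2*pi*a.
    a = sweep_width / (n_overlap * 2.0 * math.pi)
    pts = []
    for k in range(turns * pts_per_turn + 1):
        theta = 2.0 * math.pi * k / pts_per_turn
        pts.append((a * theta * math.cos(theta), a * theta * math.sin(theta)))
    return pts
```

For example, with a 100m sweep and n_overlap = 2.0, successive turns are 50m apart, so each ground point is covered by at least two passes.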
Figure 2.3: Side view of sensor sweep at h = 300m with overlap multiplier n = 2.01

Figure 2.4: Bird's eye view of sensor sweep at h = 300m with overlap multiplier n = 2.01
Figure 2.5: Bird's eye view of a trajectory with no overlap of sensor sweep
2.5 Realistic Camera Sensors
If the entire search area is encapsulated by the UAV's camera, then one may have confidence that the ground vehicle is detected, assuming no visual obstructions exist. A camera's field of view is related to the focal length, f_ef, of the lens implemented, as well as the sensor dimensions ccd_x, ccd_y. The focal length can be determined as follows:

  f_ef = ccd_y / (2 tan(π/7))   (2.10)

From (2.10) the angle of view of the camera (see Fig. 3.6a) can be defined as

  τ = 2 arctan(ccd_y / (2 f_ef))
  ρ = 2 arctan(ccd_x / (2 f_ef))   (2.11)

Now the field of view (see Fig. 3.6b) can be described as

  τ̂ = h tan(τ)
  ρ̂ = h tan(ρ)   (2.12)
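Equations (2.11) and (2.12) translate directly into code. The sketch below transcribes them as written above; the sensor dimensions and focal length used in any example are illustrative values, not parameters from the thesis.

```python
import math

def angle_of_view(ccd_y, ccd_x, f_ef):
    """Angles of view tau, rho from sensor dimensions and effective focal
    length, following Eq. (2.11)."""
    tau = 2.0 * math.atan(ccd_y / (2.0 * f_ef))
    rho = 2.0 * math.atan(ccd_x / (2.0 * f_ef))
    return tau, rho

def footprint(h, tau, rho):
    """Ground footprint (tau_hat, rho_hat) at altitude h, following
    Eq. (2.12)."""
    return h * math.tan(tau), h * math.tan(rho)
```

Note that the footprint grows with altitude h, which is exactly the trade-off exploited later: a higher altitude widens the sensor sweep but is bounded by the reliable-detection threshold h̃.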
2.5.1 Articulated Camera
Complications regarding the camera model arise when the camera is not pointing in a fixed direction from the bottom of the UAV; the attitude of the UAV as well as the relative orientation of the camera to the UAV must be considered. The sensor sweep is not the same when the UAV attitude changes [18]. To mitigate this, an articulated camera can be used: if the UAV is pitched α radians, the articulated camera should be relatively pitched −α radians, so that the sensor sweep is as when the UAV has 0 radians pitch and the camera is fixed. In the event of disturbances (such as wind) and high-latency connections, having an articulated camera will reduce pilot confusion. Recognizing this allows us to simplify the model: rather than considering UAV attitude and camera orientation individually, we unify the orientation and attitude, thus treating the camera as if it were fixed. Then, we can model an articulated camera's orientation as we model the UAV's orientation when the UAV has a fixed camera. This is shown in Figure 2.6.

Additional obstacles arise with this model. Precisely, a particular combination of the UAV's attitude and the camera's orientation could direct the camera towards infinity. Although an object of interest might reside within the field of view of the camera, the distance between the camera and the object of interest might be so large that, realistically, we cannot detect the object of interest. To account for this we set a threshold h̃, past which we cannot identify objects within our field of view. That is, if the camera reports an object at a certain pixel location and the distance to that location from the current position is outside of the threshold, we ignore the report, treating it as if there were no object in our field of view. Such criteria can be used to determine the altitude of the UAV; thus we should choose not to command our UAV to an altitude greater than h̃, as this violates reliable detection [13].

Figure 2.6: Depiction of relative camera angles on a horizontal UAV
Sensors for Detection
Using sensor information to track and detect targets is a rich field of research. In [33] Wang and Yagi adopt adaptive particle filters in order to improve the performance of the mean-shift tracking algorithm; thus, the time taken to search the neighborhood of a known object of interest is shortened. Chang et al. provide an application of tracking-by-parts (TBP) for improving global-appearance information for object tracking in [11]. Xue et al. discuss multiple-target tracking in [36], with results using coupled Markov random fields. Neifeld et al. provide an in-depth discussion of a refined method for sensor detection (within a cluttered environment) of a specific classification of target [24]. As that work is beyond the scope of this thesis, we assume the detection/classification of a ground target is a technology available to us. We can therefore assume that detection and classification is reported in the form of a pixel neighborhood on a time-stamped image. Much of the previous work is best-suited to multi-vehicle approaches, where the uncertainty of vehicle movement or initial condition requires advanced techniques. We will examine such approaches in our future work, though this thesis concentrates on the single-search-vehicle case, and we are interested in generating search trajectories for a search vehicle to follow. As we discuss next, the coverage of our sensor is critical to the control decisions made.
Chapter 3
Approach

Before we proceed, it is important not only to describe our assumptions but also to introduce our sensor model and detection model. Furthermore, a brief overview of the cases of detection we prove as part of our work is introduced here. The aim of our approach is to provide a decision controller for coordinating autonomous search missions. Given the current height of the search vehicle and the distance between the search vehicle and the ground target, we should be able to coordinate missions with provable success, or inform the human-in-the-loop that we cannot guarantee success.
3.1 Assumptions
For the work within the scope of this thesis, several assumptions are made.

Assumption 1. The speed of the ground target is less than that of the searching vehicle.

If the ground target is faster than the search vehicle, then it is always possible for the ground target to evade the search vehicle. This leads to interesting future work, which is mentioned but not considered in this thesis.

Assumption 2. No visual obstructions exist between the search vehicle and ground target.

Without this assumption the ground target could simply hide under an obstacle, and hence all coordinated search missions would fail, as we are using simple camera sensors to detect the ground target. We propose future work for addressing such an issue but do not consider this case for the work proposed in this thesis.
Assumption 3. The sensor operates within a planar local map, with the sensor normal to the plane.

Because we have an articulated camera, we can assure Assumption 3 even under windy conditions and with aggressive maneuvers by an aerial vehicle. For this thesis we rely on existing control methods, and make the assumption that we are able to use such a sensor [12], [34]. We assume a separate controller that maintains a perspective normal to the plane; this controller is implemented in the simulations performed, but its details are out of the scope of this thesis.
Figure 3.1: Sensor field of view of an articulated sensor, and an arbitrary radius of escape. The circle described by r̂ is not fully encapsulated by the sensor field of view.
3.2 Moving Targets
The introduction of moving targets introduces several interesting problems. In order to keep a moving target in focus, motion parameters for the moving target must be obtained, allowing compensation for the target trajectory [37]. Typically, such motion parameters are difficult to obtain, and thus Zhu et al. propose a method for target imaging regardless of the target's motion parameters. As the main contribution of this thesis is the ability to generate trajectories for the search vehicle, we will not concern ourselves with the difficulties introduced in tracking moving targets. However, we will concern ourselves with how the search space dynamically alters as a function of time for moving targets. B_r̂(t), the set of all possible locations of the ground target, expands as time elapses. Therefore, the vehicle's trajectory must be refined to take this into account. Once an area has been covered, it is not feasible to assume that the ground target could not be found in that area at a later time, since it can move based on its own strategy. B_r̂(t) is a function of elapsed time, governed by the radius function r̂(t), with B_r̂(0) centered at (0, 0). We can define B_r̂(t) as a level set,

B_r̂(t) = {(x₁, x₂) | (x₁² + x₂²)^{1/2} < r̂(t)}, where r̂(t) = ṽt.

An overestimate of how B_r̂(t) grows in time is to define a radius r̂ and have B_r̂(t) grow as a circle of radius r̂. This model is a conservative overestimate because (realistically) obstacles and ground conditions would limit the areas to which the ground target could navigate.
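The level-set description above can be sketched directly in code. This is an illustrative sketch only (the function and variable names are our own, not from the thesis): the escape radius grows linearly at the target's speed ṽ, and membership in B_r̂(t) is a distance test against that radius.

```python
import math

def escape_radius(v_target, t):
    """Radius r̂(t) = ṽ·t of the over-approximated reachable set B_r̂(t)."""
    return v_target * t

def in_escape_set(x, y, v_target, t, origin=(0.0, 0.0)):
    """True if (x, y) lies strictly inside the disc of radius r̂(t)
    centered at the target's last known location (the origin)."""
    dx, dy = x - origin[0], y - origin[1]
    return math.hypot(dx, dy) < escape_radius(v_target, t)
```

Because the set only ever grows with elapsed time, any controller using this model must treat previously swept areas as reachable again once enough time has passed.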
3.3 Target Acquisition
One factor to determine is the desired altitude of the UAV during search; rewriting (2.12) results in an equation for the altitude of the UAV in the body-fixed frame:

h = max( τ̂/tan(τ), ρ̂/tan(ρ) )   (3.1)
Maximizing the probability of capturing the ground vehicle implies that the field of view must be greater than or equal to the circular area describing the reachable set of the ground vehicle (see Fig. 3.6c). From (2.12) and (3.10), one can infer that the following relationship assures the visualization of the ground vehicle:
τ̂ = ρ̂ ≥ 2r̂   (3.2)
Rewriting (3.2) results in a more refined relationship between the altitude of the UAV and the encapsulation of the possible ground vehicle locations within the field of view of the camera. This relationship is defined below
h ≥ max( 2r̂/tan(τ), 2r̂/tan(ρ) )   (3.3)
In order to guarantee target acquisition, the maximum of the two terms in (3.3) is used for the altitude. The altitude is a contributing factor to the field of view (see Fig. 3.6d). Guaranteeing target acquisition in this case assumes that no visual obstructions, such as tunnels or bridges, exist.
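The altitude bound in (3.3) is simple to evaluate. The following sketch (names are our own) returns the minimum altitude that makes the field of view encapsulate a disc of radius r̂:

```python
import math

def required_altitude(r_hat, tau, rho):
    """Minimum altitude h satisfying (3.3):
    h >= max(2*r_hat/tan(tau), 2*r_hat/tan(rho)),
    where tau and rho are the camera half-angles (radians)."""
    return max(2.0 * r_hat / math.tan(tau),
               2.0 * r_hat / math.tan(rho))
```

For example, with equal half-angles of 45°, covering a 10 m escape radius requires roughly a 20 m altitude; the narrower half-angle always dominates the bound.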
3.4 Detection Model
The field of view of the sensor is defined in (2.12) as:

τ̂ = h tan(τ), ρ̂ = h tan(ρ)   (3.4)
For the scope of this thesis, we define a maximum height h̃ at which sensor resolution permits image processing techniques to guarantee detection. If the position of the ground target resides within the field of view of the sensor and the sensor is below h̃, then detection is guaranteed. Additionally, for this thesis, we will over-approximate the location of the ground target. The set of all possible locations of the ground target, B_r̂(t), is defined as a circle with radius r̂(t). Detection can be defined as a function; let detection operate on some region of ℝ², say the set P (the field of view), and let the target be located at (x̂, ŷ). Then detection is the function

p = 1 if (x̂, ŷ) ∈ P ∧ h ≤ h̃; 0 otherwise

where p is the probability of detection, and the sensor model provides the definition of P. Prior to this point, the sensor was assumed ideal, i.e. the probability of detection was 1 regardless of the distance between target and sensor. Realistically this may not be the case; Fig. 3.2 provides more pragmatic detection models. Such a model in turn yields restrictions on the distance between the sensor and target, prompting an alternative approach to coordinating the UAV. As exhibited in the example above, a trivial scenario could take a single UAV almost 500 seconds at a height of approximately 2.5 km to guarantee target acquisition [2]. Using pragmatic sensors, such a naive approach not only coordinates search missions with the potential of failure but also does not take advantage of the scalability of the altitude calculations.
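The binary detection function above can be sketched as follows, assuming (our simplification, not the thesis's definition) that P is represented as an axis-aligned box:

```python
def detect(x_hat, y_hat, fov_box, h, h_tilde):
    """Binary detection model: p = 1 iff the target (x_hat, y_hat)
    lies in the field-of-view set P (given as an axis-aligned box
    (xmin, xmax, ymin, ymax)) and the sensor height h <= h_tilde."""
    xmin, xmax, ymin, ymax = fov_box
    in_P = (xmin <= x_hat <= xmax) and (ymin <= y_hat <= ymax)
    return 1 if (in_P and h <= h_tilde) else 0
```

Note that the model is all-or-nothing: the same target just outside P, or the same geometry with the sensor just above h̃, yields p = 0.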
Figure 3.2: Pragmatic sensor models with cut-off height

Given the two detection models defined in Fig. 3.2, a cut-off height can be introduced. Such a cut-off height is based on a compromise between maximizing the potential field of view of a sensor and maximizing the probability of detection of the target. Understanding the characteristics of a given sensor model helps distinguish the range of operational heights for the UAV. Thus, the cut-off height becomes an important parameter for coordinating search missions. We choose to adopt a sensor model which assumes that we can guarantee detection (probability = 1) if the target is within the sensor field of view and the sensor is at height 0 < h ≤ h̃ meters.
3.5 Sensor Model
Sensors are reliable only with respect to what they are looking for; detection is a function of the target as well as the properties of the sensor. We define P as a box of width and height (ρ̂, τ̂). As shown in (2.12), the size of P is a function of the height of the sensor. To test for detection, membership in P is evaluated directly.

P = B(τ̂, ρ̂, x, y, θ)   (3.5)

Figure 3.3: P is defined as a box of width/height ρ̂, τ̂
3.6 Cases Covered
We will concern ourselves with four particular search cases. These four cases should be adequate to determine whether a successful mission can be coordinated with provable detection. Each case is a function of the initial distance between the search vehicle and ground target as well as the known velocities of the search vehicle and ground target.
• Case 0: The search vehicle flies at the current altitude directly to the last reported location of the ground target. Upon arrival, the camera sweep encapsulates all possible locations of the ground target.

• Case 1: The search vehicle's camera sweep would not encapsulate all possible locations of the ground target upon arrival. The search vehicle increases in altitude while flying to the last reported location, and upon arrival, the camera sweep now encapsulates all possible locations of the ground target. The maximum search height is not exceeded.

• Case 2: In order for the camera sweep to encapsulate all possible locations of the ground target, h̃ must be exceeded. Upon arrival, a spiral search pattern with fixed overlap is initiated, and is guaranteed to detect the ground target in bounded time. The spiral search may (or may not) begin at height h̃.

• Case 3: In order for the camera sweep to encapsulate all possible locations of the ground target, h̃ must be exceeded. A spiral search pattern with varying overlap is initiated, and a maximum guaranteed-detection search radius and time is calculated, but detection is not guaranteed.

The following sections explore these cases in more detail, and sketch the approaches taken in the next chapter.
3.7 Case 0
We consider the simple case in which the field of view of the sensor encapsulates all possible locations of the ground target upon arrival. The potential location of the ground target is then reported to the human operator. We calculate the distance between the search vehicle and ground target later in (4.1). Given this, we can calculate the time it takes for the search vehicle to reach the reported location of the ground target, as shown in Chapter 4.
Figure 3.4: Case 0: Sensor field of view encapsulates all possible locations of the ground target at h₁ ≤ h̃.
3.8 Case 1

Figure 3.5: Case 1: Sensor field of view encapsulates all possible locations of the ground target after climbing to h₂ ≤ h̃.
This case covers initial conditions where, by the time the search vehicle arrives at the reported location of the ground target, the circle defined by the radius of escape is not fully encapsulated by the field of view when the search vehicle is at height h₁. If this is the case, the search vehicle should increase its height, so that the field of view of the sensor grows, to some h₂ ≥ h₁.
3.9 Case 2
Performing aerial search and rescue missions for ground targets describes a large parameter design space. In order to provide a level of autonomy to search and rescue missions, we propose a closed-form expression for calculating the flight altitude. A subset of the parameters impacting the suitable flight altitude is taken into consideration. Initially, the length of the spiral between the search vehicle and ground target at time t = 0 should be calculated. Rudimentary mathematics provides a simple expression for the length of a spiral in polar coordinates centered at the origin (0, 0). In polar coordinates, the radius of a spiral at any angle θ ∈ ℝ⁺, with multiplier m ∈ ℝ⁺, is

r = mθ   (3.6)

Taking the derivative of (3.6) gives

dr/dθ = m   (3.7)

Hence the length of the spiral can be found in closed form as follows:

s(θ̂) = ∫₀^θmax m(1 + θ²)^{1/2} dθ

s(θ̂) = [ (mθ/2)(1 + θ²)^{1/2} + (m/2) ln|θ + (1 + θ²)^{1/2}| ]₀^θmax

s(θ̂) = (mθmax/2)(1 + θmax²)^{1/2} + (m/2) ln|θmax + (1 + θmax²)^{1/2}|   (3.8)
With (3.8) and the UAV's average speed, it is now possible to determine the time it takes for the UAV to navigate to the start point of the ground vehicle.
t = s(θ̂)/v   (3.9)
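The closed form in (3.8) is easy to cross-check numerically. The sketch below (an illustration with our own names) implements (3.8) and compares it against a direct midpoint-rule integration of m·(1 + θ²)^{1/2}:

```python
import math

def spiral_length(m, theta_max):
    """Closed-form length of the spiral r = m*theta from 0 to theta_max,
    per (3.8)."""
    t = theta_max
    return (m * t / 2.0) * math.sqrt(1.0 + t * t) \
        + (m / 2.0) * math.log(t + math.sqrt(1.0 + t * t))

def spiral_length_numeric(m, theta_max, steps=20000):
    """Midpoint-rule check: integrate m*sqrt(1 + theta^2) dtheta."""
    d = theta_max / steps
    return sum(m * math.sqrt(1.0 + ((i + 0.5) * d) ** 2) * d
               for i in range(steps))
```

Dividing the length by the UAV speed then gives the travel time of (3.9), t = s(θ̂)/v.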
where v is the velocity of the search UAV. The location of the ground target after the elapsed time needs to be predicted. One method for mapping the possible locations is to describe circles of radius r̂ about the location of the ground vehicle at time t = 0. The radius of the circle is calculated as follows:
r̂(t) = ṽt   (3.10)
where ṽ is the velocity of the ground vehicle. Once the area in which the ground vehicle may be located is described, one may determine a suitable height such that the field of view of the UAV camera encapsulates the entire area.
Figure 3.6: (a) Angle of view of UAV camera. (b) Field of view of UAV camera. (c) Field of view of UAV camera with absolute guarantee of locating the ground vehicle. (d) Varying the height inherently varies the field of view of the UAV, hence altering the probability of visualizing the ground vehicle.
If the search vehicle is already at its maximum search height and, upon arrival, the field of view of the sensor does not fully encapsulate all possible locations of the ground target, then we must introduce a search pattern. Our search pattern is defined as a spiral, characterized as follows, using a polar coordinate system centered at the target's last known location:

r = hθ/(nh_o)   (3.11)
In order to make it impossible for the ground target to escape, we must introduce the concept of sensor sweep overlap. Additional discussion of this requirement is given in Section 4.3. The goal of this case is to overcome a loitering strategy with a fixed overlap based on a maximum radius of escape; capture is guaranteed without the need to vary the overlap multiplier, n.
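The spiral of (3.11) can be sampled into waypoints for the search vehicle to follow. This is a sketch only (names and the sampling step are ours):

```python
import math

def spiral_waypoints(h, n, h_o, theta_max, d_theta=0.1):
    """Sample (x, y) waypoints along r(theta) = h*theta/(n*h_o),
    per (3.11), centered at the target's last known location."""
    pts = []
    theta = 0.0
    while theta <= theta_max:
        r = h * theta / (n * h_o)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        theta += d_theta
    return pts
```

Larger overlap multipliers n tighten the spiral (smaller radius growth per turn), which is what provides the sweep overlap discussed in Section 4.3.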
3.10 Case 3
A more efficient approach in terms of search area coverage would involve dynamically altering our overlap while searching. As θ increases, the time for the search vehicle to traverse the spiral between θ and θ + 2π increases. Therefore, as the escape radius is a function of t_{θ₁,θ₂}, we should choose to increase our overlap.
3.11 Decision Controller
Now that we have categorized the cases with which we are concerned, a decision controller is used to select the most appropriate case, given initial conditions and parameters. Based on the distance between the start location of the object of interest and the search vehicle, the decision controller should be able to select which mission to deploy. We should be able to define regions in which a certain mission is guaranteed to be successful as our cases are a function of d, h, velocity of the search vehicle, and velocity of the target.
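A skeleton of such a decision controller can be sketched as follows. This is an illustrative sketch, not the thesis's implementation: the threshold distances for Cases 0–2 are assumed to be precomputed from the bounds derived in Chapter 4, and are passed in as parameters.

```python
def select_case(d, max_d_case0, max_d_case1, max_d_case2):
    """Select the least-demanding mission whose guarantee region contains
    the current separation d between search vehicle and target.
    The thresholds are assumed to come from the Chapter 4 bounds."""
    if d <= max_d_case0:
        return 0  # fly at current altitude
    if d <= max_d_case1:
        return 1  # climb to h2 <= h~ en route
    if d <= max_d_case2:
        return 2  # spiral search with fixed overlap
    return 3      # spiral with varying overlap; no detection guarantee
```

Because each case's threshold is a function of d, h, and the two velocities, the guarantee regions partition the parameter space, and the human-in-the-loop can be informed whenever Case 3 is selected.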
Chapter 4
Results

4.1 Case 0
For this case we want to prove that, by the time the search vehicle travels between its original location and the last reported location of the ground target, all of the potential locations of the ground target are covered by the sensor sweep of the search vehicle. More formally:

Theorem 2. Consider a search vehicle located at (x, y, h) that flies at the current altitude, h, directly to the last reported location of the ground target, (x̂, ŷ), at velocity v. There is a maximum distance from (x, y) to (x̂, ŷ) such that upon arrival at (x̂, ŷ, h), the camera sweep, P, encapsulates all possible locations of the ground target, B_r̂(t).

Proof. The proof is by construction. First calculate the level set that defines all potential locations of the ground target, which maximally fills, but does not lie outside, the sensor sweep of the search vehicle at height h. The sensor sweep set at that time, t₁, is then a superset of the level set that bounds all possible locations of the target vehicle. P is defined by another function, namely x(t₁), the location of the search vehicle at time t₁, specifically its (x₁, y₁, h₁) values. As the size of B_r̂(t) is a function of the elapsed search time, we need to calculate the time it takes for the search vehicle to navigate to the last known position of the ground target.
d = ((x̂ − x₁)² + (ŷ − y₁)²)^{1/2}   (4.1)

t₁ = d/v   (4.2)
With this information, we can now calculate the initial ground target escape radius, r̂, at the time at which P − B_r̂(t) is minimal:

r̂(t₁) = ṽt₁   (4.3)
Recall that the circle describing all possible locations of the ground target, B_r̂(t), at time t₁ has radius

r̂(t₁)   (4.4)
The maximal set in which the ground target must reside may be shown analytically if the velocity of the vehicle and the time in which it can travel are known. Where ρ ≤ τ,

B_r̂(t₁) = {(x₁, y₁) | ((x₁ − x̂)² + (y₁ − ŷ)²)^{1/2} ≤ r̂(t₁)}
ṽt₁ ≤ h tan(ρ)
ṽd/v ≤ h tan(ρ)   (4.5)
Thus, solving for d we get

d ≤ h tan(ρ)v/ṽ   (4.6)
Thus, given a maximum separation of (4.6) between the ground target and search vehicle, and known velocities and initial altitude, Theorem 2 gives a criterion to employ Case 0. An example simulation is shown in Fig. 4.1.
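The Case 0 criterion of (4.6) reduces to a one-line check. A sketch (names ours):

```python
import math

def case0_max_distance(h, rho, v, v_target):
    """Maximum separation d <= h*tan(rho)*v/v~ for Case 0, per (4.6)."""
    return h * math.tan(rho) * v / v_target

def case0_applies(d, h, rho, v, v_target):
    """True when flying straight at the current altitude guarantees that
    the sweep encapsulates all possible target locations on arrival."""
    return d <= case0_max_distance(h, rho, v, v_target)
```

For instance, at h = 100 m with a 45° half-angle, a search vehicle four times faster than its target can guarantee Case 0 out to roughly a 400 m separation.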
Figure 4.1: Case 0: Simulation result showing concept.

4.2 Case 1

Similar to Section 4.1, we want to prove that by the time the search vehicle travels between its original location and the last reported location of the ground target, as well as climbing from height h₁ to h₂, all of the potential locations of the ground target are covered by the sensor sweep of the search vehicle. Recall that in this case, arrival at height h₁ does not satisfy Case 0.

Theorem 3. Consider a search vehicle at (x₁, y₁, h₁), with maximum velocity v. There exists a maximum distance, d, measured from the target's location, such that the search vehicle can increase to altitude h₂ ≤ h̃, and upon arrival at (x̂, ŷ, h₂) the camera sweep P encapsulates all possible locations of the ground target.

Proof. Altering the height impacts the time taken to reach the reported last sighting of the ground target. This adjustment can be characterized as follows; we determine the time t₁ such that
t₁ = d/v + (h₂ − h₁)/(dh/dt)   (4.7)

where h₂ ≤ h̃, and dh/dt is the maximum change in height per unit time for the sensor. Recall that h̃ is the maximum height at which the on-board sensor can reliably detect the target. Assuming ρ ≤ τ, then:

B_r̂(t₁) = {(x₁, y₁) | ((x₁ − x̂)² + (y₁ − ŷ)²)^{1/2} ≤ r̂(t₁)}
ṽt₁ < h₂ tan(ρ)
ṽ( d/v + (h₂ − h₁)/(dh/dt) ) < h₂ tan(ρ)   (4.8)
Therefore we can redefine the distance based on the change in height as:

d ≤ ( h₂ tan(ρ)/ṽ − (h₂ − h₁)/(dh/dt) ) v   (4.9)
As long as (4.9) is satisfied, Theorem 3 holds and Case 1 may be used in search.
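As with Case 0, the bound (4.9) is directly computable. A sketch (names ours); note that when the climb term dominates, the bound can go non-positive, meaning no separation admits a Case 1 guarantee for that climb rate:

```python
import math

def case1_max_distance(h1, h2, rho, v, v_target, climb_rate):
    """Maximum separation for Case 1, per (4.9):
    d <= (h2*tan(rho)/v~ - (h2 - h1)/(dh/dt)) * v.
    climb_rate is the maximum dh/dt of the vehicle/sensor."""
    return (h2 * math.tan(rho) / v_target
            - (h2 - h1) / climb_rate) * v
```

Compared with (4.6), the climb from h₁ to h₂ costs time, which shrinks the admissible separation, but the larger arrival altitude h₂ enlarges the sweep.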
Figure 4.2: Case 1: Simulation result showing concept.

Figure 4.3: Sensor overlap as the search vehicle follows a spiral trajectory, generated with fixed overlap n = 1.05.
4.3 Case 2
We want to prove that the search vehicle captures the ground target after initiating a spiral search at height h₂ = h̃ centered at the last reported location of the ground target. The spiral in this case shall use a fixed overlap multiplier, n. Recall the spiral search pattern in polar coordinates (from (3.11)):

r(θ) = hθ/(nh_o)
Definition 1. Let θ0 = 0 represent the origin of the spiral, and θi > θ0 represent arbitrary angular positions in polar coordinates along the search trajectory.
Figure 4.4: Sensor overlap n∗ between the sensor sweep at time t_{θ₁} and t_{θ₂}.
Definition 2. Let m = h/(nh_o) be a constant; for θ₁ < θ₂, the length of the search pattern arc (defined in (3.11)) is taken from first principles as follows.

s(θ₁, θ₂) = ∫_{θ₁}^{θ₂} m(1 + θ²)^{1/2} dθ   (4.10)

s(θ₁, θ₂) = [ (mθ/2)(1 + θ²)^{1/2} + (m/2) ln|θ + (1 + θ²)^{1/2}| ]_{θ₁}^{θ₂}   (4.11)
Given the length between any two angles of the search trajectory and the search vehicle's maximum velocity, the time required for the search vehicle to travel from θ₁ to θ₂ is clearly:

t_{θ₁,θ₂} = s(θ₁, θ₂)/v   (4.12)
Definition 3. Let the growth in the radius of escape be defined as Δr̂(t₁, t₂) = r̂(t₂) − r̂(t₁). The region of escape is therefore B_r̂(t₂) \ B_r̂(t₁), an annulus.

Lemma 1. Consider an angle on the spiral search trajectory, θ₁. Starting at this angle at time t₁, the search vehicle is guaranteed to capture the ground target before it reaches θ₂ = θ₁ + 2π if:
1. r̂(t₁) > r(θ₁) + ρ̂/2;
2. r̂(t₂) < r(θ₂) + ρ̂/2; and
3. r̂(t₁) − n∗ > r(θ₂) − ρ̂/2
where t₂ represents the time at which the search vehicle arrives at angle θ₂.

Proof. The proof is by contradiction. Assume that, as the search vehicle travels along the trajectory given by the spiral arc s(θ₁, θ₂), Case 0 will not be satisfied for any time t₁ < t < t₂. At time t₁, let the ground target be at the polar coordinates x̃(t₁), relative to the origin (not necessarily a point on the search trajectory), and the search vehicle at polar radius r(θ₁). Expanding the first of the assumptions (using Definition 1 and (3.10)):
r̂(t₁) > r(θ₁) + ρ̂/2
ṽs(θ₁, θ₂)/v > hθ₁/(nh_o) + ρ̂/2   (4.13)

where r̂(t₁) relates to the radius of escape at time t₁; we use the arc length, s(θ₁, θ₂), rather than Cartesian distance, to calculate the growth of the set.
Similarly, the second assumption may be rewritten as:

r̂(t₂) < r(θ₂) + ρ̂/2
(ṽ/v)[ s(θ₁, θ₂) + s(θ₂, θ₃) ] < hθ₂/(nh_o) + ρ̂/2   (4.14)
Finally, the third assumption may be expanded as follows:

r̂(t₁) − n∗ > r(θ₂) − ρ̂/2
ṽs(θ₁, θ₂)/v − (1 − 1/n)ρ̂ > hθ₂/(nh_o) − ρ̂/2   (4.15)
Substituting the arc length s(·) for d in (4.6), the maximum distance required for Case 0 to be satisfied is:

s(θ₁, θ₂) ≤ h tan(ρ)v/ṽ
Yet we know Case 0 will not be satisfied due to our assumption in (4.13) which can be rewritten as:
ṽs(θ₁, θ₂)/v > hθ₁/(nh_o) + ρ̂/2
s(θ₁, θ₂) > ( hθ₁/(nh_o) + ρ̂/2 )(v/ṽ)   (4.16)
Satisfying Case 0 would clearly violate the assumption in (4.13). Therefore, in violating any one of the three assumptions, we cannot prove detection. However, the search vehicle is guaranteed to capture the ground target before it reaches θ₂ = θ₁ + 2π if all three of the assumptions are satisfied, because violating our assumption that Case 0 does not hold requires detection.
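The three Lemma 1 conditions translate directly into a predicate the trajectory generator can evaluate at each 2π turn. A sketch (names ours; all arguments are the quantities defined in the text):

```python
def lemma1_capture(r_hat_t1, r_hat_t2, r_theta1, r_theta2, rho_hat, n_star):
    """Check the three Lemma 1 conditions guaranteeing capture of the
    ground target within one 2*pi turn of the spiral:
    1. r_hat(t1) > r(theta1) + rho_hat/2
    2. r_hat(t2) < r(theta2) + rho_hat/2
    3. r_hat(t1) - n_star > r(theta2) - rho_hat/2"""
    c1 = r_hat_t1 > r_theta1 + rho_hat / 2.0
    c2 = r_hat_t2 < r_theta2 + rho_hat / 2.0
    c3 = r_hat_t1 - n_star > r_theta2 - rho_hat / 2.0
    return c1 and c2 and c3
```

In a full implementation, r̂(t₁) and r̂(t₂) would come from (3.10) and the traversal times of (4.12), and n∗ from (4.29).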
Definition 4. From Lemma 1 we can define θ̂ as the last θ value for which θ̂ − 2π is covered by Lemma 1, given a fixed value of n.
Theorem 4. Consider a search vehicle with maximum velocity, v, located at (x₀, y₀, h₀), that flies at the commanded altitude, h₂, directly to the last reported location of the ground target, r(θ₀), at velocity v. The distance from (x₀, y₀) to r(θ₀) is such that upon arrival at r(θ₀), the camera sweep, P, does not encapsulate all possible locations of the ground target, B_r̂(t). For a ground target located at r(θ₀) at time t₀, with maximum velocity ṽ, a spiral search trajectory (with fixed overlap) is guaranteed to detect the ground target if the distance between the ground target and search vehicle, d, satisfies the inequality

d ≤ (v/ṽ) hθ̂/(nh_o) − s(θ₀, θ̂) − (h̃ − h)/(dh/dt)
Proof. The proof is by construction. Given a known value of n, a range (θ₀, θ̂), and the velocities of the search vehicle, v, and ground target, ṽ, two necessary conditions are required to guarantee detection of the ground target: first, r̂(t) ≤ r(θ̂(t)), and second, n∗(t) ≥ r̂(t) − r̂(t − 1), where r̂ ∈ ℝ⁺ and n∗ describes the sensor overlap.
Assume it takes the search vehicle time t₁ to navigate from its initial position at time t₀ to r(θ₀). Assuming Case 0 and Case 1 are not satisfied, we can accept that at time t₁:
d > ( h₂ tan(ρ)/ṽ − (h₂ − h₁)/(dh/dt) ) v   (4.17)
Therefore at time t₁ there is already some radius of escape, defined by:

r̂(t₁) = ṽt₁   (4.18)
As we cannot guarantee capture at time t₁, we must initiate a spiral search pattern beginning at r(θ₀). The arc length of the spiral between θ₁ and θ₂, defined in polar coordinates, is:

s = ∫_{θ₁}^{θ₂} ( r² + (dr/dθ)² )^{1/2} dθ   (4.19)

Assuming dr/dθ is a constant, c, we can simplify (4.19) as follows:
s = ∫_{θ₁}^{θ₂} ( r² + c² )^{1/2} dθ   (4.20)
Solving the integral (4.20) gives:

s(θ₁, θ₂) = (1/(2h)) ( h(a₂ − a₁) + c²nh_o log(b₂/b₁) )   (4.21)

where

a₁ = θ₁ ( c² + (hθ₁/(nh_o))² )^{1/2}
a₂ = θ₂ ( c² + (hθ₂/(nh_o))² )^{1/2}
b₁ = ( c² + (hθ₁/(nh_o))² )^{1/2} + hθ₁/(nh_o)
b₂ = ( c² + (hθ₂/(nh_o))² )^{1/2} + hθ₂/(nh_o)   (4.22)
The time it takes for the search vehicle to traverse this arc length is clearly

t_{θ₁,θ₂} = s(θ₁, θ₂)/v   (4.23)
and with the previous definition of the radius of escape in (3.10), the time required to traverse the arc length can be used to bound the target's potential location. As we initiate a spiral search from r(θ₀), we are concerned with the value of the arc length s(0, 2π). Repeated application of Lemma 1 will result in the search vehicle's sensor sweep completely enveloping the radius of escape, or the radius of escape will grow faster than the overlap and a loitering tactic cannot be avoided. The change in radius of escape with respect to time is conveniently written as
Δr̂(t_j, t_k) = r̂(t_k) − r̂(t_j)   (4.24)
where t_k > t_j. Therefore, we want to ensure that the change in radius of escape is less than the overlap of the sensor sweep from (θ₀, θ̂) for every 2π interval. Let θ(t_j) = θ_j and θ(t_k) = θ_k = θ_j + 2π. Then suppose the growth of the radius of escape is less than the sensor overlap at the next 2π angular value:
Δr̂(t_j, t_k) < n∗
r̂(t_k) − r̂(t_j) < (1 − 1/n)ρ̂
ṽt_{θ_j,θ_k} < (1 − 1/n)ρ̂   (4.25)

Using (4.23):

(ṽ/v) s(θ_j, θ_k) < (1 − 1/n)ρ̂
s(θ_j, θ_k) < (v/ṽ)(1 − 1/n)ρ̂   (4.26)
To ensure that the radius of escape does not grow faster than the overlap, the arc length between (θ_j, θ_j + 2π) must obey the inequality in (4.26). We are also concerned with the maximum search space of the sensor based on initial sensor parameters. Given θ̂, we can calculate the maximum search space and corresponding escape radius as:
r̂(t) = ( s(θ₀, θ̂) + d + (h̃ − h)/(dh/dt) ) (ṽ/v)   (4.27)
Thus, for our first necessary condition to hold, we require the radius of escape to be fully contained within our search space, more formally:
r̂ ≤ hθ̂/(nh_o)
( s(θ₀, θ̂) + d + (h̃ − h)/(dh/dt) ) (ṽ/v) ≤ hθ̂/(nh_o)
d ≤ (v/ṽ) hθ̂/(nh_o) − s(θ₀, θ̂) − (h̃ − h)/(dh/dt)   (4.28)
If (4.28) holds, then we need to consider the second necessary condition. Recall that we need n∗ ≥ r̂(t_k) − r̂(t_j) to guarantee detection. We define the sensor sweep overlap as:

n∗ = (1 − 1/n)ρ̂   (4.29)
This enables us to determine whether the sensor sweep overlap, n∗, is large enough to overcome the loitering tactic. Given (4.23) and (4.29), we can determine the sensor sweep overlap parameter n that guarantees detection at a specific θ as:
n∗ ≥ r̂(t_k) − r̂(t_j)
(1 − 1/n)ρ̂ ≥ s(θ_j, θ_k)ṽ/v
−ρ̂/n ≥ s(θ_j, θ_k)ṽ/v − ρ̂
n ≤ −ρ̂/( s(θ_j, θ_k)ṽ/v − ρ̂ )
n ≤ ρ̂/( ρ̂ − s(θ_j, θ_k)ṽ/v )   (4.30)
Fig. 4.3 shows graphically the overlap of our sensor at a given search height. Our aim is to ensure that the sensor overlap is great enough such that we can guarantee detection of the ground target. As long as (4.28) and (4.30) are satisfied we have proven Theorem 4 for this case, and we define the mission according to Case 2.
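The overlap condition of (4.30) can be sketched as a helper that either returns the admissible upper bound on n or reports that no fixed overlap can keep up with the escape-radius growth (names are ours):

```python
def overlap_n_bound(s_jk, rho_hat, v, v_target):
    """Upper bound on the overlap multiplier n from (4.30):
    n <= rho_hat / (rho_hat - s(theta_j, theta_k)*v~/v).
    Returns None when the escape growth over one turn already
    meets or exceeds the sweep width rho_hat."""
    growth = s_jk * v_target / v
    if growth >= rho_hat:
        return None  # no fixed overlap can overcome loitering here
    return rho_hat / (rho_hat - growth)
```

Checking this bound together with the separation bound of (4.28) is exactly the Case 2 admissibility test: if both hold, the fixed-overlap spiral guarantees detection.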
4.4 Case 3
Similar to Section 4.3, we want to prove that the search vehicle captures the ground target after initiating a spiral search at height h₂ = h̃ centered at the last reported location of the ground target. However, the spiral in this case shall use a variable overlap multiplier, n, recalculated every 2π period, rather than a fixed value of n.

Figure 4.5: Case 2: Bird's-eye view simulation result showing concept where detection occurs.

Figure 4.6: Case 2: Side view simulation result showing concept.

Recall the spiral search pattern in polar coordinates (from (3.11)):

r(θ) = hθ/(nh_o)

Recall from Definition 1 that θ₀ = 0 represents the origin of the spiral, and θ_i > θ₀ represent angular positions in polar coordinates along the search trajectory. Recall from Definition 2 that m = h/(nh_o) is a constant; for θ₁ < θ₂, the length of the search pattern arc (defined in (3.11)) is

s(θ₁, θ₂) = [ (mθ/2)(1 + θ²)^{1/2} + (m/2) ln|θ + (1 + θ²)^{1/2}| ]_{θ₁}^{θ₂}   (4.31)

Given the length between any two angles of the search trajectory and the search vehicle's maximum velocity, the time required for the search vehicle to travel from θ₁ to θ₂ is clearly

t_{θ₁,θ₂} = s(θ₁, θ₂)/v   (4.32)

Recall from Definition 3 that the growth in the radius of escape is defined as Δr̂(t₁, t₂) = r̂(t₂) − r̂(t₁); the region of escape is therefore B_r̂(t₂) \ B_r̂(t₁), an annulus. Also recall from Lemma 1 that, starting from an angle θ₁ on the spiral search trajectory at time t₁, the search vehicle is guaranteed to capture the ground target before it reaches θ₂ = θ₁ + 2π if:
1. r̂(t₁) > r(θ₁) + ρ̂/2;
2. r̂(t₂) < r(θ₂) + ρ̂/2; and
3. r̂(t₁) − n∗ > r(θ₂) − ρ̂/2
where t₂ represents the time at which the search vehicle arrives at angle θ₂. Finally, recall from Definition 4 that θ̂ is the last θ value for which θ̂ − 2π is covered by Lemma 1, given a fixed value of n.
Theorem 5. Consider a search vehicle with maximum velocity, v, located at (x₀, y₀, h₁), that flies at the commanded altitude, h₂, directly to the last reported location of the ground target, r(θ₀), at velocity v. The distance from (x₀, y₀) to r(θ₀) is such that upon arrival at r(θ₀), the camera sweep, P, does not encapsulate all possible locations of the ground target, B_r̂(t). For a ground target located at r(θ₀) at time t₀, with maximum velocity ṽ, a spiral search trajectory (with variable overlap) is guaranteed to detect the ground target if the set of all possible locations of the ground target, B_r̂(t_θ̂), is encapsulated within the maximum search space r̃(t_θ̂).

Proof. The proof is by construction. Given a known value of n, a range (θ₀, θ̂), and the velocities of the search vehicle, v, and ground target, ṽ, two necessary conditions are required to guarantee detection of the ground target: first, r̂(t_k) ≤ r(θ̂(t_k)), and second, n∗ ≥ r̂(t_k) − r̂(t_j), where θ(t_j) + 2π = θ(t_k).
Assume it takes the search vehicle time t₁ to navigate from its initial position at time t₀ to r(θ₀). Assuming Case 0 and Case 1 are not satisfied, we can accept that at time t₁ we follow Theorem 4. However, unlike Case 2, we need to optimize our search by changing the value of n. Given that:

s = f(n₁, h̃, θ₁, θ₂)   (4.33)

t_{θ₁,θ₂} = f(v, s)   (4.34)

r̂ = f(ṽ, t_{θ₁,θ₂})   (4.35)

n₂ = f(n₁, r̂)   (4.36)
Algorithm 4.1 Iteratively calculate the value of n2
  Select n1
  Calculate r1
  r2 = 0
  while r2 ≠ r1 do
    Update r1
    Calculate s = f(n1, h̃, θ1, θ2)
    Calculate tθ1,θ2 = f(v, s)
    Calculate r̂ = f(ṽ, tθ1,θ2)
    Calculate n2 = f(n1, r̂)
    Update r2
    if X ⊆ P then
      Return Mission Success
    else
      Continue
    end if
  end while

In Algorithm 4.1 we iterate as long as r2 ≠ r1. Analytically,

r1 = hθ1/(n1 ho)   (4.37)

r2 = hθ2/(n2 ho)   (4.38)

Assuming θ2 = (θ1 + 2π) we can define r2 as:

r2 = h(θ1 + 2π)/(n2 ho)   (4.39)
Thus, we iterate until we converge on a circle, which occurs when:

hθ1/(n1 ho) = h(θ1 + 2π)/(n2 ho)   (4.40)

Solving for n2:

n2 = hn1 ho θ1/(ho(θ1 + 2π)) = hn1 θ1/(θ1 + 2π)   (4.41)

Selecting this value for n2 implies that we are indeed commanding the search vehicle to converge on a circle. Performing such a strategy results in a maximum search space with radius r̃(t).
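The fixed-point structure behind Algorithm 4.1 can be sketched in code. This is an illustrative sketch only: the arc-length expression below is a stand-in for s = f(n1, h̃, θ1, θ2) (which the thesis defines via the spiral geometry), and the parameter values are hypothetical; the update for n2 follows (4.43).

```python
import math

def n2_update(n1, h, h_o, theta1, v, v_tilde, rho):
    """One pass of the Algorithm 4.1 loop: returns (n2, r1, r2, r_hat).

    The arc-length expression is a stand-in for s = f(n1, h~, theta1, theta2)."""
    theta2 = theta1 + 2 * math.pi
    r1 = h * theta1 / (n1 * h_o)                    # Eq. (4.37)
    r2_prev = h * theta2 / (n1 * h_o)               # radius one turn later, same n
    s = 0.5 * (r1 + r2_prev) * (theta2 - theta1)    # stand-in spiral arc length
    t = s / v                                       # Eq. (4.34): time for one turn
    r_hat = v_tilde * t                             # Eq. (4.35): escape-radius growth
    n2 = (h * theta2 * n1) / (h * theta1 - r_hat * h_o * n1
                              + h * math.tan(rho) * h_o * n1)  # Eq. (4.43)
    r2 = h * theta2 / (n2 * h_o)                    # Eq. (4.39)
    return n2, r1, r2, r_hat

# By construction of (4.43), the new radius meets the overlap bound (4.42)
# with P = h tan(rho) exactly:
n2, r1, r2, r_hat = n2_update(n1=2.0, h=100.0, h_o=10.0,
                              theta1=4 * math.pi, v=25.0, v_tilde=2.5, rho=0.5)
print(abs(r2 - (r1 - r_hat + 100.0 * math.tan(0.5))) < 1e-9)  # True
```

Iterating `n2_update` with n1 ← n2 mirrors the while loop of Algorithm 4.1, stopping once r2 and r1 agree.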
However, until this terminal value is reached, we iteratively select n2 to provide a desirable overlap such that the ground target cannot loiter outside the sensor sweep to avoid detection. An over-approximation of the desired overlap is defined as:
r2 ≤ r1 − r̂ + P   (4.42)

Thus, solving for n2 we get:

n2 = h(θ1 + 2π)n1/(hθ1 − r̂ho n1 + h tan(ρ̃)ho n1)   (4.43)
Realistically, our overlap does not have to be this large, as we are over-approximating the escape radius of the ground target. As long as our trajectory does not converge on a circle before the search vehicle detects the ground target, we define the mission according to Case 3. We can determine the maximum search space in terms of a circle with radius r̃. From (4.41) we know that selecting such a value of n2 results in r1 = r2, thus the trajectory converges on a circle. We can calculate the value of r̃ that defines the radius of the search vehicle’s maximum search space.
r1 = r2 = (θ1 + 2π)²/(n1 ho θ1)   (4.44)

Thus, as (4.44) defines the position when r1 = r2, and also defines the point at which our search trajectory converges onto a circle, we define the radius of the maximum search space as

r̃(tθ̂) = (θ1 + 2π)²/(n1 ho θ1)   (4.45)
From repeated application of Lemma 1, the search vehicle is guaranteed to detect the ground target if the set of all possible locations of the ground target, Brˆ(t) (tθˆ), is encapsulated within the maximum search space r˜(tθˆ).
If the search vehicle has not found the ground target before the trajectory converges onto a circle, then the human in the loop will have to decide whether to call off the search as we cannot guarantee the ground target will ever reenter the search space.
Figure 4.7: Case 3: Side view simulation result.
4.5 Decision Controller
The final contribution of this thesis is to provide a new method to characterize the rate of success of any search and rescue mission with reliable sensors and uncluttered terrain. This method allows the human-in-the-loop to explore other strategies in the case that a mission is deemed unlikely to succeed. With this in mind, we propose Algorithm 4.2 as a decision controller. By the theorems presented in this thesis, this decision controller coordinates maximum probability search plans for each case, given our homogeneous detection function:

p = 1 if (x̂, ŷ) ∈ P ∧ h ≤ h̃
p = 0 otherwise

where p is the probability of detection.
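The homogeneous detection function translates directly into code. In this sketch, `in_sweep` is a hypothetical predicate for membership in the camera sweep P, and `h_max` stands for h̃:

```python
def detection_probability(x_hat, y_hat, h, in_sweep, h_max):
    """p = 1 if the target position lies in the sweep P and h <= h~, else p = 0."""
    return 1 if in_sweep(x_hat, y_hat) and h <= h_max else 0

# Example: a circular sweep of radius 50 m centred on the origin
in_circle = lambda x, y: x * x + y * y <= 50.0 ** 2
print(detection_probability(10.0, 20.0, 120.0, in_circle, 300.0))  # 1 (detected)
print(detection_probability(10.0, 20.0, 400.0, in_circle, 300.0))  # 0 (above h~)
```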
Algorithm 4.2 Decision controller for aerial search:
  for i := 1 → size(sv) do
    sv(i).r1 = sv(i).h1 θ1 / (sv(i).n sv(i).ho)
    sv(i).r2 = sv(i).h1 (θ1 + 2π) / (sv(i).n sv(i).ho)
    coord(sv(i).d, sv(i).h1, sv(i).h2, sv(i).v, sv(i).ṽ, sv(i).n, sv(i).r1, sv(i).r2)
  end for

  coord(d, h1, h2, v, ṽ, n, r1, r2)
    if d ≤ h tan(ρ̃)v/ṽ then
      Execute Case 0
    else if d ≤ (h2 tan(ρ̃)/ṽ − (h2 − h1)/(dh/dt))v then
      Execute Case 1
    else if r2 ≤ r1 − r̂ + h2 tan(ρ̃) then
      Execute Case 2
    else
      Execute Case 3
    end if
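A minimal sketch of the coord dispatch in Algorithm 4.2 follows, under assumed parameter names (`rho` for the camera half-angle ρ̃, `dh_dt` for the climb rate, `r_hat` for the target's escape radius); each guard mirrors the corresponding case inequality:

```python
import math

def coord(d, h1, h2, v, v_tilde, rho, dh_dt, r1, r2, r_hat):
    """Select the search case, mirroring the guards of Algorithm 4.2."""
    if d <= h1 * math.tan(rho) * v / v_tilde:                        # Case 0 guard
        return "Case 0"
    if d <= (h2 * math.tan(rho) / v_tilde - (h2 - h1) / dh_dt) * v:  # Case 1 guard
        return "Case 1"
    if r2 <= r1 - r_hat + h2 * math.tan(rho):                        # Case 2 overlap guard
        return "Case 2"
    return "Case 3"

# A nearby start needs neither climbing nor spiralling:
print(coord(d=100, h1=100, h2=150, v=25, v_tilde=2.5, rho=0.5,
            dh_dt=0.5, r1=50, r2=40, r_hat=20))  # Case 0
```

The guards are evaluated cheapest-first: a mission only escalates to a spiral search when the simpler cases cannot guarantee capture.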
Figure 4.8: Search vehicle start location and coordinated mission.
4.6 Simulations
A Monte Carlo simulation [25] was created in order to validate our approach. Fig. 4.8 gives a visual representation of the start location of the search vehicle and the coordinated mission selected by Algorithm 4.2. This simulation assumed that the ground target began at (0, 0, 0). It is easy to discern the areas in which certain missions are coordinated: Case Zero missions are displayed as red circles, Case One missions as blue + marks, and Case Three missions as yellow asterisks. As one might expect, we witness distinct circles in which similar missions are coordinated. The further away from the ground target the search vehicle resides, the more complex the search trajectory must be in order to guarantee mission success.
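A schematic Monte Carlo run in the spirit of Fig. 4.8 can be set up as below; the distance thresholds in `classify` are illustrative stand-ins for the case guards, not the thesis values:

```python
import math
import random

random.seed(0)

def classify(d):
    """Stand-in case selection by distance to the target (illustrative thresholds)."""
    if d <= 200:
        return "Case 0"
    if d <= 450:
        return "Case 1"
    return "Case 3"

# Sample search-vehicle start positions around a target at the origin
counts = {"Case 0": 0, "Case 1": 0, "Case 3": 0}
for _ in range(10_000):
    x, y = random.uniform(-800, 800), random.uniform(-800, 800)
    counts[classify(math.hypot(x, y))] += 1

print(counts)  # nearer starts draw the simpler cases, producing the ring structure
```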
Figure 4.9: Search vehicle start location and coordinated mission with profile of altitude initial conditions.
4.7 Competitive Analysis
In order to quantitatively measure the performance of our algorithms, we have chosen to perform competitive analysis on each algorithm, using the formulations in Chapter 4.
Competitive analysis describes a range of ratios. For the purpose of this thesis, we adopt the Search Ratio performance metric.

Definition 5. Let t0 denote the time it takes for the search vehicle to acquire the target for a particular case. Assuming ṫ defines the shortest possible search time, we define the search ratio ŝ as follows:

ŝ = t0/ṫ   (4.46)

4.7.1 Search Ratio: Case 0
Recall, for Case 0: the search vehicle flies at the current altitude directly to the last reported location of the ground target. Upon arrival, the camera sweep encapsulates all possible locations of the ground target. Also recall that for Case 0 to be valid the following inequality has to hold:

d ≤ h tan(ρ̃)v/ṽ   (4.47)

Therefore, t0 is derived as:

t0 ≤ d/v ≤ h tan(ρ̃)/ṽ   (4.48)

Obviously, the shortest time occurs when the search vehicle navigates in a straight line towards the ground target and acquires the target along this straight-line trajectory. Assuming the straight-line distance to be d, ṫ is derived as:

ṫ ≤ d/v ≤ h tan(ρ̃)/ṽ   (4.49)

Thus, from Definition 5, the search ratio for Case 0 is as follows:
ŝ = (h tan(ρ̃)/ṽ)/(h tan(ρ̃)/ṽ) = 1   (4.50)

Conveniently, for Case 0, our search ratio is 1. This means that the worst case search time matches the best case search time. Regardless of the ground target’s trajectory, if Case 0 is coordinated, we are guaranteed to capture the ground target in bounded time.

4.7.2 Search Ratio: Case 1
Recall, for Case 1: the search vehicle’s camera sweep would not encapsulate all possible locations of the ground target upon arrival. The search vehicle increases in altitude while flying to the last reported location, and upon arrival, the camera sweep now encapsulates all possible locations of the ground target. The maximum search height is not exceeded. Also recall that for Case 1 to be valid the following inequality has to hold:

d ≤ (h2 tan(ρ̃)/ṽ − (h2 − h1)/(dh/dt))v   (4.51)

Therefore t0 is derived as:

t0 ≤ d/v + (h2 − h1)/(dh/dt)   (4.52)

Similarly, the shortest time occurs when the search vehicle navigates in a straight line, with no change in height, and acquires the ground target along this trajectory. Therefore ṫ is as follows:

ṫ ≤ d/v   (4.53)

Thus, from Definition 5, the search ratio for Case 1 is as follows:
ŝ = (d/v + (h2 − h1)/(dh/dt))/(d/v)

ŝ = (d(dh/dt) + v(h2 − h1))/(d(dh/dt))   (4.54)

The search ratio for Case 1 is heavily dependent on the necessity to change height. The larger the change in height, the less efficient the online search algorithm becomes.

4.7.3 Search Ratio: Case 2
Recall, for Case 2: in order for the camera sweep to encapsulate all possible locations of the ground target, h̃ must be exceeded. Upon arrival, a spiral search pattern with fixed overlap is initiated, and guaranteed to detect the ground target in bounded time. Also recall that for Case 2 to be valid the following inequality has to hold:

d ≤ (h̃θ̂/(n ho) − s(θ0, θ̂) − (h̃ − h)/(dh/dt)) ṽ/v   (4.55)

Given this, we know the worst case scenario would involve the search vehicle traveling a distance d while (perhaps) changing altitude, and then traversing the spiral from (θ0, θ̂). If the time to change altitude is less than the time to travel the distance d, then t0 is derived as:

t0 ≤ (d + s(θ0, θ̂))/v   (4.56)

Similarly, the shortest time occurs when the search vehicle navigates in a straight line, with no change in height, and acquires the ground target along this trajectory. Therefore, ṫ is as follows:

ṫ ≤ d/v   (4.57)

Thus, from Definition 5, the search ratio for Case 2 is as follows:
ŝ = ((d + s(θ0, θ̂))/v)/(d/v)

ŝ = (d + s(θ0, θ̂))/d   (4.58)

The search ratio for Case 2 is heavily dependent on the necessity to traverse a spiral. The larger the spiral, the less efficient the actual trajectory becomes.

4.7.4 Search Ratio: Case 3
Recall, for Case 3: in order for the camera sweep to encapsulate all possible locations of the ground target, h̃ must be exceeded. A spiral search pattern with varying overlap is initiated, and a maximum guaranteed detection search radius is calculated. Also recall that for Case 3 to be valid the following should hold:

r1 = r2 = (θ1 + 2π)²/(n1 ho θ1)   (4.59)

Thus, as (4.59) defines the position when r1 = r2, and also defines the point at which our search trajectory converges onto a circle, we define the radius of the maximum search space as

r̃(tθ̂) = (θ1 + 2π)²/(n1 ho θ1)   (4.60)

From repeated application of Lemma 1, the search vehicle is guaranteed to detect the ground target if the set of all possible locations of the ground target, Br̂(t)(tθ̂), is encapsulated within the maximum search space r̃(tθ̂). Given this, we know the worst case scenario would involve the search vehicle travelling a distance d, and then traversing the spiral from (θ0, θ̂). Therefore, t0 is derived as:

t0 ≤ (d + s(θ0, θ̂))/v   (4.61)
Similarly, the shortest time occurs when the search vehicle navigates in a straight line, with no change in height, and acquires the ground target along this trajectory. Therefore, ṫ is as follows:

ṫ ≤ d/v   (4.62)

Thus, from Definition 5, the search ratio for Case 3 is as follows:

ŝ = ((d + s(θ0, θ̂))/v)/(d/v)

ŝ = (d + s(θ0, θ̂))/d   (4.63)

The search ratio for Case 3 is heavily dependent on the necessity to traverse a spiral. The larger the spiral, the less efficient the online search algorithm becomes.

4.7.5 Search Ratio: Example Cases
Case Zero: The search vehicle flies at the current altitude, z = 122, from location (x, y, z) = (43, 64, 122) directly to the last reported location of the ground target, (x̂, ŷ, ẑ) = (0, 0, 0). Upon arrival, the camera sweep encapsulates all possible locations of the ground target. As shown in Figure 4.10, the search vehicle is coordinated using Case 0; the search time is t0 = d/v = 77/25 = 3.08 s, and the shortest search time is ṫ = d/v = 77/25 = 3.08 s. Thus, from (4.50), ŝ = 1.

Case One: The search vehicle’s camera sweep would not encapsulate all possible locations of the ground target upon arrival. The search vehicle increases in altitude from h1 = 0 to h2 = 73 while flying from (x, y, z) = (0, 44, 0) to the last reported location (x̂, ŷ, ẑ) = (0, 0, 0), and upon arrival, the camera sweep now encapsulates all possible locations of the ground target. The maximum search height, h̃, is not exceeded.
Figure 4.10: Case 0: Simulation used for Case 0 Search Ratio calculation.
Figure 4.11: Case 1: Simulation used for Case 1 Search Ratio calculation.
As shown in Figure 4.11, the search vehicle is coordinated using Case 1; the search time is t0 = d/v + (h2 − h1)/(dh/dt) = 85/25 + (73 − 0)/0.5 = 149.40 s, and the shortest search time is ṫ = d/v = 85/25 = 3.40 s. Thus, from (4.54), ŝ = 149.40/3.40 = 43.94.
Case Two: In order for the camera sweep to encapsulate all possible locations of the ground target, h̃ must be exceeded. Upon arrival at (x, y, z) = (0, 0, 300) from (x, y, z) = (486, 644, 51), a spiral search pattern with fixed overlap is initiated, and guaranteed to detect the ground target in bounded time.
Figure 4.12: Case 2: Simulation used for Case 2 Search Ratio calculation.

As shown in Figure 4.12, the search vehicle is coordinated using Case 2; the search time is t0 = (d + s(θ0, θ̂))/v = (807 + 21679)/25 = 899.43 s, and the shortest search time is ṫ = d/v = 807/25 = 32.28 s. Thus, from (4.58), ŝ = 899.43/32.28 = 27.86.
Case Three: In order for the camera sweep to encapsulate all possible locations of the ground target, h̃ must be exceeded. Upon arrival at (x, y, z) = (0, 0, 300) from (x, y, z) = (536, 617, 40), a spiral search pattern with varying overlap is initiated, and a maximum guaranteed detection search radius is calculated. Remember, in Section 2.4, we mentioned that Langetepe proves the optimality of spiral search as a strategy for search games. Langetepe proposed that ‘The optimal spiral for searching a point in the plane achieves a competitive ratio of 17.289’. Although Langetepe proves spiral search strategies are optimal for search games, some spirals are impractical for realistic sensors, and so an example competitive analysis search ratio for each of our spiral search patterns is provided.

Case  Figure Reference  t0      ṫ      Search Ratio
0     Figure 4.10         3.08   3.08   1.00
1     Figure 4.11       149.40   3.40  43.94
2     Figure 4.12       899.43  32.28  27.86
3     Figure 4.13       899.88  32.72  27.50

Table 4.1: Competitive Analysis examples.
Figure 4.13: Case 3: Simulation used for Case 3 Search Ratio calculation.

As shown in Figure 4.13, the search vehicle is coordinated using Case 3; the search time is t0 = (d + s(θ0, θ̂))/v = (818 + 21679)/25 = 899.88 s, and the shortest search time is ṫ = d/v = 818/25 = 32.72 s. Thus, from (4.63), ŝ = 899.88/32.72 = 27.50.
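The four worked search-ratio examples can be checked with a few lines; everything here is quoted from the text (v = 25 m/s, dh/dt = 0.5 m/s), and a small rounding difference for Case 2 (899.44 s versus the quoted 899.43 s) is expected:

```python
v, dh_dt = 25.0, 0.5

# Case 0: straight-line flight only (d = 77 m)
t0_0, td_0 = 77 / v, 77 / v
# Case 1: flight plus altitude change (d = 85 m, h1 = 0 to h2 = 73 m)
t0_1, td_1 = 85 / v + (73 - 0) / dh_dt, 85 / v
# Case 2: flight plus fixed-overlap spiral (d = 807 m, s = 21679 m)
t0_2, td_2 = (807 + 21679) / v, 807 / v
# Case 3: flight plus varying-overlap spiral (d = 818 m, s = 21679 m)
t0_3, td_3 = (818 + 21679) / v, 818 / v

for case, (t0, td) in enumerate([(t0_0, td_0), (t0_1, td_1),
                                 (t0_2, td_2), (t0_3, td_3)]):
    print(f"Case {case}: t0 = {t0:.2f} s, t_dot = {td:.2f} s, ratio = {t0 / td:.2f}")
```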
Chapter 5

Conclusions

Aerial search is coordinated based on the performance and sensing capabilities of the searching vehicles, as well as the performance and goals of the targets being searched. The current state-of-the-art for aerial search employs unmanned vehicles under direct human control. However, these unmanned vehicles are limited in autonomy and thus may not be able to follow trajectories tightly. Further issues arise in piloting the unmanned vehicles due to latency in communication.

This thesis proposes a new scheme for fully autonomous search missions of a moving target, using aerial vehicles with a reliable sensor of limited range. The range limitations, as well as differences in search and target performance, are used to formulate a scheme that reliably (with p = 1) detects the moving target, or announces that capture cannot be guaranteed.

We discussed how, as long as (4.6) is satisfied, we coordinate the mission using Case 0, where the search vehicle flies at the current altitude directly to the last reported location of the ground target. Upon arrival, the camera sweep encapsulates all possible locations of the ground target.

If (4.9) is satisfied, we coordinate the mission according to Case 1, where the search vehicle’s camera sweep would not encapsulate all possible locations of the ground target upon arrival. The search vehicle increases in altitude while flying to the last reported location, and upon arrival, the camera sweep now encapsulates all possible locations of the ground target. The maximum search height is not exceeded.

In the case that (4.28) and (4.30) are satisfied, we coordinate the mission according to Case 2: in order for the camera sweep to encapsulate all possible locations of the ground target, h̃ must be exceeded. Upon arrival, a spiral search pattern with fixed overlap is initiated, guaranteed to detect the ground target in bounded time, and can reliably detect the moving target.

Finally, if Algorithm 4.1 returns with mission success, we coordinate the mission according to Case 3: in order for the camera sweep to encapsulate all possible locations of the ground target, h̃ must be exceeded. A spiral search pattern with varying overlap is initiated, a maximum guaranteed detection search radius is calculated, and the scheme can reliably detect the moving target. However, if none of these cases satisfies the scenario, we report that mission success is not guaranteed.

Analytical methods have been introduced to describe the probability of success of a mission given certain parameters. With an objective measure of mission success, the human-in-the-loop now understands the probability of mission success prior to deploying a single vehicle. Given an autonomous unmanned vehicle, we mitigate the need for a navigator (reducing the operational cost of the vehicle), and also allow the human-in-the-loop to focus their efforts on coordinating high-level missions rather than low-level navigation. This new approach not only reduces the cost associated with search and rescue missions, but also lets us characterize the rate of success of any search and rescue mission. This strategy should reduce the number of failed missions and allow the human-in-the-loop to explore other strategies in the case that a mission is deemed unlikely to succeed.
Chapter 6

Future Work

6.1 Coordination of Multiple UAVs

Analysis of the speedup of this implementation, and of the efficiency of the spiral search pattern used in this thesis against other accepted search patterns, would provide useful results for those researching this area. Having a single operator coordinate multiple UAVs can improve the efficiency of search and rescue missions. However, [15] describes how important it is to model the sources of wait times (WTs) caused by human-vehicle interaction. The authors of [15], [25] maintain that WTs could potentially lead to a system failure. Rasmussen et al. believe sources of vehicle WTs include cognitive reorientation and interaction WT (WTI), queues for multiple-vehicle interactions, and loss of situation awareness (SA) WTs. Swarming mechanisms can also be employed when coordinating multiple UAVs, as described in [16]. Dasgupta shows how swarming strategies for distributed automatic target recognition (ATR) outperform centralized ATR strategies. This performance ranking provides further reason for choosing to coordinate multiple UAVs.
6.2 Visual Obstructions
The work presented relied on the assumption that no visual obstructions existed between the search vehicle and the ground target. Consider a ground target which was once being tracked and fell out of sight. Research regarding how best to relocate a ground vehicle which may have disappeared from sight would be invaluable. One such scheme could take advantage of the hovering ability inherent to rotorcraft: the search vehicle might increase its altitude until all exits of the obstacle (a tunnel, for example) are present in its field of view, and then hover at this altitude until the ground target appears back in the field of view or until a certain amount of time has elapsed.
6.3 Effects of Sensor Sweep for Non-Articulated Camera
Further work arises from the control of a moving sensor which does not provide a consistent field of view throughout the mission. We assumed for this work that we were able to take advantage of an articulated camera sensor. However, it would be beneficial to understand how to control the search vehicle if the payload did not permit an articulated camera system.
References [1] Julie A. Adams, Joseph L. Cooper, Michael A. Goodrich, Curtis Humphrey, Morgan Quigley, Brian G, and Bryan S. Morse. Camera-equipped mini UAVs for wilderness search support: Task analysis and lessons from field trials. [2] H. Al-Helal and J. Sprinkle. UAV search: Maximizing target acquisition. In Engineering of Computer Based Systems (ECBS), 2010 17th IEEE International Conference and Workshops, pages 9 –18, March 2010. [3] Yaniv Altshuler, Vladimir Yanovsky, Israel A. Wagner, and Alfred M. Bruckstein. Efficient cooperative search of smart targets using UAV swarms. Robotica, 26(4):551–557, 2008. [4] M. Andriluka, P. Schnitzspan, J. Meyer, S. Kohlbrecher, K. Petersen, O. von Stryk, S. Roth, and B. Schiele. Vision based victim detection from unmanned aerial vehicles. In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference, pages 1740 –1747, October 2010. [5] L.E. Barnes, M.A. Fields, and K.P. Valavanis. Swarm formation control utilizing elliptical surfaces and limiting functions. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 39(6):1434–1445, December 2009. [6] L.F. Bertuccelli and J.P. How. Robust UAV search for environments with imprecise probability maps. In Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC ’05. 44th IEEE Conference on, pages 5680–5685, December 2005. [7] L.F. Bertuccelli and J.P. How. Search for dynamic targets with uncertain probability maps. In American Control Conference, 2006, pages 6 pp.–, June 2006. [8] L.F. Bertuccelli and J.P. How. UAV search for dynamic targets with uncertain motion models. In Decision and Control, 2006 45th IEEE Conference on, pages 5941–5946, December 2006. [9] F. Bourgault, T. Furukawa, and H.F. Durrant-Whyte. Process model, constraints, and the coordinated search strategy. In Robotics and Automation, 2004. Proceedings. ICRA ’04. 2004 IEEE International Conference on, volume 5, pages 5256–5261 Vol.5, April-1 May 2004. 
[10] Scott Burlington and Gregory Dudek. Spiral search as an efficient mobile robotic search technique.
[11] Wen-Yan Chang, Chu-Song Chen, and Yi-Ping Hung. Tracking by parts: A bayesian approach with component collaboration. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 39(2):375–388, April 2009. [12] Junzhou Chen and Kin Hong Wong. Calibration of an articulated camera system. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1 –8, June 2008. [13] Diyang Chu, Jacob Gulotta, Jonathan Sprinkle, Himanshu Neema, Harmon Nine, Nicholas Kottenstette, Graham Hemingway, and Janos Sztipanovits. Model-based configuration of a heterogeneous human-in-the-loop command and control simulation environment. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, tbd(tbd):(in review), 2011. [14] Thomas H. Cormen, Clifford Stein, Ronald L. Rivest, and Charles E. Leiserson. Introduction to Algorithms. McGraw-Hill Higher Education, 2nd edition, 2001. [15] M.L. Cummings and P.J. Mitchell. Predicting controller capacity in supervisory control of multiple UAVs. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, 38(2):451 –460, March 2008. [16] P. Dasgupta. A multiagent swarming system for distributed automatic target recognition using unmanned aerial vehicles. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, 38(3):549 –563, May 2008. [17] M. Gosnell, S. O’Hara, and M. Simon. Spatially decomposed searching by heterogeneous unmanned systems. In Integration of Knowledge Intensive Multi-Agent Systems, 2007. KIMAS 2007. International Conference on, pages 52–57, 30 2007May 3 2007. [18] Jacob Gulotta, Diyang Chu, Ximing Yu, Hussain Al-Helal, Tapasya Patki, Jason Hansen, Maribel Hudson, and Jonathan Sprinkle. Using integrative models in an advanced heterogeneous system simulation. In IEEE International Conference on the Engineering of Computer-Based Systems, pages 3–10, Los Alamitos, CA, USA, 2009. IEEE Computer Society. [19] Gabriel M. 
Hoffmann, Haomiao Huang, Steven L. Wasl, and Claire J. Tomlin. Quadrotor helicopter flight dynamics and control: Theory and experiment. In in Proc. AIAA Guidance, Navigation, and Control Conf, 2007. [20] Elmar Langetepe. On the optimality of spiral search. In Symposium on Discrete Algorithms, pages 1–12, 2010.
[21] G. Loegering and D. Evans. The evolution of the global hawk and mald avionics systems. In Digital Avionics Systems Conference, 1999. Proceedings. 18th, volume 2, pages 6.A.1–1 –6.A.1–8 vol.2, 1999. [22] A. Macwan and B. Benhabib. A multi-robot coordination methodology for autonomous search and rescue. In Science and Technology for Humanity (TICSTH), 2009 IEEE Toronto International Conference, pages 675 –680, September 2009. [23] Duane J. Matthiesen. Optimal search and optimal detection. In Radar Systems, 2007 IET International Conference on, pages 1 –7, October 2007. [24] Mark A. Neifeld, Amit Ashok, and Pawan K. Baheti. Task-specific information for imaging system analysis. J. Opt. Soc. Am. A, 24(12):B25–B41, 2007. [25] S.J. Rasmussen, T. Shima, J.W. Mitchell, A.G. Sparks, and P. Chandler. Statespace search for improved autonomous UAVs assignment algorithm. In Decision and Control, 2004. CDC. 43rd IEEE Conference on, volume 3, pages 2911–2916 Vol.3, December 2004. [26] S. Rathinam, P. Almeida, ZuWhan Kim, S. Jackson, A. Tinka, W. Grossman, and R. Sengupta. Autonomous searching and tracking of a river using an UAV. In American Control Conference, 2007. ACC ’07, pages 359–364, July 2007. [27] R.D. Ritter. Pilot error in automated systems shown by altitude deviation reports. Aerospace and Electronic Systems Magazine, IEEE, 9(4):15 –19, April 1994. [28] S. Singh and V. Krishnamurthy. The optimal search for a markovian target when the search path is constrained: the infinite-horizon case. Automatic Control, IEEE Transactions on, 48(3):493 – 497, March 2003. [29] R. Sparrow. Predators or plowshares? arms control of robotic weapons. Technology and Society Magazine, IEEE, 28(1):25 –29, Spring 2009. [30] J. Sprinkle, J.M. Eklund, and S.S. Sastry. Deciding to land a UAV safely in real time. In American Control Conference, 2005. Proceedings of the 2005, pages 3506 – 3511 vol. 5, June 2005. [31] L.D. Stone. Theory of optimal search. 
Mathematics in science and engineering. Academic Press, 1975. [32] Hongru Tang, Xiaosong Cao, Aiguo Song, Yan Guo, and Jiatong Bao. Humanrobot collaborative teleoperation system for semi-autonomous reconnaissance robot. In Mechatronics and Automation, 2009. ICMA 2009. International Conference on, pages 1934 –1939, August 2009.
[33] Junqiu Wang and Y. Yagi. Adaptive mean-shift tracking with auxiliary particles. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 39(6):1578–1589, December 2009. [34] W. Whitacre and M. Campbell. Cooperative geolocation and sensor bias estimation for UAVs with articulating cameras. In AIAA Guidance Navigation and Control Conference, pages 1934 –1939, August 2009. [35] Rong Xu and U. Ozguner. Sliding mode control of a quadrotor helicopter. Decision and Control, 2006 45th IEEE Conference on, pages 4957–4962, December 2006. [36] Jianru Xue, Nanning Zheng, J. Geng, and Xiaopin Zhong. Tracking multiple visual targets via particle-based belief propagation. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 38(1):196–209, February 2008. [37] Shengqi Zhu, Guisheng Liao, Yi Qu, Zhengguang Zhou, and Xiangyang Liu. Ground moving targets imaging algorithm for synthetic aperture radar. Geoscience and Remote Sensing, IEEE Transactions on, 49(1):462 –477, January 2011.