Distributed DDDAS

Vijay Gupta, University of Notre Dame

AFOSR PI Meeting, Fall 2017

Overall Project Aim and Relation to DDDAS

Data Collection (Sensing / Simulations / Surveys / …) and Action (Control / Actuation / …) are coupled in a closed loop, with the Internet of Things providing the connection between them.

Overall Project Aim and Relation to DDDAS

The three project thrusts annotate the data-collection / action loop:

‣ Challenge: Distributed estimation in the presence of heterogeneous sensors. Approach: contract design and information-theoretic methods.
‣ Challenge: Hierarchical control with heterogeneous platforms. Approach: Nash bargaining based.
‣ Challenge: Demonstration with software and hardware.

Basic Setup of a Distributed Estimation Problem

Sensors $1, \ldots, n$ observe measurements $y_1, \ldots, y_n$ of a realization $X = x$ and send them to a fusion center, which computes

$\hat{x} = \arg\min_{\hat{x}} \; \mathbb{E}\!\left[ (x - \hat{x})(x - \hat{x})^T \right]$

[Figure: fusion center connected to Sensors 1 through $n$, receiving measurements $y_1, \ldots, y_n$.]
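The slide leaves the fusion rule implicit; below is a minimal Python sketch (not from the talk) of inverse-covariance fusion, the form used on the later formulation slide ($\Sigma_g^{-1}\hat{x}_g = \sum_i \Sigma_i^{-1}\hat{x}_i$). Function and variable names are illustrative.

```python
import numpy as np

def fuse(estimates, covariances):
    """Inverse-covariance fusion of local Gaussian estimates.

    Returns (x_g, Sigma_g) satisfying Sigma_g^{-1} x_g = sum_i Sigma_i^{-1} x_i.
    """
    info_matrix = sum(np.linalg.inv(S) for S in covariances)        # Sigma_g^{-1}
    info_vector = sum(np.linalg.inv(S) @ x
                      for x, S in zip(estimates, covariances))      # Sigma_g^{-1} x_g
    Sigma_g = np.linalg.inv(info_matrix)
    return Sigma_g @ info_vector, Sigma_g

# Three scalar sensors reporting noisy estimates of an unknown x (made-up numbers).
x_hats = [np.array([1.1]), np.array([0.9]), np.array([1.3])]
Sigmas = [np.array([[0.5]]), np.array([[1.0]]), np.array([[2.0]])]
print(fuse(x_hats, Sigmas))
```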

Examples of Proposed Setup: Crowdsensing


Examples of Proposed Setup: Rewards for Justice

The U.S. State Department's Rewards for Justice program ("Stop a terrorist. Save lives.") posts wanted-for-terrorism notices: rewards of up to $25 million for Abu Bakr al-Baghdadi and Ayman al-Zawahiri, and up to $10 million for Sirajuddin Haqqani, Muhammad al-Jawlani, Hafiz Saeed, and Yasin al-Suri.

The Basic Issue in Such a Setup is the Cost of Action

[Figure: trade-off between Information Content and Cost of Action.]

The Basic Issue in Such a Setup is the Cost of Action

…in the hunt for al-Qaeda, it has proved a bust. Known as Rewards for Justice, the program dates to 1984 and was originally used to track down fugitive terrorism suspects of all persuasions, from the Balkans to the Palestinian territories. After the Sept. 11, 2001, attacks, the most-wanted list was expanded -- and the rewards boosted exponentially -- as part of a push to eliminate al-Qaeda's leadership. So far, however, Rewards for Justice has failed to put a dent in al-Qaeda's central command. Offers of $25 million each for al-Qaeda founders Osama bin Laden and Ayman al-Zawahiri have attracted hundreds of anonymous calls but no reliable leads, officials familiar with the program say. For a time, the program was generating so little useful information that in Pakistan, where most al-Qaeda chiefs are believed to be hiding, it was largely abandoned. "It's certainly been ineffective," said Robert L. Grenier, a former CIA station chief in Pakistan and former director of the agency's counterterrorism center. "It hasn't produced results, and it hasn't particularly produced leads." The failures of Rewards for Justice can be traced to several factors: weak publicity campaigns in places where al-Qaeda's leadership is based; skepticism that the United States would deliver the money and protect informants; and a mistaken assumption that anyone's loyalty can be bought if the price is high enough. "The program could use some, well, 'rejuvenation' is the word," said Walter B. Deering, a …

…trillion dollars invading Afghanistan, and then we're going to be there for now over 10 years, and this is going to cost so many U.S. lives." Of course, it's impossible to know whether the offer of a higher bounty could have netted bin Laden right after the 9/11 attacks and thereby prevented the invasion of Afghanistan. According to the terms of the ultimatum that George W. Bush delivered to the Taliban in September 2001—"they will hand over the terrorists or they will share in their fate"—invasion wasn't a foregone conclusion. By his own account, Robert Grenier, the CIA station chief in Islamabad at the time, was scrambling to forestall the invasion right up until the first American airstrikes started. In the counterfactual where an al-Qaeda insider betrayed bin Laden for money and the U.S. didn't invade Afghanistan, Tabarrok said, "maybe $500 million doesn't look so expensive" compared with what actually occurred. But there are also costs associated with offers high enough to attract an overwhelming number of false leads. "We'd get a lot of tips that were totally off the wall," Walter Deering, a former State Department official who oversaw Rewards for Justice, told The Washington Post's Craig Whitlock in 2008. Tabarrok acknowledged that possibility, but noted that false leads are a cost of any kind of information-gathering operation—not least the NSA's bulk metadata …

How to Pay Sensors to Obtain an Estimate of Desired Quality?

The fusion center wants the estimate

$\hat{x}_g = \arg\min_{\hat{x}} \; \mathbb{E}\!\left[ (x - \hat{x})(x - \hat{x})^T \right]$

of a realization $X = x$ of the random variable $X \sim \mathcal{N}(m, \sigma^2)$.

‣ Sensor $i$ observes a measurement $y_i$ and forms a local estimate $(\hat{x}_i, \Sigma_i)$; the local error covariance depends on the effort the sensor expends, $\Sigma_i = f_i(a_i)$.
‣ Sensor $i$ reports a possibly falsified estimate $(\hat{x}_{r,i}, \Sigma_{r,i})$ to the fusion center and receives a payment $p_i$.
‣ Strategic sensor: $U_i = -h(a_i) - g(\hat{x}_{r,i} - \hat{x}_i, \Sigma_{r,i} - \Sigma_i) + p_i$, where $h(a_i)$ is the effort cost and $g(\cdot)$ the falsification cost.
‣ Malicious sensor: $U_i = (\hat{x}_g - \hat{x}_i)^2 - \beta_1\, g(\hat{x}_{r,i} - \hat{x}_i, \Sigma_{r,i} - \Sigma_i) + \beta_2\, p_i$.

[Figure: fusion center paying $p_1, \ldots, p_n$ to Sensors 1 through $n$, which report $(\hat{x}_{r,i}, \Sigma_{r,i})$ after computing $(\hat{x}_i, \Sigma_i)$ from measurements $y_i$.]
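To make the two sensor models concrete, here is a small Python sketch. The quadratic effort cost, quadratic falsification cost, and the effort-to-accuracy map $f_i$ below are assumptions for illustration only; the slides leave $h$, $g$, and $f_i$ as general functions, and $\beta_1$, $\beta_2$ as given constants.

```python
def strategic_utility(a, x_hat, Sigma, x_r, Sigma_r, p,
                      h=lambda a: a ** 2,                     # assumed effort cost h(a_i)
                      g=lambda dx, dS: dx ** 2 + dS ** 2):    # assumed falsification cost g(.)
    """Strategic sensor: U_i = -h(a_i) - g(xhat_ri - xhat_i, Sigma_ri - Sigma_i) + p_i."""
    return -h(a) - g(x_r - x_hat, Sigma_r - Sigma) + p

def malicious_utility(x_g, x_hat, x_r, Sigma, Sigma_r, p, beta1, beta2,
                      g=lambda dx, dS: dx ** 2 + dS ** 2):
    """Malicious sensor: U_i = (xhat_g - xhat_i)^2 - beta1 * g(...) + beta2 * p_i."""
    return (x_g - x_hat) ** 2 - beta1 * g(x_r - x_hat, Sigma_r - Sigma) + beta2 * p

# Effort buys accuracy, Sigma_i = f_i(a_i); an assumed decreasing map.
f_i = lambda a, sigma0=1.0: sigma0 / (1.0 + a)

# A strategic sensor that exerts effort 0.5, reports truthfully, and is paid 1.2:
print(strategic_utility(a=0.5, x_hat=1.0, Sigma=f_i(0.5), x_r=1.0, Sigma_r=f_i(0.5), p=1.2))
```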

Timeline and Formulation

$\min_{p_i(\cdot)} \; \mathbb{E}\!\left[ \sum_i p_i(\cdot) \right]$

subject to:
‣ $\Sigma_g^{-1} \hat{x}_g = \sum_i \Sigma_i^{-1} \hat{x}_i$
‣ $\Sigma_g \preceq$ specified bound
‣ Rational strategic sensors: $\{a_i^\star, \hat{x}_{r,i}^\star, \Sigma_{r,i}^\star\} = \arg\max \mathbb{E}[U_i]$
‣ Individual rationality: $\mathbb{E}[U_i] \ge 0$

Main Result: It is possible to constrain the falsification by strategic and malicious sensors through the design of suitable payment schemes.

Possible Payment Scheme

Scheme 1 (hidden action): $p_i = c$

$U_i = -h(a_i) - g(\hat{x}_{r,i} - \hat{x}_i, \Sigma_{r,i} - \Sigma_i) + p_i$

Under a constant payment the optimal effort is $a_i = 0$: asymmetric information about the hidden action leads to moral hazard.

Possible Payment Scheme

Scheme 2 (hidden action and hidden information): $p_i = f(\Sigma_{r,i})$

$U_i = -h(a_i) - g(\hat{x}_{r,i} - \hat{x}_i, \Sigma_{r,i} - \Sigma_i) + p_i$

The optimal effort is again $a_i = 0$, and the reported covariance $\Sigma_{r,i} \to 0$: asymmetric information about the realized error covariance leads to adverse selection.


Related Literature

1) F. Restuccia, S. K. Das, and J. Payton, "Incentive mechanisms for participatory sensing: Survey and research challenges," arXiv preprint arXiv:1502.07687, 2015. Most proposed mechanisms assume sensor actions to be either not hidden or not costly, so that sensors participate voluntarily.

2) N. Miller, P. Resnick, and R. Zeckhauser, "Eliciting informative feedback: The peer-prediction method," Management Science 51.9 (2005): 1359-1373. The peer-prediction literature does not consider the sensor actions to be a costly effort.

3) F. Farokhi, A. M. Teixeira, and C. Langbort, "Gaussian cheap talk game with quadratic cost functions: When herding between strategic senders is a virtue," American Control Conference (ACC), 2014, pp. 2267-2272. Assumes that the sensors share the estimator's goal of generating accurate global estimates.

4) F. Farokhi, I. Shames, and M. Cantoni, "Budget-constrained contract design for effort-averse sensors in averaging based estimation," arXiv preprint arXiv:1509.08193, 2016. Considers selfish sensors that need to be incentivized to generate accurate measurements; however, assumes that sensors do not lie.

Intuition Behind our Approach

‣ Falsification by strategic sensors: one Nash equilibrium is for all strategic sensors to report a constant.
‣ Fidelity of an honest sensor: it is possible to constrain falsification even when the honest sensor is noisy, by tying each sensor's payment to $(\hat{x}_h - \hat{x}_i)^2$, where $\hat{x}_h$ is the honest sensor's estimate.
‣ A priori information about the random variable can serve as the honest sensor!

An Honest Sensor is Needed to “Tether” what the Strategic Sensors Report

Theorem: Let $X \sim \mathcal{N}(m, \sigma^2)$ be the random variable being estimated. If $\sigma^2 \to \infty$, then there exists no payment scheme that leads to a non-zero action and bounded falsification by the strategic sensors.

Case 1: If A Priori Information is not Known to Sensors

Theorem: Let $X \sim \mathcal{N}(m, \sigma^2)$ be the random variable being estimated. Let $\sigma^2$ be finite and consider the payment scheme

$p_i = c_i - \alpha_i^1 \Sigma_{r,i} - \alpha_i^2 (\hat{x}_{r,i} - m)^2.$

If $m$ is unknown to the sensors, then the following statements hold:
1. There is a unique Nash equilibrium for the sensors.
2. At this equilibrium, each sensor reports its estimate correctly.
3. The optimal value of $\alpha_i^1$ is zero.
4. $\Sigma_{r,i} = \Sigma_i$.
5. If $\sigma^2$ is also unknown to the sensors, then truth-telling is a dominant strategy.
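As a concrete illustration of the Case 1 contract, the sketch below (not the authors' code) evaluates $p_i = c_i - \alpha_i^1 \Sigma_{r,i} - \alpha_i^2 (\hat{x}_{r,i} - m)^2$ for a sensor that does not know $m$ and therefore reports its raw measurement as its estimate, and checks the expected payment against $\mathbb{E}[(\hat{x}_i - m)^2] = \sigma^2 + \Sigma_i$, which is consistent with the ex-ante individual-rationality condition on the coefficient-choice slide. The scalar measurement model and all numbers are assumptions for illustration.

```python
import numpy as np

def payment(x_r, Sigma_r, m, c_i, alpha1, alpha2):
    """Case 1 contract: p_i = c_i - alpha1 * Sigma_ri - alpha2 * (xhat_ri - m)^2."""
    return c_i - alpha1 * Sigma_r - alpha2 * (x_r - m) ** 2

# Assumed scalar model: X ~ N(m, sigma2); sensor i measures y = x + v, v ~ N(0, Sigma_i),
# where Sigma_i = f_i(a_i) shrinks with effort. Not knowing m, the sensor reports x_hat = y.
rng = np.random.default_rng(1)
m, sigma2, Sigma_i = 0.0, 1.0, 0.5
x = rng.normal(m, np.sqrt(sigma2), 100_000)
x_hat = x + rng.normal(0.0, np.sqrt(Sigma_i), x.size)    # truthful report xhat_{r,i} = xhat_i

c_i, alpha1, alpha2 = 1.0, 0.0, 0.3                      # alpha1 = 0 is optimal per the theorem
p = payment(x_hat, Sigma_i, m, c_i, alpha1, alpha2)
print(p.mean())                                          # empirical expected payment
print(c_i - alpha2 * (sigma2 + Sigma_i))                 # analytic: E[(x_hat - m)^2] = sigma2 + Sigma_i
# Ex-ante individual rationality (ignoring the effort cost) requires roughly
# c_i > alpha2 * (sigma2 + Sigma_i), as on the coefficient-choice slide.
```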

Case 2: If A Priori Information is Known to Sensors

Theorem: Let $X \sim \mathcal{N}(m, \sigma^2)$ be the random variable being estimated. Let $\sigma^2$ be finite and consider the payment scheme

$p_i = c_i - \alpha_i^1 \Sigma_{r,i} - \alpha_i^2 (\hat{x}_{r,i} - m)^2.$

If $m$ is known to the sensors, then the following statements hold:
1. There is a Nash equilibrium for the sensors.
2. At this equilibrium, each sensor misreports its estimate linearly, $\hat{x}_{r,i} = (1 - \kappa_i)\, \hat{x}_i$ for a sensor-dependent constant $\kappa_i$.
3. The optimal value of $\alpha_i^1$ is zero.
4. $\Sigma_{r,i} = \Sigma_i$.

How to Choose Coefficients?

The payment scheme has some parameters under the control of the designer:

$p_i = c_i - \alpha_i^1 \Sigma_{r,i} - \alpha_i^2 (\hat{x}_{r,i} - m)^2$

‣ The optimal effort expended by the sensors (and hence the accuracy of the global estimate) is an increasing function of $\alpha_i^1$ and $\alpha_i^2$.
‣ For the ex-ante individual rationality constraint to hold, choose $c_i > \alpha_i^2 \left( \sigma^2 + \Sigma_i(a_i^\star) \right)$, where $a_i^\star$ is the equilibrium effort.

Bottom line: higher payment for higher global accuracy!

[Figure: dependence on the covariance of the a priori information.]

General Properties of the Solution

Corollary: The payment to the sensor is higher if it knows the a priori information (the sensor can extract an information rent).

[Figure: comparison of the two cases, a priori information unknown vs. known to the sensor.]

Corollary: The accuracy of the estimate that each sensor transmits improves as the accuracies of the estimates transmitted by the other sensors (or the a priori information) decrease.

Extensions to dynamic estimation are possible.

Estimation with Malicious Sensors

$\hat{x}_g = \arg\min_{\hat{x}} \; \mathbb{E}\!\left[ (x - \hat{x})(x - \hat{x})^T \right]$, with $X \sim \mathcal{N}(0, \sigma^2)$

‣ Strategic sensor: $U_i = -h(a_i) - g(\hat{x}_{r,i} - \hat{x}_i, \Sigma_{r,i} - \Sigma_i) + p_i$
‣ Malicious sensor: $U_i = (\hat{x}_g - \hat{x}_i)^2 - \beta_1\, g(\hat{x}_{r,i} - \hat{x}_i, \Sigma_{r,i} - \Sigma_i) + \beta_2\, p_i$

[Figure: fusion center paying $p_1, \ldots, p_n$ to Sensors 1 through $n$, which report $(\hat{x}_{r,i}, \Sigma_{r,i})$ based on local estimates $(\hat{x}_i, \Sigma_i)$ formed from measurements $y_i$.]

Either Falsification Cost or Payment is Needed

$U_i = (\hat{x}_g - \hat{x}_i)^2 - \beta_1\, g(\hat{x}_{r,i} - \hat{x}_i, \Sigma_{r,i} - \Sigma_i) + \beta_2\, p_i$

Theorem: If both the constants $\beta_1$ and $\beta_2$ are zero, then the malicious sensor can arbitrarily degrade the global estimate. Further, any global estimate that is achievable with a given falsification cost and zero payment is also achievable with zero falsification cost and a suitable payment scheme.

Simplification

Two sensors and a linear fusion rule: $\hat{x}_g = w_1 \hat{x}_{r,1} + w_2 \hat{x}_{r,2}$, with $X \sim \mathcal{N}(0, \sigma^2)$

‣ Honest sensor: $U_i = \text{constant}$
‣ Malicious sensor: $U_i = (\hat{x}_g - \hat{x}_i)^2 - \beta_1\, g(\hat{x}_{r,i} - \hat{x}_i, \Sigma_{r,i} - \Sigma_i) + \beta_2\, p_i$

[Figure: fusion center paying $p_1, p_2$ to Sensors 1 and 2, which report $(\hat{x}_{r,1}, \Sigma_{r,1})$ and $(\hat{x}_{r,2}, \Sigma_{r,2})$ based on measurements $y_1, y_2$.]

It is Possible to Limit Falsification by the Malicious Sensor

Theorem: Let $X \sim \mathcal{N}(m, \sigma^2)$ be the random variable being estimated. Consider the payment scheme

$p_i = c_i - \alpha_i \left( \hat{x}_{r,i} - \frac{\hat{x}_{r,1} + \hat{x}_{r,2}}{2} \right)^2$

The following statements hold:
1. There is a unique Nash equilibrium for the sensors.
2. At this equilibrium, the honest sensor reports its estimate truthfully, while the malicious sensor scales its estimate, $\hat{x}_{r,i} = \left( 1 + \frac{C\, w_i (w_1 + w_2 B)}{2 \alpha_i - C\, w_i} \right) \hat{x}_i$, for appropriate constants $B$ and $C$.
3. The fusion center can choose weights such that $\Sigma_g^{-1} = \Sigma_1^{-1} + \Sigma_2^{-1} - \sigma^{-2}$ if $\alpha_i \ge C^2 / 4$. Further, it can choose weights such that the global error covariance is less than that obtained by using the data from the honest sensor alone if $\bar{\alpha} \le \alpha_i \le C^2 / 4$.
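A minimal sketch of the two-sensor cross-checking payment rule and the linear fusion rule from the simplification slide; the reports, constants, and helper names below are made-up for illustration and are not from the talk.

```python
def cross_check_payment(x_r1, x_r2, c, alpha):
    """p_i = c_i - alpha_i * (xhat_ri - (xhat_r1 + xhat_r2) / 2)^2:
    each sensor is penalized for deviating from the average of the two reports."""
    mid = 0.5 * (x_r1 + x_r2)
    return (c[0] - alpha[0] * (x_r1 - mid) ** 2,
            c[1] - alpha[1] * (x_r2 - mid) ** 2)

def fuse_two(x_r1, x_r2, w1, w2):
    """Linear fusion rule xhat_g = w1 * xhat_r1 + w2 * xhat_r2."""
    return w1 * x_r1 + w2 * x_r2

# Hypothetical reports from an honest sensor (1.05) and a malicious sensor (0.40):
print(cross_check_payment(1.05, 0.40, c=(1.0, 1.0), alpha=(2.0, 2.0)))
print(fuse_two(1.05, 0.40, w1=0.6, w2=0.4))
```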

Properties of the Solution

Corollary: The fusion center can ensure that the falsification from the malicious sensor is constrained for sufficiently high falsification costs / payments.

[Figure: mse/σ² versus β, comparing the mse with one loyal and one adversarial sensor, with one loyal sensor alone, and with two loyal sensors.]

Corollary: The individual rationality constraint is satisfied for the malicious sensor. Thus, the sensor participates even though it may not be able to degrade the quality of the estimate.
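The comparison behind the plot can be explored with a small Monte Carlo harness like the one below. It is only an illustration: the adversary here applies a fixed distortion to its report rather than the Nash-equilibrium scaling of the theorem, the fusion is simple inverse-covariance weighting, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0                 # prior variance of X ~ N(0, sigma2)
noise = (0.5, 0.5)           # measurement-noise variances of the two sensors
N = 200_000

x = rng.normal(0.0, np.sqrt(sigma2), N)
y = [x + rng.normal(0.0, np.sqrt(v), N) for v in noise]

# Local Bayesian estimates and their error covariances.
x_hat = [sigma2 / (sigma2 + v) * yi for yi, v in zip(y, noise)]
Sigma = [sigma2 * v / (sigma2 + v) for v in noise]

def mse_of_fused(reports):
    """Empirical MSE of simple inverse-covariance fusion of the two reports."""
    w = np.array([1.0 / s for s in Sigma])
    x_g = (w[0] * reports[0] + w[1] * reports[1]) / w.sum()
    return np.mean((x_g - x) ** 2)

distortion = 0.5  # assumed bias added by the adversarial sensor to its report
print("two loyal sensors:        ", mse_of_fused(x_hat))
print("one loyal sensor alone:   ", np.mean((x_hat[0] - x) ** 2))
print("one loyal + one adversary:", mse_of_fused([x_hat[0], x_hat[1] + distortion]))
```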

References

Contract Design for Estimation with Strategic Sensors
‣ "An incentive-based approach to distributed estimation with strategic sensors," D. G. Dobakhshari, N. Li, V. Gupta, 55th IEEE Conference on Decision and Control (CDC), 2016, pp. 6141-6146.
‣ "A reputation-based contract for repeated crowdsensing with costly verification," D. G. Dobakhshari, P. Naghizadeh, M. Liu, V. Gupta, American Control Conference (ACC), 2017, pp. 5243-5248.
‣ "On auction design for crowd sensing," K. Chen, V. Gupta, and Y.-F. Huang, 19th International Conference on Information Fusion (FUSION), 2016.

Contract Design for Estimation with Malicious Sensors
‣ "Minimum Variance Unbiased Estimation in the Presence of Adversary," K. Chen, V. Gupta, and Y.-F. Huang, 56th IEEE Conference on Decision and Control (CDC), 2017, accepted.

Data Injection Attacks in Estimation and Control
‣ "On Kalman Filtering with Compromised Sensors: Attack Stealthiness and Performance Bounds," C.-Z. Bai, V. Gupta, and F. Pasqualetti, IEEE Transactions on Automatic Control, 2017.
‣ "Data-injection attacks in stochastic control systems: Detectability and performance tradeoffs," C.-Z. Bai, V. Gupta, and F. Pasqualetti, Automatica 82 (2017): 251-260.

References

Nash Bargaining based Distributed Trajectory Generation
‣ "A DDDAS Approach to Sensor Trajectory Generation," S. Lin, V. Gupta, G. Madey, and C. Poellabauer, 1st International Conference on InfoSymbiotics / DDDAS (Dynamic Data Driven Applications Systems), Hartford, CT, Aug 2016.

Demonstration
‣ "Radio Frequency Based Indoor Localization in Ad-Hoc Networks," M. Golestanian, J. Siva, and C. Poellabauer, in Ad Hoc Networks, Jesus Hamilton Ortiz and Alvaro Pachon de la Cruz, eds., InTech, ISBN 978-953-51-4924-8, 2017.
‣ "Poster: Indoor Localization using Multi-Range Beaconing," M. Golestanian and C. Poellabauer, poster at the 17th International Symposium on Mobile Ad Hoc Networking and Computing, Paderborn, Germany, July 2016.

Previous Reporting Period
‣ Vijay Gupta, Wann-Jiun Ma, Greg Madey, and Daniel Quevedo, "A DDDAS Approach to Distributed Control in Computationally Constrained Environments (UAV Swarms)," INFORMS Annual Meeting, Philadelphia, PA, November 2015.
‣ Vijay Gupta, Gregory Madey, and Christian Poellabauer, "Distributed DDDAS through Receding Horizon Control," Workshop on Architectural Support and Middleware for InfoSymbiotics / Dynamic Data Driven Applications Systems (DDDAS), IEEE International Conference on High Performance Computing (HiPC 2015), Bengaluru, India, December 2015.
‣ Wann-Jiun Ma, Vijay Gupta, and Daniel E. Quevedo, "Collaborative Processing in Distributed Control for Resource Constrained Systems," IET Control Theory and Applications, March 2017.

Summary

$\hat{x} = \arg\min_{\hat{x}} \; \mathbb{E}\!\left[ (x - \hat{x})(x - \hat{x})^T \right]$

[Figure: fusion center receiving measurements $y_1, \ldots, y_n$ from sensors observing $X = x$, with hidden action and hidden information at the sensors.]

‣ Contract design for distributed estimation with selfish, strategic, and malicious sensors
‣ Ongoing work: collaboration with AFRL (Eloy Garcia, Krishna Kalyanam, and David Casbeer)
‣ Future work: reputation and trust

DDDAS: Distributed Computation Across UAVs / UGVs

Original setup (central application system, completely centralized):
‣ The central coordinating and processing unit communicates with all UAVs: (1) the UAVs transmit possible waypoints and information about entities; (2) the central unit transmits the actual waypoints for the UAVs to follow.
‣ UAVs, UGVs, and a stealth UAV carry sensors and processors on board, each with its own sensor footprint. Every vehicle (1) generates measurements, (2) generates a local knowledge map based on its own and its neighbors' data, and (3) generates possible waypoints by solving an optimization problem that includes its dynamics.
‣ The central unit ensures all waypoints are consistent and chooses the waypoints for the individual UAVs; for rapid classification, it may overrule all proposed waypoints.

Final goal (UAV / UGV swarms with the distributed DDDAS application system co-resident with the sensor systems, completely decentralized):
‣ UAVs and UGVs transmit to their neighbors: (1) local maps are exchanged for data fusion to generate consistent waypoints; (2) processor tasks can be shared so that the limited local processors are not overwhelmed.
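A structural sketch (not the project's software) of one coordination round in the centralized variant described above: each vehicle senses, fuses neighbor maps, and proposes waypoints, and a central unit reconciles the proposals. All class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    vid: int
    position: tuple
    local_map: dict = field(default_factory=dict)

    def sense(self):
        # 1. Generate measurements (placeholder: entities observed near the vehicle).
        return {self.vid: {"entities_near": self.position}}

    def fuse_neighbor_maps(self, neighbor_maps):
        # 2. Build a local knowledge map from own and neighbors' data.
        for m in neighbor_maps:
            self.local_map.update(m)

    def propose_waypoints(self):
        # 3. Propose waypoints; a real vehicle would solve an optimization over its dynamics.
        x, y = self.position
        return [(x + dx, y) for dx in (1, 2, 3)]


class CentralUnit:
    def assign(self, proposals):
        # Ensure the chosen waypoints are mutually consistent (here: no two vehicles
        # share a waypoint); for rapid classification it may overrule all proposals.
        assigned, taken = {}, set()
        for vid, options in proposals.items():
            choice = next((w for w in options if w not in taken), options[0])
            assigned[vid] = choice
            taken.add(choice)
        return assigned


vehicles = [Vehicle(0, (0, 0)), Vehicle(1, (0, 1))]
maps = [v.sense() for v in vehicles]
for v in vehicles:
    v.fuse_neighbor_maps(maps)                       # exchange local maps with neighbors
proposals = {v.vid: v.propose_waypoints() for v in vehicles}
print(CentralUnit().assign(proposals))
```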