Autonomous Calibration Algorithms for Networks of Cameras
Domenica Borra, Enrico Lovisari, Ruggero Carli, Fabio Fagnani and Sandro Zampieri
Abstract— We deal with the practically important problem of distributed camera calibration. We model a network of N cameras as an undirected graph in which communicating cameras can measure their relative orientation in a noisy way. These measurements can be used to minimize a suitable cost function, whose shape depends on a vector of integers K. We propose two algorithms that estimate such a K in a distributed way, and we compare the advantages and disadvantages of both. Simulations on grid networks show the effectiveness of the algorithms.
I. INTRODUCTION

In the last fifteen years many efforts have been spent by the scientific community to distribute tasks over a network of communicating and interacting agents, in order to avoid the major problems of centralized strategies, such as the reliability of multi-hop communication and the reliability of the agents themselves, both physically (they could fail) and from a security standpoint (they could be malicious and intentionally damage the network). Distributed strategies, in contrast, are based on local exchange of information among hierarchically equal agents.
In a network of cameras deployed in a plane, one of the most important problems is calibration: each camera has to know how it is oriented, at any instant, with respect to a certain common reference frame. The importance of this is clear: assume that an external agent, which has to be tracked, is exiting the range of the i-th camera and entering that of the j-th one. In this case camera i has to tell camera j to move and follow the agent before camera i loses it. Clearly, both cameras must share the same reference frame. Usually, this is set off-line by a human operator or by a centralized unit. Instead, the algorithms we propose aim at complete autonomy and do not require any central control to carry out the computations. This allows improved accuracy and the possibility of periodic autonomous re-calibration.
The model for the network is a graph G = (V, E), in which V is the set of cameras and E, the set of edges, consists of the pairs of communicating cameras. The information used to calibrate the network, inspired by the works of Barooah and Hespanha [1], [2], [3] on localization, is the
relative orientation between cameras, which can be computed if their fields of view overlap. Camera calibration through relative orientations can be transformed into a consensus-like problem over the manifold S¹, a problem which has already attracted much attention from the research community. In [4], [5] a consensus algorithm on S¹ based on the gradient flow of a potential defined through the chordal distance is proposed. In [6] a similar approach, based on the geodesic distance, is used to study the more general calibration problem on SE(3). The issue with both these approaches is that the defined potentials are characterized by several nontrivial local minima into which, apart from particular initial conditions, it is easy to fall. On the other hand, in [7] the noisy measurements of relative orientations are a priori constrained to sum to zero on cycles; based on this construction, a least-squares estimation algorithm, which is proved to be optimal on a ring graph, is presented.
We choose to concentrate on the simple case of calibration in SO(2) ≅ S¹, and we use the geodesic distance. Our main idea is to break the estimation problem into two parts: first we estimate a combinatorial object, a vector in Z^E, which takes care of the fact that the noise around cycles in general does not sum up to 0; once this is done, we estimate the orientations by solving a quadratic optimization problem, as in the localization problem. Our method is consistent, in the sense that if there is no noise the solution coincides with the true one. We propose two different algorithms: one based on spanning trees, the other based on minimal cycles. This procedure has been used in [8] to improve the estimates in the case of localization in R^N.
The paper is organized as follows: in Section II we present several definitions and algebraic properties of graphs, which will be used extensively throughout the paper. In Section III we formulate the problem we deal with.
Section IV presents our two algorithms and gives some first properties. In Section V we compare the two algorithms, giving closed formulae for the results, and we estimate their performance. Examples and numerical simulations are provided in Sections VI and VII.

II. GRAPH THEORETICAL TOOLS
R. Carli, E. Lovisari and S. Zampieri are with the Department of Information Engineering, University of Padova, via Gradenigo 6/B, Padova, Italy, {lovisari, carli, zampi}@dei.unipd.it. D. Borra and F. Fagnani are with the Department of Mathematics, Polytechnic of Turin, {domenica.borra, fabio.fagnani}@polito.it. The research leading to these results has received funding from the European Community's Seventh Framework Programme under agreement n. FP7-ICT-223866-FeedNetBack.
An undirected graph is a pair G = (V, E), where V is the set of nodes and E is a subset of unordered pairs of elements of V, called edges. We put N = |V| and M = |E|. A spanning tree T = (V, ET) of G = (V, E) is simply a tree subgraph of G which has the same set of nodes; consequently |ET| = N − 1.
An orientation on G = (V, E) is a pair of maps s : E → V and t : E → V such that e = {s(e), t(e)} for every e ∈ E. In this notation s(e) (resp. t(e)) is called the starting (resp. terminating) node of the edge e. Assume from now on that we have fixed an orientation (s, t) on G. The incidence matrix A ∈ {±1, 0}^{E×V} of G is defined by putting A_{e,s(e)} = 1, A_{e,t(e)} = −1, and A_{e,v} = 0 if v ≠ s(e), t(e). An oriented cycle h of length n (with n > 2) is an ordered sequence of nodes h = (v1 v2 · · · vn) (up to cyclic permutations) such that {vi, vi+1} ∈ E for all i = 1, . . . , n (interpreting n + 1 = 1). The set of cycles is denoted by H. Given a cycle h, we denote by −h the same cycle with reversed orientation. Given h = (v1 v2 · · · vn) ∈ H, we associate with it a row vector r_h ∈ {±1, 0}^E as follows: put e_i = {v_i, v_{i+1}} and define r_h(e_i) := A_{e_i,v_i} for every i = 1, . . . , n, while r_h(e) = 0 for any edge not in the cycle. It is immediate to see that r_{−h} = −r_h. Given two oriented cycles h, h′ such that r_h(e) r_{h′}(e) = 0 for all edges except one edge e* for which r_h(e*) r_{h′}(e*) = −1, we can consider the sum cycle, denoted by h + h′, determined by setting r_{h+h′} := r_h + r_{h′}. The sum cycle h + h′ is simply obtained by joining the edges of the two cycles and dropping e* (see Fig. 1).

Fig. 1. The cycle h + h′ is the sum of the two minimal cycles h and h′. It holds r_h + r_{h′} = r_{h+h′}.

An oriented cycle is said to be minimal if it
cannot be written as the sum of two other cycles. Clearly, if h is minimal, also −h is minimal. We denote by H0 any subset of minimal cycles with the property that h ∈ H0 iff −h ∉ H0. A standard result says that |H0| = M − N + 1. Fix now a spanning tree T = (V, ET) of G. A (T-)fundamental cycle is any cycle of type h = (v1 v2 · · · vn) where {vi, vi+1} ∈ ET for every i = 1, . . . , n − 1. Notice that {v1, vn} is the only edge in the cycle which is not in ET. Clearly, for each edge e ∈ E \ ET there are two fundamental cycles sharing the edge e, namely he and −he, and we assume that he has been chosen so that r_{h_e}(e) = 1. HT will denote the set of the he's, and evidently also |HT| = M − N + 1. This is not a coincidence, as the following result shows. Let R ∈ Z^{H×E} be the integer matrix whose rows are the vectors r_h as h varies among all possible cycles, let R0 be the sub-matrix of R consisting of the rows indexed by H0, and, finally, let RT be the sub-matrix of R consisting of the rows indexed by HT. Interpreting R, R0, and RT as group homomorphisms acting on Z^E, and the incidence matrix A as a group homomorphism from Z^V to Z^E, we have the following result.
Lemma 2.1: It holds ker R0 = ker RT = ker R = Im A.
While there is an obvious bijection between HT and E \ ET, the construction of a bijection between H0 and E \ ET needs a bit of extra work. The following result describes this
construction and will be instrumental to the description of the cycle algorithm below.
Proposition 2.1: Given the graph G with an orientation and a spanning tree T, there is an ordering e1, . . . , e_{M−N+1} of the edges in E \ ET such that, for every i, e_i forms a minimal cycle in G with all remaining edges lying in ET ∪ {e1, . . . , e_{i−1}}.

III. PROBLEM FORMULATION

We model the network of cameras as a connected undirected graph G = (V, E) equipped with an orientation (s, t). Each node is equipped with a camera. Fix an external reference frame and let θ̄_v ∈ R be the orientation of the camera of agent v w.r.t. such a reference frame. Nodes can obtain relative noisy measurements along the available edges in the way we are going to describe. First, given any real number x, we set (x)_{2π} = x − 2πq, where q = ⌊(x + π)/(2π)⌋ ∈ Z is such that (x)_{2π} ∈ [−π, π). If e = {v, w} ∈ E, we assume that the fields of view of v and w overlap, and thus that, by means of known algorithms, the following noisy measurement can be computed by both v and w:

η_e = (θ̄_{s(e)} − θ̄_{t(e)} − ε_e)_{2π} = θ̄_{s(e)} − θ̄_{t(e)} − ε_e − 2πK̄_e,    (1)

where K̄_e ∈ Z. The assumption that the measurement is an angle η_e ∈ [−π, π) clearly does not entail any loss of generality. The incidence matrix A of the graph G allows us to rewrite this relation in vector form as

η = Aθ̄ − ε − 2πK̄,    (2)

where K̄ ∈ Z^E. We assume the noises along different edges to be independent and identically distributed as ε_e ∼ U[−ε̄, ε̄]. This choice is made for the sake of simplicity. Notice that it is a realistic hypothesis, since each camera has a well-defined resolution and, by construction, a failure tolerance, so it cannot be mistaken by more than a certain number of pixels. The calibration problem consists in giving an estimate θ̂_v ∈ R of the corresponding θ̄_v for each node v ∈ V. These estimates will be constructed using the available relative measurements and exchanging information along the graph. Notice that θ̄ and θ̄ + 2πl, for any l ∈ Z^V, give rise to the same measurements η, so even in the case when no noise is present, θ̄ can only be determined up to these 2π integer translations. The vectors of the form θ̄ + 2πl will be called representatives of θ̄.

A. Cost function

In order to solve the calibration problem we define the following cost:

V(θ) = Σ_{e∈E} ((θ_{s(e)} − θ_{t(e)} − η_e)_{2π})² = ‖(Aθ − η)_{2π}‖²₂.    (3)
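As a concrete illustration (not part of the original development), the wrap operator (x)_{2π}, the measurement model (1)–(2), and the cost (3) can be sketched in a few lines of Python on a toy triangle graph; all names and the graph itself are ours:

```python
import numpy as np

def wrap(x):
    # (x)_{2pi}: shift x by a multiple of 2*pi into [-pi, pi)
    return x - 2 * np.pi * np.floor((x + np.pi) / (2 * np.pi))

# Toy oriented triangle: edge e is the ordered pair (s(e), t(e))
edges = [(0, 1), (1, 2), (2, 0)]
N, M = 3, len(edges)

# Incidence matrix A: A[e, s(e)] = 1, A[e, t(e)] = -1
A = np.zeros((M, N))
for e, (s, t) in enumerate(edges):
    A[e, s], A[e, t] = 1.0, -1.0

rng = np.random.default_rng(0)
theta_bar = rng.uniform(-np.pi, np.pi, N)   # true orientations
eps = rng.uniform(-0.1, 0.1, M)             # measurement noise
eta = wrap(A @ theta_bar - eps)             # noisy relative measurements in [-pi, pi)

# The integer vector K_bar in Z^E makes eta = A theta_bar - eps - 2*pi*K_bar exact
K_bar = np.round((A @ theta_bar - eps - eta) / (2 * np.pi))

def cost(theta):
    # V(theta) = || (A theta - eta)_{2pi} ||^2
    return float(np.sum(wrap(A @ theta - eta) ** 2))
```

With small noise, evaluating `cost` at the true orientations returns just the squared noise norm, consistent with (3).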
The cost V(θ) attains the value zero at any representative of θ̄ in case of noiseless measurements. However, even in this ideal case, it has multiple local minima. In order to let the algorithm avoid these points, we define the regions

R_K(η) := {θ ∈ R^V : |Aθ − η − 2πK| ≤ π1},¹    (4)

where K ∈ Z^E. These regions are convex and form a partition of R^V. However, some of them can be empty, since they are defined by M constraints on N variables and in general M > N. It is immediate to see that if θ ∈ R_K(η), and only for these points, then V(θ) = ‖Aθ − η − 2πK‖²₂, which is purely quadratic and convex on R_K(η). For this reason, in each R_K(η) there can be at most one local minimum of V(θ). The main idea is thus the following: first of all, obtain a reasonable estimate K̂ of K̄; then minimize the reshaped cost

V_K̂(θ) := ‖Aθ − η − 2πK̂‖²₂,    (5)

which is defined by first restricting V to the region R_K̂(η) and then extending the quadratic form to R^V. Notice that, by the way η has been defined,

|Aθ̄ − η − 2πK̄| = |ε| ≤ π1,    (6)
so that θ̄ ∈ R_K̄(η). If we have obtained an estimate K̂ such that K̂ = K̄ + Al for some l ∈ Z^V, then, clearly, θ̄ + 2πl ∈ R_K̂(η). A simple continuity argument then shows that, as the noise threshold ε̄ tends to 0, the estimate θ̂ converges to θ̄ + 2πl. In other terms, with such a K̂ we have a guarantee that our solution is close to a feasible one, namely to a representative of the true θ̄. In Section V we will analyze the performance of the algorithms in much deeper detail.

IV. DESCRIPTION OF THE ALGORITHMS

Both algorithms we now describe are based on the idea illustrated above of breaking the estimation problem into two steps: first give an estimate K̂ of K̄, then minimize the quadratic function defined in Eq. (5). The two algorithms differ only in the first step: the first one uses the fundamental cycles with respect to a chosen spanning tree, the second one the minimal cycles. Regarding the first step, the main idea underlying both algorithms exploits the fact that the relative differences of the actual orientations θ̄_v along a cycle must necessarily sum up to a multiple of 2π. More precisely, let h be an oriented cycle and let r_h be its representative vector. Then, from Eq. (2), using the fact that r_h A = 0, we obtain that r_h K̄ = −q_{2π}(r_h η) − q_{2π}(r_h ε). Therefore, if the algebraic sum of the noise along the cycle h is below π in modulus, i.e. |r_h ε| < π, we obtain that

r_h K̄ = −q_{2π}(r_h η).    (7)

¹ |v| ≤ p, where v, p ∈ R^n, means −p_i ≤ v_i < p_i for all i = 1, . . . , n.
In other terms, in this case r_h K̄ can be exactly computed on the basis of the measurements η along the cycle h. This suggests defining K̂ in such a way that r_h K̂ = −q_{2π}(r_h η) for any cycle h, but this is in general not possible, since the various r_h's are linearly dependent. What must be done is to restrict the cycles to a subset for which the corresponding r_h's form a Z-basis for the Z-module generated by the rows of the matrix R. The choice of this basis is the essential difference between the two algorithms.

A. The Tree-algorithm

Fix a spanning tree T and consider the corresponding fundamental cycles. Let us impose that r_h K̂ = −q_{2π}(r_h η) for any fundamental cycle h. From Lemma 2.1 we know that this determines K̂ up to elements in the image of A, as required. A concrete solution can easily be found by imposing K̂_e = 0 for every e ∈ ET. Then we easily obtain that, for any e ∈ E \ ET,

K̂_e = −q_{2π}(r_{h_e} η),

where, we recall, h_e is the fundamental cycle associated with e such that r_{h_e}(e) = 1. This algorithm is very simple and easily implementable. As we will point out, however, its performance is rather poor for large graphs.
A distributed way to compute K̂ is proposed below. Fix an anchor node, denoted by v*, which will serve as root of the tree T. First of all, we propagate the measurements along the tree starting from the root: given a node v with father f(v), we set θ̂_{FE,v} = θ̂_{FE,f(v)} + η_{v f(v)}. As a side effect, in this way we also obtain a first estimate θ̂_FE of θ̄.
Now we construct K̂. For each edge e = {v, w} ∈ E \ ET, the nodes v and w exchange their first estimates θ̂_{FE,v}, θ̂_{FE,w} and compute K̂_e as

K̂_e = ⌊(θ̂_{FE,s(e)} − θ̂_{FE,t(e)} − η_e) / (2π)⌋.

This is actually the only choice of K̂ for which θ̂_FE ∈ R_K̂(η). Finally, we obtain a final estimate of θ̄, call it θ̂, by minimizing the quadratic cost function in Eq. (5).
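The steps above can be sketched as follows in Python on a hypothetical 4-node graph (a square plus one diagonal); the graph, variable names, and the nearest-integer rounding used in Step B (which is the choice enforcing θ̂_FE ∈ R_K̂(η)) are ours:

```python
import numpy as np

def wrap(x):
    # (x)_{2pi} in [-pi, pi)
    return x - 2 * np.pi * np.floor((x + np.pi) / (2 * np.pi))

# Hypothetical graph: a square plus one diagonal; tree = first 3 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
tree = [0, 1, 2]                        # indices of spanning-tree edges
N, M = 4, len(edges)
A = np.zeros((M, N))
for e, (s, t) in enumerate(edges):
    A[e, s], A[e, t] = 1.0, -1.0

rng = np.random.default_rng(1)
theta_bar = rng.uniform(-np.pi, np.pi, N)
eps = rng.uniform(-0.05, 0.05, M)       # eps_bar = 0.05 < pi / L_T here (L_T = 4)
eta = wrap(A @ theta_bar - eps)

# Step A: propagate measurements along the tree from the anchor v* = 0
theta_FE = np.full(N, np.nan)
theta_FE[0] = theta_bar[0]
pending = set(tree)
while pending:
    for e in sorted(pending):
        s, t = edges[e]
        if not np.isnan(theta_FE[s]):
            theta_FE[t] = theta_FE[s] - eta[e]   # eta_e ~ theta_s - theta_t
            pending.discard(e)
        elif not np.isnan(theta_FE[t]):
            theta_FE[s] = theta_FE[t] + eta[e]
            pending.discard(e)

# Step B: the unique K_hat placing theta_FE inside R_{K_hat}(eta)
K_hat = np.floor((A @ theta_FE - eta + np.pi) / (2 * np.pi))

# Step C: anchored least squares: min ||A theta - eta - 2 pi K_hat||^2, theta_0 fixed
b = eta + 2 * np.pi * K_hat - A[:, 0] * theta_bar[0]
sol, *_ = np.linalg.lstsq(A[:, 1:], b, rcond=None)
theta_hat = np.concatenate(([theta_bar[0]], sol))
```

On tree edges Step B returns 0 by construction, matching the convention K̂_e = 0 on ET, and with this noise level the final estimate is close to the true orientations modulo 2π.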
This problem can, for example, be easily solved using a distributed Jacobi algorithm, as shown in [2], [3].

B. Minimal cycles-algorithm

The second algorithm exploits instead the minimal cycles of the graph G. Let us impose that r_h K̂ = −q_{2π}(r_h η) for any minimal cycle h. From Lemma 2.1 we know that this determines K̂ up to elements in the image of A, as required. A concrete method for constructing such a K̂, once the values −q_{2π}(r_h η) have been computed for all minimal cycles, can easily be based on Proposition 2.1. We start, as in the
Algorithm 1 Tree-Algorithm
(Input variables)
1: θ̄_{v*}, value of the anchor
2: η_e, e = 1, . . . , M
3: T spanning tree
(Step A: first estimate θ̂_FE)
4: θ̂_{FE,v*} = θ̄_{v*}
5: for i = 1, . . . , N do
6:   for j = 2, . . . , N do
7:     if j is a son of i in T then θ̂_{FE,j} = θ̂_{FE,i} + η_{j,i}
(Step B: estimate K̂)
8: for e ∈ E do
9:   K̂_e = ⌊(θ̂_{FE,s(e)} − θ̂_{FE,t(e)} − η_e) / (2π)⌋
(Step C: second estimate θ̂)
10: Initial condition: θ̂(0) = θ̂_FE
11: compute θ̂ = argmin ‖Aθ − η − 2πK̂‖²₂

Algorithm 2 Minimal cycles-algorithm
(Input variables)
1: η_e, e = 1, . . . , M
2: T spanning tree
3: H0 = {r_{h_1}, . . . , r_{h_{M−N+1}}} minimal cycles set
(Step A: computation of b = −q_{2π}(R0 η))
4: for h ∈ H0 do b_h = −⌊(Σ_{e∈h} η_e + π) / (2π)⌋
(Step B: estimate K̂)
5: for e ∈ ET do K̂_e = 0
6: for h ∈ H0 s.t. K̂_e is known for all e ∈ h except for one ē do K̂_ē = b_h − Σ_{e∈h, e≠ē} r_h(e) K̂_e
(Step C: second estimate θ̂)
7: Initial condition: θ̂(0) = (0, . . . , 0)
8: compute θ̂ = argmin ‖Aθ − η − 2πK̂‖²₂
previous algorithm, from a spanning tree T = (V, ET) of G, and we assign K̂_e = 0 on the edges in ET. We then consider the remaining edges e1, e2, . . . , e_{M−N+1}, ordered as in Proposition 2.1, and we define K̂ iteratively as

K̂_{e_i} = −q_{2π}(r_{h_i} η) − Σ_{e≠e_i} r_{h_i}(e) K̂_e,

where h_i is the minimal cycle that e_i forms with edges in ET ∪ {e1, . . . , e_{i−1}}, chosen with the orientation such that r_{h_i}(e_i) = 1. The last step of the Minimal cycles-algorithm is the same as that of the Tree-algorithm, the only difference being that, if we want to use iterative algorithms, the initial condition cannot be set to some particular value, so it is taken to be (0, . . . , 0) for simplicity.
This second algorithm allows much better performance than the Tree-algorithm, but it requires a greater degree of communication and collaboration among nodes. In fact, we assume that, through some local collaboration, each minimal cycle corresponds to a "superagent" able to sense all the measurements along the edges of its cycle. Clearly, this is far more than just locally exchanging information.
Let us illustrate how the Minimal cycles-algorithm works with the following simple example.
Example 4.1: Consider the simple graph in Fig. 2, where h1, . . . , h5 are minimal cycles and 1, . . . , 13 are edges. Assume that b1 = 1, b2 = 2, b3 = · · · = b5 = 0, where b = −q_{2π}(R0 η). The edges 1, . . . , 8 form a spanning tree T of the graph, so, first of all, K̂_1 = · · · = K̂_8 = 0. Now, the cycles h1 and h2 are made of edges of the tree apart from the edges 9 and 10, respectively. Thus h1 sets K̂_9 = 1, while h2 sets K̂_10 = −2, since the direction of 10 is incoherent with the orientation of h2. Once this is done, we know the value of K̂ on all the edges of h3 and h4, apart from 12 and 11 respectively. In order for the sums over these cycles to equal b3 and b4, we obtain K̂_12 = −1 and K̂_11 = 2. At last, all the edges of h5 have a value apart from 13, so it is set K̂_13 = −1. Now the sum of K̂ over the five minimal cycles corresponds entry by entry to b.
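The iterative construction can be sketched in Python on a hypothetical square-plus-diagonal graph (our own example, smaller than the one of Fig. 2); the two minimal cycles are ordered as in Proposition 2.1, and the division by r_h(ē) handles chords traversed against the cycle orientation:

```python
import numpy as np

def q2pi(x):
    # q_{2pi}(x): the integer q with x - 2*pi*q in [-pi, pi)
    return np.floor((x + np.pi) / (2 * np.pi))

def wrap(x):
    return x - 2 * np.pi * q2pi(x)

# Hypothetical graph: a square plus one diagonal; tree edges are 0, 1, 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N, M = 4, len(edges)
A = np.zeros((M, N))
for e, (s, t) in enumerate(edges):
    A[e, s], A[e, t] = 1.0, -1.0

# Minimal-cycle row vectors, ordered as in Proposition 2.1:
# h1 = (0 1 2) closes chord e4 = {0,2}; h2 = (0 2 3) closes chord e3 = {3,0}
R0 = np.array([[1.0, 1.0, 0.0, 0.0, -1.0],
               [0.0, 0.0, 1.0, 1.0, 1.0]])
chords = [4, 3]                       # edge closed by each cycle, in order

rng = np.random.default_rng(2)
theta_bar = rng.uniform(-np.pi, np.pi, N)
eps = rng.uniform(-0.05, 0.05, M)     # |r_h eps| <= 3 * 0.05 < pi
eta = wrap(A @ theta_bar - eps)
K_bar = np.round((A @ theta_bar - eps - eta) / (2 * np.pi))

b = -q2pi(R0 @ eta)                   # under small noise, b_h = r_h K_bar by (7)

# K_hat = 0 on the tree; each cycle then determines one new chord entry
K_hat = np.zeros(M)
for i, (row, e_new) in enumerate(zip(R0, chords)):
    # every other entry of K_hat touched by this cycle is already known
    K_hat[e_new] = (b[i] - row @ K_hat) / row[e_new]
```

By construction the cycle sums of K̂ match b, so R0(K̂ − K̄) = 0 and, by Lemma 2.1, K̂ − K̄ ∈ Im A.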
Fig. 2. A simple graph to show how the second algorithm works.
V. ANALYSIS AND COMPARISON OF THE ALGORITHMS

In this section we give a closed formula for the final estimate θ̂ of θ̄ for both algorithms, once an estimate K̂ has been obtained. Secondly, we state a deterministic threshold on the noise which guarantees a correct estimate of K̄. Recall that the final estimate θ̂ must minimize the cost V_K̂(θ) = ‖Aθ − η − 2πK̂‖²₂. Since A has a kernel (which is spanned by 1²), θ̂ is of course only determined up to multiples of 1. This non-uniqueness can be avoided as follows. As already done in Section IV, fix an anchor node v* ∈ V that knows the true value of its orientation, and assume that it never changes its estimate. Then consider the vector ξ ∈ R^V defined by ξ(v*) = 1 and ξ(v) = 0 for any v ≠ v*. Now define the Green matrix associated with G and v* as the solution of the following equations:

G AᵀA = I − 1ξᵀ,   Gξ = 0.    (8)

Writing down the stationary point equation for our quadratic problem and using the Green matrix we have just defined, it is straightforward to show that the following result holds.
Proposition 5.1: If K̂ is the estimate of K̄, the minimum of V_K̂(θ) is attained at

θ̂ = θ̄ + G Aᵀ ε − 2π G Aᵀ (K̄ − K̂).    (9)

The previous proposition basically says that the difference between the final estimate θ̂ and the actual orientations θ̄ is made of two terms. The first one, G Aᵀ ε, is unavoidable and depends only on the fact that the measurements are noisy; this term is the localization error in the works by Barooah and Hespanha on R^N. The second term, −2π G Aᵀ (K̄ − K̂), depends on the estimate K̂ and is due to the geometry of S¹. Actually, if K̂ = K̄ + Al, with l ∈ Z^V, it is easy to see that

θ̂ = θ̄ + 2π(I − 1ξᵀ)l + G Aᵀ ε = θ̃ + G Aᵀ ε,

where clearly θ̃ = θ̄ + 2π(I − 1ξᵀ)l is a representative of θ̄.
Let L0 be the maximum length of a minimal cycle and LT the maximum length of a T-fundamental cycle. It is clear that L0 ≤ LT, and in general LT depends on the number of nodes of the graph, as the examples in Section VI will point out. We now present the main result of this section, which gives a noise threshold for each algorithm under which it is guaranteed that K̂ − K̄ ∈ Im_Z A.
Proposition 5.2: If

Tree algorithm: ε̄ < π / LT,
Cycle algorithm: ε̄ < π / L0,

then K̂ = K̄ + Al for some l ∈ Z^V.
Proof: The two cases can be analyzed in exactly the same way; below we give a proof for the Minimal cycles-algorithm. Notice that the assumption implies that K̄ satisfies Eq. (7) for any cycle h ∈ H0. Since, by definition, K̂ satisfies the same equation, we obtain that R0(K̂ − K̄) = 0. By invoking Lemma 2.1, the thesis follows.

² 1 represents a vector of suitable dimension whose entries are all equal to 1.
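The Green matrix of (8) can be checked numerically. The sketch below uses a closed form of our own derivation, G = (I − 1ξᵀ) L⁺ (I − ξ1ᵀ) with L = AᵀA and L⁺ the Moore–Penrose pseudoinverse, on a hypothetical square-plus-diagonal graph; its diagonal also equals the effective resistances to the anchor, anticipating Proposition 5.3 below:

```python
import numpy as np

def wrap(x):
    # (x)_{2pi} in [-pi, pi)
    return x - 2 * np.pi * np.floor((x + np.pi) / (2 * np.pi))

# Hypothetical graph: a square plus one diagonal, anchor v* = 0
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N, M = 4, len(edges)
A = np.zeros((M, N))
for e, (s, t) in enumerate(edges):
    A[e, s], A[e, t] = 1.0, -1.0

one = np.ones(N)
xi = np.zeros(N); xi[0] = 1.0              # anchor indicator vector
L = A.T @ A                                 # the graph Laplacian

# Assumed closed form for the Green matrix, verified in the checks below:
# G = (I - 1 xi^T) L^+ (I - xi 1^T)
P = np.eye(N) - np.outer(one, xi)
G = P @ np.linalg.pinv(L) @ P.T

# Anchored minimizer of ||A theta - eta - 2 pi K_hat||^2 via the Green matrix
rng = np.random.default_rng(3)
theta_bar = rng.uniform(-np.pi, np.pi, N)
eps = rng.uniform(-0.05, 0.05, M)
eta = wrap(A @ theta_bar - eps)             # measurements, model (2)
K_bar = np.round((A @ theta_bar - eps - eta) / (2 * np.pi))
K_hat = K_bar.copy()                        # pretend the first step succeeded
theta_hat = G @ A.T @ (eta + 2 * np.pi * K_hat) + theta_bar[0] * one
```

The checks confirm that G solves (8), that the Green-matrix expression coincides with the anchored least-squares minimizer, and that G_vv equals the effective resistance between v and the anchor.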
Remark 5.1: A closed formula for K̂ can actually be derived for both algorithms, and it will be the objective of our future research to deepen the performance analysis, centered on the error in the estimation of K̄.
If the assumptions of Proposition 5.2 hold, we have as a straightforward consequence θ̂ = θ̃ + G Aᵀ ε, where θ̃ is a representative of θ̄. In this case the error term is exactly the same appearing in the vector-space case, and it can be analyzed exploiting the probabilistic assumptions made on the noise in Section III, namely that ε_e ∼ U[−ε̄, ε̄] for all e ∈ E. In this case, from paper [1] we can obtain an estimate of the variance of the final estimation error in terms of the effective resistance of a suitable electrical network. Namely, take an electrical network whose nodes are the nodes of the graph G and in which there is a resistance of 1 Ohm between any pair of nodes connected by an edge of G. Denote by R_{uv} the effective resistance between any pair of nodes u, v ∈ V. Then we have the following result.
Proposition 5.3 ([1]): The estimate θ̂ = θ̃ + G Aᵀ ε is unbiased, namely E θ̂ = θ̃, and its v-th component has variance

E[(θ̂_v − θ̃_v)²] = R_{v v*},    (10)

where v* is the anchor. As a consequence, the normalized scalar estimation variance is

(1/N) var(θ̂ − θ̃) = (1/N) Σ_{v∈V} R_{v v*},    (11)
which is the average effective resistance between the anchor and the other nodes of the network.
Remark 5.2: The previous proposition gives mean and variance of the estimation error δ = θ̂ − θ̃. However, this is not entirely correct, since what we are really interested in is δ_{2π} = (θ̂ − θ̃)_{2π}, which still has zero mean and variance less than that in Eq. (11). Nonetheless, if the noise is large and θ̂ is not near θ̃, the probability to end up in a point near another representative of θ̄ is intuitively very small, so we preferred to give only the results in Proposition 5.3.

VI. EXAMPLES

In this section we compare the two proposed algorithms for several different graph topologies. We concentrate on grid-like topologies, since they can be used to model real networks of cameras.
First of all, we give a simple example of how our algorithms can avoid the local minima in the original cost. Consider the simple ring graph with 3 agents in Fig. 3, and assume θ̄1 = θ̄2 = θ̄3 = 0 for the sake of simplicity (the problem thus becomes consensus on S¹). Consider the ideal noiseless case, so that η12 = η23 = η31 = 0 and K̄12 = K̄23 = K̄31 = 0. In the noiseless case we correctly estimate K̂ = K̄. Consider now the initial conditions θ̂1(0) = 0, θ̂2(0) = (2/3)π and θ̂3(0) = (4/3)π. If we use directly the original cost in Eq. (3), it is easy to see that this initial condition is a local maximum (in the case of 5 or more agents the analogous configuration is a local minimum), so any gradient-descent-like algorithm gets stuck. However, if we reshape the cost using our guess K̂, we have to minimize V_K̂(θ) = ‖Aθ − 2πK̂‖²₂ = ‖Aθ‖²₂, and we converge to the actual orientations, fixing the anchor at θ̂1 = θ̄1 = 0.

Fig. 3. A simple ring with 3 agents. On the right the initial conditions.

In order to draw a comparison between the two algorithms, consider the graphs shown in Fig. 4. In both cases we have a line-like graph with many nodes deployed along one dimension, and the chosen spanning trees are shown in thick line. They are rooted at the anchor, the top-left node. The set of minimal cycles H0 is simply the set of squares which form the graph.

Fig. 4. Two examples of spanning trees for a line-like graph. The proposed algorithms work in a similar manner for the one on the right, while the Minimal cycles-algorithm is far more effective for the one on the left.

For the graph on the left, if we take the tree and add the last edge on the right, we obtain a cycle with maximum length LT = N. On the contrary, the minimal cycles are of length L0 = 4. As an immediate consequence, the Minimal cycles-algorithm has much better performance, since the upper bound ε̄ < π/4 is independent of the number of nodes; in order for the Tree-algorithm to produce a good estimate K̂, the magnitude of the noise would instead have to decrease with the dimension of the graph. If we consider the spanning tree on the right, we see that LT = 4 as well, since the spanning tree is chosen in a much better way. In this case the two algorithms have comparable, and good, performance.
It is not always true, however, that the Minimal cycles-algorithm performs better. For example, for the ring graph in Fig. 5 there is only one minimal cycle, and the two proposed algorithms basically coincide in terms of performance. In such a case the Tree-algorithm is preferable, since it is easier to implement and completely distributed, and it requires less information on the topology of the network as well as fewer communications.
As a last example, consider the 2D grid on the right in Fig. 5. The comb-shaped spanning tree is the one in thick line. As before, here LT ∼ √N (adding one of the edges on the bottom), while L0 = 4. However, it can be shown that for the grid LT ∼ √N is actually the best one can do, so in this case the Minimal cycles-algorithm always performs better than the Tree-algorithm.
Notice that the choice of the spanning tree is fundamental when comparing the algorithms. Even if the tree is such that LT is minimal, the choice of the better algorithm depends on the topology of the graph, since it could hold LT > L0, as highlighted in the previous example. Furthermore, the construction of such an optimal spanning tree is an NP-complete problem.
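The growth of LT on a grid can be checked directly. The sketch below (our own construction) builds an n×n grid with a comb-shaped spanning tree, namely the top-row spine plus all vertical edges, and computes the maximum fundamental-cycle length by a BFS over the tree; for this comb variant LT = 2n, of the same order as the LT = 2n + 2 reported in Section VII for the tree of Fig. 6, while L0 = 4 for every n:

```python
from collections import deque

def comb_tree_LT(n):
    # nodes (i, j), 0 <= i, j < n; grid edges = horizontal + vertical
    nodes = [(i, j) for i in range(n) for j in range(n)]
    grid = set()
    for i in range(n):
        for j in range(n):
            if j + 1 < n: grid.add(((i, j), (i, j + 1)))
            if i + 1 < n: grid.add(((i, j), (i + 1, j)))
    # comb spanning tree: top-row spine plus all vertical edges
    tree = {e for e in grid if e[0][1] == e[1][1]}              # verticals
    tree |= {e for e in grid if e[0][0] == 0 and e[1][0] == 0}  # top row
    adj = {v: [] for v in nodes}
    for u, v in tree:
        adj[u].append(v); adj[v].append(u)
    def tree_dist(src, dst):
        # BFS distance inside the spanning tree
        dist = {src: 0}; q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                return dist[u]
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1; q.append(w)
        return None
    # the fundamental cycle of chord (u, v) is the tree path u -> v plus the chord
    return max(tree_dist(u, v) + 1 for (u, v) in grid - tree)

L0 = 4  # every unit square of the grid is a minimal cycle
LT_values = {n: comb_tree_LT(n) for n in (3, 4, 5)}
```

The noise threshold π/LT of Proposition 5.2 therefore shrinks linearly in n for the Tree-algorithm, while π/L0 = π/4 stays fixed.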
Fig. 5. On the left a ring graph, for which the two algorithms have the same performance. On the right, a grid graph.
VII. NUMERICAL RESULTS

In this section we provide a numerical comparison between the two approaches proposed in this paper. Specifically, in the experiments we simulate the Tree-algorithm and the Minimal cycles-algorithm on square grid graphs of size N = n², for n ranging from 3 up to 19. An example of a square grid graph is depicted in Fig. 6 (left panel), where n = 4 and, in turn, N = 16.

Fig. 6. On the left a square grid graph for n = 4, with the anchor marked. On the right the corresponding spanning tree used in simulations.
In all our simulations we set θ̄1 = 0, while, for i ∈ {2, . . . , N}, θ̄i is randomly sampled from a uniform distribution on [−π, π]. The values of the noises ε_e, e ∈ E, are also randomly sampled, in this case from a uniform distribution on [−ε̄, ε̄] with ε̄ = π/8. The simulation results are reported in Fig. 7 and Fig. 8. For each n the values we plot are averaged over 200 trials (a different θ̄ and a different set of noises are generated for each trial). The kind of spanning tree we use to run our algorithms is illustrated in Fig. 6 (right panel); there n = 4, but for different values of n the spanning tree is built similarly. Observe that, for the square grid graphs and the corresponding spanning trees we consider, we have L0 = 4 independently of n, and LT = 2n + 2.
In Fig. 7 we show the value of the estimation error e = (1/N)‖(θ̄ − θ̂)_{2π}‖² for both the Tree-algorithm and the Minimal cycles-algorithm; in Fig. 8 we plot the value e_K = (1/M)‖(K̄ − K̂)_{Im_Z A}‖², where, for X ∈ Z^M, (X)_{Im_Z A} represents the projection out of the Z-submodule spanned by the columns of A. This quantity is taken as a measure of the distance between the actual value K̄ and the estimates obtained through the algorithms. Notice that, since

π/L0 = π/4 > π/8 = ε̄,

it follows from Proposition 5.2 that the Minimal cycles-algorithm always correctly estimates K̄, and thus e_K = 0. On the contrary, observe that in the case of the Tree-algorithm e_K increases with the dimension of the graph. This is not surprising, since LT grows linearly with √N, and so intuitively the probability of a bad estimate becomes larger and larger. As expected, one can check from Fig. 7 that the Minimal cycles-algorithm outperforms the Tree-algorithm.
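A small Monte-Carlo experiment (our own, with arbitrary cycle lengths and trial count) illustrates why ε̄ = π/8 is always safe for minimal cycles of length L0 = 4, whereas long fundamental cycles occasionally violate the condition |r_h ε| < π needed for Eq. (7):

```python
import numpy as np

rng = np.random.default_rng(4)
eps_bar = np.pi / 8          # noise level used in the simulations
trials = 5000

def failures(cycle_len):
    # count trials in which |sum of noises along the cycle| >= pi,
    # i.e. in which -q_{2pi}(r_h eta) no longer recovers r_h K_bar exactly
    s = rng.uniform(-eps_bar, eps_bar, (trials, cycle_len)).sum(axis=1)
    return int(np.count_nonzero(np.abs(s) >= np.pi))

fail_minimal = failures(4)    # L0 = 4: |sum| <= 4 * pi/8 = pi/2 < pi, never fails
fail_long = failures(40)      # a long fundamental cycle: failures do occur
```

This matches the observed behavior of e_K: identically zero for the Minimal cycles-algorithm, growing with the graph size for the Tree-algorithm.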
Fig. 7. Average error e on the orientations (modulo 2π), in rad, as a function of the number of nodes in each dimension n, for the Tree-algorithm and the Minimal cycles-algorithm.

Fig. 8. Average error e_K on K̂ as a function of the number of nodes in each dimension n, for the Tree-algorithm and the Minimal cycles-algorithm.

VIII. CONCLUSIONS

This paper deals with the problem of distributively calibrating a network of cameras deployed in a plane. Two algorithms are proposed in order to reshape a suitable cost function, which is used by each camera to obtain an estimate of its actual orientation w.r.t. an external reference frame. Future research will focus on a deeper characterization of such estimates. Moreover, the more general case of cameras deployed in R³, and thus calibration in SO(3), will be addressed.

APPENDIX

Proof: [Proof of Lemma 2.1] We prove the thesis through the following steps: (a) Im A ⊆ ker R0; (b) ker R0 ⊆ ker R; (c) ker R ⊆ ker RT; (d) ker RT ⊆ Im A.
(a): Let h = (v1 v2 · · · vn) be a minimal cycle and put e_i = {v_i, v_{i+1}}. Then,

(r_h A)_v = Σ_i r_h(e_i) A_{e_i,v} = Σ_i A_{e_i,v_i} A_{e_i,v}.

Now, if v ∉ {v1, . . . , vn}, all addends of the above sum are equal to 0, and so is the sum. If instead v = v_j ∈ {v1, . . . , vn}, we obtain

Σ_i A_{e_i,v_i} A_{e_i,v} = A_{e_{j−1},v_{j−1}} A_{e_{j−1},v_j} + A_{e_j,v_j} A_{e_j,v_j} = −1 + 1 = 0.

(b): Take any cycle h and decompose it into minimal cycles, say h1, . . . , hl. It holds r_h = r_{h1} + · · · + r_{hl}, which immediately yields (b).
(c): Trivial.
(d): Let K ∈ ker RT. Fix a node i0 ∈ V, consider any other i ∈ V and let γ be the path in the spanning tree connecting i0 to i. Define now

θ_i := Σ_{e∈γ} (−1)^{(γ,e)} K_e,

where (γ,e) = 0 if e is oriented coherently with the path γ, and (γ,e) = 1 otherwise. Our aim is to show that there exists θ ∈ R^V such that K = Aθ, more precisely K_e = θ_{t(e)} − θ_{s(e)} for any e ∈ E. Consider the fundamental cycle h = (γs, e, γt), where γs is the path connecting i0 to s(e), and γt is the path connecting t(e) to i0. It follows that

0 = r_h K = Σ_{f∈γs} (−1)^{(γs,f)} K_f + Σ_{f∈γt} (−1)^{(γt,f)} K_f + (RT)_{h,e} K_e = θ_{s(e)} − θ_{t(e)} + K_e,

whence the thesis holds.

Proof: [Proof of Proposition 2.1] Given ET, choose e1 ∈ E \ ET such that it forms a cycle of minimal length, which is of course a minimal cycle. We then proceed by induction. Suppose we have constructed e1, . . . , e_{i−1} satisfying the properties of the Proposition. Choose now e_i as the edge in E \ (ET ∪ {e1, . . . , e_{i−1}}) which, added to ET ∪ {e1, . . . , e_{i−1}}, creates a cycle h of minimal possible length. Such a cycle h must necessarily be minimal. If not, indeed, it would be possible to decompose it as h = h′ + h″, where h′ and h″ are two cycles possessing at least one common edge e*. By the way e_i has been chosen, necessarily e* ∈ E \ (ET ∪ {e1, . . . , e_{i−1}}). This clearly implies that e* creates with ET ∪ {e1, . . . , e_{i−1}} a cycle shorter than h, precisely the one between h′ and h″ not containing e_i. This is absurd.

REFERENCES

[1] P. Barooah and J. P. Hespanha, "Distributed estimation from relative measurements in sensor networks," in Proc. of the 2nd Int. Conf. on Intelligent Sensing and Information Processing, Dec. 2005.
[2] P. Barooah, N. M. da Silva, and J. P. Hespanha, "Distributed optimal estimation from relative measurements for localization and time synchronization," in Distributed Computing in Sensor Systems, ser. Lect. Notes in Comput. Science, vol. 4026, Berlin: Springer, June 2006, pp. 266–281.
[3] P. Barooah and J. P. Hespanha, "Estimation on graphs from relative measurements: Distributed algorithms and fundamental limits," IEEE Contr. Syst. Mag., vol. 27, no. 4, pp. 57–74, Aug. 2007.
[4] A. Sarlette, "Geometry and symmetries in coordination control," Ph.D. dissertation, University of Liège, 2009. [Online]. Available: http://www.montefiore.ulg.ac.be/services/stochastic/pubs/2009/Sar09
[5] A. Sarlette and R. Sepulchre, "Consensus optimization on manifolds," SIAM J. Control and Optimization, vol. 58, no. 1, pp. 56–76, 2009. [Online]. Available: http://www.montefiore.ulg.ac.be/services/stochastic/pubs/2009/SS09a
[6] R. Tron and R. Vidal, "Distributed image-based 3-D localization of camera sensor networks," in Proc. IEEE Conf. on Decision and Control (CDC), 2009, pp. 901–908.
[7] G. Piovan, I. Shames, B. Fidan, F. Bullo, and B. D. O. Anderson, "On frame and orientation localization for relative sensing networks," Automatica, 2011.
[8] W. Russell, D. Klein, and J. Hespanha, "Optimal estimation on the graph cycle space," in Proc. American Control Conference (ACC), 2010, pp. 1918–1924.