2015 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
October 18-21, 2015, New Paltz, NY
HIGH RESOLUTION IMAGING OF ACOUSTIC REFLECTIONS WITH SPHERICAL MICROPHONE ARRAY

Lucio Bianchi, Marco Verdi, Fabio Antonacci, Augusto Sarti, Stefano Tubaro

Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano
Piazza Leonardo da Vinci 32 – 20133 Milano, Italy

ABSTRACT

This paper proposes a methodology for the accurate visualization of acoustic reflections in a room from acoustic measurements by a spherical microphone array. The goal is to provide insight into the relationship between architectural and acoustic features, a task that requires high-resolution acoustic images. In this contribution we achieve this goal by introducing two main modifications of existing approaches. First, we adopt an explicit model that takes into account the scattering due to the rigid spherical surface on which the microphone capsules are mounted. Second, we estimate the acoustic power coming from a grid of directions using a spectral analysis approach based on matching the covariance matrix of the array data. We conclude the manuscript by showing applications of the devised methodology to real-world cases.

Index Terms— Acoustic reflections, spherical microphone array, space-time audio processing.

1. INTRODUCTION

Early reflections have an important influence on the perception of sound in enclosed spaces [1, 2]. They can be regarded as a set of spatio-temporal events occurring at distinct time instants and originating from distinct spatial locations. Undesired reflections coming from walls and obstacles can represent a problem. For this reason, researchers in acoustics have long worked on the accurate determination of the direction of arrival of the early reflections in an impulse response (see, e.g., [3, 4, 5, 6] and references therein).
Furthermore, the determination of the direction of the early reflections is also important in applications of room equalization and correction [7, 8, 9], in the identification and control of industrial and aerospace noise sources [10], and for speech enhancement [11].

This paper describes a methodology for estimating the temporal and spatial distribution of reflections in an enclosed space. Particular attention is devoted to the data visualization paradigm, chosen carefully to provide an accessible and intuitive source of information. We base our analysis on the impulse responses acquired by exciting the environment with an impulsive sound source [12]. All the state-of-the-art imaging techniques use a microphone array to acquire several impulse responses simultaneously, steering the response of the array towards different directions. In this context, spherical arrays represent the most widely used geometry. In [3] an open spherical microphone array is used to compute the impulse responses over a grid of directions, and results are shown on a cartographic-like map. In [13, 14] a rigid spherical microphone array is described, and the steering is achieved through the decomposition of the incident sound field in terms of spherical harmonics. High-resolution images can be obtained within this framework by sub-space pre-processing [15].
In [16] a panoramic projection of the visual image of the enclosure is overlaid with the magnitude of the directional impulse response, obtained through beamforming based on the spherical harmonic decomposition. In [17, 18] this approach is refined by using measured directional array transfer functions to deconvolve the acquired signals. The main drawback of the latter two approaches is that a preliminary measurement session in an anechoic room is required to determine the response of each microphone in the array to sound waves impinging from different directions.

In this paper we adopt a rigid spherical microphone array configuration, and we formulate the directional imaging problem as a spectral analysis problem, representing the sound field as a spectrum of plane waves propagating in the environment. A state-of-the-art spectral analysis tool [19, 20], based on the covariance matrix of the array data, is used to obtain a high-resolution estimate of the plane-wave spectrum. Since the magnitude of the plane-wave spectrum is related to the sound intensity coming from a specific direction [21, Sec. 4.4], we obtain an estimate of the intensity of the reflections coming from each direction. Finally, we map the reflection intensity diagram onto a panoramic visual image of the environment.

The rest of the paper is structured as follows. In Section 2 we formulate the imaging problem as a spectral analysis problem and introduce the scattering model for the rigid sphere. In Section 3 we review the spectral estimation approach. In Section 4 we show some experimental results. Finally, Section 5 draws some conclusions.

2. DATA MODEL AND PROBLEM FORMULATION

In this section we introduce the data model adopted throughout the rest of the manuscript. We consider the sound field generated by a single sound source in a reflective environment and acquired by a spherical microphone array. Under the assumption of far-field propagation (i.e.
the excitation source and the reflectors are considered to be in the far field with respect to the microphone array [16]), we can parametrize the reflections according to their angle of incidence on the microphone array.

Let y(t) ∈ R^{M×1} denote the impulse response vector acquired by a spherical microphone array composed of M microphones. In Sec. 4 we discuss the impulse response measurement technique adopted in this work; however, we remark that the proposed approach is independent of the particular technique adopted. Our analysis is based on short-time segments of the acquired impulse responses, each weighted by a suitable window function. Let y(ω_l, t_n) ∈ C^{M×1} be the lth bin of the Short-Time Discrete Fourier Transform [22] of the segment centered at t_n, with ω_l = 2πf_l/F_s, where f_l and F_s are the temporal frequency and the sampling frequency, respectively. We denote by c the speed of sound.
In the following sections we show how to compute an accurate estimate of |s̃_u(ω_l, t_n)|² given the array data, and how we associate these estimates with a panoramic visual image of the environment under test in order to provide a high-resolution reflection intensity map. We remark that the estimation problem based on the data model in Eq. (4) is of widespread use in many application fields; in particular, it is widely studied in the context of spectral analysis. This allows us to resort to well-established spectral analysis approaches to solve it.
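As an illustration of the mapping step, the sketch below arranges a vector of per-direction power estimates into an equiangular panoramic image (azimuth on the horizontal axis, polar angle on the vertical axis, values in dB). This is a minimal sketch, not the authors' code: the grid sizes, the row-major grid ordering, and the dB floor are our own assumptions.

```python
import numpy as np

def reflection_image_db(sigma, n_theta, n_phi, floor_db=-160.0):
    """Arrange per-direction power estimates |s_u|^2 into a panoramic image.

    sigma   : vector of length n_theta * n_phi, ordered so that the
              azimuth index varies fastest (assumed grid ordering).
    Returns an (n_theta, n_phi) array of dB values: rows span the polar
    angle, columns span the azimuth, as in an equiangular projection.
    """
    grid = np.asarray(sigma, dtype=float).reshape(n_theta, n_phi)
    # Clip at a floor before taking the logarithm to avoid -inf for empty cells.
    return 10.0 * np.log10(np.maximum(grid, 10.0 ** (floor_db / 10.0)))
```

The returned matrix can be passed directly to any false-color plotting routine; the dB floor only bounds the dynamic range of the display.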
Figure 1: Notation adopted in this work. (a) Reference frame. (b) Angular grid.
3. PARAMETER ESTIMATION
With the above notation at hand, the signal acquired by the microphone array can be written as

\[
\mathbf{y}(\omega_l, t_n) = \sum_{q=1}^{Q} \mathbf{a}(\gamma_q, \omega_l)\, s_q(\omega_l, t_n) + \mathbf{v}(\omega_l, t_n), \tag{1}
\]

where v(ω_l, t_n) ∈ C^{M×1} is an additive noise term; γ_q = (θ, φ)_q ∈ Γ and s_q ∈ C are the unknown parameters of the qth reflection, i.e. the direction of arrival and the associated signal, respectively; and a(·) : Γ → C^{M×1} is the propagation function. We denote by θ the polar angle and by φ the azimuth angle. Figure 1a shows the reference frame.

The function a(γ_q, ω_l) encodes the modifications undergone by a plane-wave field component impinging on the microphone array from direction γ_q. More formally, the mth component of a(γ_q, ω_l) represents the pressure generated at the mth microphone by an impulsive excitation coming from direction γ_q. Denoting by x_m ∈ R³ the position vector of the mth microphone, located on the surface of a rigid sphere of radius x, and by d_q ∈ R³ the unit vector in the direction γ_q, we can write [23, Eq. (10)]

\[
\{\mathbf{a}(\gamma_q, \omega_l)\}_m = \frac{i}{(x\omega_l/c)^2} \sum_{\mu=0}^{\infty} \frac{i^{\mu}\,(2\mu+1)\, P_\mu(\langle \mathbf{x}_m, \mathbf{d}_q\rangle)}{h'_\mu(x\omega_l/c)}, \tag{2}
\]

where i is the imaginary unit; P_μ(·) is the Legendre polynomial of degree μ [24, Chap. 18]; ⟨x_m, d_q⟩ is the standard inner product in R³; and h'_μ(·) is the first derivative of the spherical Hankel function of the first kind and order μ [24, Sec. 10.47].

In this work we aim at estimating the parameters γ_q and |s_q|² associated with each reflection observed in the nth frame of the acquired signals. However, the total number of reflections Q and their associated directions of arrival are unknown a priori. Hence we adopt a non-parametric estimation method [25, Sec. 6.3] on a predefined angular grid. We denote by {γ̃_u}, u = (υ, ψ) ∈ Z², the elements of a grid that covers Γ, and we assume that each γ_q is close to a grid point, as depicted in Fig. 1b; in other words, there exist angles (θ_1, φ_1), ..., (θ_Υ, φ_Ψ) such that γ_q ≈ γ̃_u. We define U = ΥΨ as the total number of grid points. We let a_u(ω_l) = a(γ̃_u, ω_l) and

\[
\tilde{s}_u(\omega_l, t_n) =
\begin{cases}
s_q(\omega_l, t_n), & \text{if } \tilde{\gamma}_u = \gamma_q, \\
0, & \text{otherwise.}
\end{cases} \tag{3}
\]

Using this notation, the model in (1) can be rewritten as

\[
\mathbf{y}(\omega_l, t_n) = \mathbf{A}(\omega_l)\,\mathbf{S}(\omega_l, t_n) + \mathbf{v}(\omega_l, t_n), \tag{4}
\]

where we have defined A(ω_l) = [a_{(1,1)}(ω_l), ..., a_{(Υ,Ψ)}(ω_l)] and {S(ω_l, t_n)}_u = s̃_u(ω_l, t_n), so that A(ω_l) ∈ C^{M×U} and S(ω_l, t_n) ∈ C^{U×1}.
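For concreteness, the series in Eq. (2) can be evaluated numerically as follows. This is a hedged sketch rather than reference code: the truncation order is our own choice (the paper sums to infinity), and we assume the inner product is taken between unit vectors, i.e. it equals the cosine of the angle between the mth capsule direction and the arrival direction.

```python
import numpy as np
from scipy.special import eval_legendre, spherical_jn, spherical_yn

def steering_vector(mic_dirs, d, k_x, n_terms=30):
    """Rigid-sphere steering vector of Eq. (2).

    mic_dirs : (M, 3) unit vectors x_m / |x_m| of the capsule positions.
    d        : (3,) unit vector of the arrival direction gamma_q.
    k_x      : dimensionless product x * omega_l / c (radius times wavenumber).
    n_terms  : series truncation order (assumption; the paper sums to infinity).
    """
    cos_angle = mic_dirs @ d  # <x_m, d_q> for unit vectors
    a = np.zeros(mic_dirs.shape[0], dtype=complex)
    for mu in range(n_terms):
        # First derivative of the spherical Hankel function of the first kind
        h_prime = (spherical_jn(mu, k_x, derivative=True)
                   + 1j * spherical_yn(mu, k_x, derivative=True))
        a += (1j ** mu) * (2 * mu + 1) * eval_legendre(mu, cos_angle) / h_prime
    return 1j / k_x**2 * a
```

Because the summand depends on the direction of arrival only through ⟨x_m, d_q⟩, capsules placed symmetrically about the arrival direction receive identical steering-vector entries, which offers a quick sanity check of an implementation.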
In this section we briefly review the estimation method introduced in [19, 20] and show how it can be applied to the problem at hand. To simplify the notation, in this section we omit the dependency of the data on ω_l and t_n. Under the assumption of uncorrelated noise and sources, the covariance matrix of the array data can be written as [25, Eq. (6.4.3)]

\[
\mathbf{R} = E[\mathbf{y}\mathbf{y}^H] = \mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^H + \mathbf{V}, \tag{5}
\]

where the matrix Σ contains the powers of the individual signal components on its diagonal, i.e. Σ = diag([σ_1, ..., σ_U]) with σ_u = |s̃_u|², and V = E[vv^H] = diag([ε_1, ..., ε_M]), ε_m denoting the noise variance at the mth microphone. We observe that, in the considered problem, the source signals are in general not uncorrelated within a temporal frame; however, as shown in [19], the method reviewed in this section is robust to violations of the uncorrelated-sources assumption. By defining

\[
\check{\mathbf{A}} = \begin{bmatrix} \mathbf{A} & \mathbf{I}_{M\times M} \end{bmatrix}
\quad \text{and} \quad
\check{\boldsymbol{\Sigma}} = \begin{bmatrix} \boldsymbol{\Sigma} & \mathbf{0}_{U\times M} \\ \mathbf{0}_{M\times U} & \mathbf{V} \end{bmatrix}, \tag{6}
\]

where I and 0 denote the identity and zero matrices, respectively, we can rewrite (5) in the compact form

\[
\mathbf{R} = \check{\mathbf{A}}\,\check{\boldsymbol{\Sigma}}\,\check{\mathbf{A}}^H, \qquad \check{\boldsymbol{\Sigma}} \in \mathbb{C}^{W\times W}, \quad W = U + M. \tag{7}
\]
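The equivalence of (5) and (7) is easy to verify numerically. Below is a small synthetic check; the dimensions and the random values are arbitrary choices of ours, not data from the paper.

```python
import numpy as np

# Small synthetic example (dimensions and values are arbitrary choices).
rng = np.random.default_rng(0)
M, U = 4, 6
A = rng.standard_normal((M, U)) + 1j * rng.standard_normal((M, U))  # grid steering matrix
sig = rng.uniform(0.1, 1.0, U)   # source powers sigma_u = |s_u|^2
eps = rng.uniform(0.01, 0.1, M)  # noise variances epsilon_m

# Eq. (5): R = A Sigma A^H + V
R = (A * sig) @ A.conj().T + np.diag(eps)

# Eqs. (6)-(7): augmented form R = A_check Sigma_check A_check^H
A_check = np.hstack([A, np.eye(M)])
Sigma_check = np.diag(np.concatenate([sig, eps]))
R_check = A_check @ Sigma_check @ A_check.conj().T

assert np.allclose(R, R_check)
```

The augmentation simply treats each microphone's noise as an extra "source" whose steering vector is the corresponding column of the identity matrix.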
In the following we denote by σ̌_w the wth element on the diagonal of Σ̌, and by ǎ_w the wth column of Ǎ. The problem of estimating {σ̌_w}_{w=1}^W is solved by fitting the modeled covariance matrix R to its sample estimate yy^H, i.e. by finding the set {σ̌_w}_{w=1}^W that minimizes [19, Eq. (19)]

\[
\min_{\{\check{\sigma}_w\}} \; \left\| \mathbf{R}^{-1/2}\left(\mathbf{y}\mathbf{y}^H - \mathbf{R}\right) \right\|_F^2, \tag{8}
\]

where ‖·‖_F denotes the Frobenius norm for matrices. By expanding the cost function, the problem in (8) can be equivalently formulated as [19, Eq. (22)]

\[
\min_{\{\check{\sigma}_w\}} \; \operatorname{tr}\!\left(\mathbf{y}^H \mathbf{R}^{-1} \mathbf{y}\right) + \sum_{w=1}^{U+M} h_w^2\, \check{\sigma}_w, \tag{9}
\]

where h_w = ‖ǎ_w‖/‖y‖ and tr(·) denotes the matrix trace. The minimization problem in (9) is convex and has a global minimum [19, Sec. III-A]. In the following we review the iterative algorithm derived in [19, Sec. III-B] to solve (9). Consider the modified problem

\[
\min_{\{\check{\sigma}_w\},\, \mathbf{B}} \; \operatorname{tr}\!\left(\mathbf{B}^H \check{\boldsymbol{\Sigma}}^{-1} \mathbf{B}\right) + \sum_{w=1}^{U+M} h_w^2\, \check{\sigma}_w
\quad \text{s.t.} \quad \check{\mathbf{A}}\mathbf{B} = \mathbf{y}\mathbf{y}^H. \tag{10}
\]
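A compact numerical sketch of this covariance-fitting scheme follows the cyclic minimization of [19], together with the beamforming initialization σ̌_w^{(0)} = |ǎ_w^H y|²/‖ǎ_w‖⁴ and the relative-change stopping rule used in the paper. Variable names, the iteration cap, and the default threshold are our own choices, so treat this as a prototype rather than a reference implementation.

```python
import numpy as np

def spice_like_fit(A_check, y, tau=1e-12, max_iter=100):
    """Cyclic covariance-fitting iteration (after Stoica et al. [19]).

    A_check : (M, W) complex augmented steering matrix [A, I_M], W = U + M.
    y       : (M,) complex snapshot (one STFT bin of one frame).
    Returns the vector of W power estimates sigma_w.
    """
    M, W = A_check.shape
    col_norms = np.linalg.norm(A_check, axis=0)
    h = col_norms / np.linalg.norm(y)                   # h_w = ||a_w|| / ||y||
    # Beamforming initialization: sigma_w = |a_w^H y|^2 / ||a_w||^4
    sigma = np.abs(A_check.conj().T @ y) ** 2 / col_norms ** 4
    Y = np.outer(y, y.conj())                           # sample covariance y y^H
    for _ in range(max_iter):
        R = (A_check * sigma) @ A_check.conj().T        # R = A diag(sigma) A^H
        # B = Sigma A^H R^{-1} y y^H; rows of B are the beta_w
        B = (sigma[:, None] * A_check.conj().T) @ np.linalg.solve(R, Y)
        sigma_new = np.linalg.norm(B, axis=1) / h       # ||beta_w|| / h_w
        if np.linalg.norm(sigma_new - sigma) < tau * np.linalg.norm(sigma):
            sigma = sigma_new
            break
        sigma = sigma_new
    return sigma
```

On a noiseless snapshot equal to one of the grid steering vectors, the estimated power concentrates on the corresponding grid index after a handful of iterations, which is a convenient unit test for an implementation.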
The minimization over B for a fixed set {σ̌_w} is given by B̂ = Σ̌ Ǎ^H R^{-1} yy^H [19, Appendix A]. Substituting B̂ into (10) yields the original problem in (9); this allows us to conclude that the sets {σ̌_w} obtained from (9) and (10) must be identical. Upon defining B = [β_1, ..., β_W]^T, the problem (10) for a fixed set {β_w} can be rewritten as

\[
\min_{\{\check{\sigma}_w\}} \; \sum_{w=1}^{W} \frac{\|\beta_w\|^2}{\check{\sigma}_w} + \sum_{w=1}^{W} h_w^2\, \check{\sigma}_w. \tag{11}
\]

It is shown in [19] that the minimizer of (11) is σ̌_w = ‖β_w‖/h_w, w = 1, ..., W. Since the cost function in (10) is convex in both B and {σ̌_w}, a cyclic iterative minimization over B and {σ̌_w} leads to the global minimum. The ith iteration involves the following operations:

\[
\begin{aligned}
\mathbf{B}^{(i)} &= \check{\boldsymbol{\Sigma}}^{(i-1)}\, \check{\mathbf{A}}^H \left(\mathbf{R}^{(i-1)}\right)^{-1} \mathbf{y}\mathbf{y}^H, \\
\check{\sigma}_w^{(i)} &= \|\beta_w^{(i)}\| / h_w, \qquad w = 1, \ldots, W, \\
\mathbf{R}^{(i)} &= \check{\mathbf{A}}\, \check{\boldsymbol{\Sigma}}^{(i)}\, \check{\mathbf{A}}^H.
\end{aligned} \tag{12}
\]

Following [19], for the initialization of (12) we use the beamforming estimate σ̌_w^{(0)} = |ǎ_w^H y|² / ‖ǎ_w‖⁴. The algorithm is stopped when the condition ‖Σ̌^{(i)} − Σ̌^{(i−1)}‖ / ‖Σ̌^{(i−1)}‖ < τ is satisfied for a given threshold τ.

4. EXPERIMENTAL VALIDATION

In this section we show experimental results of the proposed high-resolution imaging approach. All the experiments are performed with mh acoustics' Eigenmike spherical microphone array, composed of M = 32 capsules mounted on a rigid sphere, whose exact locations can be found in [26]. The technique can be straightforwardly applied to any rigid spherical microphone array.

The angular grid {γ̃_u}, u = (1,1), ..., (Υ,Ψ), is designed to uniformly cover the spherical angular region of interest Γ. For this purpose, the angular axes θ and φ are uniformly sampled over Υ and Ψ points, respectively. In this setting, the double index u can be conveniently sorted as u = (υ − 1)Ψ + ψ, with υ = 1, ..., Υ, ψ = 1, ..., Ψ, and u = 1, ..., U.

In all the experiments we consider a sampling frequency Fs = 44.1 kHz and frames of length 1.5 ms. Impulse responses are measured using exponential sine sweeps [27]. The acquired impulse responses are processed with an octave pass-band filter centered at fc = 4 kHz, as recommended in [17], for a better identification of reflections. All the frequency bins in the passband of the filter contribute to the generation of a single acoustic image, combined through the product of their geometric and harmonic means as suggested in [28]. The threshold τ is set to 10^{-12}. Finally, the power estimates in Σ̌ are mapped onto a 2-D image using the equiangular projection [29] (i.e. the azimuth angle is uniformly mapped to the horizontal axis, while the polar angle is mapped to the vertical axis of a 2-D plot).

4.1. Validation in a controlled environment

The first set of experiments is conducted in an acoustically controlled environment (T60 = 50 ms), according to the geometry depicted in Fig. 2a. The acoustic scene consists of a sound source placed at a height of 1.3 m, the spherical microphone array placed at the same height at a distance of 1.6 m from the sound source, and a reflective panel placed 1 m behind the array.

Figure 2: Geometry and four relevant acoustic paths (Fig. 2a); impulse response recorded by the first capsule (Fig. 2b). Peaks are labeled according to the acoustic paths shown in Fig. 2a.

Reflections from the
ceiling and from one wall (the one behind the sound source) are highly damped through the use of absorbing panels. Reflections from the floor are damped by a thick carpet. Figure 2b shows the impulse response recorded by the first capsule of the microphone array. The peaks are labeled according to the acoustic paths shown in Fig. 2a. We observe that the locations of the peaks in Fig. 2b are consistent with the lengths of the acoustic paths in Fig. 2a; in particular, length1 = 1.6 m (∼4.7 ms), length2 = 3.05 m (∼8.8 ms), length3 = 3.6 m (∼10.5 ms), length4 = 4.4 m (∼12.8 ms). Acoustic paths reflected by the ceiling and the rear wall are not shown, because they are so damped that their associated peaks are not visible in the impulse response.

Fig. 3 shows four panoramic views of acoustic reflections computed from different time segments, each centered around one of the peaks identified in Fig. 2b and associated with an acoustic path depicted in Fig. 2a. Fig. 3a is computed around t = 4.67 ms, corresponding to the propagation time along the direct path 1 in Fig. 2a; we observe a sharp peak in the estimated energy distribution in the direction under which the sound source is seen by the microphone array, i.e. θ = 90°, φ = 0°. Fig. 3b is computed from the time segment centered at t = 8.77 ms, corresponding to the propagation time along the first-order acoustic path 2, originated at the sound source and reflected by the floor of the measurement chamber; we observe that this reflection is highly damped (due to the presence of the carpet) but still detectable with the proposed technique. Fig. 3c is computed around t = 10.7 ms, corresponding to the propagation time along acoustic path 3, i.e. the first-order path reflected by the panel. Finally, Fig. 3d is computed around t = 13.06 ms, corresponding to the propagation time along path 4, a second-order path involving both the floor and the panel.

4.2.
Experiments in a real-world acoustic environment

The last set of experiments is conducted in the Auditorium "Giovanni Arvedi", located in the Museo del Violino (Violin Museum), Cremona, Italy.¹ The hall is 14 m × 35 m, with a maximum height of 14 m and a volume of 5300 m³. The sound source is placed at the center of the stage area (at the position marked by the cross in Fig. 4) on a support of height 1.2 m, while the microphone array is placed at the position of the black dot in Fig. 4.

¹We thank Museo del Violino Fondazione Stradivari and its staff for their availability during the measurement session held in the Auditorium G. Arvedi.
Figure 3: Panoramic views of acoustic reflections in the acoustically controlled environment; the power estimates, expressed in dB, are mapped to a color scale. (a) t = 4.67 ms, peak 1. (b) t = 8.77 ms, peak 2. (c) t = 10.70 ms, peak 3. (d) t = 13.06 ms, peak 4.

Figure 4: Floor plan of the auditorium "Giovanni Arvedi", Museo del Violino, Cremona, Italy.
The distance between the source and the center of the microphone array is 10.4 m. The results of applying the proposed analysis methodology to impulse responses acquired with the setup described above are shown in Fig. 5. The two acoustic images are computed from two different time segments: the first, Fig. 5a, centered at t = 30.32 ms, corresponds to the direct acoustic path from the sound source to the center of the microphone array; the other, Fig. 5b, centered at t = 33.59 ms, corresponds to the propagation time along the acoustic path generated at the sound source and diffracted by the railing. The acquired impulse responses do not exhibit other relevant events from the frontal direction, apart from the direct and diffracted paths. This is due to the design of the environment, which does not contain planar surfaces between the source and the
seats, thus enabling only diffractive paths. We observe that the proposed methodology is able to discriminate acoustic events occurring within ∼3 ms of each other and coming from close directions (∼15°). These results demonstrate the direct applicability of the proposed approach to a challenging real-world scenario, in which the acoustic properties of the environment are not controlled.

Figure 5: Panoramic views of acoustic reflections in the auditorium "Giovanni Arvedi", Museo del Violino, Cremona, Italy; the power estimates, expressed in dB, are mapped to a color scale. (a) t = 30.32 ms, direct path. (b) t = 33.59 ms, railing diffraction.

5. CONCLUSIONS

In this paper we have proposed a methodology for the accurate visualization of acoustic reflections in a room. We have introduced two modifications of state-of-the-art schemes: an explicit model for the scattering due to the rigid sphere that hosts the microphone array, and the adoption of a spectral analysis approach to estimate the directional energy distribution impinging on the array. We have validated the proposed approach in an acoustically controlled environment and in an auditorium.
6. REFERENCES

[1] M. Barron, "The subjective effects of first reflections in concert halls - the need for lateral reflections," J. Sound Vib., vol. 15, no. 4, pp. 475–494, 1971.
[2] S. E. Olive and F. E. Toole, "The detection of reflections in typical rooms," J. Audio Eng. Soc., vol. 37, no. 7/8, pp. 539–553, July/Aug. 1989.
[3] B. N. Gover, J. G. Ryan, and M. R. Stinson, "Microphone array measurement system for analysis of directional and spatial variations of sound field," J. Acoust. Soc. Am., vol. 112, no. 5, pp. 1980–1991, Nov. 2002.
[4] ——, "Measurement of directional properties of reverberant sound fields in rooms using a spherical microphone array," J. Acoust. Soc. Am., vol. 116, no. 4, pp. 2138–2148, Apr. 2004.
[5] D. Khaykin and B. Rafaely, "Acoustic analysis by spherical microphone array processing of room impulse responses," J. Acoust. Soc. Am., vol. 132, no. 1, pp. 261–270, July 2012.
[6] F. Martellotta, "On the use of microphone arrays to visualize spatial sound field information," Applied Acoustics, vol. 74, pp. 987–1000, 2013.
[7] A. Canclini, D. Marković, F. Antonacci, A. Sarti, and S. Tubaro, "A room-compensated virtual surround system exploiting early reflections in a reverberant room," in Proc. European Signal Process. Conf. (EUSIPCO), Bucharest, Romania, Aug. 2012.
[8] A. Canclini, D. Marković, L. Bianchi, F. Antonacci, A. Sarti, and S. Tubaro, "A robust geometric approach to room compensation for sound field rendering," IEICE Trans. Fundamentals, vol. E97, pp. 1884–1892, 2014.
[9] M. A. Poletti, T. Betlehem, and T. D. Abhayapala, "Higher-order loudspeakers and active compensation for improved 2D sound field reproduction in rooms," J. Audio Eng. Soc., vol. 63, no. 1/2, pp. 31–45, Jan./Feb. 2015.
[10] B. N. Gover, "Directional measurement of airborne sound transmission paths using a spherical microphone array," J. Audio Eng. Soc., vol. 53, no. 9, pp. 787–795, Sept. 2005.
[11] F. Ribeiro, C. Zhang, D. A. Florêncio, and D. E. Ba, "Using reverberation to improve range and elevation discrimination for small array sound source localization," IEEE Transactions on Audio, Speech and Language Processing, vol. 18, no. 7, pp. 1781–1792, Sept. 2010.
[12] S. Müller and P. Massarani, "Transfer-function measurements with sweeps," J. Audio Eng. Soc., vol. 49, no. 6, pp. 443–471, 2001.
[13] T. D. Abhayapala and D. B. Ward, "Theory and design of high order sound field microphones using spherical microphone array," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Process. (ICASSP), 2002.
[14] J. Meyer and G. Elko, "A highly scalable spherical microphone array based on an orthonormal decomposition of the soundfield," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Process. (ICASSP), 2002.
[15] N. Epain and C. T. Jin, "Super-resolution sound field imaging with sub-space pre-processing," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Process. (ICASSP), 2013.
[16] A. O'Donovan, R. Duraiswami, and D. Zotkin, "Imaging concert hall acoustics using visual and audio cameras," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Process. (ICASSP), 2008.
[17] A. Farina, A. Amendola, A. Capra, and C. Varani, "Spatial analysis of room impulse responses captured with a 32-capsules microphone array," in Proc. AES 130th Conv., 2011.
[18] A. Farina, A. Capra, L. Chiesi, and L. Scopece, "A spherical microphone array for synthesizing virtual directive microphones in live broadcasting and in post production," in Proc. AES 40th Int. Conf., 2010.
[19] P. Stoica, P. Babu, and J. Li, "New method of sparse parameter estimation in separable models and its use for spectral analysis of irregularly sampled data," IEEE Transactions on Signal Processing, vol. 59, no. 1, pp. 35–47, Jan. 2011.
[20] ——, "SPICE: A sparse covariance-based estimation method for array processing," IEEE Transactions on Signal Processing, vol. 59, no. 2, pp. 629–638, Feb. 2011.
[21] F. Fahy, Sound Intensity, 2nd ed. London, UK: E & FN Spon, 1995.
[22] J. B. Allen, "Short term spectral analysis, synthesis, and modification by discrete Fourier transform," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 25, no. 3, pp. 235–238, 1977.
[23] D. N. Zotkin, R. Duraiswami, and N. A. Gumerov, "Plane-wave decomposition of acoustical scenes via spherical and cylindrical microphone arrays," IEEE Transactions on Audio, Speech and Language Processing, vol. 18, no. 1, pp. 2–16, Jan. 2010.
[24] F. W. J. Olver, Ed., NIST Handbook of Mathematical Functions. New York, NY, USA: National Institute of Standards and Technology, 2010.
[25] P. Stoica and R. Moses, Spectral Analysis of Signals. Upper Saddle River, NJ, USA: Prentice Hall, 2004.
[26] Em32 Eigenmike microphone array release notes, mh acoustics, Apr. 2013.
[27] A. Farina, "Simultaneous measurement of impulse response and distortion with a swept-sine technique," in Proc. AES 108th Conv., Feb. 2000.
[28] M. R. Azimi-Sadjadi, A. Pezeshki, L. L. Scharf, and M. Hohil, "Wideband DOA estimation algorithms for multiple target detection and tracking using unattended acoustic sensors," in Proc. Defense and Security, 2004.
[29] J. P. Snyder, Flattening the Earth: Two Thousand Years of Map Projections. Chicago, IL, USA: University of Chicago Press, 1993.