J Supercomput (2012) 62:673–680 DOI 10.1007/s11227-010-0515-y
Optical spatial image processor based on aliasing of pseudo-periodic sampling Alexander Zlotnik · Melania Paturzo · Pietro Ferraro · Zeev Zalevsky
Published online: 21 December 2010 © Springer Science+Business Media, LLC 2010
Abstract In this paper, we present a new configuration for a real-time spatial image processor based upon a spatially incoherent imaging setup in which a grating is attached to the object plane. By properly adjusting the magnification of the imaging system to the spatial period of the grating and to the sampling grid of the camera, the aliasing effect produced by the non-uniform digital sampling realizes a tunable spectral distribution that is applied over the spectrum of the object. A preliminary numerical demonstration of the operation principle is provided.

Keywords Optical filtering · Aliasing · Pseudo-periodic sampling

A. Zlotnik · Z. Zalevsky, School of Engineering, Bar-Ilan Univ., Ramat-Gan 52900, Israel. e-mail: [email protected]
M. Paturzo · P. Ferraro, CNR, Istituto Nazionale di Ottica Applicata, Sez. Napoli, Via Campi Flegrei 34, 80078, Pozzuoli, Italy

1 Introduction and motivation

Optical realization of a real-time spatial processor is an important task, since optics has a major advantage over electronics in its capability to process 2-D information in parallel. Different configurations have tried to realize such a processor based upon various dynamic electro-optical modulation devices, such as an SLM, usually positioned in an optical Fourier-transforming configuration [1–4]. In other approaches, all-optical modulation was obtained via different non-linear effects, such as the photorefractive effect, which was used for mixing the input data and the optical filter distributions together [5–10]. The aim of this paper is to build an ultra-fast tunable device affecting the image spectral distribution by utilizing non-uniform sampling.
The optical system includes a fixed grating attached to the object plane and an imaging system that is either defocused or has a fixed, spatially non-uniform transmission mask attached to the aperture plane of the imaging lens. Unlike other conventional filtering configurations that involve imaging [11], the proposed system does not require any time integration or movement of elements. The setup is very simple and allows a tunable spatial processor to be applied to the image of the object merely by changing the sampling parameters. Briefly, the idea is to attach a fixed grating to the object in an imaging setup and to use the aliasing effect generated by the sampling in the detection array, such that the differently replicated spectral (spatial spectrum) information realizes different spectral functions multiplying the spectrum of the input object. Contrary to the previously demonstrated approach [12], where the tunability was achieved using a LiNbO3 grating or a grating having variable Fourier coefficients, here we exploit the non-uniform sampling performed by the camera in order to realize various spatial spectral functions. In Sect. 2, we present the mathematical derivation of the proposed approach. In Sect. 3, we provide a preliminary numerical demonstration. Section 4 concludes the paper.
2 Mathematical derivation

Let us perform a 1-D analysis; the generalization to the 2-D case is straightforward. A pseudo-periodic sampling is obtained by our imaging camera in the following way: we virtually divide the pixels of the detection array into groups of N pixels, called "macro-pixels". Within each macro-pixel, we perform several groupings of pixels. This grouping is the same for each macro-pixel, and it can vary with time. For example, assuming we decided to have a macro-pixel of N = 20 pixels, then in each of the macro-pixels we can group the pixels such that there are only 5 readouts, corresponding to the summation of pixels 1 to 5, pixels 6 to 8, pixels 9 to 12, pixels 13 to 18, and pixels 19 to 20. As previously mentioned, this grouping will be changed with time, and we will show that each grouping realizes a different spatial spectral function over the input image.

Let us write this mathematically. The pixels of the detection array sample the analogue intensity distribution arriving at the array; let us denote this distribution by h̄(x). Assume that the size of each pixel is Δx and that the j-th grouping inside the macro-pixel is read out at an offset of k_j pixels. We also denote by T the total number of groupings per macro-pixel and by L the total number of macro-pixels in the detection array (L × N is the total number of pixels in the array). In this case, one may write

\bar{h}_s(x) = \bar{h}(x) \sum_{j=1}^{T} \delta(x - k_j \Delta x) \otimes \sum_{l=1}^{L} \delta(x - l N \Delta x)    (1)

where ⊗ denotes convolution and δ is the Dirac delta function.
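To make the readout model concrete, the following sketch (in Python/NumPy rather than the Matlab used for the simulations of Sect. 3) reduces each macro-pixel of raw detector samples to a few group sums; the signal and the group boundaries are illustrative placeholders reproducing the N = 20 example above.

```python
import numpy as np

def grouped_readout(pixels, N, boundaries):
    """Pseudo-periodic readout: every macro-pixel of N detector pixels is
    reduced to T group sums; the grouping is identical in all macro-pixels.
    boundaries lists the group edges inside one macro-pixel, e.g.
    [0, 5, 8, 12, 18, 20] reproduces the text example (pixels 1-5, 6-8,
    9-12, 13-18, 19-20).  Returns an (L, T) array of group sums."""
    L = pixels.size // N
    macro = pixels[:L * N].reshape(L, N)
    return np.stack([macro[:, a:b].sum(axis=1)
                     for a, b in zip(boundaries[:-1], boundaries[1:])], axis=1)

# Hypothetical usage with the N = 20 example from the text
raw = np.random.rand(100 * 20)                     # 100 macro-pixels of 20 pixels each
readout = grouped_readout(raw, 20, [0, 5, 8, 12, 18, 20])
print(readout.shape)                               # (100, 5): T = 5 readouts per macro-pixel
```

Changing the group boundaries from frame to frame is the only "moving part" of the processor; everything else in the optical path stays fixed.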
If we Fourier transform the last expression, while assuming that L ≫ 1, we obtain:

\bar{H}_s(\mu) \approx \bar{H}(\mu) \otimes \left[ \sum_{j=1}^{T} \exp(-2\pi i \mu k_j \Delta x) \cdot \sum_{l=1}^{L} \delta\!\left(\mu - \frac{l}{N \Delta x}\right) \right]
             \approx \bar{H}(\mu) \otimes \sum_{l=1}^{L} \sum_{j=1}^{T} \exp\!\left(-\frac{2\pi i l k_j}{N}\right) \delta\!\left(\mu - \frac{l}{N \Delta x}\right)
             \approx \sum_{l=1}^{L} \sum_{j=1}^{T} \exp\!\left(-\frac{2\pi i l k_j}{N}\right) \bar{H}\!\left(\mu - \frac{l}{N \Delta x}\right)    (2)

where H̄ and H̄_s are the Fourier transforms of h̄ and h̄_s, respectively. This means that the spectrum H̄ is replicated at every spectral distance of 1/(NΔx) and is multiplied by a spectral transmission function that strongly depends on the grouping division k_j (which we intend to modify with time).
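The tunability becomes apparent if the replica weight Σ_j exp(−2πi l k_j / N) is tabulated for a few groupings. A small sketch, using N = 27 and the grouping offsets that appear later in Sect. 3 (the choice of replica orders is illustrative):

```python
import numpy as np

def replica_weights(k, N, l_orders):
    """Weight applied to the l-th spectral replica in Eq. (2):
    sum over j of exp(-2*pi*i*l*k_j / N)."""
    k = np.asarray(k)
    return np.array([np.exp(-2j * np.pi * l * k / N).sum() for l in l_orders])

N = 27
orders = np.arange(5)                               # replica orders l = 0..4
for k in ([1, 3, 13], [1, 3, 14], [1, 3, 22]):      # groupings used in Sect. 3
    print(k, np.abs(replica_weights(k, N, orders)).round(2))
```

Each grouping produces its own set of complex weights, and it is exactly this tunable set that multiplies the replicated spectrum.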
Afterward, we will introduce an averaging over all k_j values and define the resulting output as a discrete signal, sampled with an interval of NΔx. A sliding average (with rect(x/(NΔx))) of an arbitrary signal t(x) may be described in the Fourier domain as

T_{av}(\mu) = T(\mu) \cdot \mathrm{sinc}(\mu N \Delta x)    (3)

where T is the Fourier transform of the signal t and T_av is the Fourier transform of the sliding-averaged signal. Sampling the result with a sampling interval of NΔx yields a discrete signal t_av[n] having the Fourier transform

T_{disc}(\Omega = \mu N \Delta x) = \sum_{p=-\infty}^{\infty} T_{av}\!\left(\mu - \frac{p}{N \Delta x}\right), \qquad |\Omega| \le 0.5    (4)
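Equation (3) is easy to verify numerically on a fine grid such as the one used in Sect. 3; the sketch below (an illustrative NumPy test with an arbitrary band-limited signal, not part of the original simulation) compares the spectrum of a box-averaged signal with the sinc-multiplied spectrum.

```python
import numpy as np

# Check of Eq. (3): averaging over a window of width N*dx multiplies the
# spectrum by sinc(mu*N*dx); np.sinc(x) = sin(pi*x)/(pi*x).
dx, N, sub = 10e-6, 27, 27                 # pixel pitch, macro-pixel size, sub-samples per pixel
d = dx / sub                               # fine ('continuous') grid interval, ~0.37 um
M = 2 ** 15
mu = np.fft.fftfreq(M, d=d)

# band-limited random test signal, |mu| <= 1/(N*dx)
spec = (np.random.randn(M) + 1j * np.random.randn(M)) * (np.abs(mu) <= 1 / (N * dx))
t = np.fft.ifft(spec).real

W = N * sub                                # averaging window in fine-grid samples (odd)
box = np.zeros(M)
box[:W // 2 + 1] = 1.0 / W                 # zero-phase (centred) box, wrapped around
box[-(W // 2):] = 1.0 / W
t_av = np.fft.ifft(np.fft.fft(t) * np.fft.fft(box)).real   # circular sliding average

lhs = np.fft.fft(t_av)
rhs = np.fft.fft(t) * np.sinc(mu * N * dx)
print(np.max(np.abs(lhs - rhs)) / np.max(np.abs(rhs)))      # tiny: Eq. (3) holds in-band
```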
Therefore, we can write the full expression for the discrete Fourier transform of h̄_d[n] (which is the sampled version of h̄_s) as follows:

\bar{H}_s(\mu) = \sum_{l=1}^{L} \sum_{j=1}^{T} \exp\!\left(-\frac{2\pi i l k_j}{N}\right) \bar{H}\!\left(\mu - \frac{l}{N \Delta x}\right) \cdot \mathrm{sinc}(\mu N \Delta x)

\bar{H}_d(\Omega = \mu N \Delta x) = \sum_{p=-\infty}^{\infty} \bar{H}_s\!\left(\mu - \frac{p}{N \Delta x}\right), \qquad |\Omega| \le 0.5    (5)
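Taken literally, the first line of (5) can be transcribed as a short function; here H̄ is assumed to be available as a callable, and the Gaussian used below is only a stand-in for the band-limited image spectrum introduced next.

```python
import numpy as np

def H_s(H_bar, k, N, L, dx, mu):
    """First line of Eq. (5): aliasing-weighted spectral replicas of the
    analogue spectrum H_bar (a callable), multiplied by the sliding-average
    sinc.  k lists the grouping offsets k_j inside one macro-pixel."""
    k = np.asarray(k)
    out = np.zeros_like(mu, dtype=complex)
    for l in range(1, L + 1):                       # replica orders, as in the text
        w = np.exp(-2j * np.pi * l * k / N).sum()   # aliasing weight of order l
        out += w * H_bar(mu - l / (N * dx))
    return out * np.sinc(mu * N * dx)

# Hypothetical usage: a Gaussian stand-in for the band-limited spectrum H_bar
H_demo = lambda mu: np.exp(-(mu * 27 * 10e-6) ** 2)
mu = np.linspace(0, 4 / (27 * 10e-6), 500)
print(np.abs(H_s(H_demo, [1, 3, 13], N=27, L=243, dx=10e-6, mu=mu)).max())
```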
Now we shall introduce the distribution h̄(x), or rather its Fourier transform H̄(μ). It results from the imaging system, whose optical transfer function (OTF) is H(μ), having as its input the object s(x) with an attached grating g(x).
The Fourier transform of the image of the object s(x), generated with the grating attached to the object, equals

\bar{H}(\mu) = H(\mu) \cdot \int s(x)\, g(x) \exp(2\pi i \mu x)\, dx    (6)
All spatial distributions are intensities. In case defocused imaging is used and the aperture of the lens has a dimension of 2b, the 1-D OTF H has the form of a sinc function:

H(\mu) \approx \mathrm{sinc}\!\left(\frac{4 W_m Z_i \mu}{b}\right)    (7)

The coefficient W_m determines the severity of the defocusing and is given by

W_m = \frac{b^2}{2}\left(\frac{1}{Z_i} + \frac{1}{Z_o} - \frac{1}{F}\right)    (8)

where Z_o and Z_i are the distances between the imaging lens and the object and between the imaging lens and the image, respectively, and F is the focal length. When the imaging condition is fulfilled, W_m equals zero. Since g(x) is a grating, we express it as a Fourier series:

g(x) = \sum_{k=-K}^{K} a_k \exp(2\pi i k \mu_0 x)    (9)
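A minimal sketch of (7) and (8) is given below; the geometry values are hypothetical, since the paper does not list F, b, Z_i or Z_o explicitly.

```python
import numpy as np

def defocused_otf(mu, b, Z_i, Z_o, F):
    """Eqs. (7)-(8): 1-D OTF of a defocused incoherent system with aperture 2b."""
    W_m = (b ** 2 / 2) * (1 / Z_i + 1 / Z_o - 1 / F)   # defocus coefficient, Eq. (8)
    return np.sinc(4 * W_m * Z_i * mu / b)             # Eq. (7)

# Hypothetical geometry: F = 10 cm, unit magnification (Z_o = 2F) and a
# detector placed slightly off the ideal image plane (Z_i != 2F).
mu = np.linspace(0, 5e3, 6)                            # spatial frequency [1/m]
print(defocused_otf(mu, b=5e-3, Z_i=0.19, Z_o=0.2, F=0.1).round(3))
```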
Therefore, substitution of (9) into (6) yields

\bar{H}(\mu) = \sum_{k=-K}^{K} a_k H(\mu) S(\mu - k \mu_0)    (10)
where S(μ) is the Fourier transform of s(x), and 2K + 1 is the total number of replications generated by the grating g(x), i.e. the number of its non-zero Fourier coefficients. In our case, we choose the first zero of the OTF of the imaging system and the fundamental frequency of the grating attached to the input object as μ_0 = 1/(NΔx). We also assume that the bandwidth of S(μ) is limited to 1/(NΔx). Regarding the specific parameters of the imaging system: setting the first zero of the OTF imposes a constraint on 4W_m Z_i/b in (7), while we remain free to choose appropriate F, Z_i, Z_o and b. Choosing for convenience a magnification factor of 1 leads, of course, to Z_o = 2F.
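Equation (10) is a plain replication-and-weighting of the object spectrum; a sketch with placeholder Fourier coefficients a_k and a flat OTF (both illustrative, not taken from the paper) reads:

```python
import numpy as np

def image_spectrum(S, H, mu, mu0, a):
    """Eq. (10): OTF-weighted sum of grating-shifted copies of the object
    spectrum S.  S and H are callables; a[k + K] holds the Fourier
    coefficient a_k of the grating for k = -K..K."""
    K = (len(a) - 1) // 2
    return sum(a[k + K] * H(mu) * S(mu - k * mu0) for k in range(-K, K + 1))

# Hypothetical usage: narrow Gaussian object spectrum, flat OTF, equal a_k
mu0 = 1 / (27 * 10e-6)                       # grating fundamental frequency = 1/(N*dx)
S = lambda mu: np.exp(-(mu / (0.4 * mu0)) ** 2)
H = lambda mu: np.ones_like(mu)
a = np.ones(2 * 12 + 1) / 25                 # 2K + 1 = 25 equal coefficients (placeholder)
mu = np.linspace(-15 * mu0, 15 * mu0, 2001)
print(image_spectrum(S, H, mu, mu0, a).max())
```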
3 Numerical simulation

The simulations were done in Matlab. We started by taking a 1-D camera with a pixel pitch of 10 µm, which serves as the aforementioned Δx. Each camera pixel was simulated with 27 samples of the 'continuous' numerical grid; accordingly, the 'continuous' grid was taken with a sample interval of 0.37 µm. The simulated 1-D "fill factor" of each pixel was 70%. We took the macro-pixel size N to be 27.
Fig. 1 Schematic sketch of the proposed configuration
Fig. 2 The input signal
Fig. 3 The Fourier Transform of the grating
Inside each macro-pixel, we took 3 samples at non-uniform locations (T = 3) and averaged them to obtain one sample of the resulting signal. The number of camera "macro-pixels" was set to L = 243, so the number of camera pixels was L × N = 6561. Therefore, the maximal input bandwidth, which equals the periodicity of the grating, is μ_0 = 1/(NΔx) = 3.7 × 10^3 [1/m]. We simulated an optical imaging system with a magnification factor of 1. A schematic sketch of the simulated configuration appears in Fig. 1. The input signal was chosen as a random band-limited signal; it is zeroed at the boundaries to prevent sampling artifacts, as shown in Fig. 2.
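For bookkeeping, the parameters quoted above and the quantities derived from them can be collected in a few lines (a consistency sketch, not the authors' Matlab code):

```python
dx = 10e-6            # camera pixel pitch
sub = 27              # 'continuous'-grid samples per camera pixel
fill = 0.70           # simulated 1-D fill factor (listed for completeness)
N, T, L = 27, 3, 243  # macro-pixel size, samples per macro-pixel, number of macro-pixels

d = dx / sub                      # continuous-grid interval
n_pixels = L * N                  # total number of camera pixels
mu0 = 1 / (N * dx)                # grating periodicity = maximal input bandwidth
print(d, n_pixels, round(mu0))    # ~0.37e-6 m, 6561, ~3.7e3 [1/m]
```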
Fig. 4 Fourier transform of the defocused OTF (dashed) and the grating (solid)
Fig. 5 The camera’s output with interleaved zeros, before performing the kj related averaging of pixels. One may see the different weights of replicas due to different kj : (a) kj = [1, 3, 13]. (b) kj = [1, 3, 14]. (c) kj = [1, 3, 22]
Fig. 6 The resulting spectral transmission of the realized spatial distribution due to applying different k_j-related averagings of pixels
The grating's spectrum is shown in Fig. 3, with K set to 12. In Fig. 4, we show the spectrum of the grating (in blue) along with the defocused OTF (in red). The defocusing factor was chosen to be 4W_m Z_i/b = 2b · Z_i · (1/Z_i − 1/(2F)) = 8 × 10^{-4} [m], such that the zeros of the sinc generated in (7) fall at the centers of the spectral replicas. The implications for the imaging system are setting Z_o = 2F and choosing appropriate F, Z_i and b; here we have two degrees of freedom and one constraint. The signal captured by the camera undergoes a virtual multiplication by a train of impulse functions with non-uniformly imposed locations inside each group of N samples, according to k_j. The resulting spectral replications can be seen in Fig. 5. Note that for different choices of k_j one obtains different weight distributions for the replicas; three different examples are shown in Figs. 5(a), (b), and (c). Finally, we average the T samples and obtain the final signal. In Fig. 6, we show the resulting spectral distribution of the ratio between the Fourier transform of the input signal S(μ) and the Fourier transform of the output signal. One may see that three different spectral responses are realized.
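As a quick check of the defocus arithmetic, one may fix hypothetical values of F and b and solve the constraint for Z_i; only the value 8 × 10^{-4} m is taken from the text, the rest is illustrative.

```python
# With Z_o = 2F (unit magnification), the constraint of the text,
# 4*W_m*Z_i/b = 2*b*Z_i*(1/Z_i - 1/(2F)) = c with c = 8e-4 m,
# rearranges to Z_i = 2F - c*F/b.
F, b, c = 0.1, 5e-3, 8e-4                       # focal length and half-aperture are assumed
Z_i = 2 * F - c * F / b
print(Z_i)                                      # 0.184 m for these example values
print(2 * b * Z_i * (1 / Z_i - 1 / (2 * F)))    # recovers c = 8e-4 m
```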
4 Conclusions

In this paper, we have presented a new approach for the realization of a 2-D time-varying spatial image processor, and numerical simulations have validated the presented concept. The concept consists of attaching a fixed grating to an object, which is the input image to be processed. The input plane is then imaged through a defocused imaging lens onto a pixelated detection array. The array is divided into groups of macro-pixels, and each macro-pixel is divided into several non-uniform sub-groups, effectively resulting in pseudo-periodic non-uniform sampling. The non-uniform sampling in each macro-pixel is modified with time simply by different summing and grouping of the pixels of each macro-pixel. Each such grouping realizes a different spectral modification of the image. The magnification of the imaging lens should be adjusted so that the aliasing generated by the sampling matches the period of the static grating attached to the input object.
References

1. Poon T, Schilling BW, Wu MH, Shinoda K, Suzuki Y (1993) Real-time two-dimensional holographic imaging by using an electron-beam-addressed spatial light modulator. Opt Lett 18:63–65
2. Gaeta CJ, Mitchell PV, Pepper DM (1992) Optical real-time defect-enhancement diagnostic system. Opt Lett 17:1797–1799
3. Huang G, Jin G, Wu M, Yan Y (1997) Developed, binary, image processing in a dual-channel, optical, real-time morphological processor. Appl Opt 36:5675–5681
4. Wang ZQ, Cartwright CM, Gillespie WA (1994) Real-time intensity correlation with a synthetic discriminant function filter. J Opt Soc Am B 11:1842–1847
5. Sreedhar PR, Sirohi RS (1995) Real-time image processing with a cat conjugator. Appl Opt 34:333–337
6. Joseph J, Kamra K, Singh K, Pillai PKC (1992) Real-time image processing using selective erasure in photorefractive two-wave mixing. Appl Opt 31:4769–4772
7. Shen XA, Kachru R (1993) Time-domain optical memory for image storage and high-speed image processing. Appl Opt 32:5810–5815
8. Chang TY, Hong JH, Yeh P (1990) Spatial amplification: an image-processing technique using the selective amplification of spatial frequencies. Opt Lett 15:743–745
9. Babbitt WR, Mossberg TW (1995) Spatial routing of optical beams through time-domain spatial-spectral filtering. Opt Lett 20:910–912
10. Marom DM, Panasenko D, Sun P, Fainman Y (1999) Real time spatial-temporal signal processing by wave-mixing with cascaded second-order nonlinearities. In: Optics in Computing, OSA Technical Digest. Optical Society of America, paper OThC2
11. Mendlovic D, Farkas D, Zalevsky Z, Lohmann AW (1998) High frequency enhancement via super resolution optical system for temporally restricted objects. Opt Lett 23:801–803
12. Paturzo M, Ferraro P, Zlotnik A, Zalevsky Z (2009) Aliasing based incoherent optical spatial image processor. Appl Opt 48:5537–5545