Computers and Electrical Engineering 30 (2004) 17–43 www.elsevier.com/locate/compeleceng

Lossless binary image compression using logic functions and spectra

Bogdan J. Falkowski *

School of Electrical and Electronic Engineering, Nanyang Technological University, Block S1, Nanyang Avenue, Singapore 639798, Singapore

Received 4 July 2001; received in revised form 31 October 2001; accepted 6 July 2002

Abstract

A lossless compression of images using coding schemes and patterns is proposed; these include minterm, cube and coordinate data coding; Walsh, triangular and Reed–Muller weights based patterns; Reed–Muller spectra; and a reference row technique. The experimental results indicate that the technique is fairly efficient when compared with other methods based on representations of logic functions. © 2003 Elsevier Ltd. All rights reserved.

Keywords: Logic functions; Boolean functions; Minterms; Cubes; Ordered binary decision diagrams; Reed–Muller transform

1. Introduction

Compression of binary images is required to reduce transmission time and storage requirements, as an enormous number of such images is processed daily by airlines, banks, credit card suppliers, insurance companies, government agencies and offices. Digital image compression is the efficient coding of digital images to reduce storage requirements and transmission time [16]. Compression refers to the mapping from source symbols into fewer target symbols, whereas decompression refers to the transformation from the target symbols back into the source symbols representing the original information. Frequently, lossless compression of binary images is necessary, so that no information in the original images is lost.

Various methods of lossless image compression using compact representations of logic functions have been developed. Each block of an image is transformed into a Boolean function and

* Tel.: +65-790-4521; fax: +65-791-2687. E-mail address: [email protected] (B.J. Falkowski).

0045-7906/$ - see front matter  2003 Elsevier Ltd. All rights reserved. doi:10.1016/S0045-7906(03)00035-1


compressed by minimizing it using ESPRESSO [3] in [2,5] or the Quine–McCluskey algorithm [23] in [21]. The fixed polarity Reed–Muller (FPRM) transform [6,7,11] was used to compress images in [12]. The algorithm in [14] finds an ordered binary decision diagram (OBDD) which represents the image exactly and then codes the OBDD efficiently.

In this paper, we propose a novel technique for the lossless compression of binary images based on logic and spectral methods. A short description of this technique was presented for the first time in [10], and a similar technique applied to gray scale images was described in [9]. In the new technique, a two-dimensional differencing operation is first applied to the image. The difference image is segmented and classified into all-black, all-white or mixed blocks, and grouped into non-overlapping regions of all-white and mixed blocks. The mixed blocks in the non-overlapping regions are represented using a variable block-size segmentation and coding scheme. The method is more general than previous techniques based only on the FPRM transform [12], minterm coding [17] and logic minimization [2,5,14,21], and it introduces new types of codings and patterns that include minterm, cube and coordinate data coding; Walsh, triangular and Reed–Muller weights based patterns; the generalized partially-mixed-polarity Reed–Muller (GPMPRM) transform [8,24]; and the reference row technique [9,10].

In this paper, first all important concepts in basic logic and spectral terminology, as well as the different types of Reed–Muller expansions and their basic properties used in our compression scheme, are discussed. Typical coding of reduced representations is also discussed. This is followed by a review of previous related work in the area of binary image compression using either logic based or spectral methods. The next section describes in detail a novel technique for lossless compression of binary images. This description is followed by decompression.
This new lossless compression technique has been implemented in the C language and tested on the set of CCITT standard facsimile images [4]. A comparison of experimental results with other logic based compression techniques is also given.

2. Basic logic and spectral definitions

2.1. Logic switching functions

Logic switching functions operating on Boolean (binary) variables are used in this article.

Definition 1. A completely specified n-variable Boolean switching function F is a mapping $F : B^n \to B$, where $B = \{0, 1\}$.

Definition 2. A literal $\dot{x}_i$ is a variable $x_i$ or the complement $\bar{x}_i$ of a variable $x_i$.

Definition 3. A product term is a single literal or a logical product of two or more literals. In a normal product term, no literal appears more than once.

Definition 4. An n-variable minterm is a normal product term with n literals. It can be represented by an n-bit integer, where each bit indicates whether the corresponding variable is asserted or negated. The outputs of false and true minterms are mapped to 0 and 1, respectively. The number of minterms of an n-variable Boolean switching function is $2^n$.


Definition 5. A cube is an m-tuple $\{\dot{x}_1 \cdots \dot{x}_{m-1}\dot{x}_m\}$ where $m \le n$, $\dot{x}_i \in \{0, 1, X\}$ and X means that the particular literal does not appear in the cube at all. The dimension of a cube refers to the number of Xs in it. An a-cube has $2^a$ minterms (zero-cubes) within it. Similar to minterms, there are false and true cubes.

Fig. 1 shows a data structure with two bytes that is used to represent a cube [23]. The maximum number of variables that can be represented by this data structure is eight, which is sufficient for our compression scheme since the maximum number of variables to be handled by our program is six. A non-complemented variable is represented by setting the particular bit in the ''asserted'' byte to one and clearing the corresponding bit in the ''negated'' byte to zero. The situation is reversed when a complemented variable is to be represented, i.e., the particular bit in the ''asserted'' byte is cleared to zero while the corresponding bit in the ''negated'' byte is set to one. If a particular bit position has 0s in both the ''asserted'' and ''negated'' bytes, the corresponding variable does not appear in the cube. If a particular bit position has 1s in both bytes, the combination is unused. Fig. 2 shows the internal representation of the cube x1x2 in a four-variable Boolean switching function.

The cube encoding devised in [2] is applied to minimize the number of bits required to represent the cubes. For certain cubes, the number of X literals exceeds the numbers of asserted and negated literals. As such, it is inefficient to code each X literal with two bits. Instead, a two-bit prefix is used to indicate the literal with the highest frequency of occurrence. That literal is then coded with a single bit, whereas the other two kinds of literals are coded with two bits each. Table 1 shows all the possible cases. It should be noted that the literal with the highest frequency of occurrence is always coded with one bit only. An example of the cube encoding method from Table 1 for Reed–Muller cubes is shown later in Example 3.

Fig. 1. Data structure for internal representation of cube.

Fig. 2. Internal representation of a cube for a four-variable Boolean switching function.


Table 1
Cube encoding

Prefix   0    1    X
00       0    10   11
01       11   0    10
10       10   11   0

2.2. Spectral techniques

Spectral techniques based on Reed–Muller and Hadamard transforms have been used in logic design and digital signal and image processing [1,6–13,24,26]. Spectral methods convert original data into its spectral equivalent. Spectral coefficients can be further processed to give some specific properties of the original data. They can also be used for data compression in cases where there are many zero coefficients. In this article, spectral methods based on the Reed–Muller transform and weights, as well as patterns based on the Walsh transform [9,10], are used for lossless compression of images. Basic definitions related to these spectral transforms are presented in this section. The mutual relationships between Reed–Muller and Walsh–Hadamard spectral coefficients have been investigated in [7].

An input vector $\vec{F} = [f_0, \ldots, f_{N-2}, f_{N-1}]$ of $N = 2^n$ elements can be transformed by the forward transformation matrix $\Phi$ into the spectral domain, represented by a spectrum vector $\vec{P} = [p_0, \ldots, p_{N-2}, p_{N-1}]$ comprising N spectral coefficients. The input vector $\vec{F}$ can be recovered from the spectrum vector $\vec{P}$ by using the inverse transformation matrix $\Phi^{-1}$. The relationship between the input vector $\vec{F}$ and its spectrum vector $\vec{P}$ is given by

$\vec{P}^T = \Phi \vec{F}^T$   (1)

$\vec{F}^T = \Phi^{-1} \vec{P}^T$   (2)

$\Phi = \begin{bmatrix} \phi_{00} & \phi_{01} & \cdots & \phi_{0(N-1)} \\ \phi_{10} & \phi_{11} & \cdots & \phi_{1(N-1)} \\ \vdots & \vdots & & \vdots \\ \phi_{(N-1)0} & \phi_{(N-1)1} & \cdots & \phi_{(N-1)(N-1)} \end{bmatrix}$   (3)

where the superscript T denotes the transpose, and $\Phi$ and $\Phi^{-1}$ are the $N \times N$ forward and inverse transformation matrices, respectively. Fig. 3 shows the block diagram for the forward and inverse transformations.

Each element of the truth vector of an n-variable Boolean switching function $f(x_1, \ldots, x_n)$ describes the behavior of the function for a particular combination of the input variables. On the other hand, each spectral coefficient contains some information about the behavior of the function at a selected group of, or all, the $2^n$ points. In spectral techniques, the function is mapped into the transform domain and processed there. In certain applications, the local or global information about the function provided by the spectral coefficients is more useful than the Boolean representation. In what follows, the variants of the Reed–Muller transform used here are discussed.


Fig. 3. Block diagram for forward and inverse spectral transformations.

Definition 6. An n-variable Boolean switching function can be expressed as a canonical Reed–Muller expansion [6,7,11] of $2^n$ product terms as follows:

$f(x_1, \ldots, x_{n-1}, x_n) = \bigoplus_{j=0}^{2^n - 1} a_j \prod_{i=1}^{n} \dot{x}_i^{j_i}$   (4)

where $\oplus$ denotes modulo-2 addition, $a_j \in \{0, 1\}$ is called a Reed–Muller coefficient and $j_i \in \{0, 1\}$ is called the power of $\dot{x}_i$, such that $\langle j_1 \cdots j_{n-1} j_n \rangle$ is equal to the binary representation of j. If $j_i = 0$, the literal $\dot{x}_i$ is absent from the product term $\prod_{i=1}^{n} \dot{x}_i^{j_i}$; otherwise it is present. When each literal ($\dot{x}_i$, $i = 1, \ldots, n-1, n$) throughout (4) assumes either the complemented or the non-complemented form, but not both simultaneously, the expansion is known as the fixed polarity Reed–Muller (FPRM) expansion [6,7,11].

Definition 7. The polarity number $\omega$ is an integer computed by taking the decimal equivalent of the n-bit straight binary code formed by writing a zero for each asserted literal and a one for each negated literal in the product terms.

Property 1. For an n-variable Boolean switching function $f(x_1, \ldots, x_{n-1}, x_n)$, there are $2^n$ FPRM expansions corresponding to the $2^n$ different polarities.

Definition 8. The polarity vector $\vec{A}_\omega = [a_0, \ldots, a_{2^n-2}, a_{2^n-1}]$ is a collection of all the $2^n$ FPRM coefficients in a certain ordering of their indices for the polarity $\omega$.

Definition 9. The polarity coefficient matrix PC[f] of an n-variable Boolean switching function is a $2^n \times 2^n$ binary matrix, where each row corresponds to a polarity vector $\vec{A}_\omega$ for the polarity $\omega$. It can be partitioned into four sub-matrices of order $2^{n-1}$ shown below:

$PC[f] = \begin{bmatrix} PC[f'] & PC[f'''] \\ PC[f''] & PC[f'''] \end{bmatrix}$   (5)

$[f'] = [f_0, \ldots, f_{2^{n-1}-1}]$   (6)

$[f''] = [f_{2^{n-1}}, \ldots, f_{2^n-1}]$   (7)


$[f'''] = [f_0 \oplus f_{2^{n-1}}, \ldots, f_{2^{n-1}-1} \oplus f_{2^n-1}]$   (8)

where the truth vector of the n-variable Boolean switching function is denoted by $\vec{F} = [f'\; f''] = [f_0, \ldots, f_{2^n-2}, f_{2^n-1}]$, and $[f''']$ is the Boolean difference [6] of the Boolean switching function. The polarity coefficient matrix is computed by the recursive construction of PC[f'], PC[f''] and PC[f'''].

Property 2. Each element $p_{ij}$ of the polarity coefficient matrix (row i and column j) corresponds to the coefficient $a_j$ of the FPRM expansion with polarity $\omega = i$.

Property 3. The optimum polarity of a given function is the row i of the polarity coefficient matrix with the minimum number of nonzero coefficients $p_{ij}$.

Example 1. Consider the three-variable Boolean switching function $f(x_1, x_2, x_3) = \bar{x}_1\bar{x}_2 + \bar{x}_1 x_2 x_3 + x_1 x_2 \bar{x}_3$ with truth vector $\vec{F} = [1\,1\,0\,1\,0\,0\,1\,0]$. Using (5)–(8), the polarity coefficient matrix is generated as follows:

$PC[f] = \begin{bmatrix} PC[1\,1\,0\,1] & PC[1\,1\,1\,1] \\ PC[0\,0\,1\,0] & PC[1\,1\,1\,1] \end{bmatrix} = \begin{bmatrix} PC[1\,1] & PC[1\,0] & PC[1\,1] & PC[0\,0] \\ PC[0\,1] & PC[1\,0] & PC[1\,1] & PC[0\,0] \\ PC[0\,0] & PC[1\,0] & PC[1\,1] & PC[0\,0] \\ PC[1\,0] & PC[1\,0] & PC[1\,1] & PC[0\,0] \end{bmatrix} = \begin{bmatrix} 1&0&1&1&1&0&0&0 \\ 1&0&0&1&1&0&0&0 \\ 0&1&1&1&1&0&0&0 \\ 1&1&0&1&1&0&0&0 \\ 0&0&1&1&1&0&0&0 \\ 0&0&0&1&1&0&0&0 \\ 1&1&1&1&1&0&0&0 \\ 0&1&0&1&1&0&0&0 \end{bmatrix}$

Since row five of the polarity coefficient matrix has the least number of nonzero coefficients, the optimum polarity for the FPRM expansion is $\omega = 5$. Using (4), the FPRM expansion of the three-variable Boolean switching function with polarity 5 is given by

$f(x_1, x_2, x_3) = \bigoplus_{j=0}^{7} a_j \prod_{i=1}^{3} \dot{x}_i^{j_i} = a_0 \oplus a_1\bar{x}_3 \oplus a_2 x_2 \oplus a_3 x_2\bar{x}_3 \oplus a_4\bar{x}_1 \oplus a_5\bar{x}_1\bar{x}_3 \oplus a_6\bar{x}_1 x_2 \oplus a_7\bar{x}_1 x_2\bar{x}_3 = x_2\bar{x}_3 \oplus \bar{x}_1$

When the generalized multi-polarity Reed–Muller transform (RMT) matrix $GR_n^\omega$ replaces the transformation matrix $\Phi$ in (3) and the input vector $\vec{F}$ represents the truth vector of an n-variable Boolean switching function, the spectrum vector $\vec{P}$ represents the Reed–Muller polarity vector $\vec{A}_\omega$. This relationship is described by

$\vec{A}_\omega^T = GR_n^\omega \vec{F}^T$   (9)


$\vec{F}^T = [GR_n^\omega]^{-1} \vec{A}_\omega^T$   (10)

where all the arithmetic operations are performed modulo 2 (in the Galois field of two elements, denoted GF(2)) [6,11]. The superscript T denotes the transpose. $GR_n^\omega$ and $[GR_n^\omega]^{-1}$ are the recursive $2^n \times 2^n$ forward and inverse RMT matrices in polarity $\omega$, respectively.

Definition 10. The $2^n \times 2^n$ generalized multi-polarity forward RMT matrix $GR_n^\omega$ is defined recursively as follows:

$GR_n^\omega = \bigotimes_{i=1}^{n} GR_1^{\omega_i} = GR_1^{\omega_1} \otimes \cdots \otimes GR_1^{\omega_{n-1}} \otimes GR_1^{\omega_n}$   (11)

$GR_1^0 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$   (12)

$GR_1^1 = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$   (13)

where $GR_1^0$ and $GR_1^1$ are the non-singular forward elementary RMT matrices for both polarities ($\omega_i = 0$ and 1) and $1 \le i \le n$.

Definition 11. The $2^n \times 2^n$ generalized multi-polarity inverse RMT matrix $[GR_n^\omega]^{-1}$ is defined recursively as follows:

$[GR_n^\omega]^{-1} = \bigotimes_{i=1}^{n} [GR_1^{\omega_i}]^{-1} = [GR_1^{\omega_1}]^{-1} \otimes \cdots \otimes [GR_1^{\omega_{n-1}}]^{-1} \otimes [GR_1^{\omega_n}]^{-1}$   (14)

$[GR_1^0]^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$   (15)

$[GR_1^1]^{-1} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$   (16)

where $[GR_1^0]^{-1}$ and $[GR_1^1]^{-1}$ are the inverse elementary RMT matrices for both polarities ($\omega_i = 0$ and 1) and $1 \le i \le n$.

Example 2. Consider a Boolean function with truth vector $\vec{F} = [1\,1\,0\,1\,0\,0\,1\,0]$. Using (9) and (11)–(13), the polarity vector $\vec{A}_{(101)}^T$ is given by

$\vec{A}_{(101)}^T = GR_3^{(101)} \vec{F}^T = (GR_1^1 \otimes GR_1^0 \otimes GR_1^1)\vec{F}^T = \begin{bmatrix} 0&0&0&0&0&1&0&0 \\ 0&0&0&0&1&1&0&0 \\ 0&0&0&0&0&1&0&1 \\ 0&0&0&0&1&1&1&1 \\ 0&1&0&0&0&1&0&0 \\ 1&1&0&0&1&1&0&0 \\ 0&1&0&1&0&1&0&1 \\ 1&1&1&1&1&1&1&1 \end{bmatrix} \begin{bmatrix} 1\\1\\0\\1\\0\\0\\1\\0 \end{bmatrix} = \begin{bmatrix} 0\\0\\0\\1\\1\\0\\0\\0 \end{bmatrix}$

From (4) and (9), the FPRM expansion in polarity $\omega = 5$ is $f(x_1, x_2, x_3) = x_2\bar{x}_3 \oplus \bar{x}_1$.


Using (14)–(16), the inverse generalized multi-polarity RMT matrix for polarity $\omega = 5$ is generated as follows:

$[GR_3^{(101)}]^{-1} = [GR_1^1]^{-1} \otimes [GR_1^0]^{-1} \otimes [GR_1^1]^{-1} = \begin{bmatrix} 1&1&0&0&1&1&0&0 \\ 1&0&0&0&1&0&0&0 \\ 1&1&1&1&1&1&1&1 \\ 1&0&1&0&1&0&1&0 \\ 1&1&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0&0 \\ 1&1&1&1&0&0&0&0 \\ 1&0&1&0&0&0&0&0 \end{bmatrix}$

Using (10), the truth vector is recovered from the polarity vector $\vec{A}_{(101)}^T$ as follows:

$\vec{F}^T = [GR_3^{(101)}]^{-1} \vec{A}_{(101)}^T = \begin{bmatrix} 1&1&0&0&1&1&0&0 \\ 1&0&0&0&1&0&0&0 \\ 1&1&1&1&1&1&1&1 \\ 1&0&1&0&1&0&1&0 \\ 1&1&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0&0 \\ 1&1&1&1&0&0&0&0 \\ 1&0&1&0&0&0&0&0 \end{bmatrix} \begin{bmatrix} 0\\0\\0\\1\\1\\0\\0\\0 \end{bmatrix} = \begin{bmatrix} 1\\1\\0\\1\\0\\0\\1\\0 \end{bmatrix}$

If all the literals in (4) may assume either the complemented or the non-complemented form in any combination, the expansion is known as the generalized Reed–Muller (GRM) expansion [6]. Since there are $n2^{n-1}$ literals in (4), a total of $2^{n2^{n-1}}$ GRM expansions is possible, which includes all the $2^n$ FPRM expansions. The generalized partially-mixed-polarity Reed–Muller (GPMPRM) expansion is a subset of the GRM expansions that encloses the FPRM expansions [8,24]. Wu et al. [24] also mentioned that the requirement of fixing the polarities for all but one variable can be relaxed to give the more general definition of the GPMPRM expansion discussed in [8].

Definition 12. The GPMPRM expansions are obtained by allowing the $k2^{n-1}$ literals of k variables in (4) to freely assume either polarity while maintaining consistent fixed polarities for all the literals of the remaining variables.

The GPMPRM expansion defined above is one of the coding methods used in the lossless compression of binary images. In the GPMPRM expansion, the Boolean switching function is expressed in terms of cubes.

Example 3. The five-variable Boolean switching function shown in Fig. 4 can be expressed as the Reed–Muller expansion $x_1x_2 \oplus x_3x_4x_5$ (with the literal polarities shown in Fig. 4). The numbers of negated, asserted and X literals are three, two and five, respectively. Since the number of X literals is greater than the numbers of asserted and negated literals, the prefix '10' is selected according to Table 1. Therefore the number of bits required to represent the above expansion is 2 + 5 × 1 + 2 × 2 + 3 × 2 = 17, as compared to 2 × 5 × 2 = 20 for the case without cube encoding.


Fig. 4. Example of cube coding.

3. Review of previous work

In this section, the following techniques for the lossless compression of binary images proposed by other researchers, using reduced representations of logic functions or other methods related to our own compression scheme, are reviewed:

(a) logic minimization based approach,
(b) differential block coding,
(c) binary image compression through rectangular partitioning,
(d) ordered binary decision diagrams based approach,
(e) Reed–Muller transform based approach.

3.1. Logic minimization based approach

Augustine et al. [2] proposed a logic minimization based approach for the lossless compression of binary images. The image is segmented into blocks of r × c pixels. Each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. These Boolean switching functions are minimized using ESPRESSO [3], a cube-based two-level logic minimizer. A code set {0, 10, 11} is used to represent the cube symbols {0, 1, X} to reduce the bits required for the encoding of the minimized cubes. The one-bit code is allocated to the cube symbol with the highest frequency of occurrence. If the technique fails to compress a block, the original pixels are coded instead. Fig. 5 illustrates the compression and decompression schemes.

Sarkar [21] also implemented a scheme for the representation of binary images based on the minimization of Boolean functions. The binary image is converted to Boolean functions and the Quine–McCluskey algorithm [23] is applied to minimize these functions. The cubes of the minimized functions representing the image are stored.

3.2. Differential block coding

Robertson et al. [20] applied a simple one-dimensional (1D) differencing operation to binary images prior to block coding to produce a sparse binary image. The difference image is created by


Fig. 5. Logic minimization based approach.

performing an Exclusive-OR logic operation between neighbouring pixels in the original image. The difference image can be coded more efficiently than the original image whenever the average run length [16] of black pixels in the original image is greater than two. Compression is achieved because the correlation between adjacent pixels in the difference image is reduced when compared to the original image.

3.3. Binary image compression through rectangular partitioning

Mohamed and Fahmy [15] proposed a technique for coding binary images. In their technique, the black pixels in the binary image are first grouped into a number of non-overlapping rectangular partitions. The coordinates of the opposite vertices (top-left and bottom-right) of each rectangle are coded using a simple procedure. The image is reconstructed by decoding the compressed coordinates.

Quddus and Fahmy [18] proposed an improved version of the above technique. In their method, the black pixels in the binary image are first partitioned into fully overlapping and non-overlapping rectangles. After partitioning, the two opposite vertices of each rectangle are coded using the same scheme as in [15].

3.4. Ordered binary decision diagrams approach

Mateu-Villarroya and Prades-Nebot [14] proposed a lossless compression algorithm based on ordered binary decision diagrams (OBDDs). The binary image is first represented by an OBDD. To code an OBDD, the nodes are represented as a table in which each row represents a node of the OBDD. In each row of the table, the first number is the number of the node, the second number is the level of the node in the OBDD, and the third and fourth numbers are the numbers of the pointed nodes low and high, respectively. A sophisticated algorithm has been developed that codes the table of the OBDD efficiently, using sequences of pointers and a Gray code to represent the variables of the image. The remaining redundancy in the obtained binary sequence is further removed by using arithmetic coding [16].


3.5. Reed–Muller transform based approach

Iravani and Perkowski [12] proposed a lossless compression algorithm based on the fixed polarity Reed–Muller transform. The binary image is segmented into blocks of N × N pixels. After finding the fixed polarity Reed–Muller form with the minimal number of terms for each block, the corresponding spectral map usually contains fewer 1s than the original binary domain. Each block in the spectral domain is then run-length coded using relative address coding (RAC) [16,25]. To determine the relative address distance codes in RAC, Huffman coding [16] was used, as it gave the best compression factor. The authors noticed that with run-length coding the pattern of the elements is very important, and merely reducing the number of nonzero elements through conversion to the spectral domain might not be sufficient to obtain a higher compression ratio in the above method.

4. Novel technique for lossless compression of binary images

Fig. 6 shows the block diagram of the technique for the lossless compression of binary images introduced in this paper. The individual stages are described in the following sections.

4.1. File extraction

The pixel values are extracted from the BMP image [19] and stored in a 2D array. To handle images of different sizes, if the dimensions of the image are not multiples of 16, they are adjusted to the next higher multiple of 16. For example, an image of 1728 × 2376 pixels is adjusted to 1728 × 2384 pixels. The additional pixels are filled with ones.

4.2. Image differencing

Image differencing is a preprocessing operation that extracts the edges between groups of black and white pixels to reduce the spatial redundancy in a binary image [20]. The edge extraction

Fig. 6. Novel technique for lossless compression of binary images.


Table 2
Number of black and white pixels

            Original image          1D difference image     2D difference image
Image       Black      White        Black      White        Black      White
CCITT 1     155,591    3,963,961    4,071,362  48,190       4,068,075  51,477
CCITT 2     184,240    3,935,312    4,091,414  28,138       4,090,999  28,553
CCITT 3     337,052    3,782,500    4,038,000  81,552       4,041,159  78,393
CCITT 4     509,635    3,609,917    3,937,544  182,008      3,915,397  204,155
CCITT 5     317,707    3,801,845    4,026,216  93,336       4,029,171  90,381
CCITT 6     207,110    3,912,442    4,065,638  53,914       4,073,541  46,011
CCITT 7     356,850    3,762,702    3,956,482  163,070      3,933,007  186,545
CCITT 8     1,766,467  2,353,085    4,069,702  49,850       4,067,255  52,297

reduces the number of bits to be coded by decreasing the number of nonzero pixels in the image, since strings of 1s are reduced to the strings' edges. The pixels in a given row or column are de-correlated by performing an Exclusive-OR logic operation between neighbouring pixels. In contrast to the 1D image differencing in [20], the proposed technique performs a 2D image differencing of the pixels by first applying the operation to the rows and then to the columns, as follows:

$D_1[y, x] = \begin{cases} I[y, x] & \text{if } x = 0 \\ I[y, x] \oplus I[y, x-1] & \text{otherwise} \end{cases}$   (17)

$D_2[y, x] = \begin{cases} D_1[y, x] & \text{if } y = 0 \\ D_1[y, x] \oplus D_1[y-1, x] & \text{otherwise} \end{cases}$   (18)

where $\oplus$ represents the Exclusive-OR logic operation, and y and x denote the rows and columns of the images, respectively. $I[y, x]$, $D_1[y, x]$ and $D_2[y, x]$ refer to the pixels in the original, 1D and 2D difference images, respectively.

Table 2 shows the number of black and white pixels in the original, 1D and 2D difference images. In the case of the 1D difference image, the operation is applied to the rows alone. The test images consist of the set of eight CCITT facsimile images used to standardize the comparison of compression techniques for binary images [4]. Each image consists of 1 bit/pixel, 1728 pixels/line and 2376 lines/image. The dimensions of all eight images have been adjusted to 1728 × 2384 to handle images of different sizes, resulting in a total of 4,119,552 bits/image. Note that the image differencing operation has resulted in a significant reduction of white pixels in the difference image.

4.3. Segmentation and classification

The 2D difference image is segmented into 16 × 16 blocks, which are classified as:

(a) All-black: all the 256 pixels in the 16 × 16 block are black.
(b) All-white: all the 256 pixels in the 16 × 16 block are white.
(c) Mixed: the 16 × 16 block consists of both black and white pixels.


Table 3
Number of all-black, all-white and mixed blocks

Image       All-black   All-white   Mixed
CCITT 1     13,580      0           2512
CCITT 2     14,062      0           2030
CCITT 3     12,173      0           3919
CCITT 4     7479        0           8613
CCITT 5     11,864      0           4228
CCITT 6     13,611      0           2481
CCITT 7     8364        0           7728
CCITT 8     12,831      0           3261

The segmented 2D difference image is represented by a 2D matrix

$S = S[k, j]$ for $0 \le k \le M-1$ and $0 \le j \le N-1$   (19)

whose elements are $s_{kj} = 0$ (all-black), 1 (all-white) or 2 (mixed); k and j refer to the rows and columns, respectively. Table 3 shows the number of all-black, all-white and mixed blocks in the set of CCITT test images after the 2D image differencing and segmentation into 16 × 16 blocks. Note that the number of all-white blocks is zero, since the image differencing operation effectively reduces the number of white pixels in the original image.

4.4. Partitioning into rectangular regions

After the 2D image differencing operation, there are very few all-white blocks left, if any. It is more efficient to partition the segmented 2D difference image into rectangular regions of all-white and mixed 16 × 16 blocks and code only these regions.

The two methods for rectangular partitioning are the overlapping (non-disjoint) and non-overlapping (disjoint) methods. Although the number of rectangular regions obtained by the overlapping partitioning method is always less than or equal to that of the non-overlapping method, this advantage comes at the expense of computation effort and time. Moreover, the overlapping method requires the program to keep track of regions that may have been covered more than once, resulting in further computation effort and time. As such, the non-overlapping method is chosen.

The non-overlapping partitioning algorithm [15] to group mixed blocks proceeds as follows:

(a) Initially, all the blocks are marked as unprocessed. The algorithm scans the 2D matrix S in raster order, from left to right and top to bottom. When an unprocessed mixed block is encountered during the raster scan, it is considered as the top left vertex of the developing rectangle.

(b) All the unprocessed mixed blocks to the right of the above block are included in the new developing rectangle. This horizontal expansion terminates when a non-mixed block (all-black or all-white) is encountered or the end of the row is reached.
The block exactly to the left of the terminating block indicates the top right vertex of the rectangle.


(c) Next, all the unprocessed mixed blocks bounded by the column locations of the left and right vertices in the subsequent rows are included in the developing rectangle. This vertical expansion stops if any non-mixed block is encountered in the new row within the left and right vertices or the last row has been reached.

(d) The rightmost block in the last row of the developed rectangle is considered as the bottom right vertex. An isolated rectangle encloses only one mixed block and is represented by the top and left vertices. A non-isolated rectangle encloses more than one mixed block and has to be represented by the top, left, bottom and right vertices.

(e) All the mixed blocks enclosed by the developed rectangle are then marked as processed so that they will not be considered again. The search for another rectangle resumes from the block next to the one identified as the top left vertex of the last encountered rectangle, until the whole matrix S is processed.

The same algorithm is also used to find the rectangular partitions of all-white blocks. Fig. 7 shows the disjoint and non-disjoint rectangular partitioning for two different cases.

4.5. Coding scheme

The format of the compressed image is shown in Fig. 8. The dimensions of the original image are first transferred to the compressed image. The vertices of the rectangular (isolated and non-isolated) regions of all-white 16 × 16 blocks are stored in the compressed image. An isolated rectangle encloses a single 16 × 16 block whereas a non-isolated rectangle encloses more than one 16 × 16 block. The isolated and non-isolated rectangular regions of all-white blocks, coded in fields C to Fi of the compressed image, do not require further processing.

The top and left vertices of the isolated mixed rectangles are written to the compressed image, and the corresponding data enclosed by these rectangles are then represented using a novel block-based segmentation and coding scheme.
After coding the isolated mixed rectangular regions in field Hi, the non-isolated rectangular regions of mixed blocks are coded next. The four vertices of the non-isolated mixed rectangles are stored in the compressed image, followed by the coding of these regions using the same block-based approach.

Each 16 × 16 mixed block is split into four smaller 8 × 8 blocks as shown in Fig. 9. Based on the variable block-size segmentation and coding scheme, each 8 × 8 block (starting from Block 0 to 4) is compressed into the blocks shown in Fig. 10. The block optimiser always selects the optimum coding for each 8 × 8 block. For example, suppose an 8 × 8 block has been decomposed into three incompressible 4 × 4 blocks and the remaining 4 × 4 block is coded using the reference row technique with two corrections. In this case, the optimum coding for the 8 × 8 block is to code it as an incompressible Type A block and store the original data instead. The headers of the compressed blocks are summarised in Table 4.

The flow chart for the compression of an 8 × 8 block is shown in Fig. 11. A, B and C refer to the numbers of 0s, 1s and GPMPRM cubes in the 8 × 8 block, respectively. Table 5 shows the respective headers for the different types of 8 × 8 blocks. After the decomposition of an 8 × 8 block into smaller blocks, the compressed format may be larger than the original uncompressed data. Under such circumstances, the 8 × 8 block is coded as incompressible and the original data is stored instead.

B.J. Falkowski / Computers and Electrical Engineering 30 (2004) 17–43


Fig. 7. Rectangular partitioning.

The detailed steps to code an 8 × 8 block are as follows:

(a) If the 8 × 8 block consists of all 0s, it is a uniform block that is represented by its header only.
(b) If the 8 × 8 block is not uniform and the number of 1s is less than three, minterm coding [17] is applied. The 8 × 8 block is transformed into a Boolean switching function in a sum-of-products expression comprising the true minterms. A total of six (log2 64 = 6) bits is required to code each minterm. Fig. 12(a) shows an example of a minterm-coded 8 × 8 block which is represented using a total of 3 + 1 + 6 + 6 = 16 bits.
(c) If the number of 1s in the 8 × 8 block is greater than two but less than nine, coordinate data coding [15] is applied. Fig. 13(a) shows an example of a coordinate-coded 8 × 8 block which is represented using a total of 37 bits. In Fig. 13(c), the bits in parentheses represent the subheaders while the other bits represent the coordinates of 1s. If a row contains all 0s, only a binary '0' is coded. If there are nonzero elements (1s) in the row, a binary prefix '1' is required, followed by b bits that specify the column location of each nonzero element. The value of b is computed as follows [15]:
1. For the first nonzero element encountered in a particular row, b = log2 N = log2 8 = 3 bits are required to specify the location of this nonzero element with respect to the first column of the row.


Fig. 8. Format of compressed binary image.

Fig. 9. Splitting of 16 × 16 block into four 8 × 8 blocks.

2. If there is more than one nonzero element in a particular row, then b = log2(N − c) bits are used to represent the location of the next nonzero element with respect to the first column to the right of the previously encountered nonzero element, where c is the column location of the previously encountered nonzero element.
3. A binary '0' is used to indicate that the last nonzero element in a particular row has been encoded. This '0' is omitted if the last nonzero element lies at the right end of the row.
(d) If the 8 × 8 block has not been compressed yet, the generalized partially-mixed-polarity Reed–Muller (GPMPRM) expansion [8] is computed. If the GPMPRM expansion contains fewer than three cubes, the 8 × 8 block is compressed as cubes. Fig. 14(a) shows an example of a GPMPRM-coded 8 × 8 block (it is also an FPRM expansion) which is represented using a total of 4 + 1 + 1 + 2 + 8 + 8 = 24 bits.
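Rules 1–3 can be sketched per row as follows. This is one illustrative reading of the scheme of [15], with bit widths taken as ceil(log2 of the remaining span); the exact bit layout in the paper's implementation may differ.

```python
from math import ceil, log2

def code_row(row):
    """Coordinate-code one row of 0s and 1s (illustrative sketch).

    An all-zero row is coded as '0'.  Otherwise a prefix '1' is
    followed, for each 1, by its column offset: ceil(log2 N) bits for
    the first 1, and ceil(log2 (N - c)) bits for each later 1, where
    c is the 1-based column of the previous 1.  The terminating '0'
    is omitted when the last 1 sits in the rightmost column.
    """
    n = len(row)
    ones = [i for i, v in enumerate(row) if v]
    if not ones:
        return "0"
    out, prev = "1", -1
    for i in ones:
        span = n if prev < 0 else n - (prev + 1)   # candidate positions left
        b = max(1, ceil(log2(span)))
        out += format(i if prev < 0 else i - prev - 1, "0{}b".format(b))
        prev = i
    if ones[-1] != n - 1:
        out += "0"                                  # no more 1s in this row
    return out
```

A row of width 8 with a single 1 in column 3 costs 1 + 3 + 1 = 5 bits, matching the per-row accounting in the text.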


Fig. 10. Types of compressed blocks for binary images.

Table 4
Headers of compressed blocks for binary images

Type    Header    Number of bits
A       0         1
B       10        2
C       110       3
D       1110      4
E       1111      4
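Because the headers in Table 4 form a prefix code, a decoder can strip them from the front of the bit stream without any length field. A small sketch (the function name and string-based bit stream are illustrative):

```python
# Prefix headers for the compressed block types of Table 4.
HEADERS = {"A": "0", "B": "10", "C": "110", "D": "1110", "E": "1111"}

def read_header(bits):
    """Decode one block-type header from the front of a bit string.

    Returns (block type, remaining bits).  The codes form a prefix
    set, so the match is unambiguous regardless of iteration order.
    """
    for block_type, code in HEADERS.items():
        if bits.startswith(code):
            return block_type, bits[len(code):]
    raise ValueError("invalid header")
```

The same idea applies unchanged to the header sets of Tables 5–7, which are also prefix codes.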

Fig. 11. Compression of 8 × 8 block for binary images.

(e) If the 8 × 8 block cannot be compressed by steps (a)–(d), it will be split into two smaller 4 × 8 blocks.
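Step (d) relies on the GPMPRM expansion of [8]. As a self-contained illustration, its simplest special case, the zero-polarity fixed-polarity Reed–Muller (FPRM) spectrum, can be computed from a truth vector with a fast XOR butterfly:

```python
def fprm_spectrum(truth):
    """Zero-polarity fixed-polarity Reed-Muller spectrum over GF(2).

    truth is the truth vector (length 2**n, entries 0/1).  Each
    butterfly stage XORs the first half of a group into the second,
    the standard positive-Davio decomposition.  Shown only as the
    simplest special case of the GPMPRM expansion used in the paper.
    """
    s = list(truth)
    n = len(s)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                s[j + h] ^= s[j]        # GF(2) butterfly
        h *= 2
    return s
```

The transform matrix is its own inverse over GF(2), so applying the function twice recovers the original truth vector; the nonzero spectral coefficients correspond to the cubes of the expansion.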


Table 5
Headers of 8 × 8 block for binary images

Type             Header    Number of bits
Uniform          0         1
Coordinate       10        2
Minterm          110       3
Incompressible   1110      4
GPMPRM           1111      4

Fig. 12. Minterm-coded 8 × 8 block for binary images.
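A minterm coder matching the bit count of Fig. 12 (3-bit header from Table 5, one bit giving the number of minterms minus one, then 6 bits per minterm) might look as follows; the exact field layout and names are assumptions for illustration.

```python
def minterm_code_8x8(block):
    """Minterm-code an 8x8 block holding one or two 1s (sketch).

    Emits the 3-bit Minterm header '110' (Table 5), one bit for the
    minterm count minus one (the scheme only applies when there are
    fewer than three 1s), then 6 bits (log2 64) per minterm giving
    its position in row-major order.
    """
    ones = [r * 8 + c for r in range(8) for c in range(8) if block[r][c]]
    assert 1 <= len(ones) <= 2, "minterm coding needs one or two 1s"
    out = "110" + str(len(ones) - 1)
    for pos in ones:
        out += format(pos, "06b")
    return out
```

A block with two 1s thus costs 3 + 1 + 6 + 6 = 16 bits, as in Fig. 12(a).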

Fig. 13. Coordinate-coded 8 × 8 block for binary images.


Fig. 14. GPMPRM-coded 8 × 8 block for binary images.

The flow chart for the compression of a 4 × 8 block is shown in Fig. 15, where A, B and C denote the number of 0s, 1s and GPMPRM cubes in the 4 × 8 block, respectively. Table 6 shows the possible headers for a 4 × 8 block. During compression, the compressed format of a 4 × 8 block can be larger than the original uncompressed data; in this case, the 4 × 8 block is coded as incompressible and the original data are encoded. The procedures to code a 4 × 8 block are as follows:

(a) If the 4 × 8 block consists of all 0s, it is represented as a uniform block.
(b) If the 4 × 8 block is not uniform and the number of 1s is less than seven, coordinate data coding is applied. An example of a coordinate-coded 4 × 8 block is shown in Fig. 16(a). This 4 × 8 block is represented using a total of 1 + 8 + 5 + 5 + 5 = 24 bits.
(c) If the 4 × 8 block has not been compressed yet, the GPMPRM expansion is computed. If the GPMPRM expansion consists of fewer than three cubes, the 4 × 8 block is represented as cubes. Fig. 17(a) shows an example of a GPMPRM-coded 4 × 8 block (it is also an FPRM expansion) which is represented using a total of 3 + 1 + 1 + 2 + 7 + 8 = 22 bits.
(d) If the 4 × 8 block is not compressed by steps (a)–(c), it is split into two smaller 4 × 4 blocks to be compressed using another set of methods.

Fig. 18 summarizes the steps taken to compress a 4 × 4 block. A and B denote the number of 0s and 1s in the 4 × 4 block, respectively, C is the number of GPMPRM cubes, and D refers to the number of corrections in the reference row technique. The possible headers for a 4 × 4 block are given in Table 7. The steps to code the 4 × 4 block are as follows:

(a) If the 4 × 4 block contains all 0s, it is coded as a uniform block.
(b) If the number of 1s is less than three, the 4 × 4 block is transformed into a Boolean switching function in a sum-of-products expression comprising at most two true minterms. Each minterm


Fig. 15. Compression of 4 × 8 block for binary images.

Table 6
Headers of 4 × 8 block for binary images

Type             Header    Number of bits
Coordinate       0         1
Incompressible   10        2
Uniform          110       3
GPMPRM           111       3

Fig. 16. Coordinate-coded 4 × 8 block for binary images.


Fig. 17. GPMPRM-coded 4 × 8 block for binary images.

Fig. 18. Compression of 4 × 4 block for binary images.

requires a total of four (log2 16 = 4) bits to be represented. Fig. 19(a) shows a minterm-coded 4 × 4 block that is represented using a total of 2 + 1 + 4 + 4 = 11 bits.
(c) The 4 × 4 block is compared with the set of four frequently occurring patterns shown in Fig. 20. The possible combinations are:


Table 7
Headers of 4 × 4 block for binary images

Type               Header    Number of bits
Incompressible     00        2
Reference row      01        2
Minterm            10        2
Pattern matching   110       3
Uniform            1110      4
GPMPRM             1111      4

Fig. 19. Minterm-coded 4 × 4 block for binary images.

Fig. 20. Set of 4 × 4 patterns for binary images.

1. Direct match: the 4 × 4 block is an exact match of the pattern.
2. Inverse match: the 4 × 4 block matches the inverse of the pattern.
3. Direct or inverse match with one correction: the 4 × 4 block differs from the pattern or its inverse by one bit.

An example of a pattern-coded 4 × 4 block is shown in Fig. 21(a). The only discrepancy between the block and its closest matching pattern, shown in Fig. 21(b), is the element enclosed within a rectangle. This 4 × 4 block is represented using a total of 3 + 1 + 2 + 1 + 4 = 11 bits.
(d) If the 4 × 4 block has not been compressed yet and it consists of one cube, the 4 × 4 block is represented as a cube. Fig. 22(a) shows a cube-coded 4 × 4 block that is represented using a total of 4 + 1 + 2 + 6 = 13 bits.
(e) If the 4 × 4 block is not compressed by steps (a)–(d), the reference row technique is applied next. The second and fourth rows are compared with the first and third rows, respectively. The possible situations considered are:
1. No correction in the second and fourth rows.
2. One correction in the second row only.


Fig. 21. Pattern-coded 4 × 4 block for binary images.

Fig. 22. Cube-coded 4 × 4 block for binary images.

Fig. 23. Reference-row-coded 4 × 4 block for binary images.

3. One correction in the fourth row only.
4. One correction in both the second and fourth rows.

Fig. 23(a) shows a 4 × 4 block coded using the reference row technique. The first and second rows are identical, while the third and fourth rows differ by only one element. This 4 × 4 block is coded using a total of 2 + 2 + 4 + 4 + 2 = 14 bits.
(f) If the 4 × 4 block is not compressed by steps (a)–(e), it is coded as incompressible and the original data are stored instead.

5. Decompression technique for binary images

The block diagram for the decompression of a compressed binary image is shown in Fig. 24. To recover a copy of the original image, the following steps are performed:


Fig. 24. Decompression technique for binary images.

(a) The dimensions of the image are extracted from the compressed image.
(b) The locations of the non-overlapping rectangular regions (isolated followed by non-isolated) of all-white 16 × 16 blocks are then determined.
(c) The rectangular regions (isolated followed by non-isolated) of mixed 16 × 16 blocks are reconstructed. Each mixed 16 × 16 block had been segmented into four smaller 8 × 8 blocks during compression. The decoding for an 8 × 8 block is as follows:
1. The block type of the 8 × 8 block is determined by decoding the encoded block header shown in Table 4.
2. Based on the coded block type, the corresponding blocks are decoded. In the case of a Type C block, the 4 × 8 block is decoded followed by two 4 × 4 blocks.
3. The process is repeated until all the 8 × 8 blocks in the rectangular partitions of mixed 16 × 16 blocks have been decoded.
(d) To recover the pixels I[y, x] in the original image from the corresponding pixels D2[y, x] in the 2D difference image, the following inverse differencing operation is performed on the columns followed by the rows:

D1[y, x] = D2[y, x]                     if y = 0
D1[y, x] = D2[y, x] ⊕ D1[y − 1, x]      otherwise        (20)

I[y, x] = D1[y, x]                      if x = 0
I[y, x] = D1[y, x] ⊕ I[y, x − 1]        otherwise        (21)

The pixels are then stored as a BMP file.
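Assuming the differencing operator in Eqs. (20) and (21) is XOR (mod 2, the usual choice for bi-level images), the inverse operation can be sketched as follows; the function name is illustrative.

```python
def inverse_difference(d2):
    """Undo the 2D mod-2 differencing of Eqs. (20)-(21) (sketch).

    First the columns are integrated (each pixel XORed with the one
    above, Eq. (20)), then the rows (each pixel XORed with the one to
    its left, Eq. (21)), recovering the binary image I from the
    difference image D2.
    """
    rows, cols = len(d2), len(d2[0])
    d1 = [row[:] for row in d2]
    for y in range(1, rows):            # Eq. (20): columns, top to bottom
        for x in range(cols):
            d1[y][x] ^= d1[y - 1][x]
    img = [row[:] for row in d1]
    for y in range(rows):               # Eq. (21): rows, left to right
        for x in range(1, cols):
            img[y][x] ^= img[y][x - 1]
    return img
```

Because XOR is its own inverse, the forward differencing used during compression is the same pair of operations applied in the opposite order.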

6. Experimental results

The compression and decompression techniques have been implemented in the C language on a personal computer and tested on the set of eight standard CCITT facsimile images [4]. Comparisons of compression results with other cube-based [2,5] and spectral [12] methods are impossible due to the lack of results for these images in the corresponding papers. Table 8 lists the compression ratios obtained by the proposed technique together with the results based on OBDDs: OBDD 1 [22] and OBDD 2 [14]. It should be noticed that the results for OBDD 2 are obtained by applying arithmetic coding after coding the table representing the OBDD of an image. In contrast, the results for the presented algorithm, called method 1 in Table 8, are not compressed further by arithmetic coding. However, to allow a comparison with arithmetic coding as well, such coding is used in method 2. The compression ratios achieved indicate that the proposed method is comparable with the results obtained in [14] and better than those in [22].

Table 8
Comparison of compression ratios for binary images

Image      OBDD 1    OBDD 2    Method 1    Method 2
CCITT 1    6.20      17.41     16.30       17.50
CCITT 2    10.41     29.71     24.40       27.40
CCITT 3    4.41      11.80     10.90       11.80
CCITT 4    1.93      5.10      4.30        4.80
CCITT 5    3.78      10.43     9.60        10.60
CCITT 6    7.29      20.14     18.60       19.50
CCITT 7    1.99      5.35      5.00        5.20
CCITT 8    6.18      17.53     14.60       15.70

Table 9 shows the timings for the compression and decompression of the same set of standard images, obtained on a Pentium III 500 MHz personal computer with 128 MB RAM. The compression timings vary depending on the test image, whereas the decompression timings are fairly consistent.

Table 9
Timings for compression and decompression of binary images in seconds

Image      Compression    Decompression
CCITT 1    1.92           1.50
CCITT 2    1.72           1.48
CCITT 3    2.07           1.56
CCITT 4    2.98           1.72
CCITT 5    2.14           1.57
CCITT 6    1.85           1.52
CCITT 7    2.86           1.65
CCITT 8    2.31           1.54

7. Conclusions

A new technique for the lossless compression of binary images has been proposed and implemented. After the extraction of the raw image data from the BMP images, a 2D differencing operation is performed. The 2D difference image is segmented into 16 × 16 blocks, classified into all-black, all-white and mixed blocks, and partitioned into non-overlapping rectangular regions of all-white and mixed blocks. The 16 × 16 blocks in the non-overlapping rectangular regions of mixed blocks are further segmented into 8 × 8 blocks. A novel variable block-size segmentation and coding technique based on minterm coding, GPMPRM expansion, coordinate data coding, pattern matching and the reference row technique is used to represent these 8 × 8 blocks. The presented method can also be used for gray-scale image compression after splitting the original images into binary planes; a modified version of the technique was used for this purpose in [9]. The presented technique achieves results for binary image compression comparable with the technique based on OBDDs [14], and with the addition of integer-valued functions and efficient coding of the corresponding decision diagrams, even more competitive results should be obtained.

Acknowledgements

This study was supported by the Agency for Science, Technology and Research in Singapore under grant no. 0121060053.

References

[1] Agaian S, Astola J, Egiazarian K. Binary Polynomial Transforms and Nonlinear Digital Filters. New York: Marcel Dekker Inc.; 1995.
[2] Augustine J, Feng W, Jacob J. Logic minimization based approach for compressing image data. In: Proc. IEEE 8th Int. Conf. VLSI Design; January 1995. p. 225–8.
[3] Brayton RK, Hachtel GD, McMullen CT, Sangiovanni-Vincentelli AL. Logic Minimization Algorithms for VLSI Synthesis. Boston: Kluwer Academic Press; 1984.
[4] CCITT Standard Fax Images at http://www.cs.waikato.ac.nz/~singlis/ccitt.html.
[5] Chaudhary AK, Augustine J, Jacob J. Lossless compression of images using logic minimization. In: Proc. IEEE Int. Conf. on Image Processing; 1996. p. 77–80.
[6] Davio M, Deschamps JP, Thayse A. Discrete and Switching Functions. New York: McGraw-Hill; 1978.
[7] Falkowski BJ, Chang CH. Hadamard–Walsh spectral characterization of Reed–Muller expansions. Comput Electric Eng 1999;25(2):111–34.
[8] Falkowski BJ, Chang CH. Generalized k-variable-mixed-polarity Reed–Muller expansions for systems of Boolean functions and their minimization. IEE Proc, Circuits, Devices Syst 2000;147(4):201–10.
[9] Falkowski BJ, Lim LS. Gray scale image compression based on multiple-valued input binary functions, Walsh and Reed–Muller spectra. In: Proc. 30th Int. Symp. Multiple-Valued Logic, May 2000, Portland, Oregon. p. 279–84.
[10] Falkowski BJ. Lossless compression of binary images using logic methods. In: Proc. South Eastern Europe Workshop on Computational Intelligence and Information Technologies, June 2001, Nis, Yugoslavia. p. 111–6.
[11] Green DH. Modern Logic Design. Wokingham: Addison-Wesley; 1986.
[12] Iravani K, Perkowski MA. Image compression based on Reed–Muller transforms. In: Proc. Int. Conf. on Computational Intelligence and Multimedia Applications, 1998, Australia. p. 81–95.
[13] Karpovsky MG. Finite Orthogonal Series in the Design of Digital Devices: Analysis, Synthesis, and Optimization. New York: Wiley; 1976.
[14] Mateu-Villarroya P, Prades-Nebot J. Lossless image compression using ordered binary-decision diagrams. Electron Lett 2001;37(3):162–3.
[15] Mohamed SA, Fahmy MM. Binary image compression using efficient partitioning into rectangular regions. IEEE Trans Commun 1995;43(5):1888–93.
[16] Nelson M, Gailly JL. The Data Compression Book. 2nd ed. New York: M&T Books; 1996.
[17] Pramanik D, Jacob J, Augustine J. Lossless compression of images using minterm coding. In: Proc. Int. Conf. Information, Communications and Signal Processing, vol. 3, Singapore; 1997. p. 1570–74.
[18] Quddus A, Fahmy MM. A new compression technique for binary text images. In: Proc. 2nd IEEE Symp. Computers and Communications; July 1997. p. 194–8.


[19] Rimmer S. Supercharged Bitmapped Graphics. New York: Windcrest/McGraw-Hill; 1992.
[20] Robertson GR, Aburdene MF, Kozick RJ. Differential block coding of bi-level images. IEEE Trans Image Process 1996;5(9):1368–70.
[21] Sarkar D. Boolean function-based approach for encoding of binary images. Pattern Recognit Lett 1996;17(8):839–48.
[22] Starkey M, Bryant RE. Using ordered binary-decision diagrams for compressing images and image sequences. Technical Report CMU-CS-95-105, Carnegie Mellon University; January 1995.
[23] Wakerly JF. Digital Design: Principles and Practices. 2nd ed. Englewood Cliffs: Prentice-Hall; 1994.
[24] Wu H, Perkowski MA, Zeng X, Zhuang N. Generalized partially-mixed-polarity Reed–Muller expansion and its fast computation. IEEE Trans Comput 1996;45(9):1084–8.
[25] Yamazaki Y, Wakahara Y, Teramura H. Digital facsimile equipments "Quick-FAX" using a new redundancy technique. In: National Telecommunications Conference; 1976. p. 6.2.1–6.2.5.
[26] Yaroslavsky LP. Digital Picture Processing: An Introduction. Berlin: Springer-Verlag; 1985.

Bogdan J. Falkowski received the MSEE degree from Warsaw University of Technology, Poland, and the Ph.D. degree in Electrical and Computer Engineering from Portland State University, Oregon, USA. His industrial experience includes research and development positions at several companies. He then joined the Electrical and Computer Engineering Department at Portland State University. Since 1992 he has been with the School of Electrical and Electronic Engineering, Nanyang Technological University in Singapore, where he is currently an Associate Professor. In June 2002 he was a Visiting Professor at the Tampere International Center for Signal Processing, Tampere University of Technology, Finland. His research interests include digital signal and image processing, VLSI systems and design, switching circuits, testing, and design of algorithms.
He has published three book chapters and over 180 refereed journal and conference articles. He was a guest editor of a special issue on Spectral Techniques and Decision Diagrams, published in February 2002, of VLSI Design, An International Journal of Custom-Chip Design, Simulation and Testing. He is a senior member of the IEEE, a member of the international advisory committee for the International Conference on Applications of Computer Systems, and was technical chair for the IEEE International Conference on Information, Communications and Signal Processing held in December 1999 in Singapore.