
Extracting Nonrigid Motion and 3D Structure of Hurricanes from Satellite Image Sequences without Correspondences

Lin Zhou and Chandra Kambhamettu
Dept. of Computer and Info. Sci., University of Delaware, Newark, Delaware, 19716
Email: lzhou/[email protected]

Dmitry B. Goldgof
Dept. of Computer Sci. and Eng., University of South Florida, Tampa, Florida, 33620
Email: [email protected]

Abstract

Image sequences capturing Hurricane Luis through meteorological satellites (GOES-8 and GOES-9) are used to estimate hurricane-top heights (structure) and hurricane winds (motion). This problem is difficult not only due to the absence of correspondences but also due to the lack of depth cues in the 2D hurricane images (scaled orthographic projection). In this paper, we present a structure and motion analysis system, called SMAS. In this system, the hurricane images are first segmented into small square areas. We assume that each small area is undergoing similar nonrigid motion. A suitable nonrigid motion model for cloud motion is first defined. Then, a nonlinear least-squares method is used to fit the nonrigid motion model to each area in order to estimate the structure, motion model, and 3D nonrigid motion correspondences. Finally, the recovered hurricane-top heights and winds are presented along with an error analysis. Both structure and 3D motion correspondences are estimated to subpixel accuracy. Our results are very encouraging and have many potential applications in earth and space sciences, especially in cloud models for weather prediction.

1 Introduction

The estimation of hurricane-top structure and motion using meteorological satellite images is an important application area for computational methods developed in computer vision, especially in nonrigid motion analysis. Accurate hurricane heights and winds are important for a number of meteorological and climate applications [18] [6], such as cloud model verification, physically-based numerical weather prediction and data assimilation, cloud-wind height assignment [5] [7] [13], convective intensity estimation [17] [19], and radiation balance estimation for Mission to Planet Earth type climate baseline studies. However, it is a difficult task to develop automatic computational motion analysis algorithms capable of handling cloud motion. Most of the work in motion analysis [9] [15] [14] [21] has been based on the rigidity assumption that the shapes of objects do not change over time. But the rigidity assumption fails in numerous real-world examples, such as human movement, the motion of biological organs, and cloud motion. Many potential applications make nonrigid motion analysis very desirable [11]. Most of the work in nonrigid motion analysis so far relies on the complete structure of the scene (stereo or range) [10] [20] [22], "optic flow" estimation from 2D images [3] [2], or image registration [4]. Very limited work has been done to recover structure and nonrigid motion at the same time from time-varying images.

Balasubramanian et al. recently recovered 3D structure and nonrigid motion from 2D image sequences under perspective projection [1]. The estimation of hurricane-top structure and motion using meteorological satellite images is more difficult due to the complex dynamics of the imaging instruments and the underlying nonlinear phenomena of cloud formation and weather. Also, the scaled orthographic projection in cloud images makes the structure estimation problem even more difficult because depth information is missing. Some computational approaches to hurricane structure and motion estimation have been proposed in recent years. Palaniappan et al. [16] developed an algorithm based on 3D analysis. They use 3D data obtained from stereo analysis, and/or approximate 2D intensity images for depth information, in order to perform cloud tracking. Zhou et al. [23] used a sequence of 2D images to estimate not only motion but also 3D structure. Intensity analysis was first employed to find candidates for correspondence; then an affine motion model was fitted to find the cloud motion. In this paper, we extend the work in [23] by developing algorithms for estimating the structure and motion of the entire hurricane, and a system called SMAS has been implemented. Furthermore, experiments on Hurricane Luis image sequences are performed to generate the complete hurricane structure and 3D motion correspondences, along with an extensive error analysis.

The rest of the paper is organized as follows: Section 2 explains the data acquisition of the hurricane image sequences. Section 3 describes the algorithms for motion and structure analysis. Section 4 presents experimental results on satellite images of Hurricane Luis. Section 5 discusses validation of the results and error analysis. Finally, conclusions and future work are presented in Section 6.

2 GOES Hurricane Image Sequences

The current generation of Geostationary Operational Environmental Satellites (NOAA GOES-8, 9, 10) has an Imager instrument with five multispectral channels of high spatial resolution and very high dynamic range radiance measurements with 10-bit precision. The Imager can image with high spatial, temporal, radiometric, and spectral resolution [8], which makes it possible for automatic cloud-tracking algorithms to track mesoscale atmospheric phenomena such as hurricanes and severe convective storms. Also, GOES Imager super rapid scan sequences of mature hurricanes at 1-minute intervals give new capabilities to observe hurricane dynamics. Cloud details can now be seen due to the good contrast, in spite of very bright clouds. The 1-minute interval between images makes it possible to track features with high accuracy and reliability. These techniques are essential because they provide information that is independent of other meteorological measurements.

Hurricane Luis formed as a tropical depression on Aug. 28, 1995. After 3.5 days as a tropical storm, it intensified to a Category 1 hurricane on Aug. 31 and later became a Category 4 hurricane on Sep. 1. The track of Luis covered the outer regions of the Caribbean islands, Puerto Rico, and some of the Virgin Islands. Luis did not make landfall in the U.S. but went back out to sea on Sep. 7, and by Sep. 11 it was off the coast of Newfoundland, where it had weakened to a Category 1 hurricane. During Hurricane Luis, two satellites, GOES-8 and GOES-9, were focused on it, as shown in Fig. 1. GOES-9 used Super Rapid Scan Operations (SRSO), by which Hurricane Luis was scanned approximately once every minute, while GOES-8 used the routine scan schedule, providing one view approximately every 15 minutes. GOES-8 used the routine scan schedule because most of the southern hemisphere cannot be scanned when GOES Rapid Scan Operations (RSO) or Super Rapid Scan Operations (SRSO) is utilized. Thus, two image sequences are available for Hurricane Luis: one image every minute (GOES-9), and another every 15 minutes (GOES-8). The sub-satellite point of GOES-8 (GOES-East) is 75 degrees; its images have been re-mapped to GOES-9 (GOES-West), whose sub-satellite point is 135 degrees. Stereo analysis of these two image sequences has been done on a MasPar parallel machine, using a coarse-to-fine, hierarchical algorithm previously developed at NASA-Goddard [16]. This gives us a sequence of disparities every 15 minutes.

In this work, we utilize the GOES-9 image sequence (1 frame/minute) and the available disparities (1 frame/15 minutes) to estimate the structure, nonrigid motion, and 3D correspondences of Hurricane Luis for every frame (every minute). Our SMAS-generated 1-minute structure and motion analysis of hurricanes is of utmost importance to meteorologists, as GOES-8 is never operated in Super Rapid Scan Operations (SRSO) mode when GOES-9 is taking 1-minute images (and vice versa), which makes it impossible to have one-minute stereo pairs.

3 Algorithms

Cloud motion is complex nonrigid motion. Restricted classes of nonrigid motion, such as articulated motion, quasi-rigid motion, isometric motion, homothetic motion, and conformal motion, are not suitable for cloud motion; more general motion algorithms are needed. In this paper, the cloud images are segmented into small square areas. We assume that each small cloud region is undergoing nonrigid motion according to the same given model. The flow diagram of the algorithm is illustrated in Fig. 2.

3.1 Nonrigid Motion Model

In our algorithm, the most important step is to define a good nonrigid motion model for each small cloud area. In this paper, the affine motion model is chosen because it is a general nonrigid motion model and has more power in describing nonrigid motion. In addition, the affine motion model has been experimentally proven to be a suitable model for small local cloud motion [16]. The corresponding algebraic relations used to derive structure and nonrigid motion from cloud images are explained below.

Figure 1. GOES Hurricane Image Sequences: GOES-8 (one image per 15 minutes) and GOES-9 (one image per minute).

Figure 2. Flow Diagram of the Algorithm: the image sequence (GOES-9, 1 frame/minute), the disparities (one per 15 minutes), and the nonrigid motion model feed a nonlinear fitting stage, which outputs the structure, nonrigid motion, and 3D correspondences.

Consider a point P_l = (x_l^1, y_l^1, z_l^1) in frame 1 moving to (x_l^2, y_l^2, z_l^2) in frame 2 after a nonrigid motion, then to (x_l^3, y_l^3, z_l^3) in frame 3, and so on. Let M^i describe the motion between frames i and i+1. Cloud images can be considered scaled orthographic projections because the distance between the satellite and the clouds is very large (approximately 30,000 kilometers); thus, perspective effects on points in cloud images are almost negligible. For a point (x_l, y_l, z_l), we have:

X_l^i = x_l^i,  Y_l^i = y_l^i,   (1)

where (x_l^i, y_l^i) is in 3D space and (X_l^i, Y_l^i) is the corresponding projected point in the cloud image. We assume that the scaling for (X_l^i, Y_l^i) and (x_l^i, y_l^i) is the same. From the nature of the motion, the following equations can be derived:

x_l^{i+1} = M^i(x_l^i, y_l^i, z_l^i),
y_l^{i+1} = M^i(x_l^i, y_l^i, z_l^i),
z_l^{i+1} = M^i(x_l^i, y_l^i, z_l^i),   (2)

where M^i is an affine displacement function, which can be defined as follows:

x_l^{i+1} = a1 x_l^i + b1 y_l^i + c1 z_l^i + d1,
y_l^{i+1} = a2 x_l^i + b2 y_l^i + c2 z_l^i + d2,
z_l^{i+1} = a3 x_l^i + b3 y_l^i + c3 z_l^i + d3.   (3)

Then the following equations can be obtained from Eq. 1 and Eq. 3:

x_l^{i+1} = a1 X_l^i + b1 Y_l^i + c1 z_l^i + d1,
y_l^{i+1} = a2 X_l^i + b2 Y_l^i + c2 z_l^i + d2,
z_l^{i+1} = a3 X_l^i + b3 Y_l^i + c3 z_l^i + d3.   (4)

According to Eq. 4, the motion between successive frames is assumed to follow the same motion model. For the cloud application, it was experimentally found that a small cloud region moves smoothly but not at constant speed [16]. Hence, an additional scaling factor s_i can be added to Eq. 4 as follows:

x_l^{i+1} = s_i (a1 X_l^i + b1 Y_l^i + c1 z_l^i + d1),
y_l^{i+1} = s_i (a2 X_l^i + b2 Y_l^i + c2 z_l^i + d2),
z_l^{i+1} = s_i (a3 X_l^i + b3 Y_l^i + c3 z_l^i + d3).   (5)

Eq. 5 gives the constraint equations for tracking a point across a sequence of images using the affine motion model. Similar equations can be derived for every pair of successive frames.
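To make the scaled affine update concrete, here is a minimal sketch of one step of Eq. 5. The parameter values below are hypothetical (in SMAS they are estimated by the fitting procedure of Section 3.2); the near-identity choice mirrors the paper's initial guess.

```python
import numpy as np

def affine_step(X, Y, z, A, d, s):
    """One scaled affine motion step (Eq. 5): map an observed image point
    (X, Y) with depth z in frame i to its predicted 3D position in frame i+1.
    A holds a1..c3 as a 3x3 matrix, d the translations d1..d3, and s the
    per-frame scale factor."""
    return s * (A @ np.array([X, Y, z]) + d)

# Near-identity motion, matching the paper's initial guess (A = I, s = 1),
# with small hypothetical translations d1, d2, d3.
A = np.eye(3)
d = np.array([0.1, 0.1, 0.0])
p_next = affine_step(10.0, 20.0, 1.5, A, d, s=1.0)  # -> [10.1, 20.1, 1.5]
```

With the identity matrix and unit scale, the point simply shifts by the small translations, which is why this configuration serves as a safe starting point for the nonlinear solver.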

3.2 Motion Model Fitting

3.2.1 Minimization Method and Error-of-Fit Function

Balasubramanian et al. discussed the minimum number of data points required for an analytical solution to Eq. 5 [1]. However, in this paper we want to track more than the minimum number of points; rather, we track all the points in a small cloud area. This, of course, produces more equations than unknowns. Thus, the Levenberg-Marquardt nonlinear least-squares method is used to solve Eq. 5 and fit a nonrigid motion model for each small cloud region.

To make the algorithm robust and able to find a good solution, a good error function that measures the difference between a learned model and the given data set is very important. However, in the absence of correspondences, it is almost impossible to define such an error function directly, so robust algorithms for finding correspondence candidates from the cloud image sequences are very desirable. In this paper, two methods for finding correspondence candidates are evaluated. The first is correlation-based: each point in the first frame is searched within a small neighborhood (3x3) in the second frame, and the three points with the highest correlation match scores are chosen as the correspondence candidates. The second is based on optic flow: the optic flow constraint equation is used to evaluate all the points within the search area (3x3), and the three points with the highest evaluations are chosen as the correspondence candidates. Both methods have been tested for our purposes. Interestingly, it was found that although the optic-flow-based method is faster, its results are not as good as those of the correlation-based method. The reason is that optic flow is not uniquely determined by local information; since each small area is processed independently in our case, no global information is applied to any small area. Thus, we preferred the cross-correlation-based measure for obtaining the correspondence candidates.¹ Using the correspondence candidates, we define the EOF function by the minimal distance between the correspondence candidates and the estimate obtained from Eq. 5:

EOF = sum_{j=1}^{M_frame} sum_{i=1}^{N_data} min( (X_{ij}^1 - x_{ij})^2 + (Y_{ij}^1 - y_{ij})^2,
                                                  (X_{ij}^2 - x_{ij})^2 + (Y_{ij}^2 - y_{ij})^2,
                                                  (X_{ij}^3 - x_{ij})^2 + (Y_{ij}^3 - y_{ij})^2 ),   (6)

where (X_{ij}^1, Y_{ij}^1), (X_{ij}^2, Y_{ij}^2), (X_{ij}^3, Y_{ij}^3) are the first, second, and third correspondence candidates for point (X_i, Y_i) (in the first frame), respectively, and (x_{ij}, y_{ij}) is the correspondence estimate obtained from Eq. 5.

3.2.2 Initial Guesses

Initial guesses for all the unknowns in Eq. 5 are required for the optimization; the unknowns consist of 12 motion parameters, the depth of every point in the first frame, and s_i for each pair of successive images. It is known that almost all nonlinear system solvers are highly sensitive to initial guesses. For numerical simplicity and to ensure convergence, two eliminations of unknowns are performed in our method. As noted above, disparities of the hurricane cloud images are available every 15 frames (beginning with the first frame). Hence, the depth unknowns in the first frame can be eliminated by fixing them to these disparities. This is very important for our algorithm: due to the poor depth constraints in cloud images, much stronger constraints on depth are necessary (discussed further in the following subsection). Also, we eliminate the translational unknowns by setting the translation components d1, d2, d3 to small constants. This also ensures that the trivial solution, in which all other unknowns are zero, cannot be reached. For the other nine motion parameters and s_i, initial values are chosen assuming that cloud motion is very small between two successive frames (a1 = 1, a2 = 0, a3 = 0, b1 = 0, b2 = 1, b3 = 0, c1 = 0, c2 = 0, c3 = 1, s_i = 1).

3.2.3 Depth Constraint

Although the depth unknowns in the first frame are eliminated by fixing them to disparities, the error function still has infinitely many global minima, because Eq. 6 contains no information about the change of cloud-top height in the following frames. Clearly, we need some restriction on the range of values for the depth. For cloud motion, the cloud-top height will not change much in one minute, which means the depth difference between two successive frames has an upper bound. Based on this observation, we can specify a small range for the depth of each point:

z_{i,j-1} - a <= z_{i,j} <= z_{i,j-1} + a,   (7)

where z_{i,j} is the depth of the ith data point in the jth frame, z_{i,j-1} is its depth in the (j-1)th frame, and 2a is the allowed depth range. In our experiments, a = 0.4 (in disparity units) was found to yield good results. Finally, we incorporate this depth constraint into the minimization process using a penalty method [12].

¹ Currently, we are in the process of incorporating global information into our error-of-fit function and using optic flow for our candidate hypotheses.
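As a rough sketch of the two pieces above (the paper's actual implementation is in C; the function names and the penalty weight mu here are our own), the correlation-based candidate search and the penalized error-of-fit of Eqs. 6 and 7 might look like:

```python
import numpy as np

def correspondence_candidates(f1, f2, x, y, patch=2, search=1, k=3):
    """Correlation-based candidate search: compare a small patch around
    (x, y) in frame f1 against every position in a (2*search+1)^2 (here 3x3)
    neighborhood of frame f2, keeping the k best-matching positions."""
    t = f1[y-patch:y+patch+1, x-patch:x+patch+1].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-9)          # normalize the template
    scores = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = f2[y+dy-patch:y+dy+patch+1, x+dx-patch:x+dx+patch+1].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-9)
            scores.append(((t * w).mean(), (x + dx, y + dy)))
    scores.sort(key=lambda s: s[0], reverse=True)   # highest correlation first
    return [pos for _, pos in scores[:k]]

def objective(candidates, predictions, z_prev, z_curr, a=0.4, mu=100.0):
    """EOF of Eq. 6 plus a quadratic penalty enforcing the depth bound of
    Eq. 7 (penalty method). candidates: (n, 3, 2) candidate positions per
    tracked point; predictions: (n, 2) positions predicted by the affine
    model (Eq. 5); z_prev, z_curr: (n,) depths in consecutive frames;
    mu is a hypothetical penalty weight."""
    d2 = ((candidates - predictions[:, None, :]) ** 2).sum(axis=2)  # (n, 3)
    eof = d2.min(axis=1).sum()              # min over the three candidates
    violation = np.maximum(0.0, np.abs(z_curr - z_prev) - a)
    return eof + mu * (violation ** 2).sum()
```

In SMAS this objective would be minimized with a Levenberg-Marquardt solver over the motion parameters, depths, and per-frame scales; the penalty term is zero whenever a depth stays within the allowed band and grows quadratically with the violation outside it.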

3.3 Postprocessing by Using a Smoothness Force

Cloud motion in hurricanes is mostly smooth, with gradual variations in velocity for the most part. Since the above optimization scheme performs motion analysis on each small cloud area independently, it suffers from discontinuities across borders: the recovered cloud velocities may change a lot from one small area to another. Hence, an additional constraint is necessary in order to regularize the recovered motion and depth. In this paper, a postprocessing technique using a smoothness force is proposed. The smoothness force is defined as

E_smoothness = lambda1 (v_x - v̄_x) + lambda2 (v_y - v̄_y) + lambda3 (v_z - v̄_z),   (8)

where v_x, v_y, v_z are the velocities in the x, y, z directions in a small cloud area, v̄_x, v̄_y, v̄_z are the corresponding mean values over its neighboring areas (3x3), and lambda1, lambda2, lambda3 are small positive constants. With the smoothness force, we postprocess the initial results obtained by Eq. 6. The complete SMAS algorithm, including the postprocessing step, is given below.

1. For the entire cloud, minimize Eq. 6 to get initial results for each small cloud area.
2. Compute v̄_x, v̄_y, v̄_z for each small cloud area using these results.
3. For the entire cloud, minimize Eq. 6 again, incorporating the smoothness force for each small cloud area.
4. If the recovered motion and depth are not smooth enough (using a threshold), go to step 2; otherwise stop.

For each small area, the smoothness force constrains the recovered motion and structure to be consistent with its neighboring areas. This technique is very important for our algorithm. Fig. 3 compares the results with and without postprocessing: without it, the recovered depth can change dramatically from one small area to another, whereas with it the results are smooth.
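The smoothness force of Eq. 8 can be sketched as follows. This is a minimal illustration: the lambda values are hypothetical small constants, and border areas here simply average over whichever neighbors exist.

```python
import numpy as np

def neighbor_means(v):
    """Mean of each cell's 3x3 neighborhood (borders use the available
    cells), giving the bar terms of Eq. 8 over neighboring cloud areas."""
    h, w = v.shape
    out = np.empty_like(v, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = v[max(0, i-1):i+2, max(0, j-1):j+2].mean()
    return out

def smoothness_force(vx, vy, vz, lam=(0.1, 0.1, 0.1)):
    """Smoothness force of Eq. 8 per small cloud area; lam holds the small
    positive constants lambda1..lambda3 (the values here are hypothetical)."""
    return (lam[0] * (vx - neighbor_means(vx))
            + lam[1] * (vy - neighbor_means(vy))
            + lam[2] * (vz - neighbor_means(vz)))
```

A velocity field that already agrees with its neighborhood means contributes no force, so a perfectly smooth solution is left unchanged by the postprocessing iteration.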

4 Experimental Results

Figure 3. Comparison of the results with and without postprocessing: (a) results without postprocessing; (b) results with postprocessing.

Our system SMAS, implemented in C (on an SGI platform), performs nonrigid motion and structure analysis on the cloud image sequences captured by GOES. To the best of our knowledge, this is the first reported system that extracts cloud-top heights and winds from a sequence of cloud images automatically. Extensive experiments on the GOES image sequences of Hurricane Luis have been performed. The data, provided by NASA-Goddard, include 490 frames of Hurricane Luis from 09-06-95 at 1023 UTC to 09-06-95 at 2226 UTC. Although five spectral bands are available for each frame, only the visible channel, with 10 bits per pixel, is used in our experiments; in future experiments we will examine how the other spectral bands can help determine structure and motion. In this paper, we present the results for a sequence of 12 cloud images (from 1621 UTC to 1634 UTC). 1621 UTC is used as the first frame in the experiments, as it has a stereo occurrence (from GOES-8 and GOES-9). Fig. 4 shows the intensity images from 1621 UTC to 1634 UTC (1-minute images from GOES-9), which are the input to our system. The recovered motion and structure are shown in Fig. 6 and Fig. 5, respectively. The recovered cloud heights are in disparity units (pixel shifts). It was found, through mathematical derivations using the positions of the satellites, that the true cloud heights can be calculated by scaling the disparity by a constant: height(km) = 1.78097 x disparity [8].
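The disparity-to-height conversion above is a single scaling, shown here as a small helper (the constant is the one derived in [8] from the satellite geometry):

```python
DISPARITY_TO_KM = 1.78097  # scaling constant from the satellite geometry [8]

def cloud_height_km(disparity):
    """Convert a recovered cloud-top disparity (in pixel shifts) to an
    approximate height in kilometers: height(km) = 1.78097 * disparity."""
    return DISPARITY_TO_KM * disparity
```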

5 Validations

In order to verify the results, we pursue every available means of validation, because it is almost impossible to obtain ground truth for cloud-top heights and motion. We found that our motion estimates are quite accurate when compared to manual analysis. In this section, we elaborate on our structure estimation. First, we manually tracked the stereo correspondences at 3 original frames and compared them against the estimated structure. Next, we compared the SMAS-estimated structure against the automatic stereo analysis results at the same 3 frames. In both comparisons, our results are very close to the corresponding structures (within 1 pixel) for most areas of the hurricane. A further comparison is made between the recovered cloud-top heights (and the stereo analysis results) and the IR (infrared) cloud-top heights, as IR is believed to be closest to the ground truth for cloud-top heights.

Figure 4. Input GOES images of Hurricane Luis, Images 1-12 (from 1621 UTC to 1634 UTC)

Figure 5. Recovered cloud-top heights of Hurricane Luis, Images 1-12 (from 1621 UTC to 1634 UTC)

(IR heights are most reliable where the clouds are dense.) Fig. 7 shows the IR image and the error distribution of our SMAS-generated results for 1634 UTC. Table I gives error statistics (means and standard deviations) for both the SMAS-generated results and the stereo analysis results for some particular areas. For most areas of the hurricane, our recovered results have smaller errors than the stereo analysis results. In addition, the larger errors in our estimated results occur in the hurricane eye and at the hurricane edge. This distribution fits our affine assumption well, because the assumption may fail in these two areas due to the presence of fluid motion there. Also, IR is not a good estimate of height in these areas, since they do not consist of thick clouds. For most parts of the clouds, where IR heights are reliable and the affine motion model fits, the errors of our estimated results are very small (0.0671).

6 Conclusions and Future Work

Figure 6. Needle graph of the recovered cloud motion (note: this is the 2D projection of the recovered 3D motion)

This paper presents the recovery of structure and nonrigid motion from a sequence of 2D cloud images. The main contribution of this research is that it not only addresses the problem of recovering structure from scaled orthographic projection views, but also performs nonrigid motion estimation to obtain correspondences by combining intensity analysis with motion analysis.

Figure 7. (a) IR image of 1634 UTC; (b) Error distribution.

TABLE I. Errors for some particular areas (Error1 is for the SMAS-estimated results and Error2 is for the stereo analysis results).

Area             | Error1 mean | Error1 S.D. | Error2 mean | Error2 S.D.
Hurricane body   | 0.0671      | 0.0521      | 0.141       | 0.0345
Hurricane eye    | 0.2602      | 0.233       | 0.493       | 0.199
Hurricane edge   | 0.175       | 0.153       | 0.121       | 0.115

The results are very encouraging and have many potential applications in earth and space sciences, especially in cloud models for weather prediction. This work can also be readily applied to data under perspective projection, such as lip motion, human facial expressions, hand motion, and tongue motion. Our future directions include the following:

1. Besides the affine nonrigid motion model, other general nonrigid motion models, such as fluid motion models and higher-order quadric models, will be utilized to construct a hierarchical nonrigid motion analysis system.
2. A robust parallel implementation of the algorithms will be introduced to improve efficiency.

Our ultimate goal is a set of techniques that can perform cloud classification and apply appropriate motion models in cloud tracking and in nonrigid motion and structure analysis for climate studies.

Acknowledgments

Research funding was provided by National Science Foundation Grant NSF IRI-9619240. The authors thank Dr. Fritz Hasler of NASA Goddard and Dr. K. Palaniappan of the University of Missouri-Columbia for providing the data and for useful discussions.

REFERENCES

[1] Ramprasad Balasubramanian, Dmitry B. Goldgof, and Chandra Kambhamettu. Tracking of nonrigid motion and 3D structure from 2D image sequences without correspondences. Proc. ICIP, Chicago, 1:933-937, Oct. 1998.
[2] Michael J. Black and P. Anandan. A framework for the robust estimation of optical flow. ICCV-93, pages 231-236, May 1993.
[3] A. Giachetti and V. Torre. Optical flow and deformable objects. Proceedings of the 5th ICCV, pages 706-711, 1995.
[4] Michael Gleicher. Projective registration with difference decomposition. CVPR'97, pages 331-337, June 1997.
[5] A. F. Hasler. Stereographic observations from satellites: An important new tool for the atmospheric sciences. Bull. Amer. Meteor. Soc., 62:194-212, 1981.
[6] A. F. Hasler. Stereoscopic measurements. In P. K. Rao, S. J. Holms, R. K. Anderson, J. Winston, and P. Lehr, editors, Weather Satellites: Systems, Data and Environmental Applications, Section VII-3, pages 231-239. Amer. Meteor. Soc., Boston, MA, 1990.
[7] A. F. Hasler and K. R. Morris. Hurricane structure and wind fields from stereoscopic and infrared satellite observations and radar data. J. Climate Appl. Meteor., 25:709-727, 1986.
[8] A. F. Hasler, K. Palaniappan, C. Kambhamettu, P. Black, E. Uhlhorn, and D. Chesters. High-resolution wind fields within the inner core and eye of a mature tropical cyclone using a long series of GOES one-minute images and a massively parallel computer. Bulletin of the American Meteorological Society, 1998.
[9] T. S. Huang. Motion analysis. In Encyclopedia of Artificial Intelligence, volume 1, pages 620-632. John Wiley and Sons, New York, 1986.
[10] Chandra Kambhamettu, Dmitry B. Goldgof, and Matthew He. Determination of motion parameters and estimation of point correspondences in small nonrigid deformations. Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 943-946, June 1994.
[11] Chandra Kambhamettu, Dmitry B. Goldgof, Demetri Terzopoulos, and Thomas S. Huang. Nonrigid motion analysis. In Tzay Young, editor, Handbook of PRIP: Computer Vision, volume II, pages 405-430. Academic Press, San Diego, California, 1994.
[12] D. G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, 1984.
[13] P. Minnis, P. W. Heck, and E. F. Harrison. The 27-28 October 1986 FIRE IFO cirrus case study: Cloud parameter fields derived from satellite data. Monthly Weather Review, 118:2426-2447, 1990.
[14] A. N. Netravali, T. S. Huang, A. S. Krishnakumar, and R. J. Holt. Algebraic methods in 3-D motion estimation from two-view point correspondences. International Journal of Imaging Systems and Technology, 1:78-99, 1989.
[15] J. Oliensis. Multiframe structure from motion in perspective. Workshop on the Representations of Visual Scenes, pages 77-84, 1995.
[16] K. Palaniappan, Chandra Kambhamettu, A. Frederick Hasler, and Dmitry B. Goldgof. Structure and semi-fluid motion analysis of stereoscopic satellite images for cloud tracking. Proceedings of the International Conference on Computer Vision, pages 659-665, 1995.
[17] R. A. Mack, A. F. Hasler, and R. F. Adler. Thunderstorm cloud top observations using satellite stereoscopy. Monthly Weather Review, 111:1949-1964, 1983.
[18] H. K. Ramapriyan, J. P. Strong, Y. Hung, and C. W. Murray, Jr. Automated matching of pairs of SIR-B images for elevation mapping. IEEE Trans. Geosciences and Remote Sensing, 24(4):462-472, 1986.
[19] E. Rodgers, R. Mack, and A. F. Hasler. A satellite stereoscopic technique to estimate tropical cyclone intensity. Monthly Weather Review, 111:1599-1610, 1983.
[20] L. V. Tsap, D. B. Goldgof, and S. Sarkar. Human skin and hand motion analysis from range image sequences using nonlinear FEM. IEEE Nonrigid and Articulated Motion Workshop, pages 80-89, June 1997.
[21] Xiaoguang Wang, Yong-Qing Cheng, Robert T. Collins, and Allen R. Hanson. Determining correspondences and rigid motion of 3-D point sets with missing data. IEEE Computer Vision and Pattern Recognition, pages 252-257, June 1996.
[22] Liang Zhao and Chuck Thorpe. Qualitative and quantitative car tracking from a range image sequence. International Conference on Computer Vision and Pattern Recognition (CVPR'98), pages 496-501, June 1998.
[23] Lin Zhou, Chandra Kambhamettu, and Dmitry B. Goldgof. Structure and nonrigid motion analysis of satellite cloud images. ICVGIP'98, pages 285-291, December 1998.
