
Automatic Reconstruction of Unknown 3D Objects Based on the Limit Visual Surface and Trend Surface

Xiaolong Zhou 1, Bingwei He 1,2, and Y.F. Li 3

1 School of Mechanical Engineering & Automation, Fuzhou University, Fuzhou, China
2 State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin, China
3 Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong
[email protected], [email protected], [email protected]

Abstract. This paper presents a new planning approach for generating unknown 3-D models automatically. The algorithm combines limit visual surfaces with trend surfaces and evaluates the suitability of candidate viewpoints as the next best view (NBV) for scanning coverage. The limit visual surfaces and trend surfaces are modelled from the known boundary-region data obtained at the initial view. An optimal design method is used to obtain the maximal visible area of the next viewpoint and the corresponding pose parameters in the left and right planning processes respectively, and the position yielding the maximal visible area is defined as the next best view position. The reconstruction of a real model shows that the method is effective in practical implementation.

Keywords: View planning, Limit visual surface, Trend surface, Next best view, Three-dimensional reconstruction.

M. Xie et al. (Eds.): ICIRA 2009, LNAI 5928, pp. 1217–1223, 2009. © Springer-Verlag Berlin Heidelberg 2009

1 Introduction

Automatic reconstruction of unknown 3-D objects plays an important role in machine vision, object recognition, and automatic modeling. View planning for automatic measurement of a 3-D object searches for an optimal spatial pose through a sequence of viewpoints. Given the sensor configuration and task specifications, it determines the fewest viewpoints and their best spatial distribution so that the measurement task achieves the highest efficiency. Recently, many methods for determining the location of the next best viewpoint have been proposed for different kinds of sensors [1-7]. For example, Bottino and Laurentini [3] present a general, interactive, object-specific volumetric approach based on a necessary condition for the best possible reconstruction to have been performed; however, their algorithms suit only simple convex polyhedra, not general polyhedra, and especially not nonconvex ones. Tarbox and Gottschlich [4] propose a model-based approach in which a measurability matrix is computed and the next view is chosen from glancing angles and the parts that are "difficult to view." The strategy proposed in [5] is similar but adds an incremental process and a constraint on sensor measurement error. Recently, Scott [6] presents a new view planning algorithm based on the modified measurability matrix (3M), an enhancement and extension of Tarbox's measurability matrix (2M) concept. Sablatnig et al. [7] present an approach to next view planning for shape-from-silhouette 3D reconstruction with a minimal number of different views; the next view position is determined by comparing the current image with the acquired image, but this leads to long computing times when the number of viewpoints is large. The goal of this paper is to develop a new strategy of automatic viewpoint planning for unknown 3-D object measurement and reconstruction.

2 The Visual Region of the Vision System

The developed laser-line scanning vision system is shown in Fig. 1: it consists of an analogue camera, a laser line generator, and a stepper-motor-driven linear slide providing X and Z scanning motions for the camera and laser. The part is placed on a rotary table that can rotate 360 degrees about Z, with a linear slide providing Y motion. The camera and laser line generator are mounted on the stepper-motor-driven linear slide, with the laser line perpendicular to the rotary workstation and an angle between the camera's optical axis and the laser line. The objective of the vision system is to provide a 3D profile of the object in a coordinate system defined on the rotary table.

Fig. 1. The laser line scanning vision system. A (B) denotes the nearest (farthest) measurement position of the vision system. OT is the rotary table center.

2.1 Determination of the Limit Visual Region of the Vision System

A 3D object surface is measured by the vision system at a certain distance, and a 3D surface is fitted simultaneously. If the fitting accuracy falls within the allowable error range, this distance is regarded as an effective focus distance; otherwise it is invalid. With this method it is easy to obtain the effective depth of field (DOF) range of the vision system by moving the workstation along the Y direction (as shown in Fig. 1). To simplify the fitting of the measurement data, a planar object is used here and the fitting accuracy is set to 0.02 mm. From the experiments, the nearest and farthest measurement distances are 64 mm and 186 mm respectively, so the effective DOF of the system is 122 mm.
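The calibration procedure above can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation; the `(distance, error)` samples below are illustrative values chosen to reproduce the reported limits, not the authors' raw data:

```python
# Sketch of the effective depth-of-field (DOF) calibration: a planar target is
# scanned at increasing stand-off distances along Y, and a distance counts as an
# effective focus distance when the planar-fit error is within tolerance.
TOLERANCE_MM = 0.02  # allowable fitting accuracy from the paper

def effective_dof(samples, tol=TOLERANCE_MM):
    """Return (nearest, farthest, dof) over distances whose fit error is within tol."""
    good = sorted(d for d, err in samples if err <= tol)
    if not good:
        return None
    return good[0], good[-1], good[-1] - good[0]

# Illustrative (made-up) calibration samples:
samples = [(50, 0.05), (64, 0.02), (100, 0.01), (150, 0.015), (186, 0.02), (200, 0.04)]
print(effective_dof(samples))  # -> (64, 186, 122), matching the reported 122 mm DOF
```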


If a surface point is beyond the field of view (FOV), it is not detectable by a common CCD camera. In this vision system, the FOV must satisfy two constraints: 1) the laser line must not be occluded by the object surface, so that it can be detected by the CCD camera; 2) the CCD camera itself must not be occluded by the object surface. The FOV is therefore determined by eliminating these two occlusions. The angle between the normal vector of the object surface and the laser plane is defined as the left (right) visual angle, and the left (right) limit visual angle is defined as the maximal rotary angle at which the projection of the laser on the plane can still be detected by the CCD camera at a given position. From the experiments, the relationships between the left (right) limit visual angle $\theta_i^l$ ($\theta_i^r$) and the measurement distance $d$ are obtained as (1) and (2) respectively:

$$\theta_i^l = \begin{cases} 0.0053d + 0.2731, & 64 \le d \le 160 \\ 1.1211, & 160 < d \le 186 \end{cases} \quad (1)$$

$$\theta_i^r = \begin{cases} 0.0103d + 0.2278, & 64 \le d \le 116 \\ 1.4226, & 116 < d \le 186 \end{cases} \quad (2)$$

where $\theta_i^l$ and $\theta_i^r$ are in radians. Assume the point $(x, y, z)$ lies within the effective focus depth; a limit visual curve on the XY plane is then obtained as described above. As $z$ varies, a surface is constructed from the series of limit visual curves; this surface is denoted the limit visual surface. It represents the limit visual positions of the object surface that are still visible to our vision system. The equations of the left and right limit visual curves, given as (3) and (4) below, are obtained from equations (1) and (2).
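Equations (1) and (2) translate directly into a small piecewise helper. This is a sketch of our own; the function names and the visibility check are illustrative, not the paper's code:

```python
def limit_visual_angle(d, side="left"):
    """Piecewise-linear limit visual angle (radians) versus measurement
    distance d (mm), per Eqs. (1) and (2). Valid for 64 <= d <= 186."""
    if not 64 <= d <= 186:
        raise ValueError("distance outside the effective DOF (64-186 mm)")
    if side == "left":
        return 0.0053 * d + 0.2731 if d <= 160 else 1.1211
    return 0.0103 * d + 0.2278 if d <= 116 else 1.4226

def is_visible(angle, d, side="left"):
    """A surface point is detectable only if the angle between its normal and
    the laser plane stays within the limit visual angle at its distance."""
    return angle <= limit_visual_angle(d, side)
```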

$$\begin{cases} x = -188.7\,\ln\!\left(\sin\dfrac{y + d_0 + 51.5}{188.7}\right) + C_1, & y_0 \le y \le 160 - d_0 \\ y = -2.1x + C_2, & y > 160 - d_0 \end{cases} \quad (3)$$

$$\begin{cases} x = 97.1\,\ln\!\left(\sin\dfrac{y + d_0 + 22.1}{97.1}\right) + C_3, & y_0 \le y \le 116 - d_0 \\ y = 6.7x + C_4, & y > 116 - d_0 \end{cases} \quad (4)$$
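The left limit visual curve of Eq. (3) can be evaluated numerically as follows. This is a sketch under our own naming; the integration constants C1 and C2 must be fixed from the known boundary point, and the right curve of Eq. (4) is analogous with constants 97.1, 22.1, 6.7:

```python
import math

def left_limit_curve_x(y, d0, C1=0.0):
    """x-coordinate of the left limit visual curve for y0 <= y <= 160 - d0
    (first branch of Eq. 3). C1 is fixed by the known boundary point."""
    return -188.7 * math.log(math.sin((y + d0 + 51.5) / 188.7)) + C1

def left_limit_curve_far(x, C2=0.0):
    """Linear branch of Eq. (3) for y > 160 - d0."""
    return -2.1 * x + C2
```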

Therefore, once a prior 3D data cloud of the object surface is available, the limit visual surface of the vision system is obtained from a series of limit visual curves computed from equations (3) and (4).

2.2 Obtaining the Trend Surface of the Object

For an unknown object with a complex surface shape, the limit visual surface model alone cannot reflect the maximal information about the unknown object, so we incorporate a trend surface to predict the unknown object surface. The trend surface is a function of two orthogonal coordinate axes and can be represented by

$$z = f(x, y) + e \quad (5)$$


in which the variable $z$ at the point $(x, y)$ is a function of the coordinates plus an error term $e$. This expression is the generalized form of the General Linear Model (GLM), which is the basis of most trend methods. The function $f(x, y)$ is usually expanded or approximated by various terms to generate polynomial equations. For an n-order three-dimensional surface, the power series has the form

$$f(u, v) = \sum_{i=0}^{n}\sum_{j=0}^{i} b_{ij}\, u^{j} v^{i-j} \quad (6)$$

where $u$ and $v$ are the coordinates in an arbitrary orthogonal reference system and $b_{ij}$ is the constant coefficient of the surface ($b_{00}$ is the surface base).
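A least-squares fit of the trend surface of Eq. (6) can be sketched with NumPy. The function names and the synthetic example surface are our own illustration, not the paper's implementation:

```python
import numpy as np

def trend_surface_fit(u, v, z, n=2):
    """Least-squares estimate of the coefficients b_ij of the n-order trend
    surface f(u, v) = sum_{i=0..n} sum_{j=0..i} b_ij * u**j * v**(i-j)."""
    terms = [(i, j) for i in range(n + 1) for j in range(i + 1)]
    A = np.column_stack([u**j * v**(i - j) for i, j in terms])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return dict(zip(terms, coeffs))

def trend_surface_eval(coeffs, u, v):
    """Evaluate the fitted trend surface at (u, v)."""
    return sum(b * u**j * v**(i - j) for (i, j), b in coeffs.items())

# Example: recover a simple quadratic-span surface z = 1 + 2u + 3v + 0.5*u*v
rng = np.random.default_rng(0)
u, v = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
z = 1 + 2*u + 3*v + 0.5*u*v
fit = trend_surface_fit(u, v, z, n=2)
print(round(trend_surface_eval(fit, 0.2, 0.3), 6))  # -> 2.33
```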

3 The View Planning Method

Suppose the initial knowledge of the model has been acquired at the initial view (shown as the solid line A1B1C1D1 in Fig. 2). The view planning strategy presented here is:

Fig. 2. The knowledge of initial view and its limit visual surfaces and trend surfaces

Fig. 3. The relationship between the limit visual surface and the trend surface (cases (a)-(d)). T denotes the trend surface, L denotes the limit visual surface, and Pmax is the farthest measurement position of the vision system. I is the intersection point of the limit visual surface with the trend surface.


1) First, construct the left (right) limit visual surface D1E1 (A1F1) and trend surface D1G1 (A1H1) from the partially acquired object surface (shown in Fig. 2), and determine the predicted surface used to plan the next view from the relationship between the limit visual surface and the trend surface (shown in Fig. 3). In Fig. 2, the left (right) predicted surface is D1G1 (A1F1). During the determination of the next viewpoint, not only should the maximal predicted surface be detected, but the boundary data of the known model must also remain visible so that the data points can be registered easily.

2) Second, take the visible area of the left (right) predicted surface at the next viewpoint as the objective function, the rotation angle θ and translation distance d as the variables, and the visibility of the known boundary data as the constraint. Then apply the optimal design method to obtain the maximal visible area S_l (S_r) and the corresponding position parameters θ_l (θ_r) and d_l (d_r) in the left (right) planning process, and define this position as an NBV candidate position.

3) Finally, compare the visible areas obtained at the two NBV candidate positions and select the larger one as the final NBV position.
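Steps 2) and 3) can be sketched as a simple grid search standing in for the paper's optimal design method. `visible_area` and `boundary_visible` are hypothetical callables supplied by the vision model; the paper itself does not prescribe this search strategy:

```python
def nbv_candidate(visible_area, boundary_visible, thetas, dists):
    """Step 2): maximize the visible predicted-surface area over rotation
    angle theta and translation d, subject to the known boundary data
    staying visible (needed for registration)."""
    best = None
    for theta in thetas:
        for d in dists:
            if not boundary_visible(theta, d):   # constraint
                continue
            area = visible_area(theta, d)        # objective
            if best is None or area > best[0]:
                best = (area, theta, d)
    return best  # (S, theta, d), or None if the constraint is never met

def next_best_view(left, right):
    """Step 3): of the left and right candidates, pick the larger visible area."""
    candidates = [c for c in (left, right) if c is not None]
    return max(candidates, key=lambda c: c[0], default=None)
```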

4 Experiment

The experiment was carried out in our laboratory for automatic reconstruction of a part model. The initial knowledge of the part model was acquired from a first view at a random position (shown in Fig. 4(a)). The left and right limit visual surfaces and trend surfaces were then constructed (shown in Fig. 4(b)), and the experimental data for the NBV candidate positions of viewpoint 1 are shown in Table 1 (counterclockwise rotation of the rotary table is taken as positive, clockwise as negative).

Table 1. The NBV candidate position data of initial viewpoint 1

| d_0 (mm) | θ_1^l (rad) | d_1^l (mm) | S_1^l (mm²) | θ_1^r (rad) | d_1^r (mm) | S_1^r (mm²) |
|----------|-------------|------------|-------------|-------------|------------|-------------|
| 160      | 1.36        | 67.8       | 1.93 × 10⁴  | 1.69        | 63.0       | 1.83 × 10⁴  |

Since S_1^l > S_1^r in Table 1, the next best view was the position reached by rotating the rotary table counterclockwise by 1.36 rad and translating by 67.8 mm, which gave viewpoint 2 (shown in Fig. 4(c)). The remaining reconstruction steps are shown in Fig. 4(d)-(e) and the final reconstructed model in Fig. 4(f). The overall reconstruction result is shown in Table 2.

Table 2. The reconstruction result analysis

| View numbers | Actual volume (mm³) | Reconstructed volume (mm³) | Volume error (%) | Reconstruction precision |
|--------------|---------------------|----------------------------|------------------|--------------------------|
| 4            | 332767              | 335248                     | +0.75            | 0.032                    |


Fig. 4. The reconstruction process of the part model: (a) initial view; (b) obtained predicted surfaces; (c) view 2 (1.36 rad, 67.8 mm); (d) view 3 (1.45 rad, 58.6 mm); (e) view 4 (1.42 rad, 62.4 mm); (f) reconstructed model.

5 Conclusions

In this paper we presented a new approach for generating 3-D models automatically, with emphasis on NBV planning. The proposed algorithm combines the limit visual surfaces with the trend surfaces obtained from the partially known object. The final NBV is the position yielding the largest visible surface area among the candidate viewpoints, found by the optimal design method. The experimental result shows that the method is effective in practical implementation.

Acknowledgments. This work was supported by a grant from the National Natural Science Foundation of China (No. 50605007), the Program for New Century Excellent Talents in Fujian Province University (Project No. XSJRC2007-07), the State Key Laboratory of Precision Measuring Technology and Instruments, and a grant from the Research Grants Council of Hong Kong (Project No. CityU117507).

References

[1] Larsson, S., Kjellander, J.A.P.: Path planning for laser scanning with an industrial robot. Robotics and Autonomous Systems 56, 615–624 (2008)
[2] Li, Y.F., Liu, Z.G.: Information entropy-based viewpoint planning for 3-D object reconstruction. IEEE Trans. on Robotics 21(3), 324–337 (2005)
[3] Bottino, A., Laurentini, A.: What's NEXT? An interactive next best view approach. Pattern Recognition 39, 126–132 (2006)
[4] Tarbox, G.H., Gottschlich, S.N.: Planning for Complete Sensor Coverage in Inspection. Computer Vision and Image Understanding 61(1), 84–111 (1995)


[5] Scott, W.R., Roth, G., Rivest, J.F.: View planning for automated three-dimensional object reconstruction and inspection. ACM Computing Surveys 35(1), 64–96 (2003)
[6] Scott, W.R.: Model-based view planning. Machine Vision and Applications 20(1), 47–69 (2009)
[7] Sablatnig, R., Tosovic, S., Kampel, M.: Next view planning for shape from silhouette. In: Computer Vision Winter Workshop (CVWW 2003), Czech Pattern Recognition Society, pp. 77–82 (2003)
