CONVEXITY AND SOLVABILITY FOR COMPACTLY SUPPORTED RADIAL BASIS FUNCTIONS WITH DIFFERENT SHAPES∗

SHENGXIN ZHU† AND ANDREW J. WATHEN‡

Abstract. It is known that interpolation with radial basis functions of the same shape guarantees a non-singular interpolation matrix, whereas little is known when various shapes are used. In this paper we prove that a class of compactly supported radial basis functions is convex on a certain region; based on this local convexity and on a readily available local geometric property of the interpolation points, we construct a sufficient condition which guarantees diagonally dominant interpolation matrices for radial basis function interpolation with various shapes. The proof is constructive and can be used to design algorithms directly. Real applications from 3D surface reconstruction are used to verify the results.

Key words. Radial basis function; Wendland function; solvability; RBF; various shape.

AMS subject classifications. 65D05
1. Introduction. Recent advances in radial basis function (RBF) theory have demonstrated the promise of RBFs for scattered data approximation in high dimensional spaces, and their utility in the design of mesh-free methods, in global optimisation, and in computer aided design [7, 10, 11, 15, 19, 22]. It has long been established that certain types of RBFs guarantee non-singular interpolation matrices provided the interpolation points are distinct [16]. There is no doubt that such guaranteed solvability is one of the convincing reasons why methods based on RBFs continue to attract much attention. As is well known, and as we shall describe, the current solvability theory for the RBF interpolation problem depends on the symmetry of the interpolation matrix; this requires that the centres of the basis functions are the interpolation points and that the basis functions are of the same scale. In practice it is often desirable to define RBF approximation/interpolation with different shapes: different scales or different bases. But for RBF interpolation with different shape parameters, previous results say little on the issue of solvability. To make this clear, the reader is invited to see how the previous proof works in the case of radial basis functions of the same scale.

Consider the radial basis function (RBF) interpolation problem in high dimensional space: given observations $f_1, f_2, \ldots, f_n$ on a set of points $X = \{x_i \in \mathbb{R}^d : i = 1, \ldots, n\}$, find an approximation of the form $s(x) = \sum_{j=1}^n \alpha_j \phi_j(x)$ to an unknown function $f(x)$ such that
\[
(1)\qquad s(x_k) = f_k, \quad \text{for } 1 \le k \le n,
\]
where $\phi_j(x) = \Phi(x - x_j) = \varphi(\|x - x_j\|)$. The conditions (1) result in a linear system $A\alpha = f$,

∗The research is supported by Award no. KUK-C1-013-04, made by King Abdullah University of Science and Technology.
†Oxford Center for Collaborative and Applied Mathematics & Numerical Analysis Group, Mathematical Institute, The University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, United Kingdom ([email protected]).
‡Numerical Analysis Group, Mathematical Institute, The University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, United Kingdom ([email protected]).
where $f = (f_1, f_2, \ldots, f_n)^T$ and
\[
A = \begin{pmatrix}
\Phi(x_1 - x_1) & \Phi(x_1 - x_2) & \cdots & \Phi(x_1 - x_n)\\
\Phi(x_2 - x_1) & \Phi(x_2 - x_2) & \cdots & \Phi(x_2 - x_n)\\
\vdots & \vdots & \ddots & \vdots\\
\Phi(x_n - x_1) & \Phi(x_n - x_2) & \cdots & \Phi(x_n - x_n)
\end{pmatrix}.
\]
If $\Phi$ has an integrable Fourier transform $\hat\Phi$, then $\Phi$ can be recovered from $\hat\Phi$ by the Fourier inversion formula [22, p. 67]
\[
(2)\qquad \Phi(x) = \frac{1}{\sqrt{(2\pi)^d}} \int_{\mathbb{R}^d} \hat\Phi(\omega)\, e^{i x^T \omega} \, d\omega.
\]
For any $\beta \in \mathbb{C}^n$, consider the following quadratic form:
\[
(3)\qquad \beta^H A \beta = \sum_{k=1}^n \sum_{j=1}^n \beta_j \bar\beta_k \Phi(x_k - x_j)
= \frac{1}{\sqrt{(2\pi)^d}} \int_{\mathbb{R}^d} \sum_{k=1}^n \sum_{j=1}^n \beta_j \bar\beta_k \hat\Phi(\omega)\, e^{i (x_k - x_j)^T \omega} \, d\omega.
\]
Separating $e^{i(x_k-x_j)^T\omega}$ as $e^{ix_k^T\omega} e^{-ix_j^T\omega}$, we obtain
\[
(4)\qquad \sum_{k=1}^n \sum_{j=1}^n \beta_j \bar\beta_k e^{i(x_k-x_j)^T\omega}
= \sum_{j=1}^n \beta_j e^{-ix_j^T\omega} \sum_{k=1}^n \bar\beta_k e^{ix_k^T\omega}
= \Bigl| \sum_{j=1}^n \beta_j e^{-ix_j^T\omega} \Bigr|^2 .
\]
Therefore it follows that
\[
(5)\qquad \beta^H A \beta = \frac{1}{\sqrt{(2\pi)^d}} \int_{\mathbb{R}^d} \hat\Phi(\omega) \Bigl| \sum_{j=1}^n \beta_j e^{-ix_j^T\omega} \Bigr|^2 d\omega.
\]
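Equation (5) makes this positive definiteness easy to observe numerically. The following Python sketch (illustrative only; the Gaussian scale 3 and the point count are arbitrary choices, not taken from the text) assembles a Gaussian kernel matrix on random distinct points and checks that its smallest eigenvalue is positive:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 3))                       # 30 distinct points in R^3
r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
A = np.exp(-(3.0 * r)**2)                     # Gaussian RBF, shape parameter 3
eigs = np.linalg.eigvalsh(A)                  # A is symmetric
print(eigs.min() > 0)                         # positive definite
```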
If $\hat\Phi > 0$, the interpolation matrix $A$ is symmetric and positive definite. Such functions with positive Fourier transform are called positive definite functions. It is obvious that positive definite radial basis functions $\varphi(\|\cdot\|)$ guarantee invertible interpolation matrices. One of the most famous positive definite radial basis functions is the Gaussian $e^{-\|x\|^2}$, with Fourier transform $\frac{1}{(\sqrt2)^d} e^{-\|\omega\|^2/4}$. Also included in this class are inverse multiquadrics, truncated power functions, and Wendland compactly supported radial basis functions; see [22, p. 76, p. 80, p. 128] for details.

A natural generalization of a positive definite function is a conditionally positive definite function. A conditionally positive definite function is a continuous function which guarantees that the quadratic form $\beta^H A \beta$ is positive on a subspace, namely for all $\beta \in \mathbb{C}^n \setminus \{0\}$ satisfying $\sum_{j=1}^n \beta_j p(x_j) = 0$ for any distinct point set $X$ and all complex-valued polynomials $p(x)$ of degree less than $m$. For such conditionally positive definite functions, one can prove that a modified approximation
\[
(6)\qquad s(x) = \sum_{j=1}^n \alpha_j \phi_j(x) + \sum_{k=1}^Q \gamma_k p_k(x),
\]
with the lower order polynomial constraints
\[
(7)\qquad \sum_{j=1}^n \alpha_j p_k(x_j) = 0, \quad \text{for } 1 \le k \le Q,
\]
is solvable under certain conditions,
where $p_k$, $1 \le k \le Q$, form a linearly independent basis of the space $\pi_{m-1}(\mathbb{R}^d)$ of polynomials of degree less than $m$. The approximation (6) together with the polynomial constraints (7) results in a saddle point system
\[
(8)\qquad \begin{pmatrix} A & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} \alpha \\ \gamma \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.
\]
If $P$ has full rank, and the quadratic form $\alpha^H A \alpha$ is positive on the null space of $P^T$, i.e. on $\{\alpha : P^T\alpha = 0\}$, then the saddle point system (8) has a unique solution [22, p. 117]. Whether $P$ has full rank still depends on the point set $X$ for $\pi_{m-1}(\mathbb{R}^d)$ with $m > 1$, whereas Micchelli proved that $\alpha^H A \alpha$ is positive on the null space of $P^T$ for all conditionally positive definite functions of order $m$ on any data set $X$ [16]. His proof in fact also depends on the quadratic form (5) being positive on the null space of $P^T$ and on (4); see [16, p. 17] for details. When $m = 1$, $P$ always has full rank; therefore for conditionally positive (negative) definite functions of order 1, the saddle point system (8) always has a unique solution. Furthermore, $A$ itself is non-singular if the points in $X$ are distinct. Well known conditionally negative definite functions of order 1 include the multiquadric radial basis function $\varphi(\|x\|_2) = \sqrt{1 + \varepsilon^2\|x\|_2^2}$ and the power function $\|x\|_2$, which results in a distance matrix. The reader is directed to [16, 18] for details.

If either the point sets $\{x_k\}_{k=1}^n$ and $\{x_j\}_{j=1}^n$ in (4) are different, or the basis functions in (1) have different scales or are different kinds of functions, the formula (4) does not hold. The former case corresponds to the interesting situation in which the centres of the basis functions and the interpolation points are different. An example of the latter case, using different kinds of radial basis functions, is the unsymmetric collocation method for solving partial differential equations (PDEs) [13].
The unsymmetric collocation method uses one kind of basis function in the domain, while on the boundary it applies another kind; the choice of the basis functions depends on the underlying governing PDEs and related boundary conditions. For complicated real applications, an adaptive refinement scheme, arranging more data where it is needed, is desirable to reduce the total computational complexity. In this case one also wants to use radial basis functions with different scales for the locally clustered points: if two basis function centres are too close to each other, the corresponding two columns of the system matrix $A$ are nearly linearly dependent, and thus the matrix can be ill-conditioned, whereas if the two basis functions employ different scales or are different, the conditioning may be improved [9]. There are papers discussing how to choose various shape parameters both numerically [3, 6, 8, 12, 14, 17, 23] and theoretically [1, 3]. In particular, in [1] Bozzini et al. use matrix perturbation arguments, concluding that if the scales do not vary a lot across the domain, then interpolation with various scales can result in a non-singular interpolation matrix. The threshold of variation of the scales depends on an estimate of the smallest eigenvalue of an interpolation matrix with a single scale [1, Theorem 2]; such a dependence limits the practical utility of the method, but see the recent paper [2].

Solvability in these interesting cases remains to be established for general radial basis functions. The present paper focuses on this problem, presenting sufficient conditions which guarantee a unique solution to the RBF interpolation problem. The conditions we derive depend only on local convexity of the underlying basis functions and on a local geometric property of a neighbourhood of each centre. Information on both of these aspects is easy to obtain.
We first introduce notation and preliminaries in §2 and then present the main results in §3. We further discuss several issues related to the
main results in §4, and give algorithms and numerical results in §5 and §6 respectively. Finally we draw some conclusions in §7.

2. Convexity of Compactly Supported Radial Basis Functions. A function $f(r)$ is said to be convex on $(a,b)$ if $f'(r) < 0$ and $f''(r) \ge 0$ on $(a,b)$. (We regard a linear function with negative slope as convex in this paper.) This section focuses on one popular class of compactly supported radial basis functions, the Wendland functions. A Wendland compactly supported radial basis function $\phi_{d,k}$ is obtained through the following integral operator:
\[
(9)\qquad I(\phi)(r) := \int_r^\infty t\phi(t)\,dt.
\]
The Wendland function $\phi_{d,k}$ is defined as
\[
(10)\qquad \phi_{d,k} = I^k \phi_{\lfloor d/2\rfloor + k + 1},
\]
where $\phi_\ell(r) = \max\{(1-r)^\ell, 0\}$ and $\lfloor\cdot\rfloor$ is the floor operator. Here $\phi_{d,k}(\|x\|)$ is a compactly supported radial basis function and $\phi_{d,k} \in C^{2k}(\mathbb{R}^d)$. Another operator is defined, for $\phi \in C^2(\mathbb{R})$, as
\[
(11)\qquad D\phi(r) = -\frac{1}{r}\phi'(r).
\]
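Definition (10) can be checked numerically for a concrete case. The Python sketch below (illustrative only) applies the operator $I$ of (9) once to $\phi_3(t) = (1-t)_+^3$, which is the case $d = 3$, $k = 1$, and compares the result with the closed form $(1-r)^4(4r+1)/20$:

```python
import numpy as np

phi3 = lambda t: np.clip(1.0 - t, 0.0, None)**3        # truncated power, l = 3

def I(phi, r, m=20001):
    """Numerical version of (9); the support of phi3 ends at 1."""
    t = np.linspace(r, 1.0, m)
    y = t * phi(t)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)  # trapezoid rule

for r in (0.0, 0.25, 0.5, 0.75):
    closed = (1.0 - r)**4 * (4.0*r + 1.0) / 20.0       # phi_{3,1} up to 1/20
    print(np.isclose(I(phi3, r), closed, atol=1e-8))
```

Up to the constant factor $1/20$ this is exactly the Wendland function $\phi_{3,1} = (1-r)_+^4(4r+1)$ used in the numerical experiments of §6.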
It is known that $DI\phi = \phi$ and $ID\phi = \phi$ [22, p. 121]. Applying formula (11), we can obtain the following properties of functions defined by (9).

Lemma 1. If $\phi \in L^1[0,\infty)$ is a non-negative function, then
1. $\frac{d}{dr} I^k\phi(r) = -r\, I^{k-1}\phi(r) \le 0$;
2. $\frac{d^2}{dr^2} I^k\phi(r) = r^2 I^{k-2}\phi(r) - I^{k-1}\phi(r)$, for $k \ge 2$.

Proof. Since $\phi$ is non-negative, by the definition of the operator $I$ in (9), $I^k\phi(r)$ is non-negative. Employing formula (11), we get $\frac{d}{dr} I^k\phi(r) = -r\, D I^k\phi(r) = -r\, I^{k-1}\phi(r) \le 0$. The second part holds because $\frac{d^2}{dr^2} I^k\phi(r) = \frac{d}{dr}\bigl(-r\, I^{k-1}\phi(r)\bigr) = -I^{k-1}\phi(r) + r^2 I^{k-2}\phi(r)$.

The first part of Lemma 1 shows that Wendland functions are non-increasing on $[0,1]$, and the second part gives a three-term recurrence formula for the second derivative of a Wendland function.

Lemma 2. If $\phi(r)$ is a continuous and monotonically decreasing function on $[0,1]$ which satisfies $\phi(0) > 0$ and $\phi(1) = 0$, then for the function
\[
f(r) = r^2\phi(r) - \int_r^1 t\phi(t)\,dt,
\]
there exists a positive number $\gamma \in (0,1)$ such that $f(r) \ge 0$ on $[\gamma, 1]$.

Proof. Since $\phi$ is monotonically decreasing on $[0,1]$, for any $t \in (r,1)$ we have $0 \le \phi(t) \le \phi(r)$. By the Cauchy-Schwarz inequality,
\[
\int_r^1 t\phi(t)\,dt \le \Bigl(\int_r^1 t^2\,dt\Bigr)^{1/2} \Bigl(\int_r^1 \phi^2(t)\,dt\Bigr)^{1/2}
\le \sqrt{\frac{1-r^3}{3}}\;\phi(r)\sqrt{1-r} \le \phi(r)\sqrt{\frac{1-r^3}{3}}.
\]
Therefore
\[
(12)\qquad f(r) \ge \Bigl(r^2 - \sqrt{\frac{1-r^3}{3}}\Bigr)\phi(r) = h(r)\phi(r).
\]
A computation gives $h'(r) = 2r + \frac{\sqrt{3}\, r^2}{2\sqrt{1-r^3}} > 0$ on $(0,1)$. On the other hand, $h(0) = -\sqrt{1/3} < 0$ and $h(1) = 1 > 0$. Therefore, by the intermediate value theorem for continuous functions, there must exist a positive number $\gamma \in (0,1)$ such that $h(\gamma) = 0$ and $h(r) > 0$ on $(\gamma, 1)$. Thus $f(r) \ge 0$ on $[\gamma, 1]$.

Theorem 1. For any Wendland function $\phi_{d,k}$ defined above, there exists a real number $q \in [0,1)$ such that $\phi_{d,k}$ is convex on $[q,1]$.

Proof. If $k = 0$, $\phi_{d,0} = (1-r)_+^{\lfloor d/2\rfloor+1}$. Then $\phi'_{d,0}(r) = -(\lfloor d/2\rfloor + 1)(1-r)^{\lfloor d/2\rfloor} < 0$ on $(0,1)$. If $d = 1$ then $\phi''_{d,0}(r) = 0$; otherwise $\phi''_{d,0}(r) = (\lfloor d/2\rfloor + 1)\lfloor d/2\rfloor (1-r)^{\lfloor d/2\rfloor - 1} \ge 0$. Thus $\phi_{d,0}$ is convex on $[0,1]$.

For $k = 1$, $\phi_{d,1}(r) = \int_r^1 t(1-t)^{\lfloor d/2\rfloor + 2}\,dt$, so $\phi'_{d,1}(r) = -r(1-r)^{\lfloor d/2\rfloor + 2} \le 0$ on $(0,1)$ and
\[
\phi''_{d,1}(r) = (1-r)^{\lfloor d/2\rfloor + 1}\bigl((\lfloor d/2\rfloor + 3)r - 1\bigr).
\]
Let $q = \frac{1}{\lfloor d/2\rfloor + 3}$; then $\phi''_{d,1}(r) \ge 0$ on $[q,1]$.

For $k \ge 2$, by Lemma 1, $\phi'_{d,k}(r) \le 0$ and
\[
\phi''_{d,k}(r) = r^2 I^{k-2}\phi_\ell(r) - I^{k-1}\phi_\ell(r), \quad \text{where } \ell = \lfloor d/2\rfloor + k + 1
\]
and $\phi_\ell(r) = (1-r)_+^\ell$. Define $\varphi(r) = I^{k-2}\phi_\ell(r)$; then $\varphi(r)$ is a decreasing and non-negative function on $[0,1]$. Note that $I^{k-1}\phi_\ell(r) = \int_r^1 t\varphi(t)\,dt$. Employing Lemma 2, there exists $\gamma$ with $0 < \gamma < 1$ such that $\phi''_{d,k}(r) \ge 0$ on $[\gamma,1]$. Let $q = \gamma$, which finishes the proof.

Lemma 3. For any Wendland function $\varphi(r) = \phi_{d,k}(r)$, there exists a positive number $\delta_1$ such that $\varphi(r) \le 1-r$ for $r \in (\delta_1, 1)$.

Proof. When $k \ge 1$, $\varphi \in C^{2k}$ and $\varphi'(1) = \varphi(1) = 0$; then, by Taylor expansion with the Lagrange remainder, for $r \in (0,1)$,
\[
0 = \varphi(1) = \varphi(r) + (1-r)\varphi'(r) + \frac{(1-r)^2}{2!}\varphi''(\xi), \quad \text{where } \xi \in (r,1).
\]
When $r \in [q,1)$, where $q$ is defined in Theorem 1, we have $\varphi''(\xi) \ge 0$ and $\varphi'(r) \le 0$, so
\[
\varphi(r) \le \varphi(r) + \frac{(1-r)^2}{2}\varphi''(\xi) = -(1-r)\varphi'(r) = (1-r)|\varphi'(r)|.
\]
On the other hand, by the continuity of $\varphi'(r)$ on $(0,1)$, for any positive $\delta$ there exists a left neighbourhood of $1$ on which $|\varphi'(r)| \le \delta$. In particular, consider the case $\delta = 1$, and let $(\hat\delta_1, 1)$ be the largest left neighbourhood of $1$ which satisfies $|\varphi'(r)| \le 1$, where
\[
\hat\delta_1 = \arg\min_{h \in (0,1)} \{h : |\varphi'(r)| \le 1, \ \forall\, r \in [h,1)\}.
\]
Set $\delta_1 = \max\{q, \hat\delta_1\}$; then for $r \in (\delta_1, 1)$, $\varphi(r) \le 1-r$. When $k = 0$, $\varphi = \phi_{d,0} = (1-r)_+^{\lfloor d/2\rfloor + 1}$, and $\varphi(r) \le 1-r$ always holds on $(0,1)$; in this case $\delta_1 = 0$.
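For the Wendland function $\phi_{3,1}(r) = (1-r)_+^4(4r+1)$ used later in §6, the threshold of Theorem 1 is $q = \frac{1}{\lfloor 3/2\rfloor + 3} = \frac14$, and one can verify by hand that $\phi''_{3,1}(r) = 20(1-r)^2(4r-1)$ indeed changes sign at $r = \frac14$. A Python sketch (illustrative only) confirming the sign pattern by central finite differences:

```python
import numpy as np

phi = lambda r: np.where(r < 1, (1 - r)**4 * (4*r + 1), 0.0)
h = 1e-5
r = np.linspace(0.01, 0.99, 500)
second = (phi(r + h) - 2*phi(r) + phi(r - h)) / h**2   # central difference
q = 1.0 / (3 // 2 + 3)                                  # q = 1/4 for d=3, k=1
print(np.all(second[r > q + 1e-2] >= 0))                # convex on [q, 1]
print(np.all(second[r < q - 1e-2] < 0))                 # concave before q
```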
Fig. 1. (a) illustrates a 'scaled' neighbourhood of $x_j$ with 10 neighbour points. (b) illustrates a Wendland function $\phi_{3,1}$ which is convex on $(q,1)$. One can show $\frac{y_m}{\varphi(q)} = \frac{1-m}{1-q}$, which is used to prove Theorem 2.
3. Main Results. Let $N(x_j, n_j) \subset X = \{x_1, x_2, \ldots, x_N\} \subset \mathbb{R}^d$ be the set of the $n_j$ nearest neighbour points of $x_j$, where $x_j \notin N(x_j, n_j)$. For each $N(x_j, n_j)$, define
\[
(13)\qquad r_j = \min_{x_k \in N(x_j,n_j)} \|x_k - x_j\| \quad \text{and} \quad R_j = \max_{x_k \in N(x_j,n_j)} \|x_k - x_j\|.
\]
It is obvious that $N(x_j,n_j) \subset \bar B(x_j,R_j) \setminus B(x_j,r_j)$, where $B(x,R)$ is the Euclidean ball with centre $x$ and radius $R$. Further, define
\[
(14)\qquad q_j = \frac{r_j}{R_j}, \quad \text{and} \quad m_j = \frac{1}{n_j R_j} \sum_{x_k \in N(x_j,n_j)} \|x_k - x_j\|.
\]
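The quantities in (13) and (14) are cheap to compute from sorted neighbour distances. A Python sketch (illustrative; the point set is random test data, not from the paper):

```python
import numpy as np

def local_geometry(X, j, nj):
    """q_j and m_j of (13)-(14) for the nj nearest neighbours of X[j]."""
    d = np.linalg.norm(X - X[j], axis=1)
    d = np.sort(d)[1:nj + 1]        # drop the zero distance to x_j itself
    rj, Rj = d[0], d[-1]
    return rj / Rj, d.mean() / Rj   # (q_j, m_j)

rng = np.random.default_rng(2)
X = rng.random((200, 2))
qj, mj = local_geometry(X, 0, 10)
print(0 < qj <= mj <= 1)            # always holds by construction
```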
It is easy to see that $0 < q_j \le m_j \le 1$, with the equality $q_j = m_j = 1$ holding only when all of the points in $N(x_j,n_j)$ are located on the same sphere centred at $x_j$ (including the special case $n_j = 1$). The quantities $q_j$ and $m_j$ encode a particular geometric property of $N(x_j,n_j)$: the points in $N(x_j,n_j)$ are located in $\bar B(x_j,R_j)\setminus B(x_j,r_j)$, and can be viewed as distributed, on average, on the sphere $\|x - x_j\|_2 = m_j R_j$. Such a local geometric property depends only on the relative distances between $x_j$ and the points in $N(x_j,n_j)$.

Denote the radial basis function $\varphi(\|x\|)$ with scale $\varepsilon_j$ by $\phi_j(x) = \varphi(\varepsilon_j\|x - x_j\|)$, for $1 \le j \le N$. The corresponding approximation to (1) is now written as
\[
(15)\qquad s(x) = \sum_{j=1}^N \alpha_j \phi_j(x)
\]
and the corresponding interpolation matrix on the data set $X$ with $\{\phi_j\}_{j=1}^N$ is denoted $A_{\phi_j, X}$.

Theorem 2. Let $\phi(r)$ be a univariate compactly supported radial basis function on $[0,1]$ which is convex on $[q,1]$ and satisfies $\phi(0) = 1$. Let $X = \{x_1, x_2, \ldots, x_N\}$ be a scattered data set in $\mathbb{R}^d$, and for an $n_j$ neighbourhood of points $N(x_j,n_j)$ let $R_j$ be defined by (13) and $q_j$ and $m_j$ by (14). If for every $x_j \in X$ there is an $n_j$ neighbourhood of points $N(x_j,n_j)$ which satisfies $q_j \ge q$ and $m_j \ge 1 - \frac{1-q}{\phi(q)\,n_j}$, then the interpolation matrix $A_{\phi_j,X} = \bigl(\phi(\varepsilon_j\|x_i - x_j\|)\bigr)_{1\le i,j\le N}$ is non-singular when $\varepsilon_j = 1/R_j$.

Proof. For $x_k \in N(x_j,n_j)$, define $d_{kj} = \|x_k - x_j\|/R_j$. Since $q_j = r_j/R_j \ge q$, for each $k$ and $j$ we have $d_{kj} \ge q_j \ge q$. Suppose now $y_{kj}$ is defined so that the points $(d_{kj}, y_{kj})$ are located on the straight line which passes through $(q, \phi(q))$ and $(1, 0)$; then $y_{kj} = \frac{1-d_{kj}}{1-q}\phi(q)$. Since $d_{kj} \in [q_j, 1] \subset [q, 1]$ and $\phi$ is convex on $[q,1]$, we have $y_{kj} \ge \phi(d_{kj})$. It follows that
\[
(16)\qquad \sum_{x_k \in N(x_j,n_j)} y_{kj} = \frac{\phi(q)}{1-q} \sum_{x_k \in N(x_j,n_j)} (1 - d_{kj}) = \frac{n_j\,\phi(q)}{1-q}\,(1 - m_j) \le 1,
\]
under the stated assumption that $m_j \ge 1 - \frac{1-q}{\phi(q)\,n_j}$. Every point of $X$ outside $N(x_j,n_j)$ lies at distance at least $R_j$ from $x_j$ and therefore contributes a zero entry to the $j$-th column. This proves that, when the scale parameter $\varepsilon_j$ is taken as $\varepsilon_j = 1/R_j$, for each $j$ we must have
\[
1 - \sum_{x_k \in N(x_j,n_j)} \phi(d_{kj}) = \phi(0) - \sum_{x_k \in N(x_j,n_j)} \phi(\varepsilon_j\|x_k - x_j\|_2) \ge 0.
\]
That is, the interpolation matrix $A_{\phi_j,X}$ is diagonally dominant and thus non-singular.

Remark 1. If $n_j = 1$, then $q_j = m_j = 1 > 1 - \frac{1-q}{\phi(q)\,n_j}$, so Theorem 2 always holds. In this case the interpolation matrix is a diagonal matrix, and of course non-singular; however, this case is not of interest. For $n_j = 2$, the non-zero off-diagonal elements are always smaller than 1 if $\varepsilon_j = 1/R_j$, even if $N(x_j, 2)$ does not satisfy the condition.

Theorem 3. Let $\phi$ be a compactly supported Wendland radial basis function. For any set of distinct points $X = \{x_k \in \mathbb{R}^d : k = 1, 2, \ldots, n\}$, there exist scale parameters $\varepsilon_j$ such that the interpolation matrix $A_{\phi_j,X} = \bigl(\phi(\varepsilon_j\|x_i - x_j\|_2)\bigr)_{1\le i,j\le n}$ is strictly column diagonally dominant and each column has at least 3 non-zero elements.

Proof. For given $x_j$, let $\rho_1 = \min_{i\ne j}\|x_i - x_j\|$, and let $B(x_j,\rho_1)$ be the ball with centre $x_j$ and radius $\rho_1$. Then either the closed ball $\bar B(x_j,\rho_1)$ contains only the two points $x_i$ and $x_j$, or there exists $k \ge 2$ such that $\bar B(x_j,\rho_1)$ contains $x_j$ and $k$ further points. In the latter case, if one chooses
\[
(17)\qquad \varepsilon_j = \frac{1}{\rho_1},
\]
then the $j$-th column of the interpolation matrix has only one non-zero element, the diagonal element. Now choose a positive $\delta$ such that
\[
(18)\qquad \delta \le \frac{\rho_1(1-\delta_1)}{\delta_1}, \quad \text{or equivalently} \quad \frac{\rho_1}{\rho_1+\delta} \ge \delta_1,
\]
where $\delta_1$ is defined in Lemma 3. By Lemma 3 we then have
\[
(19)\qquad k\,\phi\Bigl(\frac{\rho_1}{\rho_1+\delta}\Bigr) \le \frac{k\delta}{\rho_1+\delta} < 1,
\]
when $\delta < \rho_1/(k-1)$; in this case it is guaranteed that the $j$-th column has $k+1$ non-zero elements and is diagonally dominant because of (19). Finally, set $\tilde\delta = \min\{\rho_1(1-\delta_1)/\delta_1,\ \rho_1/(k-1)\}$; then for $\frac{1}{\rho_1+\tilde\delta} < \varepsilon_j < \frac{1}{\rho_1}$, the $j$-th column has no fewer than $k+1$ non-zero elements and satisfies the diagonal dominance condition (19). The lower bound on $\varepsilon_j$ guarantees the diagonal dominance condition and the upper bound guarantees that there are at least $k$ non-zero off-diagonal elements.

Consider now the case when $x_j$ has only one nearest neighbour point. Define $\rho_2$ so that $B(x_j,\rho_2)$ is the smallest ball which contains the 3 nearest points of $x_j$. There can be $k$ ($k \ge 1$) points on the sphere $\partial B(x_j,\rho_2)$. One can then reduce this case to a similar argument as before, with the diagonal dominance condition (19) modified as
\[
(20)\qquad k\,\phi\Bigl(\frac{\rho_2}{\rho_2+\delta}\Bigr) \le \frac{k\delta}{\rho_2+\delta} \le 1 - \phi\Bigl(\frac{\rho_1}{\rho_2+\delta}\Bigr).
\]
This condition requires that
\[
(21)\qquad f(\delta) := \frac{k\delta}{\rho_2+\delta} + \phi\Bigl(\frac{\rho_1}{\rho_2+\delta}\Bigr) < 1.
\]

Algorithm 1 (fragment)

mask=find(mj>RHS & qj>q);   % satisfy mj > RHS and qj > q
if ~isempty(mask)
    mnj=max(mask);          % find the maximum nj
else
    mnj=2;                  % otherwise mnj=2
end
dsto=[1; rbf(dst2(1:mnj)/dst2(mnj))];
G=[dst2(1), dst2(idx), qj(mnj), mj(mnj), 1-sum(dsto(2:end))];
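The checking procedure can be prototyped directly from Theorem 2. The Python sketch below (an illustration, not the authors' Matlab implementation) selects, for each centre, a neighbourhood size satisfying $q_j \ge q$ and $m_j \ge 1 - \frac{1-q}{\phi(q)n_j}$ with $q = 1/4$ for $\phi_{3,1}$, sets $\varepsilon_j = 1/R_j$, and verifies the resulting column diagonal dominance:

```python
import numpy as np

phi = lambda r: np.where(r < 1, (1 - r)**4 * (4*r + 1), 0.0)   # phi(0) = 1
q = 0.25                                # convexity threshold for phi_{3,1}
phiq = float(phi(q))
rng = np.random.default_rng(3)
X = rng.random((300, 2))
N = len(X)
eps = np.empty(N)
for j in range(N):
    d = np.sort(np.linalg.norm(X - X[j], axis=1))[1:]
    nj_ok = 1                           # nj = 1 trivially gives a zero column
    for nj in range(2, 30):
        qj = d[0] / d[nj - 1]
        mj = d[:nj].mean() / d[nj - 1]
        if qj >= q and mj >= 1 - (1 - q) / (phiq * nj):
            nj_ok = nj                  # keep the largest admissible nj
    eps[j] = 1.0 / d[nj_ok - 1]         # eps_j = 1 / R_j
dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
A = phi(eps[None, :] * dist)            # column j uses scale eps_j
offdiag = A.sum(axis=0) - np.diag(A)
print(np.all(offdiag <= 1.0 + 1e-12))   # column diagonal dominance
```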
Algorithm 2 A Matlab function to generate the interpolation matrix

function [A,R,mnj]=mkA(dsites,rbf,npts,q)
% IN : dsites: the interpolation points
%      rbf:    the radial basis function
%      npts:   the initial number of neighbour points
%      q:      the lower bound of the convex interval of rbf
% OUT: A:      the interpolation matrix
%      R:      the support radii of the basis functions
%      mnj:    the number of neighbour points of the j-th basis function
% Requires package: statistics toolbox (knnsearch)
NN=length(dsites);
[idx,dist]=knnsearch(dsites,dsites,'k',npts,'NSMethod','kdtree');
idx=idx'; dist=dist';
A=spalloc(NN,NN,NN*npts); s=zeros(NN,5); mnj=uint32(ones(NN,1));
for k=1:NN
    [dst,mnj(k),G]=checkt(dist(:,k),q,rbf);
    R(k)=G(2);
    A(idx(1:mnj(k)+1,k),k)=dst;
end
6. Numerical Verification. Two data sets from adaptive finite element refinement on three quarters of a disc are used as the first examples to verify the theory; the data sets are referred to as TriDisc1 (T1) and TriDisc2 (T2) respectively. We then apply the procedures to implicit curve/surface construction problems, demonstrating that the presented algorithms can solve very large scale real applications. For the 3D surface construction problem, we consider the Stanford bunny model and the Stanford dragon model from the Stanford 3D scanning repository¹. The Stanford bunny, bunny.tar.gz, consists of data sets with 35947, 8171, 1889 and 453 points. The Stanford dragon is a larger model; only the three low resolution data sets in dragon_recon.tar.gz are used, with 100250, 22998 and 5205 vertices. The test problems are named Bunny453 (B4), Bunny1889 (B3), Bunny8171 (B2), Bunny35947 (B1), Dragon5205 (D3), Dragon22998 (D2) and Dragon100250 (D1) respectively. To illustrate the method for surface construction with radial basis functions, a 2D parametric heart curve (HC) is also considered. In all of the problems, the Wendland function $\phi_{3,1} = (1-r)_+^4(4r+1)$ is employed. A collection of the results related to our theorems is put together in Table 2; more information is detailed as follows.

Table 2
Information of several test problems by Algorithm 2
Prob  A size   nnz       λmax ≤  Re(λ) ≥   nj min  nj max  nj mean   Rj min   Rj max   Rj mean  max/min
HC    252      1704      4.0     0.45      6       8       6.7       0.1095   0.3568   0.2275   3.3
T1    117      989       7.6     0.23      6       14      8.5       0.2163   0.4      0.2824   1.8
T2    2050     18350     10.8    0.17      4       14      9.0       3.8e-4   0.1515   5.0e-2   391.9
B4    1351     9911      10.7    0.17      3       16      7.3       3.5e-3   1.6e-2   1.0e-2   4.6
B3    5643     42825     8.8     0.20      3       17      7.6       1.6e-3   8.7e-3   5.3e-3   5.6
B2    24425    183988    12.0    0.15      3       21      7.5       1.3e-4   4.2e-3   2.6e-3   32.2
B1    105615   740493    18.9    0.10      3       21      7.0       5.8e-6   2.9e-3   1.2e-3   510.3
D3    15563    117362    10.1    0.18      3       24      7.5       2.6e-4   5.8e-3   3.5e-3   22.0
D2    68830    512376    14.1    0.13      3       19      7.4       1.5e-4   3.4e-3   1.7e-3   22.2
D1    300298   2136353   13.3    0.14      2       24      7.1       2.1e-5   2.3e-3   7.9e-4   109.8
6.1. TriDisc Problem. The data set TriDisc1 is generated by the following Matlab code:

[~,p,e,t]=adaptmesh('cirsg','cirsb',1,0,0,'maxt',100,...
    'tripick','pdeadworst','ngen',inf);
TriDisc1=p';

The data set TriDisc2 is generated similarly by replacing the parameter 100 by 3000. Figure 4 illustrates the data sets, the structure of the interpolation matrices, and the eigenvalue distributions of the two interpolation matrices. The results demonstrate that using compactly supported radial basis functions with various shape parameters according to the procedure described in this paper can result in non-singular and well conditioned sparse matrices, and the shape parameters can span several scales; see Table 2.

6.2. Implicit Curve/Surface Reconstruction. An implicit curve/surface can be viewed as a zero level set of a function $f(x)$, where $f$ satisfies
\[
f(x) = 0 \ \text{if $x$ is on the curve/surface}; \quad
f(x_+) = 1 \ \text{if $x_+$ is off the curve/surface (outside)}; \quad
f(x_-) = -1 \ \text{if $x_-$ is off the curve/surface (inside)}.
\]

¹http://graphics.stanford.edu/data/3Dscanrep/
Fig. 4. Data sets ((a) TriDisc1, (b) TriDisc2), interpolation matrices ((c) TriDisc1, nz = 989, (d) TriDisc2, nz = 18350), and eigenvalue distributions ((e) TriDisc1, (f) TriDisc2) for the TriDisc problems. The blue circles in (e) and (f) are eigenvalues, and the blue dashed lines are the boundaries of the convex hull of all the eigenvalues.
Given a point set on the curve/surface and associated normal information, the artificial off-surface points $x_+$ and $x_-$ are easily obtained. The implicit curve/surface can then be reconstructed as the zero level set of the interpolant $s(x)$; see [4] for details. An implicit curve construction problem is used to illustrate such a procedure.
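The construction of the augmented data set can be sketched as follows (Python, illustrative; the function name and the circle test case are ours, not from the text):

```python
import numpy as np

def offsurface_points(pts, normals, ratio=0.75):
    """Build the augmented data set for implicit curve/surface fitting.

    Each on-surface point x gets two artificial points x +/- ratio*dx*n,
    where dx is the distance from x to its nearest on-surface neighbour.
    """
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    dx = d.min(axis=1)[:, None]
    outside = pts + ratio * dx * normals
    inside = pts - ratio * dx * normals
    sites = np.vstack([pts, outside, inside])
    rhs = np.concatenate([np.zeros(len(pts)), np.ones(len(pts)),
                          -np.ones(len(pts))])
    return sites, rhs

# unit circle with outward normals (the normals equal the points here)
t = np.linspace(0, 2*np.pi, 40, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)]
sites, rhs = offsurface_points(pts, pts)
print(len(sites) == 120)
```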
Fig. 5. Reconstruction of an implicit curve: (a) point cloud, (b) eigenvalues, (c) reconstructed curve, (d) contour. The blue + points are artificial off-curve points constructed from the related normal information. The black circles in (a) are the supports of the basis functions corresponding to the red + centres. (b) demonstrates that the eigenvalues of the interpolation matrix are located in the black circle. The red curve in (c) and the zero level set in (d) are the reconstructed curve.
6.2.1. Illustration in 2D. Consider the following heart curve
\[
(22)\qquad
\begin{cases}
x(t) = \sqrt{2}\,(\sin(t) - \cos(t)),\\
y(t) = -\sqrt{2}\,\bigl(\sin(t) + \cos(t)(1 + \sin(t))\bigr),
\end{cases}
\]
which is modified from an example in [7, p. 257]. 84 points on the curve are sampled by setting $\{t_i\}_{i=1}^{84}$ equally spaced in $[0, 2\pi]$. The points themselves are not equally spaced; see the small blue circles in Figures 5(a) and 5(c). The artificial off-curve points are constructed as $x_\pm = x \pm 0.75\,\delta_x \vec n_x$, where $\vec n_x$ is the normalised normal direction at $x$ and $\delta_x$ is the distance from $x$ to its nearest neighbour on the curve; the ratio 0.75 can be replaced by any scalar not far from 1. Figure 5(a) illustrates the support of each basis function; the supports are found by Algorithm 1. Figure 5(b) demonstrates that the eigenvalues of the interpolation matrix are located in the black circle given by the diagonal dominance condition. The red curve in Figure 5(c) and the zero level set in Figure 5(d) are the reconstructed result.
6.2.2. 3D Surface Reconstruction. The normal information is computed by normalsply from the package ply.tar.gz provided by Greg Turk². The artificial off-surface points are constructed as $x_\pm = x \pm 0.5\,\delta_x \vec n_x$. Suppose dsites are the points in the cloud and normals is the corresponding calculated normal information. The 3D surface construction problem can then be solved by the framework described in the Matlab code of Algorithm 3, in which the function names and the linear solver can be chosen freely; basically, it includes four steps: preparing data, generating the linear system, solving the linear system, and post-processing. The details of the post-processing part can be found in [24].

Algorithm 3 A framework for 3D surface reconstruction with radial basis functions

rbf=@(r) (1-r).^4.*(4*r+1);
[dsites,rhs]=scan3Drhs(dsites,normals,.5);          % prepare data
[A,R]=mkA(dsites,rbf,npts,.25);                     % generate interpolation matrix
[sol,flag,reves,iter,resvec]=gmres(A,rhs,50,1e-13); % A*sol=rhs
%%%%% ---- post-processing ---- %%%%
neval=200;[epoints,xe,ye,ze]=eva_pts(dsites,neval);
Pf=spevaluation(dsites,epoints,R,rbf,sol);          % sparse evaluation
scan3Dplot(xe,ye,ze,Pf,neval);                      % visualization

Figure 7 shows the reconstruction results for the Stanford bunny model and the Stanford dragon model.

6.3. Conditioning. Three ways are used to verify that the interpolation matrix is very well conditioned. First, the lower bound of the real part of the eigenvalues is calculated when checking the diagonal dominance: $\mathrm{Re}(\lambda) \ge \min_j \{1 - \sum_{i\ne j} a_{ij}\}$. Second, we observe the convergence history of the GMRES method without preconditioning. Third, Matlab's condest is used for the largest sparse matrices in this paper; it returns a number between 5 and 13. It should be pointed out that condest(A,2) estimates the condition number to within a factor of 2, whilst condest(A,4), which gives a more accurate estimate, returns a number around 12. ILU(0) preconditioning for GMRES is also considered.
The performance for the Stanford bunny and Stanford dragon problems is listed in Table 4. Restarted GMRES(50) is used, and all the problems terminate within 32 iterations without any preconditioning. The eigenvalues of the two relatively small models are calculated. It turns out that the eigenvalue distributions of the two problems share similar features: all the eigenvalues are clustered in a very small disc with centre (1, 0) (see Figure 6), and the convergence histories for GMRES without preconditioning are similar. For large problems it is not easy to calculate all of the eigenvalues of the interpolation matrix, but the convergence histories for GMRES are again similar; see Figure 6(e) for the largest problem in this paper. A GMRES solution approach for globally supported radial basis functions is discussed in [5].

6.4. Efficiency. The Matlab implementation in Algorithms 1 and 2 is inefficient for large scale problems due to the long loop in Algorithm 2. Indeed, the algorithms here do not guarantee at least 3 elements per column as described in Theorem 3. Procedures which guarantee the conditions of Theorem 3 would require several if or while branches, which are inefficient in Matlab. If C/C++ is used, an algorithm based on Theorem 3 works very efficiently. To illustrate the efficiency, timing results are presented in Table 3,

²http://www.cc.gatech.edu/projects/large_models/ply.html
Table 3
Timing results (in seconds) for C/C++ implementation

Prob   N        knn      check    sparse   ilu      gmres
B4     1351     0.0130   0.0002   0.0015   0.0007   0.0054
B3     5643     0.0543   0.0009   0.0068   0.0031   0.0169
B2     24425    0.2576   0.0040   0.0342   0.0162   0.0726
B1     105615   1.2357   0.0180   0.1546   0.0801   0.4029
D3     15563    0.1641   0.0027   0.0213   0.0087   0.0543
D2     68830    0.7776   0.0119   0.1023   0.0495   0.2118
D1     300298   3.5881   0.0521   0.5041   0.2335   0.9574
where we report the time for knnsearch in Matlab (based on a kd-tree) to search for the 20 nearest neighbours of each point, the time for the check procedure, the time for the Matlab command sparse(row,col,val) to generate a sparse matrix, and the times for ilu and gmres. The check procedure is a Matlab wrapper with a C/C++ kernel function which checks the diagonal dominance condition in Theorem 3 and prepares row, col and val for the sparse function. If experiments are carried out outside of the Matlab environment, the sparse function is not necessary. The results are presented to demonstrate that checking the theorem and generating the interpolation matrix take a relatively small fraction of the overall time. Details of the C/C++ implementation will be published in an ongoing software package.

Table 4
Information of several test problems
                GMRES                  ILU(0)-GMRES
Prob   N        time      iters       ilu       gmres     total     iters
B4     1351     0.056699  26          0.000474  0.032375  0.032849  8
B3     5643     0.137854  29          0.001891  0.043237  0.045128  8
D3     15563    0.205449  24          0.005438  0.078885  0.084323  8
B2     24425    0.468109  30          0.008712  0.097339  0.106051  8
D2     68830    1.431795  30          0.031673  0.235703  0.267376  8
B1     105615   2.096634  30          0.041696  0.297057  0.338753  7
D1     300298   6.521828  32          0.160924  1.149587  1.310511  9
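The eigenvalue bound used in §6.3, $\mathrm{Re}(\lambda) \ge \min_j\{1 - \sum_{i\ne j} a_{ij}\}$, is a Gershgorin-type estimate. The following Python sketch (illustrative; a single common scale is used so that the matrix is symmetric and the eigenvalues real) compares the bound with computed eigenvalues:

```python
import numpy as np

phi = lambda r: np.where(r < 1, (1 - r)**4 * (4*r + 1), 0.0)
rng = np.random.default_rng(4)
X = rng.random((80, 2))
r = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
A = phi(2.0 * r)                      # one common scale, so A is symmetric
row = A.sum(axis=1) - np.diag(A)      # Gershgorin radii (entries are >= 0)
lower = (np.diag(A) - row).min()      # lambda >= min_j (1 - sum_{i!=j} a_ij)
upper = (np.diag(A) + row).max()
eigs = np.linalg.eigvalsh(A)
print(np.all(eigs >= lower - 1e-12) and np.all(eigs <= upper + 1e-12))
```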
7. Conclusion. The main results establishing solvability of the equations for interpolation with radial basis functions generally employ Fourier transforms and Bochner's theorem. As such they apply only to RBFs with identical shape parameter. In this manuscript we provide sufficient conditions for invertibility of the interpolation equations for compactly supported RBFs with differing shapes. Examples are given which demonstrate the utility. Error estimates for approximation with such a scheme are unclear at the moment.

REFERENCES

[1] M. Bozzini, L. Lenarduzzi, M. Rossini, and R. Schaback, Interpolation by basis functions of different scales and shapes, Calcolo, 41 (2004), pp. 77–87.
[2] M. Bozzini, L. Lenarduzzi, M. Rossini, and R. Schaback, Interpolation with variably scaled kernels, IMA J. Numer. Anal., (2014), pp. 1–21.
Fig. 6. (a) and (b): the distribution of eigenvalues for Bunny453 and Bunny1889; the blue circles are eigenvalues and the blue polygon is the convex hull of the eigenvalues. (c) to (e): the convergence history of the GMRES and ILU-preconditioned GMRES methods for Bunny453, Bunny1889 and Dragon100250. (f) Scalability: the time for solving the interpolation linear systems produced by Algorithm 2 (mkA) with the GMRES solver without any preconditioning depends almost linearly on the dimension of the linear system, $T = cN^{1.045}$ with $c = 1.2168 \times 10^{-5}$.
[3] M. Bozzini, L. Lenarduzzi, and R. Schaback, Adaptive interpolation by scaled multiquadrics, Adv. Comput. Math., 16 (2002), pp. 375–387.
[Figure 7 omitted: four reconstructed surfaces.]

Fig. 7. Surface reconstruction from the 3D Scanning Repository: (a) Bunny453, (b) Bunny1889, (c) Bunny8171, (d) Dragon5205.
[3] M. Bozzini, L. Lenarduzzi, and R. Schaback, Adaptive interpolation by scaled multiquadrics, Adv. Comput. Math., 16 (2002), pp. 375–387.
[4] J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans, Reconstruction and representation of 3D objects with radial basis functions, in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, ACM, 2001, pp. 67–76.
[5] J. E. Castrillón-Candás, J. Li, and V. Eijkhout, A discrete adapted hierarchical basis solver for radial basis function interpolation, BIT, 53 (2013), pp. 57–86.
[6] Q. Deng and T. A. Driscoll, A fast treecode for multiquadric interpolation with varying shape parameters, SIAM J. Sci. Comput., 34 (2012), pp. A1126–A1140.
[7] G. E. Fasshauer, Meshfree Approximation Methods with MATLAB, vol. 6 of Interdisciplinary Mathematical Sciences, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2007. With 1 CD-ROM (Windows, Macintosh and UNIX).
[8] N. Flyer and E. Lehto, Rotational transport on a sphere: local node refinement with radial basis functions, J. Comput. Phys., 229 (2010), pp. 1954–1969.
[9] B. Fornberg and J. Zuev, The Runge phenomenon and spatially variable shape parameters in RBF interpolation, Comput. Math. Appl., 54 (2007), pp. 379–398.
[10] J. M. Fowkes, N. I. M. Gould, and C. L. Farmer, A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions, J. Global Optim., 56 (2013), pp. 1791–1815.
[11] E. J. Fuselier and G. B. Wright, A high-order kernel method for diffusion and reaction-diffusion equations on surfaces, J. Sci. Comput., 56 (2013), pp. 535–565.
[12] A. A. Jamshidi and M. J. Kirby, Skew-radial basis function expansions for empirical modeling, SIAM J. Sci. Comput., 31 (2009/10), pp. 4715–4743.
[13] E. J. Kansa, Multiquadrics—a scattered data approximation scheme with applications to computational fluid-dynamics. II. Solutions to parabolic, hyperbolic and elliptic partial differential equations, Comput. Math. Appl., 19 (1990), pp. 147–161.
[14] E. J. Kansa and R. E. Carlson, Improved accuracy of multiquadric interpolation using variable shape parameters, Comput. Math. Appl., 24 (1992), pp. 99–120. Advances in the theory and applications of radial basis functions.
[15] E. Marchandise, C. Piret, and J.-F. Remacle, CAD and mesh repair with radial basis functions, J. Comput. Phys., 231 (2012), pp. 2376–2387.
[16] C. A. Micchelli, Interpolation of scattered data: distance matrices and conditionally positive definite functions, Constr. Approx., 2 (1986), pp. 11–22.
[17] S. A. Sarra and D. Sturgill, A random variable shape parameter strategy for radial basis function approximation methods, Eng. Anal. Bound. Elem., 33 (2009), pp. 1239–1245.
[18] I. J. Schoenberg, On certain metric spaces arising from Euclidean spaces by a change of metric and their imbedding in Hilbert space, Ann. of Math. (2), 38 (1937), pp. 787–793.
[19] V. Shankar, G. B. Wright, A. L. Fogelson, and R. M. Kirby, A radial basis function (RBF) finite difference method for the simulation of reaction–diffusion equations on stationary platelets within the augmented forcing method, Int. J. Numer. Meth. Fl.
[20] R. S. Varga, Matrix Iterative Analysis, vol. 27 of Springer Series in Computational Mathematics, Springer-Verlag, Berlin, expanded ed., 2000.
[21] R. S. Varga, E. B. Saff, and V. Mehrmann, Incomplete factorizations of matrices and connections with H-matrices, SIAM J. Numer. Anal., 17 (1980), pp. 787–793.
[22] H. Wendland, Scattered Data Approximation, vol. 17 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, 2005.
[23] F. Zhou, J. Zhang, X. Sheng, and G. Li, Shape variable radial basis function and its application in dual reciprocity boundary face method, Eng. Anal. Bound. Elem., 35 (2011), pp. 244–252.
[24] S. Zhu and A. J. Wathen, Fast sparse kernel summation on Cartesian grids, tech. report, The University of Oxford.