LIBSVM: a Library for Support Vector Machines

Chih-Chung Chang and Chih-Jen Lin∗

Last updated: January 3, 2006

Abstract

LIBSVM is a library for support vector machines (SVMs). Its goal is to help users easily use SVM as a tool. In this document, we present all its implementation details. For the usage of LIBSVM, the README file included in the package and the LIBSVM FAQ provide the information.

1 Introduction

LIBSVM is a library for support vector classification and regression. Its goal is to let users easily use SVM as a tool. In this document, we present all its implementation details. For the usage of LIBSVM, the README file included in the package provides the information. In Section 2, we show the formulations used in LIBSVM: C-support vector classification (C-SVC), ν-support vector classification (ν-SVC), distribution estimation (one-class SVM), ε-support vector regression (ε-SVR), and ν-support vector regression (ν-SVR). We discuss the implementation of solving the quadratic problems in Section 3. Section 4 describes two implementation techniques: shrinking and caching. We now also support different penalty parameters for unbalanced data; details are in Section 5. Then in Section 6 we discuss the implementation of multi-class classification. Model selection is very important for obtaining good generalization; LIBSVM provides simple and useful tools, which are discussed in Section 7.

2 Formulations

2.1 C-Support Vector Classification (Binary Case)

Given training vectors x_i ∈ R^n, i = 1,...,l, in two classes, and a vector y ∈ R^l such that y_i ∈ {1, −1}, C-SVC (Cortes and Vapnik, 1995; Vapnik, 1998) solves the following primal problem:

    min_{w,b,ξ}    (1/2) w^T w + C Σ_{i=1}^l ξ_i                      (2.1)
    subject to     y_i (w^T φ(x_i) + b) ≥ 1 − ξ_i,
                   ξ_i ≥ 0, i = 1,...,l.



∗ Department of Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan (http://www.csie.ntu.edu.tw/∼cjlin). E-mail: [email protected]


Its dual is

    min_α          (1/2) α^T Q α − e^T α                              (2.2)
    subject to     0 ≤ α_i ≤ C, i = 1,...,l,
                   y^T α = 0,

where e is the vector of all ones, C > 0 is the upper bound, Q is an l by l positive semi-definite matrix with Q_ij ≡ y_i y_j K(x_i, x_j), and K(x_i, x_j) ≡ φ(x_i)^T φ(x_j) is the kernel. Here training vectors x_i are mapped into a higher (maybe infinite) dimensional space by the function φ. The decision function is

    sgn( Σ_{i=1}^l y_i α_i K(x_i, x) + b ).
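For illustration, the decision function above can be sketched in Python with the RBF kernel K(u, v) = exp(−γ‖u − v‖²), which is LIBSVM's default kernel. The names `rbf_kernel`, `decision_value`, and `predict` below are hypothetical, not LIBSVM's API; this is a minimal sketch, not the library's implementation.

```python
import math

def rbf_kernel(u, v, gamma=0.5):
    """K(u, v) = exp(-gamma * ||u - v||^2), the RBF kernel."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq_dist)

def decision_value(x, sv, sv_y, sv_alpha, b, kernel=rbf_kernel):
    """Compute sum_i y_i * alpha_i * K(x_i, x) + b over the support vectors."""
    return sum(y_i * a_i * kernel(x_i, x)
               for x_i, y_i, a_i in zip(sv, sv_y, sv_alpha)) + b

def predict(x, sv, sv_y, sv_alpha, b):
    """Decision function sgn(sum_i y_i alpha_i K(x_i, x) + b)."""
    return 1 if decision_value(x, sv, sv_y, sv_alpha, b) >= 0 else -1
```

Only the training points with α_i > 0 (the support vectors) contribute to the sum, which is why trained models need to store only those points.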

2.2 ν-Support Vector Classification (Binary Case)

The ν-support vector classification (Schölkopf et al., 2000) uses a new parameter ν which lets one control the number of support vectors and errors. The parameter ν ∈ (0, 1] is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. Given training vectors x_i ∈ R^n, i = 1,...,l, in two classes, and a vector y ∈ R^l such that y_i ∈ {1, −1}, the primal form considered is:

    min_{w,b,ξ,ρ}  (1/2) w^T w − νρ + (1/l) Σ_{i=1}^l ξ_i
    subject to     y_i (w^T φ(x_i) + b) ≥ ρ − ξ_i,
                   ξ_i ≥ 0, i = 1,...,l, ρ ≥ 0.

The dual is:

    min_α          (1/2) α^T Q α                                      (2.3)
    subject to     0 ≤ α_i ≤ 1/l, i = 1,...,l,
                   e^T α ≥ ν,
                   y^T α = 0,

where Q_ij ≡ y_i y_j K(x_i, x_j). The decision function is:

    sgn( Σ_{i=1}^l y_i α_i K(x_i, x) + b ).

In (Crisp and Burges, 2000; Chang and Lin, 2001), it has been shown that e^T α ≥ ν can be replaced by e^T α = ν. With this property, in LIBSVM, we solve a scaled version of (2.3):

    min_α          (1/2) α^T Q α
    subject to     0 ≤ α_i ≤ 1, i = 1,...,l,
                   e^T α = νl,
                   y^T α = 0.

We output α/ρ, so the computed decision function is:

    sgn( Σ_{i=1}^l y_i (α_i/ρ) K(x_i, x) + b/ρ )

and then the two margins are y_i (w^T φ(x_i) + b) = ±1, which are the same as those of C-SVC.

2.3 Distribution Estimation (One-class SVM)

One-class SVM was proposed by (Schölkopf et al., 2001) for estimating the support of a high-dimensional distribution. Given training vectors x_i ∈ R^n, i = 1,...,l, without any class information, the primal form in (Schölkopf et al., 2001) is:

    min_{w,ξ,ρ}    (1/2) w^T w − ρ + (1/(νl)) Σ_{i=1}^l ξ_i
    subject to     w^T φ(x_i) ≥ ρ − ξ_i,
                   ξ_i ≥ 0, i = 1,...,l.

The dual is:

    min_α          (1/2) α^T Q α                                      (2.4)
    subject to     0 ≤ α_i ≤ 1/(νl), i = 1,...,l,
                   e^T α = 1,

where Q_ij = K(x_i, x_j) ≡ φ(x_i)^T φ(x_j). In LIBSVM we solve a scaled version of (2.4):

    min_α          (1/2) α^T Q α
    subject to     0 ≤ α_i ≤ 1, i = 1,...,l,
                   e^T α = νl.

The decision function is

    sgn( Σ_{i=1}^l α_i K(x_i, x) − ρ ).

2.4

-Support Vector Regression (-SVR)

Given a set of data points {(x_1, z_1),..., (x_l, z_l)}, such that x_i ∈ R^n is an input and z_i ∈ R^1 is a target output, the standard form of support vector regression (Vapnik, 1998) is:

    min_{w,b,ξ,ξ*} (1/2) w^T w + C Σ_{i=1}^l ξ_i + C Σ_{i=1}^l ξ_i^*
    subject to     w^T φ(x_i) + b − z_i ≤ ε + ξ_i,
                   z_i − w^T φ(x_i) − b ≤ ε + ξ_i^*,
                   ξ_i, ξ_i^* ≥ 0, i = 1,...,l.

The dual is:

    min_{α,α*}     (1/2)(α − α*)^T Q (α − α*) + ε Σ_{i=1}^l (α_i + α_i^*) + Σ_{i=1}^l z_i (α_i − α_i^*)    (2.5)
    subject to     Σ_{i=1}^l (α_i − α_i^*) = 0,
                   0 ≤ α_i, α_i^* ≤ C, i = 1,...,l,

where Q_ij = K(x_i, x_j) ≡ φ(x_i)^T φ(x_j). The approximate function is:

    Σ_{i=1}^l (−α_i + α_i^*) K(x_i, x) + b.

2.5 ν-Support Vector Regression (ν-SVR)

Similar to ν-SVC, for regression (Schölkopf et al., 2000) use a parameter ν to control the number of support vectors. However, unlike ν-SVC, where C is replaced by ν, here ν replaces the parameter ε of ε-SVR. The primal form is

    min_{w,b,ξ,ξ*,ε}  (1/2) w^T w + C ( νε + (1/l) Σ_{i=1}^l (ξ_i + ξ_i^*) )       (2.6)
    subject to        (w^T φ(x_i) + b) − z_i ≤ ε + ξ_i,
                      z_i − (w^T φ(x_i) + b) ≤ ε + ξ_i^*,
                      ξ_i, ξ_i^* ≥ 0, i = 1,...,l, ε ≥ 0,

and the dual is

    min_{α,α*}     (1/2)(α − α*)^T Q (α − α*) + z^T (α − α*)                       (2.7)
    subject to     e^T (α − α*) = 0, e^T (α + α*) ≤ Cν,
                   0 ≤ α_i, α_i^* ≤ C/l, i = 1,...,l.

Similarly, the inequality e^T (α + α*) ≤ Cν can be replaced by an equality. In LIBSVM, we consider C ← C/l, so the dual problem solved is:

    min_{α,α*}     (1/2)(α − α*)^T Q (α − α*) + z^T (α − α*)                       (2.8)
    subject to     e^T (α − α*) = 0, e^T (α + α*) = Clν,
                   0 ≤ α_i, α_i^* ≤ C, i = 1,...,l.

Then the decision function is

    Σ_{i=1}^l (−α_i + α_i^*) K(x_i, x) + b,

the same as that of ε-SVR.

3 Solving the Quadratic Problems

3.1 The Decomposition Method for C-SVC, ε-SVR, and One-class SVM

We consider the following general form of C-SVC, ε-SVR, and one-class SVM:

    min_α          (1/2) α^T Q α + p^T α                              (3.1)
    subject to     y^T α = ∆,
                   0 ≤ α_t ≤ C, t = 1,...,l,

where y_t = ±1, t = 1,...,l. It can be clearly seen that C-SVC and one-class SVM are already in the form of (3.1). For ε-SVR, we consider the following reformulation of (2.5):

    min_{α,α*}     (1/2) [α^T, (α*)^T] [ Q  −Q ; −Q  Q ] [α ; α*] + [εe^T + z^T, εe^T − z^T] [α ; α*]
    subject to     y^T [α ; α*] = 0,
                   0 ≤ α_t, α_t^* ≤ C, t = 1,...,l,                   (3.2)

where y is a 2l by 1 vector with y_t = 1, t = 1,...,l, and y_t = −1, t = l+1,...,2l.

The difficulty of solving (3.1) is the density of Q, because Q_ij is in general not zero. In LIBSVM, we consider the decomposition method to conquer this difficulty. Some works on this method are, for example, (Osuna et al., 1997a; Joachims, 1998; Platt, 1998). This method modifies only a subset of α per iteration. This subset, denoted as the working set B, leads to a small sub-problem to be minimized in each iteration. An extreme case is Sequential Minimal Optimization (SMO) (Platt, 1998), which restricts B to have only two elements. Then in each iteration one solves a simple two-variable problem without needing optimization software. Here we consider an SMO-type decomposition method proposed in (Fan et al., 2005).

Algorithm 1 (An SMO-type decomposition method in (Fan et al., 2005))

1. Find α^1 as the initial feasible solution. Set k = 1.

2. If α^k is a stationary point of (2.2), stop. Otherwise, find a two-element working set B = {i, j} by WSS 1 (described in Section 3.2). Define N ≡ {1,...,l}\B, and let α_B^k and α_N^k be the sub-vectors of α^k corresponding to B and N, respectively.

3. If a_ij ≡ K_ii + K_jj − 2K_ij > 0, solve the following sub-problem with the variables α_i and α_j:

       min_{α_i,α_j}  (1/2) [α_i, α_j] [ Q_ii  Q_ij ; Q_ij  Q_jj ] [α_i ; α_j] + (p_B + Q_BN α_N^k)^T [α_i ; α_j]    (3.3)
       subject to     0 ≤ α_i, α_j ≤ C,
                      y_i α_i + y_j α_j = ∆ − y_N^T α_N^k;

   else solve

       min_{α_i,α_j}  (1/2) [α_i, α_j] [ Q_ii  Q_ij ; Q_ij  Q_jj ] [α_i ; α_j] + (p_B + Q_BN α_N^k)^T [α_i ; α_j]
                      + ((τ − a_ij)/4) ( (α_i − α_i^k)^2 + (α_j − α_j^k)^2 )                                         (3.4)
       subject to     constraints of (3.3).

4. Set α_B^{k+1} to be the optimal solution of (3.3) (or (3.4)) and α_N^{k+1} ≡ α_N^k. Set k ← k + 1 and go to Step 2.

Note that B is updated in each iteration. To simplify the notation, we simply use B instead of B^k. If a_ij ≤ 0, (3.3) is a concave problem. Hence we use a convex modification in (3.4).
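As a sketch of Step 3: once the equality constraint eliminates one variable, the sub-problem (3.3) is a one-dimensional quadratic in α_i, whose constrained minimizer (assuming a_ij > 0, the convex case) is the stationary point clipped to the feasible interval. The function below is a hypothetical illustration, not LIBSVM's solver; `p_i` and `p_j` stand for the effective linear terms p_B + Q_BN α_N^k, and `s` for the constant ∆ − y_N^T α_N^k.

```python
def solve_subproblem(Qii, Qij, Qjj, p_i, p_j, y_i, y_j, s, C):
    """Minimize 1/2 [ai aj] Q [ai aj]^T + [p_i p_j][ai aj]^T subject to
    0 <= ai, aj <= C and y_i*ai + y_j*aj = s, assuming the curvature along
    the constraint line is positive and the feasible interval is nonempty."""
    t = y_i * y_j                      # on the line: aj = y_j*s - t*ai
    quad = Qii + Qjj - 2.0 * t * Qij   # curvature along the line (= Kii+Kjj-2Kij)
    # stationary point of the reduced 1-D quadratic in ai
    ai = (t * p_j - p_i + y_j * s * (t * Qjj - Qij)) / quad
    # intersect 0 <= ai <= C with 0 <= y_j*s - t*ai <= C
    if t == 1:
        lo, hi = max(0.0, y_j * s - C), min(C, y_j * s)
    else:
        lo, hi = max(0.0, -y_j * s), min(C, C - y_j * s)
    ai = min(max(ai, lo), hi)          # convexity: clip to the feasible interval
    aj = y_j * s - t * ai
    return ai, aj
```

When a_ij ≤ 0, the same clipped update would not be valid, which is exactly why Algorithm 1 switches to the modified problem (3.4) in that case.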


3.2 Stopping Criteria and Working Set Selection for C-SVC, ε-SVR, and One-class SVM

The Karush-Kuhn-Tucker (KKT) optimality condition of (3.1) shows that a vector α is a stationary point of (3.1) if and only if there is a number b and two nonnegative vectors λ and µ such that

    ∇f(α) + by = λ − µ,
    λ_i α_i = 0,  µ_i (C − α_i) = 0,  λ_i ≥ 0,  µ_i ≥ 0,  i = 1,...,l,

where ∇f(α) ≡ Qα + p is the gradient of f(α). This condition can be rewritten as

    ∇f(α)_i + b y_i ≥ 0   if α_i < C,                                 (3.5)
    ∇f(α)_i + b y_i ≤ 0   if α_i > 0.                                 (3.6)

Since y_i = ±1, by defining

    I_up(α) ≡ {t | α_t < C, y_t = 1 or α_t > 0, y_t = −1}, and        (3.7)
    I_low(α) ≡ {t | α_t < C, y_t = −1 or α_t > 0, y_t = 1},

a feasible α is a stationary point of (3.1) if and only if

    m(α) ≤ M(α),                                                      (3.8)

where

    m(α) ≡ max_{i∈I_up(α)} −y_i ∇f(α)_i,  and  M(α) ≡ min_{i∈I_low(α)} −y_i ∇f(α)_i.

From this we have the following stopping condition:

    m(α^k) − M(α^k) ≤ ε.                                              (3.9)
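The quantities m(α) and M(α) and the stopping condition (3.9) follow directly from the definitions (3.7)-(3.8); a minimal sketch (function names hypothetical):

```python
def m_and_M(alpha, y, grad, C):
    """m(alpha) = max over I_up of -y_i*grad_i; M(alpha) = min over I_low,
    with I_up and I_low defined as in (3.7)."""
    I_up = [i for i in range(len(alpha))
            if (alpha[i] < C and y[i] == 1) or (alpha[i] > 0 and y[i] == -1)]
    I_low = [i for i in range(len(alpha))
             if (alpha[i] < C and y[i] == -1) or (alpha[i] > 0 and y[i] == 1)]
    m = max(-y[i] * grad[i] for i in I_up)
    M = min(-y[i] * grad[i] for i in I_low)
    return m, M

def is_stationary(alpha, y, grad, C, eps=1e-3):
    """Stopping condition (3.9): m(alpha) - M(alpha) <= eps."""
    m, M = m_and_M(alpha, y, grad, C)
    return m - M <= eps
```

The difference m(α) − M(α) is the maximal violation of (3.8), so it also serves as a natural progress measure during the iterations.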

For the selection of the working set B, we consider the following procedure:

WSS 1

1. For all t, s, define

       a_ts ≡ K_tt + K_ss − 2K_ts,   b_ts ≡ −y_t ∇f(α^k)_t + y_s ∇f(α^k)_s > 0,   (3.10)

   and

       ā_ts ≡ a_ts   if a_ts > 0,
              τ      otherwise.                                                    (3.11)

   Select

       i ∈ arg max_t { −y_t ∇f(α^k)_t | t ∈ I_up(α^k) },
       j ∈ arg min_t { −b_it^2 / ā_it | t ∈ I_low(α^k), −y_t ∇f(α^k)_t < −y_i ∇f(α^k)_i }.   (3.12)

2. Return B = {i, j}.

Details of how we choose this working set are in (Fan et al., 2005, Section II).
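WSS 1 can be sketched as follows, assuming the gradient ∇f(α^k) and the kernel matrix K are available. The names are hypothetical, and the value of τ mirrors LIBSVM's small positive constant; this is an illustration, not the library's code.

```python
TAU = 1e-12  # small positive constant used when a_ts <= 0

def select_working_set(alpha, y, grad, K, C):
    """WSS 1: pick i as the maximal violating index over I_up, then pick j
    in I_low minimizing the second-order estimate -b_it^2 / a_bar_it."""
    l = len(alpha)
    in_up = lambda t: (alpha[t] < C and y[t] == 1) or (alpha[t] > 0 and y[t] == -1)
    in_low = lambda t: (alpha[t] < C and y[t] == -1) or (alpha[t] > 0 and y[t] == 1)
    i = max((t for t in range(l) if in_up(t)), key=lambda t: -y[t] * grad[t])
    g_i = -y[i] * grad[i]
    best_j, best_obj = None, 0.0
    for t in range(l):
        if in_low(t) and -y[t] * grad[t] < g_i:
            a = K[i][i] + K[t][t] - 2 * K[i][t]
            a_bar = a if a > 0 else TAU
            b = g_i - (-y[t] * grad[t])          # b_it > 0 by the filter above
            obj = -(b * b) / a_bar
            if best_j is None or obj < best_obj:
                best_j, best_obj = t, obj
    return i, best_j  # best_j is None only when (3.8) already holds
```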

3.3 Convergence of the Decomposition Method

See (Fan et al., 2005, Section III) or (Chen et al., 2006) for a detailed discussion of the convergence of Algorithm 1.

3.4 The Decomposition Method for ν-SVC and ν-SVR

Both ν-SVC and ν-SVR can be considered as the following general form:

    min_α          (1/2) α^T Q α + p^T α                              (3.13)
    subject to     y^T α = ∆_1,
                   e^T α = ∆_2,
                   0 ≤ α_t ≤ C, t = 1,...,l.

The KKT condition of (3.13) shows

    ∇f(α)_i − ρ + b y_i  = 0   if 0 < α_i < C,
                         ≥ 0   if α_i = 0,
                         ≤ 0   if α_i = C.

Define r_1 ≡ ρ − b and r_2 ≡ ρ + b. If y_i = 1, the KKT condition becomes

    ∇f(α)_i − r_1 ≥ 0   if α_i < C,                                   (3.14)
                  ≤ 0   if α_i > 0.

On the other hand, if y_i = −1, it is

    ∇f(α)_i − r_2 ≥ 0   if α_i < C,                                   (3.15)
                  ≤ 0   if α_i > 0.

Hence, given a tolerance ε > 0, the stopping condition is:

    max( max_{α_i>0, y_i=1} ∇f(α)_i − min_{α_i<C, y_i=1} ∇f(α)_i,
         max_{α_i>0, y_i=−1} ∇f(α)_i − min_{α_i<C, y_i=−1} ∇f(α)_i ) < ε.

The working set selection is done by extending WSS 1 to the following

WSS 2 (Extending WSS 1 to ν-SVM)

1. Find

       i_p ∈ arg m_p(α^k),
       j_p ∈ arg min_t { −b_{i_p t}^2 / ā_{i_p t} | y_t = 1, t ∈ I_low(α^k), −y_t ∇f(α^k)_t < −y_{i_p} ∇f(α^k)_{i_p} }.

2. Find

       i_n ∈ arg m_n(α^k),
       j_n ∈ arg min_t { −b_{i_n t}^2 / ā_{i_n t} | y_t = −1, t ∈ I_low(α^k), −y_t ∇f(α^k)_t < −y_{i_n} ∇f(α^k)_{i_n} }.

3. Return {i_p, j_p} or {i_n, j_n}, depending on which one gives a smaller −b_ij^2 / ā_ij.

3.5 Analytical Solutions

Details are described in Section 5, in which we discuss the solution of a more general sub-problem.

3.6 The Calculation of b or ρ

After the solution α of the dual optimization problem is obtained, the variables b or ρ must be calculated, as they are used in the decision function. Here we describe only the case of ν-SVC and ν-SVR, where b and ρ both appear; other formulations are simplified cases of them. The KKT condition of (3.13) has been shown in (3.14) and (3.15). Now we consider the case y_i = 1. If there are α_i which satisfy 0 < α_i < C, then r_1 = ∇f(α)_i. Practically, to avoid numerical errors, we average over them:

    r_1 = ( Σ_{0<α_i<C, y_i=1} ∇f(α)_i ) / |{ i | 0 < α_i < C, y_i = 1 }|.

4 Shrinking and Caching

    {t | −y_t ∇f(α^k)_t > m(α^k), α_t^k = C, y_t = 1 or α_t^k = 0, y_t = −1}
      ∪ {t | −y_t ∇f(α^k)_t < M(α^k), α_t^k = 0, y_t = 1 or α_t^k = C, y_t = −1}.   (4.5)

Thus the set A of activated variables is dynamically reduced every min(l, 1000) iterations.

Of course, the above shrinking strategy may be too aggressive. Since the decomposition method has very slow convergence and a large portion of iterations is spent on achieving the final digits of the required accuracy, we do not want those iterations to be wasted because of a wrongly shrunken problem (4.3). Hence, when the decomposition method first achieves the tolerance

    m(α^k) ≤ M(α^k) + 10ε,

where ε is the specified stopping tolerance, we reconstruct the whole gradient. Then we inactivate some variables based on the current set (4.5), and the decomposition method continues. Therefore, in LIBSVM, the size of the set A of (4.3) is dynamically reduced. To decrease the cost of reconstructing the gradient ∇f(α), during the iterations we always keep

    Ḡ_i = C Σ_{α_j = C} Q_ij,  i = 1,...,l.

Then for the gradient ∇f(α)_i, i ∉ A, we have

    ∇f(α)_i = Σ_{j=1}^l Q_ij α_j + p_i = Ḡ_i + Σ_{0<α_j<C} Q_ij α_j + p_i.

5 Unbalanced Data

For unbalanced data, LIBSVM supports different penalty parameters, so the two variables of a sub-problem may have different upper bounds C_i and C_j. For y_i ≠ y_j, the difference diff ≡ α_i − α_j is fixed by the equality constraint, and the unconstrained solution of the sub-problem may fall outside the feasible region; for example, it may happen that diff > C_i − C_j and α_i^new ≥ C_i. Other cases are similar. Therefore, we have the following procedure to identify (α_i^new, α_j^new) in different regions and change it back to the feasible set:

    if (y[i] != y[j])
    {
        double quad_coef = Q_i[i] + Q_j[j] + 2*Q_i[j];
        if (quad_coef <= 0)
            quad_coef = TAU;
        double delta = (-G[i] - G[j]) / quad_coef;
        double diff = alpha[i] - alpha[j];
        alpha[i] += delta;
        alpha[j] += delta;
        if (diff > 0)
        {
            if (alpha[j] < 0) // in region 3
            {
                alpha[j] = 0;
                alpha[i] = diff;
            }
        }
        else
        {
            if (alpha[i] < 0) // in region 4
            {
                alpha[i] = 0;
                alpha[j] = -diff;
            }
        }
        if (diff > C_i - C_j)
        {
            if (alpha[i] > C_i) // in region 1
            {
                alpha[i] = C_i;
                alpha[j] = C_i - diff;
            }
        }
        else
        {
            if (alpha[j] > C_j) // in region 2
            {
                alpha[j] = C_j;
                alpha[i] = C_j + diff;
            }
        }
    }
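To illustrate the four clipping regions for y_i ≠ y_j, the same logic can be mirrored in a small Python sketch. Variable names follow the C code; this is an illustration of the region tests, not LIBSVM's implementation.

```python
def clip_to_feasible_region(alpha_i, alpha_j, C_i, C_j):
    """For y_i != y_j, alpha_i - alpha_j is fixed (diff); clip a possibly
    infeasible sub-problem solution back to [0, C_i] x [0, C_j] along the
    line alpha_i - alpha_j = diff."""
    diff = alpha_i - alpha_j
    if diff > 0:
        if alpha_j < 0:              # region 3
            alpha_j = 0.0
            alpha_i = diff
    else:
        if alpha_i < 0:              # region 4
            alpha_i = 0.0
            alpha_j = -diff
    if diff > C_i - C_j:
        if alpha_i > C_i:            # region 1
            alpha_i = C_i
            alpha_j = C_i - diff
    else:
        if alpha_j > C_j:            # region 2
            alpha_j = C_j
            alpha_i = C_j + diff
    return alpha_i, alpha_j
```

Note that both clipping steps preserve alpha_i − alpha_j = diff, so the equality constraint remains satisfied after the projection.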

6 Multi-class Classification

We use the “one-against-one” approach (Knerr et al., 1990) in which k(k−1)/2 classifiers are constructed, each trained on data from two different classes. The first use of this strategy for SVMs was in (Friedman, 1996; Kreßel, 1999). For training data from the ith and the jth classes, we solve the following binary classification problem:

    min_{w^ij, b^ij, ξ^ij}  (1/2)(w^ij)^T w^ij + C Σ_t ξ_t^ij
    subject to              (w^ij)^T φ(x_t) + b^ij ≥ 1 − ξ_t^ij,   if x_t is in the ith class,
                            (w^ij)^T φ(x_t) + b^ij ≤ −1 + ξ_t^ij,  if x_t is in the jth class,
                            ξ_t^ij ≥ 0.

In classification we use a voting strategy: each binary classifier casts a vote for the data point x, and in the end x is designated to be in the class with the maximum number of votes. In case two classes have identical votes, though it may not be a good strategy, we simply select the one with the smallest index. There are other methods for multi-class classification; some reasons why we choose this “one-against-one” approach, and detailed comparisons, are in (Hsu and Lin, 2002).
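The voting strategy, including the smallest-index tie-breaking, can be sketched as follows. Here `predict_pair` is a hypothetical callback standing in for a trained binary classifier that returns one of its two classes; the function name is an assumption, not LIBSVM's API.

```python
from itertools import combinations

def one_against_one_predict(x, classes, predict_pair):
    """Each of the k(k-1)/2 pairwise classifiers votes for one of its two
    classes; x is assigned to the class with the most votes, and ties are
    broken by the smallest class index."""
    votes = {c: 0 for c in classes}
    for ci, cj in combinations(classes, 2):
        votes[predict_pair(ci, cj, x)] += 1
    # highest vote count wins; among ties, the smallest class index wins
    return min(classes, key=lambda c: (-votes[c], c))
```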

7 Model Selection

LIBSVM provides a model selection tool for the RBF kernel: cross validation via parallel grid search. Currently, we support only C-SVC, where two parameters are considered: C and γ. The tool can be easily modified for other kernels such as linear and polynomial. For medium-sized problems, cross validation might be the most reliable way for model selection. First, the training data is separated into several folds. Sequentially, one fold is considered as the validation set and the rest are used for training. The average accuracy on predicting the validation sets is the cross validation accuracy.

Our implementation is as follows. Users provide an interval of C (or γ) together with the grid spacing. Then, all grid points of (C, γ) are tried to see which one gives the highest cross validation accuracy. Users then use the best parameter to train the whole training set and generate the final model. For easy implementation, we treat each SVM with parameters (C, γ) as an independent problem. As they are different jobs, we can easily solve them in parallel. Currently, LIBSVM provides a very simple tool so that jobs are dispatched to a cluster of computers which share the same file system. Note that under the same (C, γ), the one-against-one method is used for training multi-class data. Hence, in the final model, all k(k−1)/2 decision functions share the same (C, γ).
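The grid search loop can be sketched as follows. Here `cv_accuracy` is a hypothetical callback that runs cross validation for one (C, γ) pair and returns the accuracy, and the log2-scale grid reflects the common practice of trying exponentially growing sequences of C and γ; none of the names are LIBSVM's own.

```python
def grid_search(cv_accuracy, log2C_range, log2g_range):
    """Try all (C, gamma) grid points and return the triple
    (best accuracy, best C, best gamma)."""
    best = None
    for log2C in log2C_range:
        for log2g in log2g_range:
            C, gamma = 2.0 ** log2C, 2.0 ** log2g
            acc = cv_accuracy(C, gamma)
            if best is None or acc > best[0]:
                best = (acc, C, gamma)
    return best
```

Because each grid point is evaluated independently, the two loops can be replaced by dispatching one (C, γ) job per machine, which is exactly what makes the parallel cluster version straightforward.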


Figure 1: Contour plot of heart_scale included in the LIBSVM package

LIBSVM also outputs the contour plot of cross validation accuracy. An example is in Figure 1.

8 Probability Estimates

Originally, support vector classification (regression) predicts only the class label (approximate target value) but not probability information. In the following we briefly describe how we extend SVM for probability estimates. More details are in (Wu et al., 2004) for classification and in (Lin and Weng, 2004) for regression.

Given k classes of data, for any x, the goal is to estimate p_i = p(y = i | x), i = 1,...,k. Following the setting of the one-against-one (i.e., pairwise) approach for multi-class classification, we first estimate pairwise class probabilities r_ij ≈ p(y = i | y = i or j, x) using an improved implementation (Lin et al., 2003) of (Platt, 2000):

    r_ij ≈ 1 / (1 + e^{A f̂ + B}),                                    (8.1)

where A and B are estimated by minimizing the negative log-likelihood function using known training data and their decision values f̂. Labels and decision values are required to be independent, so here we conduct five-fold cross-validation to obtain decision values.

Then the second approach in (Wu et al., 2004) is used to obtain p_i from all these r_ij's. It solves the following optimization problem:

    min_p          (1/2) Σ_{i=1}^k Σ_{j:j≠i} (r_ji p_i − r_ij p_j)^2  (8.2)
    subject to     Σ_{i=1}^k p_i = 1,  p_i ≥ 0, ∀i.

The objective function comes from the equality

    p(y = j | y = i or j, x) · p(y = i | x) = p(y = i | y = i or j, x) · p(y = j | x)

and can be reformulated as

    min_p  (1/2) p^T Q p,                                             (8.3)

where

    Q_ij = Σ_{s:s≠i} r_si^2   if i = j,
           −r_ji r_ij         if i ≠ j.                               (8.4)

This problem is convex, so the optimality conditions are that there is a scalar b such that

    [ Q    e ] [ p ]   [ 0 ]
    [ e^T  0 ] [ b ] = [ 1 ].                                         (8.5)

Here e is the k × 1 vector of all ones, 0 is the k × 1 vector of all zeros, and b is the Lagrange multiplier of the equality constraint Σ_{i=1}^k p_i = 1. Instead of directly solving the linear system (8.5), we derive a simple iterative method in the following. As

    −p^T Q p = −p^T Q (−b Q^{−1} e) = b p^T e = b,

the solution p satisfies

    Q_tt p_t + Σ_{j:j≠t} Q_tj p_j − p^T Q p = 0,  for any t.          (8.6)

Using (8.6), we consider the following algorithm:

Algorithm 2

1. Start with some initial p_i ≥ 0, ∀i, such that Σ_{i=1}^k p_i = 1.

2. Repeat (t = 1,...,k, 1,...):

       p_t ← (1/Q_tt) [ −Σ_{j:j≠t} Q_tj p_j + p^T Q p ]               (8.7)
       normalize p                                                    (8.8)

   until (8.5) is satisfied.

This procedure is guaranteed to find a global optimum of (8.2). Using some tricks, we do not need to recalculate p^T Q p in each iteration. Detailed implementation notes are in Appendix C of (Wu et al., 2004). We consider a relative stopping condition for Algorithm 2:

    ‖Qp − (p^T Q p) e‖_∞ = max_t |(Qp)_t − p^T Q p| < 0.005/k.
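Algorithm 2 can be sketched directly from (8.4) and (8.7)-(8.8). This naive version recomputes p^T Q p at every step instead of using the tricks mentioned above, and adds a sweep cap as a safety measure; the function name and these choices are assumptions of the sketch, not LIBSVM's code.

```python
def pairwise_coupling(r, k):
    """Obtain class probabilities p from pairwise estimates
    r[i][j] ~ P(y = i | y = i or j, x), following Algorithm 2."""
    # build Q from (8.4)
    Q = [[0.0] * k for _ in range(k)]
    for i in range(k):
        Q[i][i] = sum(r[s][i] ** 2 for s in range(k) if s != i)
        for j in range(k):
            if j != i:
                Q[i][j] = -r[j][i] * r[i][j]
    p = [1.0 / k] * k
    for _ in range(1000):                        # sweep cap for safety
        Qp = [sum(Q[t][j] * p[j] for j in range(k)) for t in range(k)]
        pQp = sum(p[t] * Qp[t] for t in range(k))
        if max(abs(Qp[t] - pQp) for t in range(k)) < 0.005 / k:
            break                                # relative stopping condition
        for t in range(k):
            Qp_t = sum(Q[t][j] * p[j] for j in range(k))
            pQp = sum(p[i] * sum(Q[i][j] * p[j] for j in range(k))
                      for i in range(k))
            # (8.7): p_t <- (1/Q_tt) [ -sum_{j != t} Q_tj p_j + p^T Q p ]
            p[t] = (pQp - (Qp_t - Q[t][t] * p[t])) / Q[t][t]
            s = sum(p)                           # (8.8): normalize p
            p = [v / s for v in p]
    return p
```

For k = 2 the optimum has a closed form, p_1 = r_12, which makes the sketch easy to sanity-check.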

When k is large, the elements of p are closer to zero, so we decrease the tolerance by a factor of k.

Next, we discuss SVR probability inference. For a given set of training data D = {(x_i, y_i) | x_i ∈ R^n, y_i ∈ R, i = 1,...,l}, we suppose that the data are collected from the model:

    y_i = f(x_i) + δ_i,                                               (8.9)

where f(x) is the underlying function and the δ_i are independent and identically distributed random noises. Given a test point x, the distribution of y given x and D, P(y | x, D), allows one to draw probabilistic inferences about y; for example, one can construct a predictive interval I = I(x) such that y ∈ I with a pre-specified probability. Denoting f̂ as the estimated function based on D using SVR, ζ = ζ(x) ≡ y − f̂(x) is the out-of-sample residual (or prediction error), and y ∈ I is equivalent to ζ ∈ I − f̂(x). We propose to model the distribution of ζ based on a set of out-of-sample residuals {ζ_i}_{i=1}^l obtained from the training data D. The ζ_i's are generated by first conducting a k-fold cross validation to get f̂_j, j = 1,...,k, and then setting ζ_i ≡ y_i − f̂_j(x_i) for (x_i, y_i) in the jth fold. It is conceptually clear that the distribution of the ζ_i's may resemble that of the prediction error ζ.

Figure 2 illustrates ζ_i's from a real data set. Basically, a discretized distribution like a histogram can be used to model the data; however, it is complex because all ζ_i's must be retained. On the contrary, distributions like Gaussian and Laplace, commonly used as noise models, require only location and scale parameters. In Figure 2 we plot the fitted curves using these two families and the histogram of ζ_i's. The figure shows that the distribution of ζ_i's seems symmetric about zero and that both Gaussian and Laplace reasonably capture its shape. Thus, we propose to model ζ_i by a zero-mean Gaussian or Laplace, or equivalently, to model the conditional distribution of y given f̂(x) by a Gaussian or Laplace with mean f̂(x).

(Lin and Weng, 2004) discussed a method to judge whether a Laplace or Gaussian distribution should be used. Moreover, they experimentally show that in all cases they have tried, Laplace is better. Thus, here we consider the zero-mean Laplace distribution with density function:

    p(z) = (1/(2σ)) e^{−|z|/σ}.                                       (8.10)

Assuming that the ζ_i are independent, we can estimate the scale parameter by maximizing the likelihood. For Laplace, the maximum likelihood estimate is

    σ = ( Σ_{i=1}^l |ζ_i| ) / l.                                      (8.11)

(Lin and Weng, 2004) pointed out that some “very extreme” ζ_i may cause inaccurate estimation of σ. Thus, they propose to estimate the scale parameter after discarding ζ_i's which exceed ±5 × (standard deviation of the ζ_i).

Thus, for any new data x, we consider that y = f̂(x) + z, where z is a random variable following the Laplace distribution with parameter σ. In theory, the distribution of ζ may depend on the input x, but here we assume that it is free of x. This is similar to the model (8.1) for classification. Such an assumption works well in practice and leads to a simple model.
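The scale estimate (8.11) with the ±5-standard-deviation trimming can be sketched as follows; the exact trimming convention (sample standard deviation, dividing by the number of retained residuals) is an assumption of this sketch rather than a statement of the reference's implementation.

```python
def laplace_scale(residuals):
    """Maximum likelihood estimate of the Laplace scale parameter, as in
    (8.11), after discarding residuals beyond +-5 standard deviations."""
    l = len(residuals)
    mean = sum(residuals) / l
    std = (sum((z - mean) ** 2 for z in residuals) / l) ** 0.5
    kept = [z for z in residuals if abs(z) <= 5 * std]
    return sum(abs(z) for z in kept) / len(kept)
```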

Figure 2: Histogram of ζi ’s from a data set and the modeling via Laplace and Gaussian distributions. The x-axis is ζi using five-fold CV and the y-axis is the normalized number of data in each bin of width 1.


Acknowledgments This work was supported in part by the National Science Council of Taiwan via the grants NSC 89-2213-E-002-013 and NSC 89-2213-E-002-106. The authors thank Chih-Wei Hsu and Jen-Hao Lee for many helpful discussions and comments. We also thank Ryszard Czerminski and Lily Tian for some useful comments.

References

C.-C. Chang and C.-J. Lin. Training ν-support vector classifiers: Theory and algorithms. Neural Computation, 13(9):2119–2147, 2001.

P.-H. Chen, R.-E. Fan, and C.-J. Lin. A study on SMO-type decomposition methods for support vector machines. IEEE Transactions on Neural Networks, 2006. URL http://www.csie.ntu.edu.tw/∼cjlin/papers/generalSMO.pdf. To appear.

C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.

D. J. Crisp and C. J. C. Burges. A geometric interpretation of ν-SVM classifiers. In S. Solla, T. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems, volume 12, Cambridge, MA, 2000. MIT Press.

R.-E. Fan, P.-H. Chen, and C.-J. Lin. Working set selection using second order information for training SVM. Journal of Machine Learning Research, 6:1889–1918, 2005. URL http://www.csie.ntu.edu.tw/∼cjlin/papers/quadworkset.pdf.

J. Friedman. Another approach to polychotomous classification. Technical report, Department of Statistics, Stanford University, 1996. Available at http://www-stat.stanford.edu/reports/friedman/poly.ps.Z.

C.-W. Hsu and C.-J. Lin. A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks, 13(2):415–425, 2002.

T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, Cambridge, MA, 1998. MIT Press.

S. Knerr, L. Personnaz, and G. Dreyfus. Single-layer learning revisited: a stepwise procedure for building and training a neural network. In J. Fogelman, editor, Neurocomputing: Algorithms, Architectures and Applications. Springer-Verlag, 1990.

U. Kreßel. Pairwise classification and support vector machines. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 255–268, Cambridge, MA, 1999. MIT Press.

C.-J. Lin and R. C. Weng. Simple probabilistic predictions for support vector regression. Technical report, Department of Computer Science, National Taiwan University, 2004. URL http://www.csie.ntu.edu.tw/∼cjlin/papers/svrprob.pdf.

H.-T. Lin, C.-J. Lin, and R. C. Weng. A note on Platt's probabilistic outputs for support vector machines. Technical report, Department of Computer Science, National Taiwan University, 2003. URL http://www.csie.ntu.edu.tw/∼cjlin/papers/plattprob.ps.

E. Osuna, R. Freund, and F. Girosi. Training support vector machines: An application to face detection. In Proceedings of CVPR'97, pages 130–136, New York, NY, 1997a. IEEE.

E. Osuna, R. Freund, and F. Girosi. Support vector machines: Training and applications. AI Memo 1602, Massachusetts Institute of Technology, 1997b.

J. Platt. Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, Cambridge, MA, 2000. MIT Press. URL citeseer.nj.nec.com/platt99probabilistic.html.

J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, Cambridge, MA, 1998. MIT Press.

B. Schölkopf, A. Smola, R. C. Williamson, and P. L. Bartlett. New support vector algorithms. Neural Computation, 12:1207–1245, 2000.

B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.

V. Vapnik. Statistical Learning Theory. Wiley, New York, NY, 1998.

T.-F. Wu, C.-J. Lin, and R. C. Weng. Probability estimates for multi-class classification by pairwise coupling. Journal of Machine Learning Research, 5:975–1005, 2004. URL http://www.csie.ntu.edu.tw/∼cjlin/papers/svmprob/svmprob.pdf.
