MVA2005 IAPR Conference on Machine Vision Applications, May 16-18, 2005, Tsukuba Science City, Japan
15-3 Shadow Compensation in Color Images for Unstructured Road Segmentation

Ramin Ghurchian, Department of Bionics, Tokyo University of Technology, Tokyo, Japan, ramin@bs.teu.ac.jp
Satoshi Hashino, Department of Bionics, Tokyo University of Technology, Tokyo, Japan, hashino@bs.teu.ac.jp
Abstract

Road-following by mobile robots under varying outdoor illumination demands special care in road segmentation to handle color changes between sunny and shadowed parts. This paper addresses the technical feasibility of an automatic color evaluation method for fast segmentation of roads with unknown geometrical shapes. Instead of color segmentation in 2D or 3D color space, we use automatic color projection to reduce the total processing time.
1 Introduction
In a "follow-road" mission, in order to use visual feedback for robot position control, the location of the road must be detected in the input image. This involves segmenting the image into road and non-road regions, even in the presence of shadows on the road surface (Fig.1), and locating the road region that corresponds to the robot direction. One global approach is the use of a 3D representation of the surrounding field by stereo vision [11]. Systems that employ grey-level images, like [7, 16, 13, 14, 15, 6], either ignore the segmentation problem for shadows and highlights on the road surface, or use the edge properties of structured roads. Therefore, such methods cannot be used for degraded forest roads. Also, in [12] it is shown that fusion of too many image features, like texture and color, does not necessarily improve the result of road segmentation; automatic calculation of feature weights is a problem in such methods.

Carnegie Mellon University in Pittsburgh, USA, has conducted extensive research on autonomous outdoor vehicle navigation using color image processing and range data, such as RALPH, ALVINN and RANGER [2, 9]. This research was based on color clustering in 3D color space, which can be considered a general solution to color-based road segmentation. SCARF and UNSCARF [4, 5] are two remarkable examples of this method. However, in both examples the geometric properties of the road are an important factor. To avoid the time consumption of clustering in 3D color space, many researchers have tried to reduce the number of data dimensions. Turk et al. [3] argued that asphalt roads can be separated by a threshold in the R/B plane, based on the fact that in a color image pavements look more blue than the surrounding dirt or vegetation. Lin et al. [10] proposed asphalt road segmentation in the S/I (Saturation/Intensity) plane, indicating that asphalt saturation is lower than that of the surrounding regions. Such methods are applicable only to asphalt-paved roads. Fernandez et al. [1] proposed a method for segmentation of forest dirt roads based on converting the input RGB image to the H/I (Hue/Intensity) plane with 128 color (hue) and 64 gray levels.
2 Brightness Invariants
Our purpose in color analysis is to reject shadows and highlights in the road segmentation algorithm by using one single parameter. To speak in the language of light physics, each sensor cell at a given image point receives a band of wavelengths λ; the total received color C is written in integral form:

    C = ∫₀^∞ I(λ) ρ(λ) S(λ) dλ    (1)

where λ (in [nm]) is the wavelength of the light emitted by a point in the environment (λ ∈ {infrared ∼ ultraviolet}). The sensor effect S(λ) is itself a combination of the CCD sensitivity c(λ) and the output amplifier scaling factor sλ:

    S(λ) = sλ c(λ)    (2)

Since the CCD sensor sensitivity c(λ) is usually unknown, it is approximated by a delta function δ(λ). By replacing

    ∫₀^∞ S(λ) dλ = sλ ∫₀^∞ δ(λ) dλ = sλ    (3)

Eq.(1) simplifies to:

    C = sλ I(λ) ρ(λ).    (4)

Figure 1: Unstructured roads with strong shadows. (a) Test image by CMU. (b) The road in our test field.
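As a rough sanity check on the delta-function approximation behind Eq.(4), the sketch below (not from the paper; the smooth spectra I(λ), ρ(λ) and the narrow Gaussian sensor response standing in for δ(λ) are all assumptions) compares the full integral of Eq.(1) with the factorized form of Eq.(4):

```python
import numpy as np

lam = np.linspace(380.0, 780.0, 4001)      # visible wavelengths [nm]
dlam = lam[1] - lam[0]
I = 1.0 + 0.5 * np.sin(lam / 60.0)         # assumed smooth illumination I(λ)
rho = 0.3 + 0.2 * np.cos(lam / 90.0)       # assumed smooth reflectance ρ(λ)
s = 2.0                                    # amplifier scale factor sλ
lam0, sigma = 600.0, 2.0                   # hypothetical narrow red channel
c = np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)
c /= c.sum() * dlam                        # normalise so ∫ c(λ) dλ = 1

# Full integral of Eq.(1): C = ∫ I(λ)ρ(λ)S(λ)dλ with S(λ) = s·c(λ)
C_full = s * (I * rho * c).sum() * dlam
# Delta approximation of Eq.(4): C ≈ s·I(λ0)·ρ(λ0)
C_delta = s * np.interp(lam0, lam, I) * np.interp(lam0, lam, rho)
print(C_full, C_delta)
```

For a sensor response much narrower than the variation of I(λ) and ρ(λ), the two values agree closely, which is exactly the condition under which Eq.(4) is a reasonable model.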
As a result, a color component, say R, in sunny and shadow regions can be written as:

    Rs = sr Is(r) ρ(r)    (5)

    Rsh = sr Ish(r) ρ(r)    (6)

where sr stands for the sensor's output amplifier factor for the red signal, Is is the brightness of both highlight (direct reflection) and ambient light (indirect reflection), Ish is the ambient light alone, and ρ(r) is the reflection function of the object in red light. Without loss of generality, the same equations hold for the G and B components. An empirical analysis of colors in the RGB cube (as shown in Figure 2) shows that the pixel distributions can be represented simply by the slope and intercept of the diagonal axis of the color clusters.
The comparison between sunny and shadow parts leads to the result that color pixels are either shifted in brightness:

    Rs = sr (Ish(r) + b) ρ(r)    (7)

(where b is the shift of brightness), or shifted in color saturation:

    Rs = sr (m Ish(r)) ρ(r)    (8)

where m is the scale factor.

Figure 2: Effects of sunlight on the road color. (a) A change in brightness. (b) A change in brightness and color.

Based on these statistical results, there are two types of color features for eliminating the brightness effect from color signals:

2.1 Subtraction-based brightness invariants

In this case the easiest way to remove the brightness information from a pure color channel is to subtract it. Examples of this approach are Ohta's I2, E, or S in the YES color system. In fact, any subtraction like R − B, G − B, R − G in 2D, or

    S* = max(R, G, B) − min(R, G, B)    (9)

in 3D, or other combinations of this type, will generate a new parameter which is not sensitive to brightness changes. For asphalt roads bounded by dirt regions, Turk in [3] suggested the R − B parameter. But considering the three-dimensional nature of color, the following parameters r1 r2 r3 are proposed:

    r1 = max{G, B} − R
    r2 = max{R, B} − G
    r3 = max{R, G} − B    (10)

2.2 Division-based brightness invariants

Another way to achieve invariant parameters is to normalize the RGB values by the intensity factor. This can be done by division of the color signals. Examples of this type of parameter are normalized RGB (rgb), the saturation calculated by:

    S** = (max{R, G, B} − min{R, G, B}) / max{R, G, B}    (11)

the newly introduced colors c1 c2 c3:

    c1 = arctan(R / max{G, B})
    c2 = arctan(G / max{R, B})
    c3 = arctan(B / max{G, R})    (12)

and l1 l2 l3:

    l1 = (R − G)² / [(R − G)² + (R − B)² + (G − B)²]
    l2 = (R − B)² / [(R − G)² + (R − B)² + (G − B)²]
    l3 = (G − B)² / [(R − G)² + (R − B)² + (G − B)²]    (13)

which are suggested by Gevers and discussed in [8]. Yet another brightness-invariant model, which we propose, can be obtained by the calculation:

    r'1 = (max{G, B} − R) / max(R, G, B)
    r'2 = (max{R, B} − G) / max(R, G, B)
    r'3 = (max{R, G} − B) / max(R, G, B)    (14)

It is shown that in some forest road scenes, segmentation by r'3 yields better results in comparison to other brightness-invariant parameters.
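As an illustration (not from the paper), a minimal sketch of Eqs.(10) and (14) on a hypothetical sunny pixel and its shadowed counterparts shows which disturbance each family cancels: the subtraction-based r1 r2 r3 is unchanged under the brightness shift of Eq.(7), while the division-based r'1 r'2 r'3 is unchanged under the scaling of Eq.(8):

```python
import numpy as np

def r_feats(rgb):
    """Subtraction-based r1 r2 r3 of Eq.(10)."""
    R, G, B = rgb
    return np.array([max(G, B) - R, max(R, B) - G, max(R, G) - B])

def r_prime(rgb):
    """Division-based r'1 r'2 r'3 of Eq.(14)."""
    return r_feats(rgb) / max(rgb)

sunny = np.array([180.0, 150.0, 120.0])   # assumed sunny road pixel
shift = sunny - 60.0                      # shadow as brightness shift, Eq.(7)
scale = sunny * 0.5                       # shadow as brightness scaling, Eq.(8)

print(r_feats(sunny), r_feats(shift))     # identical: r is shift-invariant
print(r_prime(sunny), r_prime(scale))     # identical: r' is scale-invariant
```

The pixel values and the shift/scale amounts are arbitrary; any positive values exhibit the same invariances.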
3 Road Segmentation
Assume that a road can be represented by the mean value µc of an unknown color, where the subscript c indicates the road. We desire that the road segmentation be performed by computation of the Euclidean distance, so that for any given image pixel x, the distance defined by:

    d = |x − µc|    (15)

shows the similarity of that pixel to the road. Clearly, calculation of the distance by Eq.(15) is far faster than clustering techniques that use multiple color clusters.
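A minimal sketch of this per-pixel distance test; the feature image values and the cutoff threshold are assumptions for illustration:

```python
import numpy as np

def segment_road(feature_img, mu_c, thresh):
    """Label pixels whose distance to the road mean (Eq.15) is small.

    feature_img: 2-D array of the selected scalar color feature.
    mu_c: mean feature value of the road sample.
    thresh: assumed cutoff on the distance d = |x - mu_c|.
    """
    d = np.abs(feature_img - mu_c)
    return d < thresh                     # True = road

# Toy feature image: road pixels near 0.2, background near 0.8
img = np.array([[0.19, 0.22, 0.81],
                [0.18, 0.79, 0.83]])
mask = segment_road(img, mu_c=0.2, thresh=0.1)
print(mask)
```

Because the feature is one-dimensional, the test is a single subtraction and comparison per pixel, which is what makes it so much cheaper than 3D color clustering.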
Given a color feature f, we would like to know whether it can be used in Eq.(15) for road segmentation. This is decided by dividing the image into road and non-road regions. For the first frame, this task is done by the operator, but in subsequent frames it is performed automatically by the program. The road (hr) and non-road (hnr) histograms of feature f are then used to determine how well f fits the segmentation of the road region, as explained in the following paragraphs.

The zone z is the minimum region in the histogram, centered on µr, in which more than z percent of the road pixels are located. For example, when z = 90, the zone includes 90% of the total number of road pixels in the histogram. Mathematically it can be expressed by:

    ( Σ_{i=µr−dz}^{µr+dz} hr(i) / Nr ) × 100 > z    (in percent)    (16)

where hr is the road histogram, Nr is the total number of road pixels, and dz indicates the distance from µr as shown in Fig.3.

Figure 4: Results of feature selection for road segmentation on different test roads. Upper images are the input and lower images are the output of the selected color channel with the highest value of κ calculated by Eq.(17).
Figure 3: The definition of zone z on the road and non-road histograms. For a given z, the feature f is evaluated by Eq.(17).

Figure 5: The robot during self-navigation in a real forest field.
For any given z, a factor κ, calculated by:

    κ = ( 1.0 − Σ_{i=µr−dz}^{µr+dz} hnr(i) / Nnr ) × 100    (17)

is used for evaluation of the feature f, where Nnr is the total number of non-road pixels. The feature with the highest value of κ can then be selected for road segmentation. The method of feature evaluation explained above in Eq.(17) was applied to many road scenes, including a road image set provided by CMU for their NAVLAB tests [2, 4, 5]. We have examined this evaluation process on some of them, as shown in Fig.4.

4 Implementation

The road extraction by automatic color selection explained in the previous section was used for navigation of a prototype four-wheel robot. The robot was sent through a forest road, which is shown in Fig.5. The color parameter evaluation of this particular road scene is summarized in Fig.6. Once the image is recalculated with the selected color feature, road segmentation is performed by region growing. Finally, the road model is estimated by line regression across the line edges. Based on this evaluation, some images are shown in Fig.7. The fourth row of images in Fig.7 shows a failure case: the colors are saturated, and the values are beyond the sensing range of the camera.
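The zone-z search of Eq.(16) and the κ score of Eq.(17) can be sketched as follows; the histogram binning, the zone-growing loop, and the synthetic road/non-road samples are all assumptions made for illustration:

```python
import numpy as np

def kappa(road_vals, nonroad_vals, z=90, bins=256, vrange=(0, 256)):
    """Score a feature per Eqs.(16)-(17): grow a zone around the road
    mean until it holds >= z percent of road pixels, then penalise the
    fraction of non-road pixels falling inside that zone."""
    hr, edges = np.histogram(road_vals, bins=bins, range=vrange)
    hnr, _ = np.histogram(nonroad_vals, bins=bins, range=vrange)
    mu = int(np.digitize(road_vals.mean(), edges)) - 1   # bin of mu_r
    Nr, Nnr = hr.sum(), hnr.sum()
    dz = 0
    # Eq.(16): smallest dz with more than z percent of road pixels in zone
    while hr[max(mu - dz, 0):mu + dz + 1].sum() * 100 < z * Nr:
        dz += 1
    inzone = hnr[max(mu - dz, 0):mu + dz + 1].sum()
    return (1.0 - inzone / Nnr) * 100.0                  # Eq.(17)

rng = np.random.default_rng(0)
road = rng.normal(60, 5, 5000).clip(0, 255)       # compact road feature
nonroad = rng.normal(180, 30, 5000).clip(0, 255)  # well-separated background
print(kappa(road, nonroad))   # near 100: a good feature for Eq.(15)
```

A feature whose non-road histogram overlaps the road zone scores low (e.g. scoring the road sample against itself gives roughly 100 − z), so ranking candidate features by κ selects the one with the cleanest separation.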
5 Conclusion
When there are no tools for predicting color changes in shadows or highlights on the road surface, clustering in 2D or 3D color space is probably the most general solution. However, our experiments on several road scenes show that in most cases it is possible to find a one-dimensional image feature for road segmentation, without any need for color clustering. The feature is selected by an evaluation process that compares different features before navigation, and the chosen feature is then used for road extraction during navigation. Once the feature is selected, road segmentation is performed simply by calculating the Euclidean distance of each pixel from the mean value of the road sample. Two new color sets, called r1 r2 r3 and r'1 r'2 r'3, were introduced, and it was shown that in some cases segmentation by these features yields better results than other conventional parameters. We can conclude that there is no single feature that can be used for all situations. A difficult example is a colorless scene (like a snow-covered road) with no information about the geometrical shape of the road. In the absence of prior knowledge about the geometrical shape of the road, correct extraction of the road region will be very complicated.
Figure 6: Histogram analysis of the road scene in Fig.5 (road and non-road histograms of Hue, Saturation S*, normalized Red, normalized Green, normalized Blue, r'1, r'2 and r'3). r'3 has the highest value of κ calculated by Eq.(17).

Figure 7: Results of color selection for road segmentation in our test field. Darker regions have similar values to the road color.
References

[1] J. Fernandez, A. Casals, "Autonomous Navigation in Ill-Structured Outdoor Environments," Proceedings of the IEEE Int. Conference on Intelligent Robots and Systems, pp.395-400, 1997.
[2] Charles Thorpe, Martial H. Herbert, Takeo Kanade, Steven A. Shafer, "Vision and Navigation for the Carnegie-Mellon Navlab," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.10, No.3, pp.362-373, 1988.
[3] Matthew A. Turk, David G. Morgenthaler, Keith D. Gremban, Martin Marra, "VITS - A Vision System for Autonomous Land Vehicle Navigation," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.10, No.3, pp.342-361, 1988.
[4] Jill D. Crisman, Charles E. Thorpe, "SCARF: A Color Vision System that Tracks Roads and Intersections," IEEE Trans. on Robotics and Automation, Vol.9, No.1, pp.49-58, 1993.
[5] Jill D. Crisman, Charles E. Thorpe, "UNSCARF: A Color Vision System for the Detection of Unstructured Roads," IEEE Proceedings of the Inter. Conf. on Robotics and Automation, pp.2496-2501, 1991.
[6] Michel Beauvais, Sridhar Lakshmanan, "CLARK: A Heterogeneous Sensor Fusion Method for Finding Lanes and Obstacles," IEEE Int. Conf. on Intelligent Vehicles (IV'98), pp.475-480, 1998.
[7] Serge Beucher, X. Yu, "Road Recognition in Complex Traffic Situations," 7th IFAC/IFORS Symposium on Transportation Systems: Theory and Application of Advanced Technology, pp.413-418, 1994.
[8] Theo Gevers, Arnold W.M. Smeulders, H. Stokman, "Photometric Invariant Region Detection," British Machine Vision Conference, pp.659-669, 1998.
[9] Todd M. Jochem, Dean A. Pomerleau, Charles E. Thorpe, "MANIAC: A Next Generation Neurally Based Autonomous Road Follower," Proceedings of the International Conference on Intelligent Autonomous Systems, February 1993.
[10] Xueyin Lin, Shaoyun Chen, "Color Image Segmentation Using Modified HSI System for Road Following," Proc. IEEE Int. Conf. on Robotics and Automation, pp.1998-2003, 1991.
[11] Kazunori Onoguchi, Nobuyuki Takeda, Mutsumi Watanabe, "Planar Projection Stereopsis Method for Road Extraction," IEEE Conf. on Intelligent Robots and Systems (IROS'95), pp.249-256, 1995.
[12] Christopher Rasmussen, "Combining Laser Range, Color, and Texture Cues for Autonomous Road Following," IEEE International Conference on Robotics and Automation (ICRA), 2002.
[13] P. Sayd, R. Chapuis, R. Aufrere, F. Chausse, "A Dynamic Vision Algorithm to Recover the 3D Shape of a Non-Structured Road," IEEE Int. Conf. on Intelligent Vehicles (IV'98), pp.80-86, 1998.
[14] Yue Wang, Dinggang Shen, Eam Khwang Teoh, "Lane Detection Using Catmull-Rom Spline," Proc. of the IEEE International Conference on Intelligent Vehicles (IV'98), pp.51-57, 1998.
[15] Allen M. Waxman, Jacqueline J. LeMoigne, Larry S. Davis, Babu Srinivasan, Todd R. Kushner, Eli Liang, Tharakesh Siddalingaiah, "A Visual Navigation System for Autonomous Land Vehicles," IEEE Journal of Robotics and Automation, Vol.RA-3, No.2, pp.124-141, April 1987.
[16] Alberto Broggi, "Robust Real-Time Lane and Road Detection in Critical Shadow Conditions," Proc. IEEE International Symposium on Computer Vision, pp.353-358, 1995.