INT. J. COMPUTER INTEGRATED MANUFACTURING, 2001, VOL. 14, NO. 5, 489–500
An adaptive genetic assembly-sequence planner

SHIANG-FONG CHEN and YONG-JIN LIU
Abstract. Assembly sequence planning is a combinatorial optimization problem with highly nonlinear geometric constraints. Most proposed solution methodologies are based on graph theory and involve complex geometric and physical analyses. As a result, even for a simple structure, it is difficult to take all important criteria into account and to find real-world solutions. This paper proposes an adaptive genetic algorithm (AGA) for efficiently finding global-optimal or near-global-optimal assembly sequences. The difference between an adaptive genetic algorithm and a classical genetic algorithm is that genetic-operator probabilities for an adaptive genetic algorithm are varied according to certain rules, whereas genetic-operator probabilities for a classical genetic algorithm are fixed. For our AGA, we build a simulation function to pre-estimate our GA search process, use our simulation function to calculate optimal genetic-operator probability settings for a given structure, and then use the calculated genetic-operator probability settings to dynamically optimize our AGA search for an optimal assembly sequence. Experimental results show that our adaptive genetic assembly-sequence planner solves combinatorial assembly problems quickly, reliably and accurately.
1. Introduction

Automatic assembly planning plays a vital role in both the life cycle of a product and the minimization of production cost. Valid assembly plans must satisfy all product geometrical and physical constraints. Furthermore, for an efficient assembly plan, the major factors leading to reduced lead-time and product cost (such as minimizing the number of reorientations, minimizing the number of tool changes, proper fixture selection, etc.) should be taken into full consideration. Insufficient consideration may result in difficulties in assembling a product, which will not be discovered until a physical prototype is constructed.
Authors: Shiang-Fong Chen, Assistant Professor, Department of Industrial Education and Technology, Iowa State University, 122 I. Ed. II, Ames, IA 50011-3130, USA. Fax: 219-665-4188; e-mail: [email protected]. Yong-Jin Liu, Department of Mechanical Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong.
Automatically generating assembly sequences is a difficult combinatorial optimization problem (De Fazio and Whitney 1987, Baldwin et al. 1991, Delchambre 1992, Laperriere and Elmaraghy 1996, Gottipolu and Ghosh 1997). Most existing methodologies are based on graph theory and involve complex analyses. For example, one common approach uses a cut-set algorithm to perform an exhaustive search on the liaison graph of a product. However, exhaustive search techniques require substantial computational resources, even for a simple structure. Any search technique that selects the best assembly plan by comparing all valid solutions tends to be very time consuming and, thus, unusable for realistic assembly planning problems. Wilson and Latombe (1994) introduced a compact data structure, called a nondirectional blocking graph (NDBG), to represent the combinatorial set of parts interactions of an assembly product in polynomial space. Romney et al. (1995) extended the NDBG into their Stanford Assembly Analysis Tool (STAAT). However, their work mainly focused on developing a geometric assembly-planning model, rather than on optimizing for physical constraints, such as minimizing tool changes and minimizing refixturing. In order to bring assembly sequence planning closer to real-world application, we need to develop an approach that can handle complex structures and multiple constraints.

Motivated by the success of genetic algorithms in solving difficult and complex combinatorial problems (Gen and Cheng 1997), genetic algorithms have been applied in assembly sequence planning to find optimal or near-optimal assembly plans for a general structure. Bonneville et al. (1995) first introduced the concept of applying a genetic algorithm to assembly planning. Bonneville et al.'s assembly planner is heavily affected by the initial set of valid assembly plans (initial population) supplied by an expert. In Bonneville et al.'s GA, only two basic genetic operators, crossover and mutation, are used.
Simple genetic algorithms, with only crossover and mutation operators, are fairly limited in what they can do (Goldberg 1989, 1998). Sebaaly and Fujimoto (1996) proposed another genetic algorithm to solve assembly sequence planning problems. Their approach clusters the search space into families of similar sequences, where each family contains only one feasible sequence. Subsequently, the best linear or nonlinear solutions are found without searching the complete solution space. Sebaaly and Fujimoto's genetic algorithm is difficult to implement, and their decision process is performed on a population basis rather than on a part basis. Chen (1998) proposed a simplified genetic algorithm for automatically generating assembly sequences. However, Chen's earlier genetic algorithm uses fixed, user-selectable genetic-operator probability settings.

The genetic assembly planners discussed above all use classical genetic algorithms, whose genetic-operator probability settings remain fixed throughout the algorithm process. Kim et al. (1997) pointed out that high mutation rates reduce premature convergence in initial stages and improve the overall quality of the final solution. On the other hand, higher mutation probabilities also lower the overall algorithm convergence rate. To resolve the contradiction, Kim et al. proposed a functional model, which generates a higher initial mutation probability setting and a lower final mutation probability setting.

Based on Kim et al.'s (1997) and Chen's (1998) work, in this paper we design an adaptive genetic algorithm (AGA) for assembly sequence planning. The difference between an adaptive genetic algorithm and a standard genetic algorithm is that genetic-operator probabilities for an adaptive genetic algorithm vary according to certain rules, whereas genetic-operator probabilities remain fixed for a classical genetic algorithm. We use a simulation function to pre-estimate optimal genetic-operator probability settings for each AGA generation. Our simulation function considers the solution-space search characteristics of our genetic operators to find optimal genetic-operator probability settings for each AGA generation.

2. Representation of geometric and physical constraints

In practice, valid assembly plans must satisfy a variety of constraints. As a result, representing constraints and evaluating assembly plans based on constraints represent our requirements in automatic assembly planning.
We generally categorize assembly-planning constraints into either geometric constraints or physical constraints. Geometric constraints describe spatial relationships among components. Geometric constraints are necessary constraints, namely, constraints that must be satisfied by all feasible assembly sequences. Physical constraints result from available fixtures, robot accessibility, assembly line layout, etc., so physical constraints affect assembly cost, assembly time, assembly performance and assembly reliability. We consider physical constraints as optimization constraints, namely, constraints that determine the optimization of the complete assembly process.

2.1. Validity of an assembly plan

All valid assembly sequences must meet the geometric constraints of a given structure. We use an MW_matrix to describe the geometric constraints between components in an assembly. For each pair of components (Pi, Pj), our MW_matrix records the directions in which Pi can be assembled without colliding with Pj. We define this set of valid assembly directions, for each (Pi, Pj), as the moving wedge of Pi with respect to Pj, denoted by MW(Pi, Pj). We compute moving wedges for all pairs of components and store all moving wedges in an MW_matrix. Pi can be assembled in the directions given by the intersection of the MWs that associate Pi with the components assembled before Pi. We define this intersection as the assembly wedge AW(Pi) of Pi. Let X be the set of the components assembled before Pi; then

AW(Pi) = ∩ MW(Pi, Pj),  Pj ∈ X
Directions in AW(Pi) are valid assembly directions for Pi. For a given assembly sequence σ, if there exists a Pi in σ such that AW(Pi) is an empty set, then Pi cannot be assembled in the given sequence σ. As a result, σ is an invalid assembly plan. For example, in figure 1, if component C is assembled after components A, F, E and D, the assembly wedge for component C in the given plan is:

AW(C) = MW(C, A) ∩ MW(C, F) ∩ MW(C, E) ∩ MW(C, D)
      = {-x, -y} ∩ {+x, -y} ∩ {+x, -y} ∩ {+x, -y}
      = {-y}

Thus, component C can be assembled after A, F, E and D in the -y direction.
Figure 1. Example structure.
If AW(Pi) is an empty set, then Pi cannot be assembled in the given order, so the given assembly plan is invalid. For example, if component C is assembled after components A, B and F, the assembly wedge for component C in the plan is:

AW(C) = MW(C, A) ∩ MW(C, B) ∩ MW(C, F)
      = {-x, -y} ∩ {+y} ∩ {+x, -y}
      = ∅

Thus, the given plan is invalid.
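As a concrete illustration of this validity test, the following Python sketch computes assembly wedges by set intersection. It is only a sketch under assumed data structures (directions as strings such as '+x' or '-y', and MW as a dictionary keyed by ordered part pairs); it is not the authors' C++ implementation.

# Illustrative validity test of section 2.1 (assumed data structures).
# MW[(Pi, Pj)] holds the moving wedge of Pi with respect to Pj as a set of
# direction labels such as '+x' or '-y'.

def assembly_wedge(part, assembled_before, MW, all_directions):
    """AW(Pi): intersect MW(Pi, Pj) over every part Pj already assembled."""
    wedge = set(all_directions)
    for other in assembled_before:
        wedge &= MW[(part, other)]
    return wedge

def is_valid_sequence(sequence, MW, all_directions):
    """A sequence is valid iff every part keeps a non-empty assembly wedge."""
    return all(assembly_wedge(p, sequence[:k], MW, all_directions)
               for k, p in enumerate(sequence))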
2.2. Orientation number of an assembly plan

Let σ be a valid assembly sequence. Components in σ are ordered according to the sequence σ = {Q1, ..., Qn}, where n is the number of components. If ∩_{k=i..j} AW(Qk) ≠ ∅, no reorientation is needed during assembly of Qi, ..., Qj. However, if ∩_{k=i..j} AW(Qk) ≠ ∅ but ∩_{k=i..j+1} AW(Qk) = ∅, one reorientation is needed for assembling Qj+1. For example, in figure 1, σ = {C, F, E, B, A, D} is a valid assembly sequence. The assembly wedges for those components are:

AW(F) = {-x, +y},  AW(E) = {-x, +y},  AW(B) = {-y},  AW(A) = {+x, -y},  AW(D) = {-x}

Since AW(F) ∩ AW(E) = {-x, +y}, F and E can be assembled in the same directions, -x or +y. However, since AW(F) ∩ AW(E) ∩ AW(B) = ∅, one reorientation is needed to assemble component B. Since AW(B) ∩ AW(A) = {-y}, components A and B can be assembled in the same direction -y. Finally, since AW(B) ∩ AW(A) ∩ AW(D) = ∅, one more reorientation is needed to assemble D. In total, two reorientations are required in this plan.
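The reorientation count of a sequence can be computed greedily from these wedges. The sketch below reuses assembly_wedge from the previous sketch; it is our own hedged reading of section 2.2, counting one reorientation each time the running intersection of assembly wedges becomes empty.

def count_reorientations(sequence, MW, all_directions):
    """n(sigma) of section 2.2: consecutive parts are grouped while their
    assembly wedges still share a direction; an empty running intersection
    costs one reorientation and restarts the group."""
    reorientations = 0
    running = set(all_directions)
    for k, part in enumerate(sequence):
        aw = assembly_wedge(part, sequence[:k], MW, all_directions)
        if running & aw:
            running &= aw
        else:
            reorientations += 1
            running = aw
    return reorientations

On the example sequence {C, F, E, B, A, D} this counting yields the two reorientations derived above.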
2.3. Physical constraints

In addition to our geometric constraints, we also define a physical constraint for each pair of components, Pi and Pj, as the physical interconnection (PI) of Pi and Pj; PI(Pi, Pj) represents the tools used to connect Pi and Pj. If Pi and Pj are not adjacent, PI(Pi, Pj) = ∅. We store all pairs of physical interconnections in a matrix, called a PI_matrix. The physical complexity (PC) of Pi is the union of the physical interconnections currently associated with Pi. Let X be the set of components assembled before Pi; then

PC(Pi) = ∪ PI(Pi, Pj),  Pj ∈ X
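A hedged sketch of this bookkeeping, assuming PI is stored as a dictionary of tool-identifier sets keyed by part pairs (an illustrative structure, not the paper's PI_matrix layout):

def physical_complexity(part, assembled_before, PI):
    """PC(Pi) of section 2.3: the union of physical interconnections (e.g. tool
    identifiers) between Pi and the parts assembled so far; non-adjacent pairs
    contribute the empty set."""
    tools = set()
    for other in assembled_before:
        tools |= PI.get((part, other), set())
    return tools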
Our previous work (Chen 1998) presents a detailed description of the MW_matrix and PI_matrix. Later, we describe using our MW_matrix and PI_matrix in our fitness function to evaluate assembly plans.

3. An adaptive genetic algorithm for assembly sequence planning

3.1. Encoding

To apply a genetic algorithm to assembly sequence planning, we must first encode assembly sequences as artificial chromosomes. Since an assembly plan is usually represented by an assembly tree, and since an assembly sequence is given by traversing the tree from top to bottom, in our AGA assembly sequences are encoded as trees rather than as binary strings, as in conventional genetic algorithms. Such trees are 'singular trees', i.e. the degree of each tree node is one. Each node represents a subassembly. For example, we can encode any (valid or invalid) assembly sequence for the structure in figure 1 as a tree chromosome (as shown in figure 2). Some structures are assembled from several modules. For subassembly modules, we first need to consider the assembly sequence for each module, and then we can find the assembly sequence for the whole structure by 'merging' the modules. Module merging can be done recursively. Thus, even though the final assembly tree is singular, each node might encapsulate another singular tree, and so on.
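To make the tree encoding concrete, the following sketch models a singular-tree chromosome as an ordered chain in which a node is either a part label or a nested chain for a pre-planned module. The nested example with the module [E, D] is hypothetical and only illustrates the recursive merging idea.

# Sketch of the 'singular tree' chromosome of section 3.1 (illustrative, not the
# authors' data structure).  A chromosome is an ordered chain of nodes; a node is
# either a part label or a nested chromosome that encodes a pre-planned module.

def flatten(chromosome):
    """Traverse the singular tree from top to bottom to obtain the part sequence."""
    sequence = []
    for node in chromosome:
        if isinstance(node, list):
            sequence.extend(flatten(node))   # recursively merge a module
        else:
            sequence.append(node)
    return sequence

# The figure 1 sequence A-F-E-D-C-B as a flat chromosome, and a variant in which
# a hypothetical module [E, D] has been planned separately and then merged.
assert flatten(['A', 'F', 'E', 'D', 'C', 'B']) == ['A', 'F', 'E', 'D', 'C', 'B']
assert flatten(['A', 'F', ['E', 'D'], 'C', 'B']) == ['A', 'F', 'E', 'D', 'C', 'B']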
3.2. Fitness function

For the second step in creating our AGA, we must define a fitness function to evaluate assembly plans. For simplicity, suppose our goal is to find a sequence with a minimum number of reorientations. We define the fitness function of an assembly plan σ by

original fitness = 2N - n(σ)   for valid assembly plans
                 = N/2         for invalid assembly plans          (1)

where N is the total number of components, and n(σ) is the number of reorientations for sequence σ. For other criteria, we can modify our fitness function depending upon customized requirements.

O'Reilly and Goldberg (1998) point out that fitness function definition greatly affects next-generation selection and genetic algorithm convergence. In order to make our searching process converge within a reasonable number of generations, we find it necessary to modify our original fitness function above. We use a modified linear scaling scheme to adjust fitness values. First, we classify assembly sequences into two groups by the population average fitness value. Then we map the average fitness u_avg to f_a, and the maximum original fitness u_max to f_b. If the original fitness is less than u_avg, we linearly map original fitness values to [0, f_a]. If the original fitness is greater than u_avg, we linearly map original fitness values to [f_a, f_b]. Thus, we define a new fitness function as:

new fitness = (f_a / u_avg) u                                        for u ≤ u_avg
            = f_a + ((f_b - f_a) / (u_max - u_avg)) (u - u_avg)      for u_avg ≤ u ≤ u_max          (2)

where u represents the original fitness calculated from equation (1). We can select f_a and f_b for the problem at hand.
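Putting equations (1) and (2) into code, a minimal sketch follows, reusing is_valid_sequence and count_reorientations from section 2 and assuming u_avg > 0 and u_max > u_avg so that the scaling is well defined:

def original_fitness(sequence, MW, all_directions):
    """Equation (1): reward valid plans that need few reorientations."""
    N = len(sequence)
    if not is_valid_sequence(sequence, MW, all_directions):
        return N / 2.0
    return 2.0 * N - count_reorientations(sequence, MW, all_directions)

def scaled_fitness(u, u_avg, u_max, f_a, f_b):
    """Equation (2): modified linear scaling about the population average u_avg."""
    if u <= u_avg:
        return (f_a / u_avg) * u
    return f_a + (f_b - f_a) / (u_max - u_avg) * (u - u_avg)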
Figure 2. Genetic operators.
3.3. Initial population
The initial population is a major factor in determining the convergence quality of genetic algorithms. The initial populations in the genetic algorithms proposed by Bonneville et al. (1995) and Chen (1998) need to be generated by experts. Both Bonneville et al. (1995) and Chen (1998) note that the diversity and quality of the initial population heavily affect their GA run time and search results. However, we note that, for complex structures with a large number of components, even experts may find it difficult or impossible to provide a good initial population. As a result, for our AGA, we automatically generate an initial population.

To generate an initial population automatically, we first randomly generate an assembly sequence. Then, we test the validity of this sequence by calculating the assembly wedges for all components in the given assembly sequence (section 2.1). If the generated sequence is invalid, we continue randomly generating assembly sequences until we create a valid one. If the population size n is large, generating n unique valid assembly sequences is time consuming. Thus, we simply use an initial population containing n identical assembly sequences. If we automatically generate, and then replicate, only one valid assembly sequence, the resulting computation time is still acceptable. Compared to the prior genetic algorithms proposed by Bonneville et al. (1995) and Chen (1998), our AGA can create an initial population easily and quickly. As a result, we do not need an initial population generated by an expert.
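A hedged sketch of this automatic initialization, reusing is_valid_sequence from section 2; the retry bound is an illustrative safeguard, not part of the authors' algorithm:

import random

def random_valid_sequence(parts, MW, all_directions, max_tries=100000):
    """Randomly permute the parts until a geometrically valid sequence appears."""
    for _ in range(max_tries):
        candidate = random.sample(parts, len(parts))
        if is_valid_sequence(candidate, MW, all_directions):
            return candidate
    raise RuntimeError('no valid assembly sequence found')

def initial_population(parts, MW, all_directions, size):
    """Section 3.3: n identical copies of one automatically generated valid sequence."""
    seed = random_valid_sequence(parts, MW, all_directions)
    return [list(seed) for _ in range(size)]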
3.4. Genetic operators

Given a proper constraint model, fitness function and initial population, we next need to design a set of genetic operators for generating diverse offspring chromosomes from any given assembly-sequence population. For our study, we use three genetic operators: crossover, mutation and cut-and-paste.

A tree crossover operator is applied to swap two sub-trees between two assembly trees. The crossover point is chosen randomly. The upper sub-trees remain the same. Nodes in the lower sub-trees are reordered to follow the sequence in the other assembly tree. One example is shown in figure 2(a). Tree 1 is A-F-E-D-C-B, and tree 2 is F-C-B-E-D-A. The sequence of the parts (A-F-E) in the upper sub-tree of tree 1 is unchanged for tree 1′ after crossover. However, the remaining parts (D-C-B) follow the precedence in tree 2, so the lower sub-tree of tree 1′ becomes C-B-D. Thus, after crossover, tree 1′ becomes A-F-E-C-B-D, and tree 2′ becomes F-C-B-A-E-D.

A tree mutation operator is used to swap two nodes within a single assembly tree. The two swapped nodes are chosen randomly. One example is shown in figure 2(b). Tree A-F-E-D-C-B becomes tree C-F-E-D-A-B after components A and C are swapped.

A cut-and-paste operator is used to meet different criteria. A cut node and a paste position are chosen randomly. If the adjacent nodes of the cut node satisfy a given criterion, e.g. same orientation or same tool used, they are cut and then pasted to the new position together. These three operators are illustrated in figure 2.
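The crossover and mutation operators translate directly into code when a chromosome is held as a flat part sequence. The sketch below is illustrative only (cut-and-paste is omitted); fixing the crossover point at 3 reproduces the tree 1 / tree 2 example above.

import random

def crossover(parent1, parent2, point=None):
    """Keep the upper sub-tree; reorder the remaining parts to follow their
    precedence in the other parent (section 3.4)."""
    if point is None:
        point = random.randint(1, len(parent1) - 1)
    def child(keep, order_from):
        upper, lower = keep[:point], keep[point:]
        return upper + [p for p in order_from if p in lower]
    return child(parent1, parent2), child(parent2, parent1)

def mutation(parent):
    """Swap two randomly chosen nodes within a single assembly tree."""
    child = list(parent)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

# Reproducing the paper's example:
t1, t2 = crossover(list('AFEDCB'), list('FCBEDA'), point=3)
assert t1 == list('AFECBD') and t2 == list('FCBAED')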
3.5. Dynamic optimization of genetic-operator probability settings

For every algorithm iteration in our adaptive genetic assembly planner, we dynamically set optimal genetic-operator probability settings (GOPS) based on certain conditions. Our reasons for choosing dynamic GOPS follow. First, our studies, and those of Kim et al. (1997), show that varying GOPS during GA-based assembly planning can improve final planner results. In addition, since we put n identical assembly sequences into our initial population, we can vary genetic-operator probability settings to help make our population, over time, as diverse as possible. Diverse populations help reduce premature convergence and improve the quality of final solutions. However, genetic operators that promote diverse populations also affect the long-term convergence of the genetic algorithm. Finding an efficient way to select GOPS is a challenging problem. In our AGA, we introduce dynamic GOPS to improve global algorithm optimization.

3.5.1. Background of discrete dynamic optimization. Modern control theory represents system descriptions by state equations in system state space. Consider a discrete dynamic system described by an n-dimensional state vector x(i) at step i. The choice of an m-dimensional control vector u(i) determines a transition of the system from state x(i) to state x(i+1) by the relations
x(i+1) = f[x(i), u(i), i]          (3)

where

x(0) = x_0          (4)
A general optimization problem for such systems is to find the sequence of control vectors u(i), for i = 0, ..., N-1, that minimizes a performance index of the form

J = F[x(N)] + Σ_{i=0}^{N-1} L[x(i), u(i), i]          (5)

subject to equations (3) and (4) with N, x_0 and the state-transition function f specified. In equation (5), F is a custom-built user-defined function of x(N), L is a custom-built user-defined function of (x(i), u(i), i), and J is the optimization-problem objective function. For any given optimization problem, we can specify F and L with respect to certain targets. Selecting different functions F and L will lead to different system performance characteristics, which will be optimal only for the targets defined by the given F and L. Equations (3) and (5) give a parameter optimization problem with equality constraints and can be solved by taking the control vector histories u(i) as unknown parameters (Bryson 1999).
Figure 3. A 19-component assembly.
Figure 4. Genetic operators analysis.
3.5.2. Simulation function. To introduce discrete dynamic optimization methods into our genetic assembly planner, we build an assembly-planner simulation function, in the form of equations (3)-(5), to pre-estimate our planner's running process. Then, by solving our assembly-planner simulation function, we can obtain an optimal GOPS sequence for assembling any product structure.

Our genetic assembly planner uses three genetic operators: crossover, mutation and cut-and-paste. To derive our assembly-planner simulation function, we first test each genetic operator to discover its inherent solution-space search characteristics. For testing, we use the structure shown in figure 3. Figure 4 shows our simulation test results, averaged over 30 different runs.

From figure 4, we note that, generally, the genetic operators fall into two groups, based upon their solution-space searching characteristics. We classify crossover and cut-and-paste into the first group. Both crossover and cut-and-paste tend to search a local cost-surface area and, thus, help our genetic algorithm converge to a stable state by finding a local minimum solution within a given solution-space region. We classify mutation into the second group. Mutation tends to introduce new features while disrupting established assembly-sequence patterns in current-population assembly sequences. Therefore, mutation tends to promote searching for optimal solutions across a wider solution-space area, while reducing the probability of converging to a stable solution state.

Our two genetic-operator groups tend to contradict each other. That is, using higher GOPS for the first operator group helps our genetic algorithm converge to a stable state, but most often causes our search to find only a local-optimal solution. On the other hand, using higher GOPS for the second operator group helps our genetic algorithm search a broader cost surface, but causes difficulty converging to a stable state. Optimally distributing operator probability settings between the two groups is a challenging problem.

To address our AGA operator probability-distribution problem, we develop a simulation function that models our noted genetic-operator searching characteristics. Then, given any assembly structure, we find an optimal solution to our simulation function; our optimal solution provides optimal GOPS for a subsequent assembly-planner run on the given assembly structure. Finally, we run our assembly planner on the given structure, using our optimal GOPS settings. Thus, our assembly-planner run provides an optimal or near-optimal assembly sequence for the given assembly structure. We also use our simulation run to determine the number of AGA iterations, n, required in our final, optimized planner run. Later, we will show how to determine n. We denote the crossover, cut-and-paste and mutation probabilities for each AGA iteration, i, by P_CO(i), P_CP(i) and P_MT(i), respectively.
Table 1. The GOPS table using dynamic optimization for figure 3 (entries are P_CO(i) × 100, P_CP(i) × 100 and P_MT(i) × 100).

 i   P_CO  P_CP   P_MT     i   P_CO  P_CP   P_MT     i   P_CO  P_CP   P_MT
 1    0.0   0.0  100.0    20   12.5  12.5   75.5    39   37.5  37.5   25.0
 2    0.0   0.0  100.0    21   13.7  13.7   72.6    40   38.7  38.7   22.6
 3    0.0   0.0  100.0    22   15.0  15.0   70.0    41   39.8  39.8   20.4
 4    0.3   0.3   99.4    23   16.2  16.2   67.6    42   40.9  40.9   18.2
 5    0.6   0.6   98.8    24   17.5  17.5   65.5    43   41.9  41.9   16.2
 6    0.9   0.9   98.2    25   18.9  18.9   62.2    44   42.9  42.9   14.2
 7    1.4   1.4   97.3    26   20.2  20.2   59.6    45   43.9  43.9   12.2
 8    1.8   1.8   96.4    27   21.6  21.6   56.8    46   44.7  44.7   10.6
 9    2.4   2.4   95.2    28   22.9  22.9   54.2    47   45.5  45.5    9.0
10    3.0   3.0   94.0    29   24.3  24.3   51.4    48   46.3  46.3    7.4
11    3.7   3.7   92.6    30   25.7  25.7   48.6    49   47.0  47.0    6.0
12    4.5   4.5   91.0    31   27.1  27.1   45.8    50   47.6  47.6    4.8
13    5.3   5.3   89.4    32   28.4  28.4   43.2    51   48.2  48.2    3.6
14    6.1   6.1   87.8    33   29.8  29.8   40.4    52   48.6  48.6    2.8
15    7.1   7.1   85.8    34   31.1  31.1   37.8    53   49.1  49.1    1.8
16    8.1   8.1   83.8    35   32.5  32.5   35.0    54   49.4  49.4    1.2
17    9.1   9.1   81.8    36   33.8  33.8   32.4    55   49.7  49.7    0.6
18   10.2  10.2   79.6    37   35.0  35.0   30.0    56   49.8  49.8    0.4
19   11.3  11.3   77.4    38   36.3  36.3   27.4    57   50.0  50.0    0.0
Figure 5. Solutions for figure 3 (all solutions are averaged over 30 different runs).
P_CO(i), P_CP(i) and P_MT(i) must satisfy the following constraints:

P_CO(i) + P_CP(i) + P_MT(i) = 1          (6)

P_CO(i) ≥ 0,  P_CP(i) ≥ 0,  P_MT(i) ≥ 0          (7)
Based upon our measured genetic-operator searching characteristics, crossover and cut-and-paste promote searching a local area, or range, on the cost surface, while mutation promotes searching, at each GA iteration, a wider range on the cost surface. Therefore, crossover and cut-and-paste lead to a higher probability of thoroughly searching a local cost-surface range, while mutation leads to a higher velocity of search across the cost surface. Thus, we set

P_CO(i) = P_CP(i) = P_range(i),  P_MT(i) = P_velocity(i)          (8)
Next, we define a simulation function to model our GA search process. We define the search velocity of our planner as

v(i+1) = v(i) + c_1 P_velocity(i),  v(0) = 0          (9)

and the overall search range of our planner as

R(i+1) = R(i) + c_2 v(i) P_range(i),  R(0) = 0          (10)

To model the positive search characteristics of both {mutation} and {crossover, cut-and-paste}, our simulation model defines a search range that depends upon both P_velocity and P_range. To optimize our planner search, then, we need to find the sequence [P_range(i), P_velocity(i)], i = 1, ..., N-1, that maximizes our search range R(N), i.e. minimizes a performance index of the form

J = -R(N)

subject to the constraints in (6) and (7). Maximizing the search range, for our given genetic-operator search characteristics, gives our planner the best opportunity to find a global-optimal or near-global-optimal solution, by searching the maximum possible area on the solution-space cost surface most thoroughly. Here, we choose the custom-built user-defined functions of section 3.5.1, F[x(N)] and L[x(i), u(i), i], to be

F[x(N)] = -R(N),  L[x(i), u(i), i] = 0

and let

x = [v  R]^T = state vector,  u = [P_range  P_velocity]^T = control vector

From section 3.5.1, we know that our simulation function, described by equations (9) and (10), is a simple discrete dynamic optimization system that can be solved using gradient methods (Bryson 1999). Solving our simulation function provides the GOPS settings [P_range(i), P_velocity(i)], i = 1, ..., N-1, for our subsequent AGA planner run. As an example, table 1 shows our simulation-function GOPS numerical solution for the 19-component example structure in figure 3.

Finally, we can determine the iteration number n for our subsequent AGA run. Experimental results show that, when using only the genetic operators from our first operator group, i.e. crossover and cut-and-paste, our GA converges to a stable solution within c/2 iterations, where c is the number of components in the assembly. Furthermore, from the solution to our simulation function (equations (9) and (10)), the effect of group-one genetic operators on our AGA running process becomes dominant after step 5n/6. As a result, the iteration number n can be approximated by

n - 5n/6 ≥ c/2  ⇒  n ≈ 3c
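The simulation function itself is only a few lines of arithmetic. The sketch below forward-simulates equations (9) and (10) and then improves a mutation-probability schedule by a crude finite-difference ascent on R(N). It is a hedged stand-in for the gradient method of Bryson (1999) used by the authors, and the constants c_1 = c_2 = 1 are assumptions, since the paper does not list their values.

import numpy as np

def simulate(p_velocity, c1=1.0, c2=1.0):
    """Forward recursion of equations (9)-(10); by equations (6)-(8) we take
    P_MT(i) = p_velocity[i] and P_CO(i) = P_CP(i) = (1 - p_velocity[i]) / 2."""
    v, R = 0.0, 0.0
    for p_vel in p_velocity:
        p_range = (1.0 - p_vel) / 2.0
        R += c2 * v * p_range      # equation (10) uses v(i) before it is updated
        v += c1 * p_vel            # equation (9)
    return R

def optimise_gops(N, steps=2000, lr=0.01, eps=1e-4):
    """Maximize R(N) over the mutation-probability schedule by projected
    finite-difference ascent (a stand-in for Bryson's gradient method)."""
    p = np.full(N, 0.5)
    for _ in range(steps):
        base = simulate(p)
        grad = np.zeros(N)
        for i in range(N):
            trial = p.copy()
            trial[i] += eps
            grad[i] = (simulate(trial) - base) / eps
        p = np.clip(p + lr * grad, 0.0, 1.0)   # project back onto [0, 1]
    return p        # P_MT schedule; the corresponding P_CO = P_CP = (1 - p) / 2

Qualitatively, this toy model reproduces the trend of table 1, a high mutation probability early (to build search velocity) and a low one late (to exploit the accumulated velocity), though not the exact numbers.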
3.5.3. Assembly planner algorithm. Our complete, dynamically optimized, genetic-algorithm-based assembly planner algorithm follows.

Table 2. Fixed GOPS schemes.

Scheme    Crossover    Cut-and-paste    Mutation
  1          0.4            0.4            0.2
  2          0.1            0.1            0.8
Figure 6. An 11-component product.
Algorithm Adaptive_Genetic_Assembly_Planner
Input: structure E and its MW_matrix and PI_matrix
Output: assembly sequences of E
Step 1.  c ← component number of E;
Step 2.  GOPS_TABLE ← calculate E's dynamic GOPS from a simulation function;
Step 3.  m ← population size;
Step 4.  G ← initial population;
Step 5.  F ← ∅;   /* offspring */
Step 6.  for i ← 0 to 3c
Step 7.    while (population size of offspring F) < m
Step 8.      do select one genetic operator p based on the ith GOPS from GOPS_TABLE;
Step 9.         select assembly sequence(s) from G based on probabilities (determined by relative fitness);
Step 10.        offspring_assembly_sequence(s) ← perform operation p on the selected assembly sequence(s) until a valid solution is found;
Step 11.        insert offspring_assembly_sequence(s) into F;
Step 12.    b ← choose the best solution in G;
Step 13.    w ← choose the worst solution in F;
Step 14.    if b is better than w
Step 15.      do swap w and b;
Step 16.    G ← F;
Step 17.    F ← ∅;
Step 18. Return G.
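For readers who prefer running code, the following Python sketch renders the same loop using the helper sketches from the earlier sections. It is a simplification, not the authors' C++ planner: cut-and-paste is again replaced by mutation, and gops_table is assumed to be a list of (P_CO, P_CP, P_MT) triples expressed as fractions, for example derived from table 1.

import random

def adaptive_genetic_planner(parts, MW, all_directions, gops_table, pop_size=30):
    """Simplified rendering of Algorithm Adaptive_Genetic_Assembly_Planner."""
    c = len(parts)
    population = initial_population(parts, MW, all_directions, pop_size)
    fitness = lambda s: original_fitness(s, MW, all_directions)
    for i in range(3 * c):                                   # n is approximately 3c
        p_co, p_cp, p_mt = gops_table[min(i, len(gops_table) - 1)]
        weights = [fitness(s) for s in population]           # relative-fitness selection
        offspring = []
        while len(offspring) < pop_size:
            if random.random() < p_co:                       # crossover
                a, b = random.choices(population, weights=weights, k=2)
                child, _ = crossover(a, b)
            else:                                            # mutation (also stands in for cut-and-paste)
                (a,) = random.choices(population, weights=weights, k=1)
                child = mutation(a)
            if is_valid_sequence(child, MW, all_directions): # retry until a valid offspring appears
                offspring.append(child)
        best, worst = max(population, key=fitness), min(offspring, key=fitness)
        if fitness(best) > fitness(worst):                   # elitism: keep the best solution so far
            offspring[offspring.index(worst)] = best
        population = offspring
    return max(population, key=fitness)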
Figure 5(a) shows our assembly-planner solution for our figure 3 example, using dynamic optimal GOPS. Our adaptive genetic assembly planner finds a 2.5-reorientation solution in 51 generations (averaged over 30 different runs). For comparison with our AGA method, figures 5(b) and 5(c) present GA search results using two sets of fixed GOPS (given in table 2) to evaluate our same figure 3 example. All results are averaged over 30 different runs. From our comparison, we can see that, using fixed-GOPS scheme 1, we can only find a 4.2-reorientation assembly sequence, and, using fixed-GOPS scheme 2, we can only find a 3.0-reorientation assembly sequence.

4. Additional examples

Our adaptive genetic assembly planner has been fully implemented in C++ on a Pentium II 450 MHz PC with 256 MB memory. Here, we use our planner to find an optimal or near-optimal assembly sequence for the simple 11-component device shown in figure 6. Results for our 11-component example are presented in figure 7. For the transmission example in figure 6, our assembly planner takes 2 seconds to find the near-optimal solutions. As a comparison, the assembly planner developed by Baldwin et al. (1991) requires an expert user to answer questions concerning assembly-structure part precedence. For the same example, Baldwin et al.'s method takes 14 minutes to answer 111 questions on a Sun 3/60 workstation.

Figure 9 presents our AGA test results for a complex 75-component rotor-hammer product (see figure 8). Table 3 presents run times for our three example products. Run times include solving simulation functions, but do not include generating initial populations. Most example structures from the literature are small structures (< 10 components). Thus, our 75-component rotor-hammer product represents a more complex, more practical example. We find that our AGA run time depends primarily on the number of product components and product assembly complexity.
Figure 7. Results using AGA for our example in figure 6 (all results are averaged over 30 different runs).
Table 3. AGA run times for three example products.

                                         19-component   11-component   75-component
                                         (figure 3)     (figure 6)     (figure 8)
CPU time (sec) (average over 30 runs)         8              2             156

Figure 8. A 75-component rotor-hammer product.

Figure 9. Results for a 75-component rotor-hammer product (all results are averaged over 30 different runs).
5. Conclusions

Automatically generating product assembly sequences has become increasingly important in manufacturing. However, automatically generating assembly sequences is a complex combinatorial problem. In this paper, we propose an efficient adaptive genetic
assembly-sequence planner. Our assembly planner offers two major advantages over previously proposed assembly planners. First, we can easily optimize highly complex assembly-problem cost functions using a simple biological selection metaphor. Second, by using dynamic GOPS, our adaptive genetic planner can efficiently generate satisfactory assembly sequences within a small number of generations. Our planner not only considers geometric constraints, but also optimizes for physical constraints, by incorporating both types of constraints into our AGA fitness function. Our adaptive genetic planner runs faster and more efficiently when using our dynamic GOPS control algorithm. In addition, our planner strategy uses random initial-population generation and automates GOPS selection. Thus, our planner strategy eliminates the need for expert user input for either an initial assembly-sequence population or for GOPS settings. From our test results, we show that our adaptive genetic assembly-sequence planner is rapid and efficient for assembly sequence planning.
References

BALDWIN, D., ABELL, T., LUI, M., DE FAZIO, T., and WHITNEY, D., 1991, An integrated computer aid for generating and evaluating assembly sequences for mechanical products. IEEE Transactions on Robotics and Automation, 7(1), 78-94.
BONNEVILLE, F., PERRARD, C., and HENRIOUD, J. M., 1995, A genetic algorithm to generate and evaluate assembly plans. IEEE Symposium on Emerging Technology and Factory Automation, 2, 231-239.
BRYSON, A. E., 1999, Dynamic Optimization (Menlo Park, CA: Addison Wesley Longman).
CHEN, S. F., 1998, Assembly planning: a genetic approach. Proceedings of the 24th ASME Design Automation Conference, 12-16 September (Atlanta, Georgia), paper no. DETC98/DAC5798.
DE FAZIO, T. L., and WHITNEY, D. E., 1987, Simplified generation of all mechanical assembly sequences. IEEE Journal of Robotics and Automation, RA-3(6), 640-658.
DELCHAMBRE, A., 1992, Computer-Aided Assembly Planning (London: Chapman & Hall).
GEN, M., and CHENG, R. W., 1997, Genetic Algorithms and Engineering Design (Wiley).
GOLDBERG, D. E., 1989, Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley).
GOLDBERG, D. E., 1998, A meditation on the application of genetic algorithms. In Electromagnetic System Design using Evolutionary Optimization, edited by Y. Rahmat-Samii and E. Michielssen (New York: Wiley).
GOTTIPOLU, R. B., and GHOSH, K., 1997, Representation and selection of assembly sequences in computer-aided assembly process planning. International Journal of Production Research, 35(12), 3447-3465.
KIM, B. M., KIM, Y. B., and OH, C. H., 1997, A study on the convergence of genetic algorithms. Computers and Industrial Engineering, 33(3-4), 581-588.
LAPERRIERE, L., and ELMARAGHY, H., 1996, GAPP: a generative assembly process planner. Journal of Manufacturing Systems, 15(4), 282-293.
O'REILLY, U. M., and GOLDBERG, D. E., 1998, How fitness structure affects subsolution acquisition. In Genetic Programming: Proceedings of the Third Annual Conference, edited by J. Koza et al. (San Francisco, CA: Morgan Kaufmann).
ROMNEY, B., GODARD, C., GOLDWASSER, M., and RAMKUMAR, G., 1995, An efficient system for geometric assembly sequence generation and evaluation. Proceedings of the ASME International Computers in Engineering Conference, pp. 699-712.
SEBAALY, M. F., and FUJIMOTO, H., 1996, A genetic planner for assembly automation. Proceedings of the IEEE Conference on Evolutionary Computation, pp. 401-406.
WILSON, R. H., and LATOMBE, J. C., 1994, Geometric reasoning about mechanical assembly. Artificial Intelligence, 71(2), 371-396.