MODELING TERNARY PACKED COLUMN DISTILLATION: ARTIFICIAL NEURAL NETWORK APPROACH

Honorina L. Lacar, ChE Dept., College of Engineering, DLSU-Manila, Philippines (MSU-Marawi, sending institution), (632) 5244611 loc 211, [email protected]
Servillano S.B. Olaño, Jr., ChE Dept., College of Engineering, De La Salle University, Manila, Philippines, (632) 5244611 loc 222, [email protected]
Elmer P. Dadios, MEM Dept., College of Engineering, De La Salle University, Manila, Philippines, (632) 5244611 loc 353, [email protected]

ABSTRACT
A model is essential for predicting the separation performance of a packed column distillation process. Such a process can be modeled mathematically; however, although mathematical or simulation models are reliable in most cases, they suffer from a number of drawbacks: they become very complex as the number of variables increases, and they cannot handle random behaviour. An Artificial Neural Network (ANN) provides an alternative method for modeling complex packed column distillations because it has the capability to map and associate linear and non-linear relationships. In this paper, an ANN is used as an alternative method for predicting the separation performance of a ternary distillation in a packed column with structured packing of a wire-gauze type. In the absence of an actual prototype, the neural network model is developed using an adequate number of training points obtained from the results of a simulation model under steady-state conditions. The network model is then applied to predict the effects of reflux ratio and external heat flux through the wall of the distillation column on the concentrations of the products. The liquid stream concentration profile within the column is also predicted. Results show that the ANN approach is capable of modeling a complex distillation operation with acceptable accuracy.
Key words Packed Column Distillation, Simulation, Artificial Neural Network Modeling
1. INTRODUCTION
A Simulation Model (SM) for predicting the separation performance of a ternary packed column distillation process involves the complex non-linear problem of simultaneous heat and mass transfer. Such a model requires good knowledge of the complex transport mechanisms, phase equilibria, and hydrodynamic behavior of the liquid and vapor involved in the process. The complexity increases further if transient conditions are involved, such as those encountered in control systems. Artificial Neural Networks (ANN), not yet fully studied and explored in the field of chemical engineering, provide an alternative method for modeling complex systems.
Due to the complexity of ternary packed column distillation, an alternative ANN approach was employed to model and predict the separation performance of a ternary distillation in a packed column with structured packing of a wire-gauze type, using an acetone-methanol-ethanol feed mixture. An ANN does not require a prior fundamental understanding of the processes or phenomena being modeled, thus eliminating the need for numerous mathematical relationships. The network architecture used in this paper was the Feed Forward Neural Network (FFNN), and the training of the neural network was done using the Back Propagation Neural Network (BPNN) algorithm. In the absence of an actual prototype, the ANN model was developed using an adequate number of training points obtained from the results of a steady-state simulation model developed by [1]. The correlations applied in the simulation model had been verified previously with ternary distillation experimental data under total reflux conditions [2]. Training points were carefully chosen from sets of conditions to study how network predictability would be affected. The developed ANN model predicted the separation performance of a packed column distillation in terms of the effects of reflux ratio and external heat flux through the wall of the distillation column on the concentrations of the products and on the liquid stream profile within the column. In the simulation model of [1], the following were specified for a conventional packed column (see Figure 1) at steady-state operation: (1) the heights of the rectifying and stripping sections; (2) the feed flow rate, the feed concentrations, and the thermal condition of the feed at the column pressure; (3) the external reflux ratio; (4) the wall heat flux; (5) a total condenser and an ideal reboiler; (6) the ratio of distillate product to feed; and (7) the feed location. The compositions of the distillate and bottom products were then calculated for this set of conditions, and it is on this set of conditions that the ANN input and output variables were based. Figure 1 shows the material flow streams in a typical packed distillation column composed of rectifying and stripping sections, a total condenser, and a reboiler. The feed is assumed to be a saturated liquid. The vapor entering the total condenser is condensed to a saturated liquid which, in turn, is split into distillate product and liquid reflux according to the ratio R = Lo/D. The bottom product is withdrawn from the reboiler as
saturated liquid which is in equilibrium with the vapor, at its dew point, entering the bottom of the stripping section.
[Figure 1: Schematic Diagram of a Conventional Packed Column. The diagram shows the condenser duty Qc and total condenser; the distillate D with composition xd,i = xo,i and reflux ratio R = Lo/D; the vapor and liquid streams Vz and Lz over a packing slice from z to z + dz with interfacial mass flux Ni; the wall heat flux qw; the feed F with composition xf,i and thermal condition q = 1 entering between packing elements f-1 and f+1; the reboiler with duty Qr; and the bottom product B with composition xb,i.]
This paper presents a method that uses neural networks to map process inputs to process behaviour, with the FFNN architecture and the BPNN training algorithm. The FFNN is an architecture in which every neuron in a layer is connected to each neuron in the next layer, and the BPNN algorithm is a training algorithm that alters the connection weights by calculating the error terms for each layer from the net output error. This error term is propagated from the output layer of the network back to the input layer, hence the name back propagation. Figure 2 shows the diagram of the ANN used for the prediction of the separation performance of a ternary packed column distillation with the FFNN architecture and BPNN training algorithm.
2. ARTIFICIAL NEURAL NETWORK (ANN)
Over the past few years, Artificial Neural Networks (ANN) have received a great deal of attention and are now being proposed as powerful computational tools. The structures of ANN are roughly based on our present understanding of the biological nervous system. The potential benefits of ANN extend beyond the high computation rates provided by massive parallelism. The application phase of an ANN takes relatively little time compared to its training phase and therefore offers potentially faster solutions for problem solving. ANN can be used to map linear as well as non-linear relations. They consist of a number of very simple and highly interconnected processors called "neurons" or Processing Elements (PEs). The PEs are interconnected by connection weights. The neural net can be made to map input patterns to output patterns by adjusting or altering the connection weights; this process is called "learning" during the training phase. The ANN program develops a model during training, from repetitive exposures to data and readjustment of the weights. A subgroup of PEs is called a layer in the network. The first layer is the input layer, the last layer is the output layer, and the layers placed between the input and the output layer are called hidden layers. Each PE typically receives many signals over its incoming connections. These signals may arise from other PEs or from the external environment. A PE in a neural network receives input stimuli along its input connections and translates those stimuli into a single output response, which is then transmitted along the PE's output connections. The mathematical expression that describes the translation of the input stimulus pattern to the output response signal is called the transfer function of the PE.
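To make the role of the transfer function concrete, the following is a minimal illustrative sketch in Python (names are ours; the paper gives no code) of a single PE that forms the weighted sum of its input stimuli, adds a bias, and passes the result through a sigmoid transfer function:

    import math

    def pe_output(inputs, weights, bias):
        # One processing element (PE): weighted sum of input stimuli
        # plus a bias, passed through a sigmoid transfer function.
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-activation))  # output lies in (0, 1)

    # Example: a PE with three incoming connections.
    print(pe_output([0.25, 0.40, 1.0], [0.5, -0.3, 0.1], bias=0.0))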
2.1 Back Propagation Neural Network (BPNN)
BPNN offers the distinctive ability to learn complex nonlinear relations without requiring specific knowledge of the model structure, and it has demonstrated surprisingly good performance in various applications. It is known that any continuous function of N variables can be computed using only linear summations and nonlinear, continuously increasing functions of only one variable.
[Figure 2: ANN for the Prediction of the Separation Performance of a Ternary Packed Column Distillation Using FFNN Architecture with BPNN Training Algorithm. Input signals to the net pass through an input layer of 7 neurons and a hidden layer of 14 neurons to an output layer of 4 neurons, which produces the output responses from the net; every neuron is connected to each neuron of the next layer by connection weights.]
The architecture of the BPNN is a hierarchical design consisting of fully interconnected layers of rows of processing units (see Figure 2), each unit itself composed of several individual PEs. This architecture does not have feedback connections, but
errors are backpropagated during training. Thus, the BPNN has forward-flowing information in the prediction mode and backpropagated error correction in the learning mode. Errors in the output determine measures of hidden layer output errors, which are used as a basis for adjusting the connection weights between the input and hidden layers. Adjusting the two sets of weights between the pairs of layers and recalculating the outputs is an iterative process that is carried on until the errors fall below a tolerance level. The BPNN undergoes supervised training that follows a supervised learning law: the network is provided with a finite number of pattern pairs, each consisting of an input pattern and a desired output pattern. An input pattern is presented at the input layer. The PEs then pass the pattern values to the next layer of PEs, the hidden layer. The outputs of the hidden layer PEs are obtained by using perhaps a bias and a threshold function, with their activations determined by the weights and the inputs. These hidden layer outputs become inputs to the output PEs, which also process them, possibly using a bias and a threshold function with their activations, to determine the final output from the network. The ANN program develops a model during training, from repetitive exposures to data (which could be noisy, highly nonlinear, and complex) and readjustment of the weights. In the BPNN, the network takes a set of inputs and produces predicted outputs, which are then compared to the actual outputs. An error signal, the difference between the actual output and the predicted output, is propagated back through the network. The error signal alters the weights of all interconnections so that subsequent predictions are closer to the actual value. Initial weights can be set randomly. Eventually, after a sufficient number of training iterations, the net learns to recognize patterns in the data and, in effect, creates an internal model of the process governing the data, i.e., a set of weights for all interconnections. The trained network, with the corresponding good weights, can then use this internal model to make predictions on the outputs from sets of inputs previously unknown to it. It is important to note that this internal model is not based on any specification of the underlying mechanism of the process; the net itself generates the model.
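As an illustration of this training loop, the following is a minimal Python sketch of one supervised BPNN iteration with sigmoid PEs, a learning rate, and a momentum term. It assumes the 7-14-4 topology reported in Section 4; the variable names and the use of NumPy are ours, not the paper's software:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    n_in, n_hid, n_out = 7, 14, 4
    W1 = rng.uniform(-0.5, 0.5, (n_hid, n_in))   # input -> hidden weights
    W2 = rng.uniform(-0.5, 0.5, (n_out, n_hid))  # hidden -> output weights
    dW1_prev = np.zeros_like(W1)                 # previous updates, for momentum
    dW2_prev = np.zeros_like(W2)

    def train_pattern(x, t, lr=0.05, momentum=0.0005):
        # One forward/backward pass for a single input/target pattern pair.
        global W1, W2, dW1_prev, dW2_prev
        h = sigmoid(W1 @ x)                        # hidden layer outputs
        y = sigmoid(W2 @ h)                        # predicted net outputs
        err_out = (t - y) * y * (1 - y)            # output-layer error terms
        err_hid = (W2.T @ err_out) * h * (1 - h)   # backpropagated hidden errors
        dW2 = lr * np.outer(err_out, h) + momentum * dW2_prev
        dW1 = lr * np.outer(err_hid, x) + momentum * dW1_prev
        W2 += dW2
        W1 += dW1
        dW2_prev, dW1_prev = dW2, dW1
        return float(np.mean((t - y) ** 2))        # squared error for this pattern

    # Example with one normalized pattern (F, xf1, xf2, R, qw, Z/Zp, Zf/Zp):
    x = np.array([1.0, 0.25, 0.40, 0.0008, 0.0, 1.0, 0.40])
    t = np.array([0.5295, 0.3810, 0.5295, 0.3810])
    for cycle in range(1000):
        mse = train_pattern(x, t)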
3. METHODOLOGY
The following steps were carried out in the development of the ANN-based model used for the prediction of the separation performance of a ternary packed column distillation:

1. Data Source. The database used in this study was taken from the results of the simulation model developed by [1]. The data gathered were values of the following variables: feed flow rate (F); concentrations of acetone (component 1) and methanol (component 2) in the feed (xf1 and xf2, respectively); reflux ratio (R); external wall heat flux (qw); height of packing per height of a single packing element (Z/Zp); height of the feed point per height of a single packing element (Zf/Zp); concentrations of acetone and methanol in the liquid stream within the column (x1 and x2) and in the bottoms (xb1 and xb2); and concentrations of acetone and methanol in the vapor stream within the column (y1 and y2) and in the distillate (xd1 and xd2). These variables were carefully chosen because they play a prominent role in the separation performance of the distillation process and thus have a great effect on the products' quality. A total of 7 input variables and 4 output variables (Table 1 of Section 4) were used for ANN modeling to predict the separation performance of a ternary packed column distillation through the products' quality.

2. Grouping of the Data Collected. The available data points (one data point is equivalent to one pattern) were grouped into three sets. The first was the "training set", used for training the network: out of the 725 data points, 169 were used to train the network. The second was the "test set", used to test the performance of the network during training; 250 data points were included in the test set. The last was the "validation set", used to test the prediction ability of the fully trained net; the remaining 306 data points were kept aside for this purpose.

3. Presentation of Data in the Training Set. Neural networks are pattern matchers, so the representation of the data contained in the training set is critical to a successful neural network solution, and the selection of the input and output data is vital. For this study, the goal is to develop a neural network that learns to predict xb1, xb2, xd1, xd2 and x1, x2, y1, y2 of a ternary packed column distillation given F, xf1, xf2, R, qw, Z/Zp, and Zf/Zp. The inputs are therefore F, xf1, xf2, R, qw, Z/Zp, and Zf/Zp, and the outputs are x1 and xb1; x2 and xb2; y1 and xd1; y2 and xd2.

4. Normalizing the Raw Data Collected. Generally, the majority of the effort in developing a neural network model goes into collecting data and preprocessing them appropriately. The standard practice is to normalize the raw data. Here, the requirement is that the input to each input PE should be between -1.0 and 1.0, inclusive, and the output of each output PE should be between 0.0 and 1.0. Normalizing the raw data avoids numerical overflows due to very large or very small weights. The approach adopted for normalizing the raw data is

normalized value of a variable = (raw value of the variable) / (largest absolute value of the variable),

as sketched in the code example after this list.

5. Training and Testing of Data. Two major processes are considered in constructing the ANN model: training and testing. The "architecture" is a specification of the neural network topology, together with other attributes of the neural network such as the learning rule, activation function, update function, and learning and momentum factors. Note that the number of hidden layers and the number of nodes in each layer are problem dependent and are empirically selected. Moreover, it is necessary to vary the parameters used in the neural network, such as the learning rate, error tolerance, momentum parameter, and noise factor, so as to get the fastest convergence. The training and testing were performed using ANN software that was developed employing the BPNN learning algorithm [3, 4, 5, 6] for the development of a model to predict the separation performance of a ternary packed column distillation [7]. The performance of an ANN can be greatly improved by finding suitable values for its major design parameters: the number of neurons in the hidden layer, the learning rate, the momentum, and the noise factor. The number of neurons in the hidden layer influences the performance of the network: an ANN with too many hidden neurons may generalize poorly, while one with too few hidden neurons may never converge, and decreasing the number of neurons also decreases the computer time required to train the network. During training, the network keeps changing the weights of the interconnections; the rate of change is a function of the original weights and a factor that is the product of the learning rate set by the user and the error in prediction, which the network computes from the difference between the predicted output and the actual output. The learning rate value ranges from 0 to 1. The momentum parameter provides a smoothing effect by making the change in the weights a function of the previous weight change; this parameter helps in faster learning and protects against oscillation. The momentum value also ranges from 0 to 1.

6. Validation of Data. Validation is done in testing mode. The set of weights that corresponded to the minimum testing error was used in the prediction. This was done by supplying the testing file (test.dat) with the input data of the validation set; the good set of weights was then used to evaluate the corresponding outputs, which were generated and stored in the file output.dat.
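A minimal sketch of steps 2 and 4 follows (Python with NumPy). The set sizes come from the paper; the sequential split and the random stand-in data are illustrative, whereas the paper selected its training points carefully:

    import numpy as np

    def normalize(data):
        # Step 4: divide each variable (column) by its largest absolute value.
        max_abs = np.abs(data).max(axis=0)
        return data / np.where(max_abs == 0.0, 1.0, max_abs)

    # Step 2: group the 725 patterns (7 inputs + 4 outputs each) into the
    # three sets. Random numbers stand in for the simulation-model data here.
    patterns = normalize(np.random.default_rng(0).random((725, 11)))
    train_set = patterns[:169]       # used to train the network
    test_set = patterns[169:419]     # 250 patterns, used during training
    validation_set = patterns[419:]  # 306 patterns, kept aside for the trained net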
The accuracy of the prediction was evaluated on the basis of the percentage error of the actual outputs relative to the target (desired) outputs:

% error = [(desired output value - actual output value) / desired output value] x 100.

A percentage error of less than 10% is considered good, since a 10% error margin may be acceptable for design and control purposes; hence, a prediction with less than 10% error is taken to be accurate. Figure 3 shows the framework for the ANN model using the BPNN training algorithm that was used in this research.
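In code, this criterion reads as follows (a trivial sketch; the function name is ours):

    def percent_error(desired, actual):
        # Percentage error of an ANN output relative to its target value.
        return (desired - actual) / desired * 100.0

    # Example: a target of 0.52 and a prediction of 0.50 give about 3.8% error,
    # within the 10% margin considered acceptable here.
    assert abs(percent_error(0.52, 0.50)) < 10.0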
4. RESULTS AND DISCUSSION
In this paper, an ANN with the BPNN learning algorithm was used to predict the separation performance of a ternary packed column distillation in terms of the distillate product concentrations and the liquid concentration profile within the column as functions of certain conditions. During the training process, 7 input and 4 output variables were used. The training data, which were carefully selected, consisted of 169 patterns; a sample is shown in Table 1. Sufficient data (250 patterns) were used as the test set, and the remaining 306 patterns were used for validation. It is essential to have enough data in the training and test sets to train and evaluate the performance of the network effectively.
[Figure 3: Framework for ANN Model of the BPNN Training Algorithm to Predict Separation Performance of Ternary Packed Distillation Column. Raw data from the mathematical model are normalized and grouped into the training, test, and validation sets. The training data (training.dat) are used in training mode, and the inputs of the training, test, and validation sets (test.dat) in test mode of the BPNN software, which produces the outputs (output.dat) and the good weights (weights.dat) that together constitute the ANN-based (BPNN) model. Additional data at conditions not covered by the training, test, and validation sets are then supplied for prediction.]
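The prediction step of this framework can be sketched as follows (Python; the file names come from the framework above, while the random weights stand in for the stored good weights so that the sketch is self-contained):

    import numpy as np

    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

    # Random 7-14-4 weights stand in for the "good weights" that the
    # framework reads from weights.dat.
    rng = np.random.default_rng(0)
    W1 = rng.uniform(-0.5, 0.5, (14, 7))
    W2 = rng.uniform(-0.5, 0.5, (4, 14))

    def predict(x):
        # Prediction mode: a forward pass with the trained weights.
        return sigmoid(W2 @ sigmoid(W1 @ x))

    # Validation: apply the trained net to the (normalized) inputs of the
    # validation set, as supplied via test.dat, and store the predicted
    # outputs, as done in output.dat.
    validation_inputs = rng.random((306, 7))
    outputs = np.array([predict(x) for x in validation_inputs])
    np.savetxt("output.dat", outputs)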
Table 1: Sample Training Data for Prediction of the Separation Performance of Ternary Packed Column Distillation (Note: Only 15 out of 169 patterns are shown.)

 Input variables                                   |  Output variables
 F    xf1   xf2   R       qw    Z/Zp     Zf/Zp     |  x1/xb1  x2/xb2  y1/xd1  y2/xd2
 1.0  0.25  0.40  0.0008  0.00   1.0000  0.40      |  0.5295  0.3810  0.5295  0.3810
 1.0  0.25  0.40  0.0008  0.00   0.7667  0.40      |  0.2939  0.4587  0.4288  0.4142
 1.0  0.25  0.40  0.0008  0.00   0.5667  0.40      |  0.2375  0.4394  0.4063  0.4056
 1.0  0.25  0.40  0.0008  0.00   0.4333  0.40      |  0.2279  0.4237  0.4027  0.3989
 1.0  0.25  0.40  0.0008  0.00   0.3667  0.40      |  0.2255  0.4177  0.4018  0.3964
 1.0  0.25  0.40  0.0008  0.00   0.3333  0.40      |  0.2246  0.4152  0.4015  0.3953
 1.0  0.25  0.40  0.0008  0.00   0.3333  0.40      |  0.2457  0.4025  0.4015  0.3953
 1.0  0.25  0.40  0.0008  0.00   0.3000  0.40      |  0.2434  0.4036  0.3958  0.3979
 1.0  0.25  0.40  0.0008  0.00   0.2333  0.40      |  0.2377  0.4059  0.3822  0.4037
 1.0  0.25  0.40  0.0008  0.00   0.1333  0.40      |  0.2265  0.4098  0.3550  0.4135
 1.0  0.25  0.40  0.0008  0.00   0.0667  0.40      |  0.2168  0.4125  0.3310  0.4206
 1.0  0.25  0.40  0.0008  0.00   0.0000  0.40      |  0.2046  0.4152  0.3011  0.4276
 1.0  0.25  0.40  0.0008  0.00  -0.0333  0.40      |  0.1430  0.4073  0.3011  0.4276
 1.0  0.25  0.40  0.0010  0.00   1.0000  0.40      |  0.5536  0.3752  0.5536  0.3752
 1.0  0.25  0.40  0.0010  0.00   0.7667  0.40      |  0.3345  0.4644  0.4475  0.4184
In the application of the backpropagation algorithm, a number of different layers and processing elements were tried. The number of hidden layers and the number of processing elements in each layer are problem dependent and are empirically selected, and the time of convergence depended on the number of PEs in the hidden layer. Moreover, the addition of a momentum parameter and a noise factor also helped the simulation to converge; it is necessary to vary the parameters used in the neural network, such as the learning rate, error tolerance, momentum parameter, and noise factor, in order to get the fastest convergence. In this paper, the best result was obtained using the following input parameters: (a) error tolerance = 0.0001; (b) learning parameter = 0.05; (c) maximum number of cycles = 120,000; (d) total number of layers = 3; (e) total number of processing elements per layer (input, hidden, output) = 7, 14, 4; (f) momentum parameter = 0.0005; (g) noise factor = 0.0005.
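Collected as a configuration for a training driver, these settings read as follows (the dictionary keys are illustrative, not the actual input format of the BPNN software used in the paper):

    # Best-performing settings reported above, gathered in one place.
    config = {
        "error_tolerance": 0.0001,   # stop when errors fall below this level
        "learning_rate": 0.05,
        "max_cycles": 120_000,
        "layers": (7, 14, 4),        # input, hidden, and output PEs
        "momentum": 0.0005,
        "noise_factor": 0.0005,
    }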
4.1 Effect of Reflux Ratio
Figure 4 shows the liquid concentration profiles of each component at various reflux ratios. The abrupt change in concentration at the feed location is due to the fact that the feed condition differs from that of the liquid within the column. Training data at reflux ratios of 1, 10, and 1000 were used (the training data appear as solid symbols on three curves). Testing points at intermediate conditions fell on the same three curves as the training data, while ANN predictions at additional conditions were made at reflux ratios of 2, 5, and 100. The ANN testing and prediction results were observed to be almost identical to those of the SM.

[Figure 4: Effect of Reflux Ratio on Liquid Concentration Profile. The plots show x1 (acetone) and x2 (methanol) versus Z/Zp for SM curves at R = 1, 2, 5, 10, 100, and 1000, with the training data (from SM, at R = 1, 10, and 1000) and the ANN results marked; the feed location is indicated. Conditions: number of packing elements in the rectifying section (NR) = 4; number of packing elements in the stripping section (NS) = 2; xf1 = 0.25, xf2 = 0.40, qw = 0.0.]
Figure 5 summarizes the variation of the distillate composition with reflux ratio. Using the four trained data points (at R = 0.8, R = 1, R = 10, and R = 1000, from the SM), it can be seen from this graph that the ANN was able to predict accurately the values of the distillate concentrations at the other testing conditions (e.g., R = 2, R = 5, and R = 100).

[Figure 5: Variation of Distillate Composition with External Reflux Ratio (R). The plot shows the distillate composition xd of acetone and methanol against the reflux ratio (10^-1 to 10^3) from the simulation model (SM), the data points used for training (from SM), and the ANN predictions. Conditions: NR = 4, NS = 2, xf1 = 0.25, xf2 = 0.40, qw = 0.0.]
Figure 6 shows the liquid concentration profiles at R = 0.8. It is evident from this graph that the ANN made an accurate prediction of the liquid concentrations within the column based on the results of the simulation model.
[Figure 6: Liquid Concentration Profiles at Rdo = 0.8. The plot shows the component concentrations xi (acetone, methanol, ethanol) versus z/zp from the SM, the ANN testing results, and the ANN predictions; the feed location is indicated. Conditions: NR = 4, NS = 2, xfa = 0.25, xfb = 0.40, xfc = 0.35, qw = 0, Re = 200.]

4.2 Effect of Wall Heat Flux
Figure 7 shows the effect of wall heat flux on the concentration profile at a fixed reflux ratio. When the heat flux is changed, the behavior of the concentration profile is similar to that observed when the reflux ratio is varied. Here, the ANN's predictions are close to those of the simulation model.

[Figure 7: Effect of Wall Heat Flux on Liquid Concentration Profile. The plots show x1 (acetone) and x2 (methanol) versus Z/Zp for training data at qw = 1500 and qw = 100 (from SM), SM curves at qw = 1500, 1000, and 100, and the corresponding ANN predictions; the feed location is indicated.]
Figure 8 shows the effect of wall heat flux on the distillate composition. Here, the ANN model was able to predict the distillate concentrations accurately at other conditions using the five trained data points (at qw values of 1500, 500, 100, -100, and -500).
[Figure 8: Effect of Wall Heat Flux on Distillate Composition. The plot shows the distillate composition xd of acetone and methanol versus the wall heat flux qw (W/sq.m.) from the SM, the data points used for training (from SM), and the ANN predictions. Conditions: NR = 4, NS = 2, xf1 = 0.25, xf2 = 0.40, R = 2.0.]
4.3 Effect of Reflux Ratio and Heat Flux
Figure 9 shows the ANN prediction of the liquid stream (acetone and methanol) concentration profiles within the column at R = 8 with qw = -400 and at R = 400 with qw = 400. The curves show the same trends as those in Figures 4 and 7. The similarity in the behavior of the curves demonstrates the accuracy and robustness of the ANN predictions at additional conditions.

[Figure 9: ANN Prediction of the Liquid Acetone and Methanol Concentration Profiles. The plot shows x1 and x2 versus Z/Zp for acetone and methanol at R = 8 with qw = -400 and at R = 400 with qw = 400; the feed location is indicated. Conditions: NR = 4 and NS = 2; xf1 = 0.25, xf2 = 0.40, xf3 = 0.35.]
4.4 Error Distribution
Figure 10 shows the error distribution for the ANN testing at R = 1. An average error of 2.1% was calculated for R = 1, and 3.8% overall. Since these errors are below the 10% margin, the ANN prediction can be considered quite accurate.

[Figure 10: Percentage Error Distribution for ANN Testing at R = 1. The plot shows the % error for acetone and methanol versus Z/Zp.]
5. CONCLUSIONS
This paper has presented how an ANN approach can be applied to predict the separation performance of a ternary packed distillation column in terms of the concentrations of the products and the concentration profile of the liquid stream within the column. The values predicted by the ANN were found to be very close to the values obtained from simulation: the average error of the ANN predictions was 2.1% for R = 1 and 3.8% overall, both within the acceptable level. The good agreement between the ANN predictions and the simulation model was achieved by careful selection of the inputs and of the training data. Thus, the feed flow rate (F), feed concentrations (xf1 and xf2), reflux ratio (R), external wall heat flux (qw), height of packing per height of a single packing element (Z/Zp), and height of the feed point per height of a single packing element (Zf/Zp) were carefully chosen as inputs. As shown, sets of training data for certain conditions could be used to predict the product concentrations and the liquid stream concentration profile at other conditions. The results showed that the accuracy of the ANN predictions was comparable to that achieved by the simulation model for ternary packed column distillation, so it is concluded that the ANN approach is capable of modeling this complex distillation operation with acceptable accuracy.
REFERENCES
[1] Olaño, S.S.B., Kosuge, H., and Asano, K. Prediction of the separation performance of ternary packed column distillation having structured packing using heat and mass transfer model. Journal of the Japan Petroleum Institute, 40 (1997), 78-86.
[2] Olaño, S.S.B., Nagura, S., Kosuge, H., and Asano, K. Mass transfer in binary and ternary distillation by a packed column with structured packing. Journal of Chemical Engineering of Japan, 28 (1995), 750-757.
[3] Rao, V. and Rao, H. C++ Neural Networks and Fuzzy Logic. MIS Press, New York, 1995.
[4] Dadios, E.P. and Williams, D.J. Application of neural networks to the flexible pole-cart balancing problem. Proc. 1995 IEEE International Conference on Systems, Man and Cybernetics (Vancouver, Canada, Oct. 22-25, 1995), 2506-2511.
[5] Dadios, E.P. et al. Vision guided gantry robot using neural networks. Proc. ME-SELA '97 International Conference on Managing Enterprises: Stakeholders, Engineering, Logistics and Achievements (Loughborough University, Loughborough, U.K., July 22-24, 1997), 663-675.
[6] Dadios, E.P. Neural network application to pattern recognition. Proc. 2nd Pacific Asia Conference on Mechanical Engineering (Manila, Philippines, Sept. 9-12, 1998), 221-229.
[7] Lacar, H.L., Olaño, S.S.B., and Dadios, E.P. Application of artificial neural network to ternary distillation in a packed column. Proc. Regional Symposium on Chemical Engineering and the Ninth National Chemical Engineering and Applied Chemistry Conference (Songkhla, Thailand, Nov. 22-24, 1999), B25-1 to B25-7.