Computers and Chemical Engineering 29 (2005) 2134–2143
Application of optimal RBF neural networks for optimization and characterization of porous materials

A. Shahsavand*, A. Ahmadpour

Chemical Engineering Department, Ferdowsi University of Mashad, P.O. Box 91775-1111, Mashad, Iran

Received 4 September 2004; received in revised form 29 June 2005; accepted 5 July 2005. Available online 18 August 2005.
Abstract

Optimization and characterization of porous materials have been studied extensively by surface phenomena researchers. Efficient methods are required to predict the optimum values of operating parameters in the different stages of material preparation and characterization processes. A novel method based on the application of a special class of radial basis function neural network known as the Regularization network is presented in this article. A reliable procedure is introduced for efficient training of the optimal isotropic Gaussian Regularization network using experimental data sets. Two practical case studies on the optimization and characterization of carbon molecular sieves and activated carbons were employed to compare the performance of properly trained Regularization networks with that of optimal conventional methods. It is clearly demonstrated that a Regularization network with the optimum value of the isotropic spread and the optimum level of regularization can efficiently filter out the noise and provide better generalization performance than the conventional techniques. © 2005 Elsevier Ltd. All rights reserved.

Keywords: Neural network; Regularization network; Characterization; Optimization; Porous materials
1. Introduction

Several types of neural networks have been extensively used for empirical modeling of various chemical engineering processes (Himmelblau & Hoskins, 1988; Venkatasubramanian & Chan, 1989; Watanabe, Matsuura, Abe, Kubota, & Himmelblau, 1989; Nascimento, Oliveros, & Braun, 1994; Chan & Nascimento, 1994; Nascimento, Guardani, & Giulietti, 1997; Iliuta & Lavric, 1999; Nascimento, Giudici, & Guardani, 2000; Guardani, Onimaru, & Crespo, 2001; Shaikh & Al-Dahhan, 2003; Lin, Chen, & Tsutsumi, 2003; Tarca, Grandjean, & Larachi, 2003). Although the characterization and optimization of solid porous materials have been explored considerably by many researchers (Szombathely, Brauer, & Jaroniec, 1992; Cascarini de Torre & Bottani, 1996; Lastoskie & Gubbins, 2001; Moussatov, Ayrault, & Castagnede, 2001),
* Corresponding author. E-mail addresses: [email protected] (A. Shahsavand), [email protected] (A. Ahmadpour).
the application of neural networks to such tasks is relatively new. Real solids have a complex micro-structure, and the majority of solid materials are porous to some extent. Characterization of such materials has always been a topic of great interest (Stoeckli, Kraehenbuehel, Ballerinin, & Bernardini, 1989; Russel & LeVan, 1994; Ahmadpour, 1997; Floquet, Coulomb, & André, 2004). The macroscopic properties of porous solids are closely connected to their micro-porous structure, characterized by parameters such as density, surface area, porosity, pore size distribution, energy distribution and pore geometry. Numerous techniques, ranging from simple pycnometry to more sophisticated methods such as radiation scattering or ultrasonic techniques, are employed to characterize porous materials experimentally (Lee, Chiang, & Tsay, 1996). Although numerous methods have been proposed to address the characterization of porous materials, no well-developed theory is yet available (Jagiello, Bandosz, & Schwarz, 1996; Lastoskie & Gubbins, 2001). The neural network approach is employed in this article to explore
Nomenclature

ADF: approximate degrees of freedom
CMS: carbon molecular sieve
CV: cross validation criterion
d.f.: degrees of freedom
e_k: kth unit vector of size N
G: Green's matrix of size (N × N)
H(λ): smoother matrix
I_N: identity matrix of size (N × N)
LOO: leave one out criterion
N: number of training exemplars
n_λ: number of steps for intermediate values of λ
n_σ: number of steps for intermediate values of σ
OLR: optimum level of regularization
p: dimension of input space
RBF: radial basis function
RBFN: radial basis function network
r²: correlation coefficient
RT: residence time (min)
t_j: jth neuron of RBF network
w_λ: synaptic weights vector of size N
x_i: ith input vector of size N
x_max: a vector containing maximum values of x
x_min: a vector containing minimum values of x
y: response vector of size N
ŷ(x): computed response of network

Greek letters

λ: regularization parameter
λ_max: maximum anticipated value of λ
λ_min: minimum anticipated value of λ
λ*: optimal regularization parameter
σ_j: jth isotropic spread of Gaussian basis function
σ*: optimal isotropic spread
σ_max: maximum anticipated value of σ
σ_min: minimum anticipated value of σ
the relationship between the characterization parameters of solid particles and the related operating variables. To the best of our knowledge, this approach has not been reported previously. Characterization, or even optimization, of porous materials can be viewed as a function approximation problem. In this approach, the underlying relationship between a desired response and various input parameters or operating conditions is investigated. The close relationship between the function approximation problem and feed-forward artificial neural networks was explored earlier (Shahsavand, 2000). From this viewpoint, feed-forward neural networks are regarded as approximation techniques for reconstructing input–output mappings in high-dimensional spaces. Experimental data are required to construct an appropriate mapping effectively.
Chemical engineering data are usually contaminated with relatively high measurement errors, so proper noise filtering facilities are essential to avoid the over-fitting phenomenon. A special class of feed-forward neural networks known as radial basis function networks (RBFN), which originate from the well-studied subject of multivariate regularization theory, provides a powerful method for hyper-surface reconstruction coupled with efficient noise elimination (Shahsavand, 2003). These networks enjoy the best approximation property among all feed-forward networks (Poggio & Girosi, 1990a,b; Haykin, 1999). Hunt, Sbarbaro, Zbikowski, and Gawthrop (1992) presented further theoretical support in favor of such networks. The training of RBFNs with known centers and spreads reduces to the solution of an over-determined set of linear equations, which can be achieved by a variety of highly stable techniques (Golub & Van Loan, 1996). As shown in a previous investigation (Shahsavand & Ahmadpour, 2005), these networks are ideal for capturing the true underlying trend from noisy chemical engineering data sets.
2. Theoretical background of RBF networks

Poggio and Girosi (1990a,b) proved that the ultimate solution of the ill-posed problem of multivariate regularization theory can be represented in the concise form

$$(G + \lambda I_N)\,\vec{w}_\lambda = \vec{y} \tag{1}$$
where G is the N × N symmetric Green's matrix, λ the regularization parameter, I_N the N × N identity matrix, w_λ the synaptic weight vector, and y_i the response value corresponding to the input vector x_i, i = 1, 2, ..., N. Fig. 1 illustrates the equivalent network (known as the Regularization network) for the above equation, with N being both the number of training exemplars and the number of neurons of the Regularization network. The activation function of the jth hidden neuron is a Green's function G(x, x_j) centered at the particular data point x_j, j = 1, 2, ..., N. For a special choice of the stabilizing operator, the Green's function reduces to a multidimensional factorizable isotropic Gaussian basis function with an infinite number of continuous derivatives (Poggio & Girosi, 1990b; Haykin, 1999).
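For concreteness, a minimal NumPy sketch of Eq. (1) follows; the function name and the tiny synthetic check are ours, not from the paper.

```python
import numpy as np

def solve_weights(G, y, lam):
    # Eq. (1): solve (G + lam * I_N) w = y for the synaptic weight vector.
    N = G.shape[0]
    return np.linalg.solve(G + lam * np.eye(N), y)

# Tiny synthetic check: with lam = 0 and an invertible G, G @ w reproduces y.
G = np.array([[1.0, 0.5], [0.5, 1.0]])
y = np.array([2.0, 1.0])
w = solve_weights(G, y, lam=0.0)
assert np.allclose(G @ w, y)
```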
Fig. 1. Regularization network with single hidden layer.
$$G(\vec{x}, \vec{x}_j) = \exp\!\left(-\frac{\lVert \vec{x} - \vec{x}_j \rVert^2}{2\sigma_j^2}\right) = \prod_{k=1}^{p} \exp\!\left(-\frac{(x_k - x_{j,k})^2}{2\sigma_j^2}\right) \tag{2}$$

where σ_j denotes the isotropic spread of the jth Green's function, identical for all input dimensions. The performance of a Regularization network strongly depends on the appropriate choice of the isotropic spread and the proper level of regularization. The leave one out (LOO) cross validation (CV) criterion (Golub, Heath, & Wahba, 1979; Golub & Van Loan, 1996) can be used for efficient computation of the optimum regularization parameter λ* for a given σ (Shahsavand, 2000, 2003).

Evidently, the RBF network of Eq. (2) involves three sets of parameters, namely centers, spreads and synaptic weights. The centers and spreads appear nonlinearly in the training cost function of the network, so their efficient calculation requires demanding nonlinear optimization techniques, while the linear synaptic weights can be readily computed. For a network consisting of N Green's functions (neurons) with p input dimensions, the number of parameters is N × p for centers, N × p × p for spreads and N for weights (Haykin, 1999). Training such an RBF network therefore requires the calculation of N linear synaptic weights, the selection of N × p × (p + 1) nonlinear centers and spreads, and the computation of the optimum level of regularization (λ*).

The above problem can be avoided by using an isotropic spread (a constant but unknown value) for all neurons. In that case, the problem of finding the optimum values of the linear weights, the isotropic spread (σ) and the regularization parameter (λ) reduces to the solution of linear sets of equations, which is trivial. The novelty of the proposed method is the development of a convenient procedure for de-correlating the above parameters and selecting the optimal values of λ* and σ* using only linear optimization techniques. As will be shown, the plot of λ* versus σ suggests a threshold σ* that can be regarded as the optimal isotropic spread, for which the Regularization network provides an appropriate model of the training data set. Note also that the effective degrees of freedom of a Regularization network are a function of both the regularization level and the isotropic spread.

The following procedure (shown in Fig. 2) is employed for efficient training of the optimal isotropic Gaussian Regularization network using an experimental data set.

(a) Specify the preliminary data: the type of Green's function, the input space dimension (p), the number (N) and values of the training exemplars [inputs (x_i, i = 1, 2, ..., N) and the corresponding responses (y_i, i = 1, 2, ..., N)], the minimum and maximum values of the isotropic spread (σ) and the regularization parameter (λ), and the numbers of steps for intermediate values of σ and λ.
(b) Normalize the input variables¹ and store x_min and x_max.

¹ The normalization of the input space does not create any limitation for practical applications, but greatly eases the selection of isotropic spreads.
Fig. 2. Flow diagram of the optimal learning procedure for training Regularization networks.
(c) Set the N normalized input exemplars as the N neurons of the Regularization network.
(d) Select the isotropic spread between its minimum and maximum values (0 and 1).
(e) Construct the Green's matrix for the specified value of the isotropic spread.
(f) Specify the regularization parameter between its minimum and maximum values (for most practical cases the optimum value of λ lies between 10⁻⁵ and 10⁻², but it is safer to search the interval 10⁻¹⁰ to 10²).
(g) Find the inverse of (G + λI_N) using stable techniques such as singular value decomposition.
(h) Compute the cross validation criterion

$$\mathrm{CV}(\lambda) = \frac{1}{N} \sum_{k=1}^{N} \left[\frac{\vec{e}_k^{\,T}\left(I_N - G(G+\lambda I_N)^{-1}\right)\vec{y}}{\vec{e}_k^{\,T}\left(I_N - G(G+\lambda I_N)^{-1}\right)\vec{e}_k}\right]^2$$

with e_k being the kth unit vector of size N.
(i) Repeat from step (f) for the next value of the regularization parameter.
(j) Find the optimum regularization parameter (λ*), which minimizes the CV criterion for the corresponding spread.
(k) Repeat from step (d) for the next value of the isotropic spread.
(l) Find the optimum isotropic spread (σ*), which maximizes the value of λ*.
(m) Construct the (G* + λ*I_N) matrix using the optimal spread (σ*) and the corresponding optimum regularization level (λ*).
(n) Compute the optimal synaptic weight vector w_λ* = (G* + λ*I_N)⁻¹ y.

The optimal synaptic weights can then be used to predict the response for any given input values (ŷ = G w_λ*). Evidently, the Green's matrix should be constructed using the normalized input variables, the training centers and the optimal value of the isotropic spread. A code sketch of this procedure is given below.
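The following minimal NumPy sketch summarizes steps (c)-(n) under our own assumptions (isotropic Gaussian Green's functions, centers fixed at the normalized exemplars, illustrative grid sizes); all names are ours, not from the paper. The CV sum of step (h) is vectorized by noting that e_k^T(I_N − H)y is simply the kth training residual and e_k^T(I_N − H)e_k the kth diagonal element of I_N − H(λ).

```python
import numpy as np

def green_matrix(X, centers, sigma):
    # Isotropic Gaussian Green's function of Eq. (2), evaluated pairwise.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def loo_cv(G, y, lam):
    # Step (h): leave-one-out CV criterion for the given spread and lambda.
    N = len(y)
    H = G @ np.linalg.inv(G + lam * np.eye(N))  # smoother matrix H(lam);
    # np.trace(H) would give the approximate degrees of freedom (ADF).
    resid = y - H @ y                           # e_k^T (I - H) y for all k
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

def train_regularization_network(X_norm, y, n_sigma=20, n_lambda=25):
    # Steps (d)-(l): grid-search lam* for every sigma, then pick the
    # sigma* whose optimal regularization level lam* is largest.
    sigmas = np.linspace(0.05, 1.0, n_sigma)    # step (d): 0 < sigma <= 1
    lams = np.logspace(-10, 2, n_lambda)        # step (f): safe search range
    pairs = []
    for sigma in sigmas:
        G = green_matrix(X_norm, X_norm, sigma)  # steps (c)/(e): centers = exemplars
        lam_star = min(lams, key=lambda lam: loo_cv(G, y, lam))  # steps (f)-(j)
        pairs.append((sigma, lam_star))
    sigma_opt, lam_opt = max(pairs, key=lambda p: p[1])           # step (l)
    G = green_matrix(X_norm, X_norm, sigma_opt)                   # step (m)
    w = np.linalg.solve(G + lam_opt * np.eye(len(y)), y)          # step (n)
    return sigma_opt, lam_opt, w
```

For simplicity this sketch inverts (G + λI_N) directly; the paper's recommendation of singular value decomposition in step (g) is the numerically safer choice for ill-conditioned Green's matrices.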
3. Experimental case studies

The capabilities of the above algorithm for efficient training of Regularization networks were demonstrated in a previous study using a synthetic example (Shahsavand & Ahmadpour, 2005). In the present investigation, two sets of experimental data are used to explore the application of radial basis function neural networks to the empirical modeling of both optimization and characterization of porous materials. In the first trial, the experimental measurements presented in Table 1 are used to train the Regularization network, and the optimum process conditions are found for maximum selectivity of O2/N2 in air separation with carbon molecular sieve (CMS) adsorbents. A sketch of arranging these measurements for training follows Table 1.

Table 1
Experimental data on CMS selectivity

Sample | Temperature (°C) | Residence time (RT) (min) | Selectivity (O2/N2)
1 | 750 | 5 | 1.2
2 | 750 | 10 | 1.34
3 | 750 | 13 | 1.54
4 | 750 | 17 | 2.72
5 | 750 | 20 | 4.4
6 | 760 | 20 | 3.9
7 | 765 | 10 | 1.57
8 | 780 | 14.5 | 2.59
9 | 785 | 10 | 1.72
10 | 800 | 5 | 1.5
11 | 800 | 10 | 5.14
12 | 800 | 20 | 3.95
13 | 810 | 10 | 3.36
14 | 820 | 10 | 3.2
15 | 830 | 5 | 2.95
16 | 830 | 10 | 2.63
17 | 850 | 5 | 1.85
18 | 850 | 7.5 | 2.5
19 | 850 | 10 | 2.0
20 | 850 | 20 | 1.46
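As a usage illustration (our variable names, not part of the original study), the Table 1 measurements can be arranged as arrays and min-max normalized onto [0, 1], as required by step (b) of the training procedure:

```python
import numpy as np

# Table 1: activation temperature (C), residence time (min), O2/N2 selectivity.
data = np.array([
    [750, 5, 1.20], [750, 10, 1.34], [750, 13, 1.54], [750, 17, 2.72],
    [750, 20, 4.40], [760, 20, 3.90], [765, 10, 1.57], [780, 14.5, 2.59],
    [785, 10, 1.72], [800, 5, 1.50], [800, 10, 5.14], [800, 20, 3.95],
    [810, 10, 3.36], [820, 10, 3.20], [830, 5, 2.95], [830, 10, 2.63],
    [850, 5, 1.85], [850, 7.5, 2.50], [850, 10, 2.00], [850, 20, 1.46],
])
X, y = data[:, :2], data[:, 2]

# Step (b): min-max normalization of the inputs onto [0, 1].
x_min, x_max = X.min(axis=0), X.max(axis=0)
X_norm = (X - x_min) / (x_max - x_min)
```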
Fig. 3. (a) Three-dimensional plot of the training data set, (b) trend analysis of selectivity versus temperature at constant residence times.
Table 2
Replicated experimental data for determination of measurement error

Run | Temperature (°C) | Residence time (RT) (min) | Selectivity (O2/N2) | Average selectivity | Deviation (%)
1a | 800 | 10 | 4.72 | 5.14 | 8.2
1b | 800 | 10 | 5.46 | 5.14 | 6.2
1c | 800 | 10 | 5.25 | 5.14 | 2.1
2a | 830 | 10 | 3.15 | 2.63 | 19.8
2b | 830 | 10 | 2.11 | 2.63 | 19.8
3a | 850 | 5 | 1.95 | 1.85 | 5.4
3b | 850 | 5 | 1.75 | 1.85 | 5.4
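As a sanity check (ours, not stated in the paper), the Deviation column appears to be the percentage deviation of each replicate from the run average:

```python
# Reproduce Table 2's Deviation (%) as 100 * |selectivity - average| / average.
runs = [(4.72, 5.14), (5.46, 5.14), (5.25, 5.14),
        (3.15, 2.63), (2.11, 2.63), (1.95, 1.85), (1.75, 1.85)]
for s, avg in runs:
    print(round(100 * abs(s - avg) / avg, 1))  # 8.2, 6.2, 2.1, 19.8, 19.8, 5.4, 5.4
```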
Details of the experimental procedures for the preparation and measurement of these porous materials are presented elsewhere (Abedinzadegan Abdi, Mahdiarfar, Jalilian, Ahmadpour, & Mirhabibi, 2001; Ahmadpour et al., 2002). Fig. 3 shows the discrete three-dimensional plot of the above data set and a trend analysis of the selectivity values versus activation temperature at constant residence times. Although the dependency of O2/N2 selectivity on temperature and residence time shows distinct maxima and minima, it is somewhat difficult to represent the 3D points with a prespecified function or surface. An interesting point is that the selectivity becomes independent of residence time at relatively elevated temperatures (850 °C). The entire process of preparation, treatment and characterization of the CMS adsorbents includes several experimental steps. Some of these tests were repeated to provide an estimate of the overall measurement error for these practical steps. The results shown in Table 2 indicate that a maximum deviation of 20% in the reported selectivity values may be anticipated for the experimental data set. Evidently, the overall measurement error could be even greater, due to the complexity of the whole process of CMS adsorbent production and characterization.

Fig. 4. Variation of optimum level of regularization (λ*) and approximate degrees of freedom (ADF) with the isotropic spread (σ) of Regularization network.
Fig. 5. The 3D and contour map plots of the generalization performance of the optimally trained Regularization network.
Table 3
The top five equations fitted by 3D table-curve software

No. 1: z = a + b/x + c/x² + d/x³ + e/x⁴ + f/x⁵ + g ln y + h(ln y)² + i(ln y)³ + j(ln y)⁴ + k(ln y)⁵
a = 4.91882e+06, b = −1.9658e+10, c = 3.14044e+13, d = −2.5067e+16, e = 9.99753e+18, f = −1.5938e+21, g = −318.422047, h = 466.592569, i = −280.044128, j = 75.6618794, k = −7.6376009

No. 2: z = a + bx + cx² + dx³ + ex⁴ + fx⁵ + g ln y + h(ln y)² + i(ln y)³ + j(ln y)⁴ + k(ln y)⁵
a = −4.4256e+06, b = 27789.17952, c = −69.7497272, d = 0.08747608, e = −5.4817e−05, f = 1.3731e−08, g = −431.971549, h = 564.970729, i = −321.877946, j = 84.39084263, k = −8.35259351

No. 3: z = a + b/x + c/x² + d/x³ + e/x⁴ + f/y + g/y² + h/y³ + i/y⁴ + j/y⁵
a = −27228.8612, b = 7.99e+07, c = −8.752e+10, d = 4.23156e+13, e = −7.6128e+15, f = 2894.78114, g = −67061.3177, h = 709405.4168, i = −3.4648e+06, j = 6.30027e+06

No. 4: z = a + bx + cx² + dx³ + ex⁴ + f/y + g/y² + h/y³ + i/y⁴ + j/y⁵
a = 20394.95815, b = −93.8879574, c = 0.159771566, d = −0.00011904, e = 3.26365e−08, f = 2895.432235, g = −67114.469, h = 710334.0983, i = −3.4709e+06, j = 6.314e+06

No. 5: z = a + b ln x + c(ln x)² + d(ln x)³ + e/y + f/y² + g/y³ + h/y⁴ + i/y⁵
a = 2.30757e+06, b = −1.0381e+06, c = 155664.0355, d = −7780.36507, e = 2893.136756, f = −67064.9057, g = 709816.7218, h = −3.4683e+06, i = 6.30913e+06
The data in Table 1 were used to train a Regularization network with 20 centers positioned exactly at the training exemplars. The procedure described in Fig. 2 was employed to select the optimum values of the isotropic spread (σ*) and the regularization parameter (λ*). The leave one out cross validation criterion (Golub & Van Loan, 1996) was exploited to select the optimum level of regularization. Fig. 4 illustrates the variation of the optimum level of regularization and the corresponding approximate degrees of freedom with the isotropic spread of the trained Regularization network. Using the definition of the smoother matrix H(λ) = G(G + λI_N)⁻¹ (Hastie & Tibshirani, 1990), the approximate degrees of freedom, which give an indication of the amount of fitting that H does, are defined as tr(H).²
Fig. 6. Generalization performances of various models fitting the experimental data.
Fig. 7. Generalization performance of Regularization network with d.f. = 20.

Table 4
Comparison of coefficients of determination for various models

Model | r² | Remarks
1 | 0.5284071464 | –
2 | 0.5245098605 | –
3 | 0.512376144 | –
4 | 0.5122967927 | –
5 | 0.5122232765 | –
Optimal Regularization network | 0.76905362 | σ* = 0.45, λ* = 0.2038
Regularization network | 0.97159379 | σ* = 0.15, λ* = 1e−7
Fig. 4 reveals that the optimum value of the isotropic spread (σ* = 0.45) corresponds to the maximum regularization level of λ* = 0.2038. The generalization performance of the optimally trained Regularization network (σ* = 0.45 and λ* = 0.2038) was then computed on a 50 × 50 uniformly spaced grid in the normalized domain of inputs (0 ≤ x1, x2 ≤ 1). Fig. 5 illustrates the three-dimensional plot of this generalization performance for de-normalized inputs. Because both the optimum level of regularization and the optimal isotropic spread are employed, the constructed surface does not follow the noise and is reasonably smooth. The 3D plot indicates two distinct maxima, which can be investigated by further experiments. The same data set was also fed to two conventional software packages (3D table-curve and SigmaPlot 2000) to find appropriate models fitting the experimental data. Table 3 shows the top five equations³ fitted to the experimental data⁴ by the 3D table-curve software. The optimum values of the model parameters were then verified with SigmaPlot 2000. Fig. 6 compares the generalization performance of the optimum Regularization network with the above models.

² Sum of the eigenvalues (or diagonal elements) of matrix H.
³ With different forms.
⁴ Sorted by the square of the correlation coefficient (r²).
As Table 4 shows, the optimal Regularization network provides the best coefficient of determination (the square of the correlation coefficient) and hence gives the finest fit to the experimental data. The folds in the polynomial surfaces of Fig. 6 (table-curve predictions) are due to the high level of noise in the experimental data and the over-fitting phenomenon.

Table 5
Experimental data for characterization of activated carbons (Patrick, 1995)

Sample | Packing density (g/ml) | BET surface area (m²/g) | Methane storage (v/v)
1 | 0.37 | 480 | 54
2 | 0.45 | 700 | 64
3 | 0.56 | 780 | 74
4 | 0.48 | 997 | 75
5 | 0.44 | 1030 | 73
6 | 0.56 | 1030 | 82
7 | 0.45 | 1050 | 74
8 | 0.49 | 1100 | 79
9 | 0.47 | 1100 | 77
10 | 0.48 | 1190 | 80
11 | 0.45 | 1190 | 78
12 | 0.39 | 1240 | 73
13 | 0.45 | 1260 | 80
14 | 0.44 | 1270 | 79
15 | 0.42 | 1270 | 77
16 | 0.32 | 1280 | 67
17 | 0.59 | 1350 | 96
18 | 0.46 | 1359 | 83
19 | 0.5 | 1370 | 88
20 | 0.46 | 1375 | 84
21 | 0.5 | 1420 | 89
22 | 0.13 | 1600 | 51
23 | 0.3 | 1610 | 71
24 | 0.45 | 1620 | 89
25 | 0.3 | 1650 | 72
26 | 0.3 | 1680 | 73
27 | 0.46 | 1730 | 94
28 | 0.28 | 2500 | 84
29 | 0.3 | 2500 | 87
30 | 0.45 | 3000 | 127
31 | 0.23 | 3410 | 88
Evidently, such folds lead to poor generalization performance and are not reliable. Obviously, decreasing the value of the isotropic spread fits the noise and forces the correlation coefficient toward unity. As Fig. 4 illustrates, the approximate degrees of freedom tend to 20 for very small spreads. Fig. 7 clearly shows that a Regularization network with such a small spread (which corresponds to the maximum approximate degrees of freedom (d.f.)) fits the noise and exactly reproduces the training data. Evidently, the optimal prediction of the Regularization network is more appropriate, given the high level of noise in the measured values.

Characterization of activated carbons is considered as another application of the optimal Regularization network. Table 5 shows the experimental data describing the dependency of methane storage in various activated carbons on their packing densities and BET surface areas (Patrick, 1995). Fig. 8 illustrates the variation of the optimum levels of regularization (OLR) with the corresponding values of the isotropic spread. A Regularization network was trained on the experimental data of Table 5 with the optimum value of the isotropic spread (σ* = 0.45) and the optimum level of regularization (λ* = 0.0001). Fig. 9 presents the remarkable generalization performance of this network on a 50 × 50 uniform grid.
Fig. 8. Variation of optimum level of regularization (λ* ) with the isotropic spread (σ) of Regularization network for characterization of activated carbon example.
Fig. 9. Comparison of generalization performances of conventional software and Regularization network with optimum level of regularization.
As Fig. 9 illustrates, the generalization performance of the conventional software is comparable with that of the Regularization network in the presence of sufficient data and a low level of noise. Since the data in most chemical engineering applications are contaminated with noise, efficient algorithms are required to filter out the noise and capture the true underlying trend from noisy data sets. Regularization networks with the optimum value of isotropic spread and the optimum level of regularization are inherently equipped to perform such a demanding task. A sketch of the grid evaluation used here is given below.
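For completeness, a minimal sketch of the 50 × 50 grid evaluation follows (assuming sigma_opt, w and the normalized training inputs X_norm produced by the earlier training sketch; names are ours):

```python
import numpy as np

def predict(X_query, centers, sigma, w):
    # Network response y_hat = G w at (normalized) query points.
    d2 = ((X_query[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ w

# 50 x 50 uniformly spaced grid over the normalized domain 0 <= x1, x2 <= 1.
g1, g2 = np.meshgrid(np.linspace(0.0, 1.0, 50), np.linspace(0.0, 1.0, 50))
grid = np.column_stack([g1.ravel(), g2.ravel()])
# surface = predict(grid, X_norm, sigma_opt, w).reshape(50, 50)  # cf. Figs. 5 and 9
```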
4. Conclusion

The optimization and characterization of porous materials were treated as a hyper-surface reconstruction problem in the present investigation. Feed-forward artificial neural networks are well suited to such a task. A special class of radial basis function neural networks known as the Regularization network was employed to predict the optimal values of operating parameters in various stages of porous material preparation and characterization processes. A novel and efficient algorithm was presented for training the optimal isotropic Gaussian Regularization network. Two experimental data sets were employed to illustrate the appropriate performance of properly trained Regularization networks for hyper-surface reconstruction purposes. It was also demonstrated that the proposed Regularization network method provides much better generalization performance than conventional techniques in the presence of noise. The latter methods simply over-fit the noise and result in unrealistic folds with poor generalization performance. The proposed algorithm can easily be extended to other applications.
Acknowledgments

The authors wish to acknowledge the valuable financial support of Dr. M.M. Akbarnejad, head of the catalyst research center of the Research Institute of Petroleum Industry (RIPI, Iran), and Mr. A.M. Rashidi for supplying the experimental data.
References

Abedinzadegan Abdi, M., Mahdiarfar, M., Jalilian, A., Ahmadpour, A., & Mirhabibi, A. R. (2001). Preparation of carbon molecular sieve from a new natural source. In Proceedings of the American Carbon Society's 25th Conference on Carbon "CARBON 01".
Ahmadpour, A., Abedinzadegan Abdi, M., Mahdiarfar, M., Rashidi, A. M., Jalilian, A., & Mirhabibi, A. R. (2002). New carbon molecular sieves for air and hydrocarbon separations. In Proceedings of the International Conference on Carbon "CARBON 02".
Ahmadpour, A. (1997). Fundamental studies on preparation and characterization of carbonaceous adsorbents for natural gas storage. PhD thesis, University of Queensland, Australia.
Cascarini de Torre, L. E., & Bottani, E. J. (1996). Adsorption energy distribution function. Colloids and Surfaces, 116, 285.
Chan, W. M., & Nascimento, C. A. O. (1994). Use of neural networks for modeling of olefin polymerization in high pressure tubular reactors. Journal of Applied Polymer Science, 53, 1277.
Floquet, N., Coulomb, J. P., & André, G. (2004). Hydrogen sorption in MCM-41 by neutron diffraction study. Characterization of the porous structure of MCM-41 and the growth mode of the hydrogen confined phases. Microporous and Mesoporous Materials, 72(1–3), 143.
Golub, G. H., & Van Loan, C. F. (1996). Matrix computations (3rd ed.). Baltimore: Johns Hopkins University Press.
Golub, G. H., Heath, M., & Wahba, G. (1979). Generalized cross validation as a method for choosing a good ridge parameter. Technometrics, 21(2), 215.
Guardani, R., Onimaru, R. S., & Crespo, F. C. A. (2001). Neural network model for the on-line monitoring of a crystallization process. Brazilian Journal of Chemical Engineering, 18(3).
Hastie, T. J., & Tibshirani, R. J. (1990). Generalized additive models (1st ed.). London: Chapman and Hall.
Haykin, S. (1999). Neural networks: A comprehensive foundation (2nd ed.). New Jersey: Prentice Hall.
Himmelblau, D. M., & Hoskins, J. C. (1988). Artificial neural network models of knowledge representation in chemical engineering. Computers and Chemical Engineering, 12, 881.
Hunt, K. J., Sbarbaro, D., Zbikowski, R., & Gawthrop, P. J. (1992). Neural networks for control systems—A survey. Automatica, 28(6), 1083.
Iliuta, I., & Lavric, V. (1999). Two-phase downflow and upflow fixed-bed reactors hydrodynamics modeling using artificial neural network. Chem. Ind., 53(6), 76.
Jagiello, J., Bandosz, T. J., & Schwarz, J. A. (1996). Characterization of microporous carbons using adsorption at near ambient temperatures. Langmuir, 12, 2837.
Lastoskie, C. M., & Gubbins, K. E. (2001). Characterization of porous materials using molecular theory and simulation. Advances in Chemical Engineering, 28, 203.
Lee, C. K., Chiang, A. S. T., & Tsay, C. S. (1996). The characterization of porous solids from gas adsorption measurements. Key Engineering Materials, 115, 21.
Lin, H. Y., Chen, W., & Tsutsumi, A. (2003). Long-term prediction of nonlinear hydrodynamics in bubble columns by using artificial neural networks. Chemical Engineering and Processing, 42(8/9).
Moussatov, A., Ayrault, C., & Castagnede, B. (2001). Porous material characterization—Ultrasonic method for estimation of tortuosity and characteristic length using a barometric chamber. Ultrasonics, 39, 195.
Nascimento, C. A. O., Oliveros, E., & Braun, A. M. (1994). Neural network modeling for photochemical processes. Chemical Engineering and Processing, 33, 319.
Nascimento, C. A. O., Guardani, R., & Giulietti, M. (1997). Use of neural networks in the analysis of particle size distributions by laser diffraction. Powder Technology, 90, 89.
Nascimento, C. A. O., Giudici, R., & Guardani, R. (2000). Neural network based approach for optimization of industrial chemical processes. Computers and Chemical Engineering, 24, 2303.
Patrick, J. W. (1995). Porosity in carbons: Characterization and applications. London: Edward Arnold.
Poggio, T., & Girosi, F. (1990a). Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247, 978.
Poggio, T., & Girosi, F. (1990b). Networks for approximation and learning. Proceedings of the IEEE, 78, 1481.
Russel, B. P., & LeVan, M. D. (1994). Pore size distribution of BPL activated carbon determined by different methods. Carbon, 32, 845.
Shahsavand, A. (2000). Optimal and adaptive radial basis function neural networks. PhD thesis, University of Surrey, UK.
Shahsavand, A. (2003). A novel method for predicting the optimum width of the isotropic Gaussian Regularization networks. In Proceedings of the ICNN2003.
Shahsavand, A., & Ahmadpour, A. (2005). An optimal regularization network for hypersurface reconstruction. Neurocomputing, submitted for publication.
Shaikh, A., & Al-Dahhan, M. (2003). Development of an artificial neural network correlation for prediction of overall gas holdup in bubble column reactors. Chemical Engineering and Processing, 42(8/9).
Stoeckli, H. F., Kraehenbuehel, F., Ballerinin, L., & Bernardini, S. (1989). Recent developments in the Dubinin equation. Carbon, 27, 125.
Szombathely, M. V., Brauer, P., & Jaroniec, M. (1992). The solution of adsorption integral equations by means of the regularization method. Journal of Computational Chemistry, 13(1), 17.
Tarca, L. A., Grandjean, P. A., & Larachi, F. V. (2003). Reinforcing the phenomenological consistency in artificial neural network modeling of multiphase reactors. Chemical Engineering and Processing, 42(8/9).
Venkatasubramanian, V., & Chan, K. (1989). A neural network methodology for process fault diagnosis. AIChE Journal, 35, 1993.
Watanabe, K., Matsuura, I., Abe, M., Kubota, M., & Himmelblau, D. M. (1989). Incipient fault diagnosis of chemical engineering processes via artificial neural networks. AIChE Journal, 35(11), 1803.