IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS, VOL. 30, NO. 1, FEBRUARY 2000
Optimal Design of CMAC Neural-Network Controller for Robot Manipulators
Young H. Kim and Frank L. Lewis, Fellow, IEEE
Abstract—This paper is concerned with the application of quadratic optimization for motion control to feedback control of robotic systems using cerebellar model arithmetic computer (CMAC) neural networks. Explicit solutions to the Hamilton–Jacobi–Bellman (H–J–B) equation for optimal control of robotic systems are found by solving an algebraic Riccati equation. It is shown how the CMACs can cope with nonlinearities through optimization, with no preliminary off-line learning phase required. The adaptive-learning algorithm is derived from Lyapunov stability analysis, so that both system-tracking stability and error convergence can be guaranteed in the closed-loop system. The filtered-tracking error or critic gain and the Lyapunov function for the nonlinear analysis are derived from the user input in terms of a specified quadratic-performance index. Simulation results from a two-link robot manipulator show the satisfactory performance of the proposed control schemes even in the presence of large modeling uncertainties and external disturbances.

Index Terms—CMAC neural network, optimal control, robotic control.
I. INTRODUCTION
THERE has been some work related to applying optimal-control techniques to the nonlinear robotic manipulator. These approaches often combine feedback linearization and optimal-control techniques. Johansson [6] showed explicit solutions to the Hamilton–Jacobi–Bellman (H–J–B) equation for optimal control of robot motion and how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. Dawson et al. [5] used a general control law known as modified computed-torque control (MCTC) and quadratic optimal-control theory to derive a parameterized proportional-derivative (PD) form for an auxiliary input to the controller. However, in actual situations the robot dynamics is rarely known completely, and thus it is difficult to express real robot dynamics in exact mathematical equations or to linearize the dynamics with respect to the operating point.

Neural networks have been used for approximation of nonlinear systems, for classification of signals, and for associative memory. For control engineers, the approximation capability of neural networks is usually used for system identification or identification-based control. More work is now appearing on the use of neural networks in direct, closed-loop controllers that yield guaranteed performance [13]. The robotic application of neural-network-based, closed-loop control can be found in [12]. For indirect or identification-based robotic-system control, several neural-network and learning schemes can be found in the literature. Most of these approaches consider neural networks as very general computational models. Although a pure neural-network approach without knowledge of the robot dynamics may be promising, it is important to note that this approach is not very practical due to the high dimensionality of the input–output space. The training or off-line learning process required by pure connectionist models would therefore demand a neural network of impractical size and an unreasonable number of repetition cycles. The pure connectionist approach also has poor generalization properties.

In this paper, we propose a nonlinear optimal-design method that integrates linear optimal-control techniques and CMAC neural-network learning methods. Linear optimal control has an inherent robustness against a certain range of model uncertainties [9]. However, nonlinear dynamics cannot be taken into consideration in linear optimal-control design. We use the CMAC neural networks to adaptively estimate nonlinear uncertainties, yielding a controller that can tolerate a wider range of uncertainties. The salient feature of this H–J–B control design is that we can use a priori knowledge of the plant dynamics as the system equation in the corresponding linear optimal-control design. The neural network is used to improve performance in the face of unknown nonlinearities by adding nonlinear effects to the linear optimal controller.

The paper is organized as follows. In Section II, we review some fundamentals of the CMAC neural networks. In Section III, we give a new control design for rigid robot systems using the H–J–B equation. In Section IV, a CMAC controller combined with the optimal-control signal is proposed. In Section V, a two-link robot controller is designed and simulated in the face of large uncertainties and external disturbances.

Manuscript received June 2, 1997; revised June 23, 1999. This research was supported by NSF Grant ECS-9521673. The authors are with the Automation and Robotics Research Institute, University of Texas at Arlington, Fort Worth, TX 76118-7115 USA (e-mail: [email protected]; [email protected]).

II. BACKGROUND
Let $\mathbb{R}$ denote the real numbers, $\mathbb{R}^n$ the real $n$-vectors, and $\mathbb{R}^{m \times n}$ the real $m \times n$ matrices. We define the norm of a vector $x$ as $\|x\| = \sqrt{x^T x}$ and the induced norm of a matrix $A$ as $\|A\| = \sqrt{\lambda_{\max}(A^T A)}$, where $\lambda_{\max}(\cdot)$ and $\lambda_{\min}(\cdot)$ are the largest and smallest eigenvalues of a matrix. The absolute value is denoted as $|\cdot|$. Given $A, B \in \mathbb{R}^{m \times n}$, the Frobenius norm is defined by $\|A\|_F^2 = \mathrm{tr}(A^T A)$, with $\mathrm{tr}(\cdot)$ the trace operator. The associated inner product is $\langle A, B \rangle_F = \mathrm{tr}(A^T B)$. The Frobenius norm is compatible with the two-norm, so that $\|Ax\| \le \|A\|_F \|x\|$ with $A \in \mathbb{R}^{m \times n}$ and $x \in \mathbb{R}^n$.
Fig. 1. Architecture of a CMAC neural network.
A. CMAC Neural Networks

Fig. 1 shows the architecture and operation of the CMAC. The CMAC can be used to approximate a nonlinear mapping $f(x): X \to Y$, where $x \in X$ is a point in the $n$-dimensional input space and $f(x) \in Y$ is the corresponding point in the $m$-dimensional output space. The CMAC algorithm consists of two primary mappings for determining the value of a complex function, as shown in Fig. 1:

$R: X \to A, \qquad P: A \to Y$  (1)

where
$X$   continuous $n$-dimensional input space;
$A$   $N_A$-dimensional association space;
$Y$   $m$-dimensional output space.

The function $R(x)$ is fixed and maps each point $x$ in the input space $X$ onto the association space $A$. The function $P(a)$ computes an output $y \in Y$ by projecting the association vector determined by $R(x)$ onto a vector of adjustable weights $w$ such that

$y = P(a) = w^T a$  (2)

where $a = R(x)$ in (1) is the multidimensional receptive-field function.

1) Receptive-Field Function: Given $x = [x_1, \ldots, x_n]^T \in X$, let $[x_{i,\min}, x_{i,\max}]$ be the domain of interest along the $i$th input dimension. For this domain, select integers $N_i$ and strictly increasing partitions

$\pi_i: \; x_{i,\min} = \lambda_{i,1} < \lambda_{i,2} < \cdots < \lambda_{i,N_i} = x_{i,\max}.$

For each component of the input space, the receptive-field basis function $\mu_{i,j}(x_i)$ can be defined as rectangular [1], triangular [4], or any continuously bounded function, e.g., Gaussian [3].

2) Multidimensional Receptive-Field Functions: Given any $x \in X$, the multidimensional receptive-field functions are defined as the products of the one-dimensional basis functions

$\phi_{j_1 \cdots j_n}(x) = \mu_{1,j_1}(x_1)\,\mu_{2,j_2}(x_2)\cdots\mu_{n,j_n}(x_n)$  (3)

with $j_k = 1, \ldots, N_k$, $k = 1, \ldots, n$. The output of the CMAC is given by

$y_i = \sum_{j_1=1}^{N_1}\cdots\sum_{j_n=1}^{N_n} w_{i,\,j_1\cdots j_n}\,\phi_{j_1\cdots j_n}(x), \qquad i = 1, \ldots, m$  (4)

where
$w$   output-layer weight values;
$\phi$   continuous, multidimensional receptive-field function;
$N_p$   number of association points.

The effect of the receptive-field basis function type and the partition number along each dimension on the CMAC performance has not yet been systematically studied. The output of the CMAC can be expressed in vector notation as

$y = W^T \phi(x)$  (5)

where $W \in \mathbb{R}^{N_p \times m}$ is the matrix of adjustable weight values and $\phi(x) \in \mathbb{R}^{N_p}$ is the vector of receptive-field functions. Based on the approximation property of the CMAC, there exist ideal weight values $W$ so that the function to be approximated can be represented as

$f(x) = W^T \phi(x) + \varepsilon(x)$  (6)

with $\varepsilon(x)$ the "functional reconstruction error," bounded so that $\|\varepsilon(x)\| \le \varepsilon_N$.
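For concreteness, a minimal numerical sketch of the forward mapping (3)–(5) is given below, assuming Gaussian one-dimensional receptive fields on uniform partitions; the function names, the shared width parameter, and the partition choices are illustrative assumptions rather than values taken from this paper.

```python
import numpy as np

def gaussian_rf(xi, centers, width):
    """One-dimensional Gaussian receptive fields mu_{i,j}(x_i), the factors in (3)."""
    return np.exp(-((xi - centers) ** 2) / (2.0 * width ** 2))

def cmac_output(x, centers_per_dim, width, W):
    """CMAC forward pass y = W^T phi(x), following (3)-(5).

    x               : (n,) input vector
    centers_per_dim : list of n arrays holding the partition points of each input
    width           : receptive-field width shared by all fields (an assumed choice)
    W               : (Np, m) weight matrix, Np = product of the partition sizes
    """
    # One-dimensional activations for every input component.
    mus = [gaussian_rf(x[i], c, width) for i, c in enumerate(centers_per_dim)]
    # Multidimensional receptive fields as products of the 1-D ones, as in (3).
    phi = mus[0]
    for mu in mus[1:]:
        phi = np.outer(phi, mu).ravel()
    return W.T @ phi  # vector form (5)

# Example: two inputs with five partitions each and two outputs.
centers = [np.linspace(-1.0, 1.0, 5), np.linspace(-1.0, 1.0, 5)]
W = np.zeros((25, 2))  # weights may be initialized at zero (cf. Remark 4 in Section IV)
y = cmac_output(np.array([0.3, -0.2]), centers, width=0.5, W=W)
```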
Then, an estimate of $f(x)$ can be given by

$\hat{f}(x) = \hat{W}^T \phi(x)$  (7)

where $\hat{W}$ are estimates of the ideal weight values. The Lyapunov method is applied to derive reinforcement adaptive-learning rules for the weight values. Since these adaptive-learning rules are formulated from the stability analysis of the controlled system, the system performance can be guaranteed for closed-loop control.

B. Robot Arm Dynamics and Properties

The dynamics of an $n$-link robot manipulator may be expressed in the Lagrange form [9]

$M(q)\ddot{q} + V_m(q,\dot{q})\dot{q} + G(q) + F_v\dot{q} + F_c(\dot{q}) + \tau_d = \tau$  (8)

with
$q \in \mathbb{R}^n$   joint variable;
$M(q)$   inertia matrix;
$V_m(q,\dot{q})$   Coriolis/centripetal forces;
$G(q)$   gravitational forces;
$F_v$   diagonal matrix of viscous friction coefficients;
$F_c(\dot{q})$   Coulomb friction coefficients;
$\tau_d$   external disturbances.

The external control torque applied to each joint is $\tau$. Given a desired trajectory $q_d(t)$, the tracking errors are

$e = q_d - q, \qquad \dot{e} = \dot{q}_d - \dot{q}$  (9)

and the instantaneous performance measure is defined as

$r = Kx$  (10)

where $K \in \mathbb{R}^{n \times 2n}$ is the constant-gain matrix or critic (not necessarily symmetric) and $x = [e^T \;\; \dot{e}^T]^T$. The robot dynamics (8) may be written as

$M(q)\ddot{e} + V_m(q,\dot{q})\dot{e} = h(x_d) - \tau$  (11)

where the robot nonlinear function is

$h(x_d) = M(q)\ddot{q}_d + V_m(q,\dot{q})\dot{q}_d + G(q) + F_v\dot{q} + F_c(\dot{q}) + \tau_d$  (12)

and, for instance,

$x_d = [e^T \;\; \dot{e}^T \;\; q_d^T \;\; \dot{q}_d^T \;\; \ddot{q}_d^T]^T.$  (13)

This key function $h(\cdot)$ captures all the unknown dynamics of the robot arm. Now define a control-input torque as

$\tau = h(x_d) - u$  (14)

with $u$ an auxiliary control input to be optimized later. The closed-loop system becomes

$M(q)\ddot{e} + V_m(q,\dot{q})\dot{e} = u.$  (15)

Property 1—Inertia: The inertia matrix $M(q)$ is uniformly bounded

$m_1 I \le M(q) \le m_2 I, \qquad 0 < m_1 \le m_2.$  (16)

Property 2—Skew Symmetry: The matrix

$\dot{M}(q) - 2V_m(q,\dot{q})$  (17)

is skew-symmetric.
III. OPTIMAL COMPUTED-TORQUE-CONTROLLER DESIGN

A. H–J–B Optimization

Define the velocity-error dynamics

$\frac{d}{dt}\dot{e} = -M^{-1}(q)V_m(q,\dot{q})\dot{e} + M^{-1}(q)u.$  (18)

The following augmented system is obtained:

$\frac{d}{dt}\begin{bmatrix} e \\ \dot{e} \end{bmatrix} = \begin{bmatrix} 0 & I \\ 0 & -M^{-1}(q)V_m(q,\dot{q}) \end{bmatrix}\begin{bmatrix} e \\ \dot{e} \end{bmatrix} + \begin{bmatrix} 0 \\ M^{-1}(q) \end{bmatrix} u$  (19)

or, with shorter notation,

$\dot{x} = A(x,t)x + B(x,t)u$  (20)

with $x = [e^T \;\; \dot{e}^T]^T \in \mathbb{R}^{2n}$, $A(x,t) \in \mathbb{R}^{2n \times 2n}$, and $B(x,t) \in \mathbb{R}^{2n \times n}$. A quadratic performance index is as follows:

$J(u) = \int_{t_0}^{\infty} L(x,u)\,dt$  (21)

with the Lagrangian

$L(x,u) = \tfrac{1}{2}x^T Q x + \tfrac{1}{2}u^T R u, \qquad Q = Q^T \ge 0, \;\; R = R^T > 0.$  (22)

Given the performance index $J(u)$, the control objective is to find the auxiliary control input $u$ that minimizes (21) subject to the differential constraints imposed by (19). The optimal control that achieves this objective will be denoted by $u^*$. It is worth noting for now that only the part of the control input to the robotic system denoted by $u$ in (14) is penalized. This is reasonable from a practical standpoint, since the gravity, Coriolis, and friction-compensation terms in (12) cannot be modified by the optimal-design phase.

A necessary and sufficient condition for $u^*$ to minimize (21) subject to (20) is that there exist a function $V = V(x,t)$ satisfying the H–J–B equation [10]

$\frac{\partial V}{\partial t} + \min_{u} H\!\left(x,u,\frac{\partial V}{\partial x},t\right) = 0$  (23)

where the Hamiltonian of optimization is defined as

$H\!\left(x,u,\frac{\partial V}{\partial x},t\right) = \frac{\partial V}{\partial x}^{T}\bigl(A(x,t)x + B(x,t)u\bigr) + L(x,u)$  (24)

and $V(x,t)$ is referred to as the value function. It satisfies the partial differential equation

$-\frac{\partial V}{\partial t} = \min_{u} H\!\left(x,u,\frac{\partial V}{\partial x},t\right).$  (25)

The minimum is attained for the optimal control $u^*(t)$, and the Hamiltonian is then given by

$H^*\!\left(x,\frac{\partial V}{\partial x},t\right) = H\!\left(x,u^*,\frac{\partial V}{\partial x},t\right) = \min_{u} H\!\left(x,u,\frac{\partial V}{\partial x},t\right).$  (26)
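As a concrete illustration of the augmented error system (19), (20) and the Lagrangian (22), the sketch below builds $A(x,t)$ and $B(x,t)$ from the inertia and Coriolis/centripetal matrices evaluated at the current configuration. It assumes those matrices are supplied by a robot model; the function names are illustrative and not part of the paper.

```python
import numpy as np

def augmented_dynamics(M, Vm, e, edot, u):
    """Error-state dynamics xdot = A(x,t) x + B(x,t) u of (19)-(20).

    M and Vm are the inertia and Coriolis/centripetal matrices evaluated at the
    current (q, qdot); obtaining them requires a robot model, so here they are
    simply passed in.
    """
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [np.zeros((n, n)), -Minv @ Vm]])
    B = np.vstack([np.zeros((n, n)), Minv])
    x = np.concatenate([e, edot])
    return A @ x + B @ u, A, B

def lagrangian(x, u, Q, R):
    """Instantaneous cost L(x, u) = 1/2 x'Qx + 1/2 u'Ru of (22)."""
    return 0.5 * x @ Q @ x + 0.5 * u @ R @ u
```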
Lemma 1: The following function, composed of the critic gain $K$ and a positive symmetric matrix $P$,

$V(x,t) = \tfrac{1}{2}x^T P x$  (27)

satisfies the H–J–B equation, where $K$ in (10) and $P$ in (27) can be found from the Riccati differential equation

$-\dot{P} = Q + PA + A^T P - P B R^{-1} B^T P.$  (28)

The optimal control that minimizes (21) subject to (20) is

$u^* = -R^{-1}B^T(x,t)Px = -R^{-1}Kx = -R^{-1}r.$  (29)

See Appendix A for the proof.

Theorem 1: Let the symmetric weighting matrices $Q$ and $R$ be chosen such that

(30)

Then the $K$ and $P$ required in Lemma 1 can be determined from the following relations:

(31)

(32)

with (32) solved for $P$ using Lyapunov-equation solvers (e.g., MATLAB [15]). See Appendix B for the proof.

Remarks:
1) In order to guarantee positive definiteness of the constructed matrix $P$, the following inequality [7] must be satisfied:

(33)

2) With the optimal-feedback control law $u^*$ calculated using Theorem 1, the torques $\tau$ to apply to the robotic system are calculated according to the control input

$\tau = h(x_d) - u^*$  (34)

where $h(\cdot)$ is given by (12). It is referred to as an optimal computed-torque controller (OCTC).

B. Stability Analysis

Theorem 2: Suppose that matrices $K$ and $P$ exist that satisfy the hypotheses of Lemma 1 and, in addition, there exist constants $p_1$ and $p_2$ such that the spectrum of $P$ is bounded in the sense that $0 < p_1 I \le P \le p_2 I$ on the domain of interest. Then using the feedback control $u^*$ in (29) and (20) results in the controlled nonlinear system

$\dot{x} = \bigl(A(x,t) - B(x,t)R^{-1}K\bigr)x.$  (35)

This system is globally exponentially stable (GES) with respect to the origin $x = 0$.

Proof: The quadratic function $V(x,t) = \tfrac{1}{2}x^T P x$ is a suitable Lyapunov function candidate because it is positive, radially growing with $\|x\|$, continuous, and has a unique minimum at the origin of the error space. It remains to show that $\dot{V}(x,t) < 0$ for all $x \ne 0$. From the solution of the H–J–B equation (A12), it follows that

$\dot{V} = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x}^{T}\bigl(A(x,t)x + B(x,t)u^*\bigr) = -L(x,u^*).$  (36)

Substituting (29) into (36) gives

$\dot{V} = -\tfrac{1}{2}x^T\bigl(Q + K^T R^{-1} K\bigr)x.$  (37)

The time derivative of the Lyapunov function is negative definite, and the assertion of the theorem then follows directly from the properties of the Lyapunov function [9].

IV. CMAC NEURAL-CONTROLLER DESIGN

The block diagram in Fig. 2 shows the major components that embody the CMAC neural controller. The external control torques to the joints are composed of the optimal-feedback control law given in Theorem 1 plus the CMAC neural-network output components. The nonlinear robot function can be represented by a CMAC neural network

$h(x_d) = W^T\phi(x_d) + \varepsilon(x_d)$  (38)

where $\phi(\cdot)$ is the multidimensional receptive-field function for the CMAC. Then a functional estimate of $h(x_d)$ can be written as

$\hat{h}(x_d) = \hat{W}^T\phi(x_d).$  (39)

The external torque is given by

$\tau = \hat{h}(x_d) - u^* + v$  (40)

where $v$ is a robustifying vector. Then (11) becomes

$M(q)\ddot{e} + V_m(q,\dot{q})\dot{e} = u^* + \tilde{W}^T\phi(x_d) + \varepsilon(x_d) - v$  (41)

with $\tilde{W} = W - \hat{W}$ the weight-estimation error.
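The paper obtains $K$ and $P$ from the relations of Theorem 1 with a Lyapunov-equation solver; as a rough numerical illustration of the same Riccati machinery, the sketch below freezes the state-dependent matrices of (19) at one configuration and solves the corresponding algebraic Riccati equation with SciPy. All numerical values are placeholders and not the paper's design.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Inertia and Coriolis matrices frozen at one configuration (placeholder numbers).
M  = np.array([[2.0, 0.3],
               [0.3, 0.5]])
Vm = np.array([[0.0, -0.1],
               [0.1,  0.0]])
n = M.shape[0]
Minv = np.linalg.inv(M)

A = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.zeros((n, n)), -Minv @ Vm]])
B = np.vstack([np.zeros((n, n)), Minv])

Q = np.diag([100.0, 100.0, 10.0, 10.0])  # user-specified weighting on x = [e; edot]
R = np.eye(n)                            # weighting on the auxiliary input u

P = solve_continuous_are(A, B, Q, R)     # algebraic Riccati solution, cf. (28)
K_critic = B.T @ P                       # critic gain, cf. (10) and (29)

x = np.array([0.05, -0.02, 0.0, 0.1])    # example error state [e; edot]
u_star = -np.linalg.solve(R, K_critic @ x)  # optimal feedback (29)
```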
Fig. 2. CMAC neural controller based on the H–J–B optimization.
The state-space description of (41) can be given by

$\dot{x} = A(x,t)x + B(x,t)\bigl(u + \tilde{W}^T\phi(x_d) + \varepsilon(x_d) - v\bigr)$  (42)

with $x$, $A(x,t)$, and $B(x,t)$ given in (19) and (20). Inserting the optimal-feedback control law (29) into (42), we obtain

$\dot{x} = \bigl(A(x,t) - B(x,t)R^{-1}K\bigr)x + B(x,t)\bigl(\tilde{W}^T\phi(x_d) + \varepsilon(x_d) - v\bigr).$  (43)

Theorem 3: Let the control action $u^*$ be provided by the optimal controller (29), with the robustifying term given by

(44)

and $r$ defined as the instantaneous performance measure (10). Let the adaptive-learning rule for the neural-network weights be given by

(45)

with $\Gamma = \Gamma^T > 0$ an adaptation-gain matrix and $\kappa > 0$ a scalar design gain. Then the errors $x$, $r$, and $\tilde{W}$ are uniformly ultimately bounded (UUB). Moreover, the errors $x$ and $r$ can be made arbitrarily small by adjusting the weighting matrices.

Proof: Consider the following Lyapunov function:

$V = \tfrac{1}{2}x^T P x + \tfrac{1}{2}\,\mathrm{tr}\bigl(\tilde{W}^T\Gamma^{-1}\tilde{W}\bigr)$  (46)

where $P$ is positive definite and symmetric, given by (31). The time derivative of the Lyapunov function becomes

$\dot{V} = \tfrac{1}{2}\dot{x}^T P x + \tfrac{1}{2}x^T P\dot{x} + \tfrac{1}{2}x^T\dot{P}x + \mathrm{tr}\bigl(\tilde{W}^T\Gamma^{-1}\dot{\tilde{W}}\bigr).$  (47)

Evaluating (47) along the trajectory of (43) yields

(48)

Using the Riccati equation (28), we have

(49)

Then the time derivative of the Lyapunov function becomes

(50)

Applying the robustifying term (44) and the adaptive-learning rule (45), we obtain

(51)

The following inequality is used in the previous derivation:

$\mathrm{tr}\bigl(\tilde{W}^T(W - \tilde{W})\bigr) \le \|\tilde{W}\|_F\|W\|_F - \|\tilde{W}\|_F^2.$  (52)

Completing the square terms yields

(53)
which is guaranteed negative as long as either (54) or (55) holds

(54)

(55)

where the bounds appearing in (54) and (55) define the convergence regions. According to a standard Lyapunov-theory extension [11], this demonstrates uniform ultimate boundedness of $x$, $r$, and $\tilde{W}$.

Remarks:
1) The OCTC is globally asymptotically stable if $h(x_d)$ is fully known, whereas the neural-adaptive controller is UUB. In both cases, the tracking errors converge. UUB is a notion of stability in the practical sense that is usually sufficient for the performance of closed-loop systems, provided that the bound on the system states is small enough.
2) Robotic manipulators are subject to structured and/or unstructured uncertainties in all applications. Structured uncertainty is defined as the case of a correct dynamical model but with parameter uncertainty due to tolerance variations in the manipulator-link properties, unknown loads, and so on. Unstructured uncertainty describes the case of unmodeled dynamics, which result from the presence of high-frequency modes in the manipulator and nonlinear friction. The adaptive optimizing feature of the proposed neural controller is suitable even without full knowledge of the system dynamics.
3) From Barron's results [2], there exist lower bounds, whose order depends on the input dimension, on the approximation error if only the parameters of a linear combination of fixed basis functions are adjusted. Our stability proof shows that the effect of the bounds on the approximation error can be alleviated by a judicious choice of the weighting matrices $Q$ and $R$.
4) It is emphasized that the neural-weight values may be initialized at zero, and stability will be maintained by the optimal controller $u^*$ in the performance-measurement loop until the neural network learns. This means that there is no off-line learning or trial-and-error phase, which often requires a long time in other works.
5) The advantage of the CMAC control scheme over other existing neural-network architectures is that the number of adjustable parameters (i.e., weight values) is significantly smaller, since only the weights in the output layer are adjusted. It is therefore very suitable for closed-loop control.
V. SIMULATION RESULTS

The dynamic equations for a two-link manipulator can be found in [9]. The cost functional to be minimized is

(56)

The external disturbance and friction forces are

(57)

(58)

where sgn(·) is the signum function. The weighting matrices are as follows:

(59)

Solving the matrices $K$ and $P$ using MATLAB [15] yields

(60)

The motion problem considered is for the robot end-effector to track a point on a circle of radius 0.05 m centered at a fixed point in the workspace, which turns 1/2 revolution per second in slow motion and two revolutions per second in fast motion. It has been pointed out that control-system performance may be quite different in low-speed and high-speed motion; therefore, we carry out our simulation for both circular trajectories. The desired positions in slow motion are

(61)

and the high-speed position profiles are

(62)

By solving the inverse kinematics, we obtain the desired joint-angle trajectories. The responses of the OCTC, where all nonlinearities are exactly known, are shown in Fig. 3 without disturbances and friction. The simulation was performed in both slow and fast motion. After a transient due to the error in the initial conditions, the position errors tend asymptotically toward zero.

To show the effect of unstructured uncertainties, we dropped a term in the gravity forces. The simulation results for slow motion are shown in Fig. 4(a). Note that there is a steady-state error with the OCTC. Fig. 4(b) shows the effect of external disturbances and friction forces, which are difficult to model and compensate. This is corrected by adding a CMAC neural network, characterized by:
• the number of input spaces;
• the number of partitions for each space;
• the number of association points;
• the receptive-field basis functions;
• the learning rate in the weight-tuning law;
• a simulation time of 20 s.
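The sketch below generates circular end-effector profiles of the kind used in (61) and (62) and converts them to joint angles through inverse kinematics. The circle center, the link lengths, and the elbow-down solution are assumptions for illustration; the paper specifies only the 0.05-m radius and the 1/2 rev/s and 2 rev/s rotation rates.

```python
import numpy as np

def circle_trajectory(t, rev_per_s, center=(0.30, 0.05), radius=0.05):
    """Desired Cartesian end-effector position on a circle, cf. (61)-(62).

    The center coordinates are placeholders; only the radius and the rotation
    rates come from the paper.
    """
    w = 2.0 * np.pi * rev_per_s
    xd = center[0] + radius * np.cos(w * t)
    yd = center[1] + radius * np.sin(w * t)
    return xd, yd

def two_link_ik(xd, yd, l1=0.30, l2=0.25):
    """Elbow-down inverse kinematics of a planar two-link arm (assumed geometry)."""
    c2 = (xd**2 + yd**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(yd, xd) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

t = np.linspace(0.0, 20.0, 2001)                 # 20-s simulation horizon
xd, yd = circle_trajectory(t, rev_per_s=0.5)     # slow motion; use 2.0 for fast motion
qd = np.array([two_link_ik(x, y) for x, y in zip(xd, yd)])
```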
Fig. 3. Performance of OCTC (34): (a) tracking error for slow motion and (b) tracking error for fast motion (solid: joint 1, dotted: joint 2).
Fig. 4. Performance of OCTC (34): (a) tracking error with modeling error for slow motion and (b) tracking error with disturbance and friction for slow motion (solid: joint 1, dotted: joint 2).
Fig. 5. Performance of CMAC neural network controller (40): (a) tracking error for slow motion and (b) tracking error for fast motion (solid: joint 1, dotted: joint 2).
The results in Figs. 5 and 6 clearly show the ability of the CMAC neural-network controller to overcome uncertainties, both structured and unstructured. Note that the problem observed in Fig. 4 with the OCTC does not arise here, even though all the nonlinearities are assumed unknown to the CMAC neural controller.
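For readers who wish to reproduce results of this kind, the following skeleton outlines one way the closed-loop simulation could be organized. The `robot` and `controller` objects, their method names, and the Euler integration are assumptions made for illustration; the paper does not provide its simulation code.

```python
import numpy as np

def simulate(robot, controller, qd_fun, T=20.0, dt=1e-3):
    """Closed-loop simulation skeleton for the controller structure of Fig. 2.

    `robot` must provide M(q), C(q, qdot), G(q), friction(qdot), and disturbance(t);
    `controller.torque(...)` is expected to combine the CMAC feedforward estimate,
    the optimal feedback u* = -R^{-1} K x, and the robustifying term, and to update
    the CMAC weights each step. Both objects are placeholders.
    """
    n_steps = int(T / dt)
    q = np.zeros(2)
    qdot = np.zeros(2)
    log = []
    for k in range(n_steps):
        t = k * dt
        qd, qd_dot, qd_ddot = qd_fun(t)
        e, edot = qd - q, qd_dot - qdot
        tau = controller.torque(q, qdot, qd, qd_dot, qd_ddot, e, edot, dt)
        # Forward dynamics: M qddot = tau - C qdot - G - friction - disturbance.
        qddot = np.linalg.solve(
            robot.M(q),
            tau - robot.C(q, qdot) @ qdot - robot.G(q)
                - robot.friction(qdot) - robot.disturbance(t),
        )
        qdot = qdot + dt * qddot   # simple Euler integration
        q = q + dt * qdot
        log.append((t, e.copy()))
    return log
```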
VI. CONCLUSION

We have developed a hierarchical, intelligent control scheme for a robotic manipulator using the H–J–B optimization process and the CMAC neural network. It has been shown that the entire closed-loop system behavior depends on the user-specified performance-index weights $Q$ and $R$, through the critic-gain matrix $K$. The Lyapunov function for the stability of the overall system is automatically generated by the weighting matrices. In the derivation of the optimal computed-torque controller, it has been assumed that the nonlinearities in the robotic manipulator are completely known. However, even with knowledge of the nonlinearities, it is difficult to achieve the control objective in the presence of modeling uncertainties and frictional forces.
Fig. 6. Performance of CMAC neural-network controller (40): (a) tracking error with disturbance and friction for fast motion and (b) tracking error with mass variation (m: 2.3 → 4.0 kg at 5 s, m: 4.0 → 2.3 kg at 12 s) with disturbance and friction for fast motion (solid: joint 1, dotted: joint 2).
The salient feature of the CMAC neural-H–J–B design is that the control objective is attained with completely unknown nonlinearities in the robotic manipulator. The proposed neural-adaptive learning exhibits both robustness and adaptation to changing system dynamics. To that end, a critic signal is incorporated into the adaptive-learning scheme. The application potential of the proposed methodology lies in control design in areas such as robotics and flight control and in motion-control analysis (e.g., of biomechanics).

APPENDIX A
PROOF OF LEMMA 1

The lemma claims that the H–J–B equation

$\frac{\partial V}{\partial t} + \min_{u} H\!\left(x,u,\frac{\partial V}{\partial x},t\right) = 0$  (A1)

is satisfied for a function

$V(x,t) = \tfrac{1}{2}x^T P x$  (A2)

since

(A3)

To derive the optimal-control law, the partial derivatives of the function $V$ need to be evaluated. Here, we have the time derivative of the function

(A4)

The gradient of $V$ with respect to the error state $x$ is

(A5)

with

(A6)

In (A6), $0$ denotes a zero vector, and the matrix-derivative notation represents the matrix whose elements are the partial derivatives of the elements of the indicated matrix with respect to its argument. A candidate for the Hamiltonian (24) is the sum of (A5) and the Lagrangian (22). Now we are ready to evaluate how $H$ depends on $u$. The $u$ for which $H$ has its minimum value is obtained from the partial derivative with respect to $u$. Since $u$ is unconstrained, (A3) requires that

$\frac{\partial H}{\partial u} = 0$  (A7)

which gives a candidate for the optimal control

$u^* = -R^{-1}B^T(x,t)\frac{\partial V}{\partial x}.$  (A8)

We know that

(A9)

given (A8). Inserting (A5) and (A6) into (A8) gives

(A10)

Notice that the relation

(A11)

is used.
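To make the minimization step between (A7) and (A8) explicit, using only the Hamiltonian (24) and the Lagrangian (22) as written above, the unconstrained stationarity condition gives

$\frac{\partial H}{\partial u} = R u + B^T\frac{\partial V}{\partial x} = 0 \;\Longrightarrow\; u^* = -R^{-1}B^T\frac{\partial V}{\partial x}$

and $\frac{\partial^2 H}{\partial u^2} = R > 0$, so the stationary point is indeed a minimum of the Hamiltonian.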
A necessary and sufficient condition for optimality is that the chosen value function satisfies (23). Substituting (24) into (23) yields

(A12)

where it is understood that the partial derivatives of $V$ in (A12) are being evaluated along the optimal control $u^*$. Inserting (A4) into (A12), we obtain

(A13)
Inserting (20), (22), and (A10) into (A13) gives

(A14)

which can then be written as

(A15)

We can summarize by stating that if a matrix $P$ can be found that satisfies (A15), then the value function given in (A2) satisfies the H–J–B equation (A1). In this case, the desired optimal control is given by (A10). Note that if the matrix $P$ satisfies the algebraic Riccati equation (28), then $P$ satisfies (A15). This completes the proof.

APPENDIX B
PROOF OF THEOREM 1

From Lemma 1, it is known that

(A16)

solves the H–J–B equation for the system (20), solving the matrix equation obtained from the quadratic form

(A17)

The optimal-feedback control law that minimizes the performance index is

(A18)

Let the weighting matrices be given by (30). Inserting the expressions for the matrices $A$ and $B$ in (20) and $P$ in (27) into (A2), we have

(A19)

Whence, the application of robot Property 2, (17), shows that the matrices of (31) and (32) solve the algebraic Riccati equation

(A20)

This completes the proof.

REFERENCES

[1] J. S. Albus, "A new approach to manipulator control: The cerebellar model articulation controller (CMAC)," J. Dynamic Syst., Meas., Contr., vol. 97, no. 3, pp. 220–227, 1975.
[2] A. R. Barron, "Universal approximation bounds for superpositions of a sigmoidal function," IEEE Trans. Inform. Theory, vol. 39, pp. 930–945, Mar. 1993.
[3] C.-T. Chiang and C.-S. Lin, "CMAC with general basis functions," Neural Networks, vol. 9, no. 7, pp. 1199–1211, 1996.
[4] S. Commuri, F. L. Lewis, S. Q. Zhu, and K. Liu, "CMAC neural networks for control of nonlinear dynamical systems," Proc. Neural, Parallel and Scientific Computing, vol. 1, pp. 119–124, 1995.
[5] D. Dawson, M. Grabbe, and F. L. Lewis, "Optimal control of a modified computed-torque controller for a robot manipulator," Int. J. Robot. Automat., vol. 6, no. 3, pp. 161–165, 1991.
[6] R. Johansson, "Quadratic optimization of motion coordination and control," IEEE Trans. Automat. Contr., vol. 35, pp. 1197–1208, Nov. 1990.
[7] D. E. Koditschek, "Quadratic Lyapunov functions for mechanical systems," Yale Univ., New Haven, CT, Tech. Rep. 703, Mar. 1987.
[8] S. H. Lane, D. A. Handelman, and J. J. Gelfand, "Theory and development of higher-order CMAC neural networks," IEEE Contr. Syst. Mag., vol. 12, pp. 23–30, Apr. 1992.
[9] F. L. Lewis, C. T. Abdallah, and D. M. Dawson, Control of Robot Manipulators. New York: Macmillan, 1993.
[10] F. L. Lewis and V. L. Syrmos, Optimal Control, 2nd ed. New York: Wiley, 1995.
[11] K. S. Narendra and A. M. Annaswamy, "A new adaptive law for robust adaptation without persistent excitation," IEEE Trans. Automat. Contr., vol. AC-32, pp. 134–145, Feb. 1987.
[12] F. L. Lewis, A. Yesildirek, and K. Liu, "Multilayer neural-net robot controller with guaranteed tracking performance," IEEE Trans. Neural Networks, vol. 7, pp. 388–399, Mar. 1996.
[13] M. M. Polycarpou, "Stable adaptive neural control scheme for nonlinear systems," IEEE Trans. Automat. Contr., vol. 41, pp. 447–451, Mar. 1996.
[14] Y.-F. Wong and A. Sideris, "Learning convergence in the cerebellar model articulation controller," IEEE Trans. Neural Networks, vol. 3, pp. 115–121, Jan. 1992.
[15] MATLAB User's Guide, Control System Toolbox. Natick, MA: The MathWorks, 1990.
Young Ho Kim was born in Taegu, Korea, in 1960. He received the B.S. degree in physics from Korea Military Academy in 1983, the M.S. degree in electrical engineering from the University of Central Florida, Orlando, in 1988, and the Ph.D. degree in electrical engineering from the University of Texas at Arlington, Fort Worth, in 1997. From 1994 to 1997, he was a Research Assistant at the Automation and Robotics Research Institute, University of Texas, Arlington. He has published extensively in the fields of feedback control using neural networks and fuzzy systems. He authored the book High-Level Feedback Control with Neural Networks. His research interests include optimal control, neural networks, dynamic recurrent neural networks, fuzzy-logic systems, real-time adaptive critics for intelligent control of robotics, and nonlinear systems. Dr. Kim received the Korean Army Overseas Scholarship. He received the Sigma Xi Doctoral Research Award in 1997. He is a member of Sigma Xi.
Frank L. Lewis (S'78–M'81–SM'86–F'94) was born in Würzburg, Germany. He received the B.S. degree in physics and electrical engineering and the M.S. degree in electrical engineering from Rice University, Houston, TX, in 1971. He received the M.S. degree in aeronautical engineering from the University of West Florida, Pensacola, in 1977. He received the Ph.D. degree from the Georgia Institute of Technology, Atlanta, in 1981. In 1981, he was employed as a Professor of Electrical Engineering with the University of Texas, Arlington. He spent six years in the United States Navy, serving as Navigator aboard the frigate USS Trippe (FF-1075) and Executive Officer and Acting Commanding Officer aboard USS Salinan (ATF-161). He has studied the geometric, analytic, and structural properties of dynamical systems and feedback control automation. His current interests include robotics, intelligent control, neural and fuzzy systems, nonlinear systems, and manufacturing process control. He is the author/coauthor of two U.S. patents, 124 journal papers, 20 chapters and encyclopedia articles, 210 refereed conference papers, and 7 books. Dr. Lewis is a registered Professional Engineer in the State of Texas and was selected to the Editorial Boards of International Journal of Control, Neural Computing and Applications, and International Journal of Intelligent Control Systems. He is the recipient of an NSF Research Initiation Grant and has been continuously funded by NSF since 1982. Since 1991, he has received $1.8 million in funding from NSF and upwards of $1 million in SBIR/industry/state funding. He was awarded the Moncrief-O'Donnell Endowed Chair in 1990 at the Automation and Robotics Research Institute, Arlington, TX. He received a Fulbright Research Award, the American Society of Engineering Education F. E. Terman Award, three Sigma Xi Research Awards, the UTA Halliburton Engineering Research Award, the UTA University-Wide Distinguished Research Award, the ARRI Patent Award, various Best Paper Awards, the IEEE Control Systems Society Best Chapter Award, and the National Sigma Xi Award for Outstanding Chapter (as President). He was selected as Engineer of the Year in 1994 by the Ft. Worth, TX, IEEE Section. He was appointed to the NAE Committee on Space Station in 1995 and to the IEEE Control Systems Society Board of Governors in 1996. In 1998, he was selected as an IEEE Control Systems Society Distinguished Lecturer. He is a Founding Member of the Board of Governors of the Mediterranean Control Association.