Feasibility and Stability of Constrained Finite Receding Horizon Control

James A. Primbs (Control and Dynamical Systems, California Institute of Technology, Pasadena, California 91125, USA)
Vesna Nevistic (Automatic Control Laboratory, Swiss Federal Institute of Technology (ETH), CH-8092 Zurich, Switzerland)

Corresponding author: J. A. Primbs, phone (626) 395-8419, fax (626) 796-8914, email [email protected]. Supported by NSF.
Abstract: Issues of feasibility and stability are considered for a finite horizon formulation of receding horizon control for linear systems under mixed linear state and control constraints. We prove that given any compact set of initial conditions that is feasible for the infinite horizon problem, there exists a finite horizon length above which a receding horizon policy will provide both feasibility and stability, even when no end or stability constraint is imposed. Finally, computations for determining a sufficient horizon length are carried out on a simple open-loop stable example under control saturation constraints.

Key words: predictive control; optimal control; linear systems; state space; constraints; discrete time.
1 Introduction

The search for a satisfactory methodology that explicitly incorporates constraints in optimal control design is perhaps the most fundamental of the open research areas remaining in control theory. Receding Horizon Control (RHC) is one approach which explicitly addresses the problem of constraints. Unfortunately, theoretical aspects associated with stability
and performance properties of receding horizon control have proven to be challenging. The problem of stability has generally been approached from two points of view. The first is that of designing stabilizing receding horizon controllers, with the two most well known approaches being the use of an end constraint [Kwon and Pearson, 1977] or an infinite horizon formulation [Rawlings and Muske, 1993]. The second line of research has been to characterize theoretical properties of receding horizon control, with the goal being a deeper and more fundamental understanding. Examples of this line of work include [Mayne and Michalska, 1990], [Zafiriou, 1990], [Chen and Shaw, 1982], [Keerthi and Gilbert, 1988], [Alamir and Bornard, 1995], [Bitmead et al., 1989], [Shamma and Xiong, 1997], and [De Nicolao et al., 1998].

This paper falls under the second category. We tackle the fundamental questions relating to the ability of receding horizon control based on a finite horizon quadratic program to stabilize a constrained linear system. Previous results have explored similar issues, usually under some assumption of end constraints ([Mayne and Michalska, 1990], [Keerthi and Gilbert, 1988], and [Chen and Shaw, 1982]), an extension to an infinite horizon ([Chmielewski and Manousiouthakis, 1996], [Scokaert and Rawlings, 1996], [De Nicolao et al., 1998], and [Chen and Allgower, 1996]), or optimizations over a limited set of control actions [Alamir and Bornard, 1995]. This paper considers the most basic constrained finite horizon scheme, without extraneous constraints or restrictions. We prove that given any compact set of initial conditions on which the infinite horizon problem is feasible and its cost finite, there exists a finite horizon length such that the corresponding receding horizon scheme remains feasible and stabilizes the constrained system from those initial conditions.

Specifically, consider discrete-time linear systems subject to mixed linear state and control constraints:
x(k+1) = Ax(k) + Bu(k),   x(0) = x_0,    (1)

subject to:

Ex(k) + Fu(k) ≤ ψ,    (2)

where x(k) ∈ ℝ^n and u(k) ∈ ℝ^m denote the state and control, respectively, with A ∈ ℝ^{n×n} and B ∈ ℝ^{n×m}. The constraints are written in vector form with ψ ∈ ℝ^q, E ∈ ℝ^{q×n} and F ∈ ℝ^{q×m}. A popular design paradigm for linear time-invariant systems is linear-quadratic (LQ) optimal control, and it provides the foundation for the formulation of the standard receding horizon scheme. The LQ optimal control problem may be posed in either an infinite or a finite horizon framework, with the following objectives:
Infinite Horizon Objective:

J*(x_0) = inf_{u(·)} Σ_{k=0}^{∞} [ x^T(k) Q x(k) + u^T(k) R u(k) ],    (3)

subject to the system dynamics (1) and constraint (2). When it is impossible to satisfy the constraints (2) over the infinite horizon, we will resort to the convention of defining J*(x_0) = ∞.
Finite Horizon Formulation: The corresponding finite horizon problem is defined by the objective function:
J_N(x_0) = inf_{u(·)} [ x^T(N) P_0 x(N) + Σ_{k=0}^{N−1} ( x^T(k) Q x(k) + u^T(k) R u(k) ) ],    (4)

subject to the system dynamics (1) and constraint (2). Similar to the infinite horizon case, we define J_N(x_0) = ∞ when the constraints are infeasible over the horizon length N starting from the initial condition x_0. P_0 is a positive definite matrix referred to as a terminal weight.
Receding Horizon Formulation: A receding horizon approach proceeds by solving the finite horizon LQ problem J_N at each time step and implementing only the first control action. We will denote the receding horizon policy based on the cost J_N by û_N.
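For concreteness, the receding horizon scheme just described can be written as a convex program solved at every time step, of which only u(0) of the minimizer is applied. The following is a minimal sketch, not the authors' implementation: it assumes a convex solver such as cvxpy is available, and all function and variable names are illustrative.

import numpy as np
import cvxpy as cp

def rh_control(A, B, E, F, psi, Q, R, P0, N, x0):
    """Solve the finite horizon problem (4) from state x0 and return u(0)."""
    n, m = B.shape
    x = cp.Variable((n, N + 1))
    u = cp.Variable((m, N))
    cost = cp.quad_form(x[:, N], P0)                 # terminal term x(N)^T P0 x(N)
    constr = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],   # dynamics (1)
                   E @ x[:, k] + F @ u[:, k] <= psi]           # constraints (2)
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None                                  # J_N(x0) = infinity (infeasible)
    return u[:, 0].value                             # implement only the first action

Iterating x(k+1) = A x(k) + B û_N(x(k)) with such a routine produces the closed-loop receding horizon trajectories studied in the remainder of the paper.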
Assumptions and Definitions

We will make the following assumptions concerning the constrained optimal control problem:
(i) Q > 0, R > 0.
(ii) [A, B] is controllable.
(iii) P_0 = Q. This implies that the finite horizon costs J_N are monotonically non-decreasing in N.
(iv) The origin is an interior point of the constraint set.
Note that due to the convexity of the cost and constraints, the infinite horizon cost J*(x) will be convex and hence continuous on the set of points where J* is finite. Furthermore, it is easily seen that J_N(x) is positive definite for any N since J_N(x) ≥ x^T Q x.
For a set W, let W° denote its interior, W̄ its closure, and W^c its complement. Given two sets W and V, define W − V = W ∩ V^c.
S_α will denote the sub-level set of the infinite horizon cost J*(x) at level α > 0, i.e.:

S_α = {x : J*(x) ≤ α}.    (5)
Finally, recall that a set is invariant under dynamics if, from any initial condition in the set, the trajectory produced by those dynamics remains in the set for all positive time. We will use the following definition.
Definition 1 A set W is said to be RH N-invariant if W is an invariant set under the closed-loop system using the receding horizon (RH) controller of horizon length N, i.e. W is RH N-invariant iff

x(k) ∈ W ⇒ x(k+1) = Ax(k) + B û_N(x(k)) ∈ W.

The paper proceeds as follows. First we address the issue of feasibility. Due to the "short sightedness" of a finite horizon formulation, states can be feasible for the finite horizon optimization while at the same time no feasible stabilizing trajectory exists over the infinite horizon. In Section 2 we prove (Thm. 3) that with a long enough horizon, a receding horizon controller will never lead to such states. With feasibility over the infinite horizon guaranteed, the main result (Thm. 7) is given in Section 3, where we prove that stability is always achievable over any compact set of infinite horizon feasible initial conditions by the use of a finite horizon. The idea behind the proof is standard in that we consider a finite horizon cost as a candidate Lyapunov function; the key is to demonstrate that the finite horizon costs have the convergence properties (Lemma 5) which make the Lyapunov argument carry through. A simple example is provided in Section 4 which demonstrates a computational technique for determining both a sufficient horizon for stability and a bound on the resulting infinite horizon performance.
2 Feasibility and Constraints

Due to the use of a finite horizon, feasibility of the infinite horizon problem can become a serious concern in the implementation of receding horizon policies. Finite receding horizon policies may drive the state into regions of state space from which the infinite horizon optimal control problem is unsolvable. The goal of this section is to classify the region from which the constrained optimal control problem is solvable and then show that there exists a horizon above which receding horizon policies always remain in this feasible region.
The feasible region

We begin with a characterization of the set of points from which the optimal infinite horizon cost is finite. While this topic has been covered before ([Zheng and Morari, 1995] and [Gilbert and Tan, 1991]), we include the following for completeness. The feasible region can be characterized by the following "backward" recursion. Beginning from the origin, the set of points that can reach the origin in a single step while satisfying the constraints is classified. Next, we consider the set of points that may reach the previous set in a single step. This process is carried out ad infinitum as given below:

(1) Let I_0 = {0}.
(2) Take I_{k+1} to be I_{k+1} = {x : ∃u, Ex + Fu ≤ ψ, Ax + Bu ∈ I_k}.

Define:

I_∞ = ∪_{k=0}^{∞} I_k.

Then the following is true (see the appendix for a proof):
Theorem 2 x ∈ I_∞ ⇔ J*(x) < ∞. Hence, I_∞ is the set from which the infinite horizon LQ problem admits a finite solution.
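Theorem 2 also suggests a practical membership test: x ∈ I_K exactly when some control sequence drives the state to the origin in K steps without violating (2), which is a single convex feasibility program; taking K large then gives an inner approximation of I_∞. The sketch below is illustrative only (cvxpy and the function name are assumptions, not part of the paper).

import numpy as np
import cvxpy as cp

def in_I_K(A, B, E, F, psi, x0, K):
    """Return True if x0 can be steered to the origin in K steps feasibly, i.e. x0 in I_K."""
    n, m = B.shape
    x = cp.Variable((n, K + 1))
    u = cp.Variable((m, K))
    constr = [x[:, 0] == x0, x[:, K] == 0]            # start at x0, end at the origin
    for k in range(K):
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   E @ x[:, k] + F @ u[:, k] <= psi]
    # any objective works here; minimizing control energy keeps the problem well scaled
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u)), constr)
    prob.solve()
    return prob.status in ("optimal", "optimal_inaccurate")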
Feasibility

While I_∞ classifies the region from which the optimal control problem has a finite solution, the question still remains as to whether a receding horizon policy can prevent the state from leaving the feasible set I_∞. The following theorem, whose proof can be found in the appendix, provides the answer.
Theorem 3 Let α > 0 be fixed and consider the sub-level set S_α of J*(x) (eqn. (5)). Then there exists an N′ such that for any N ≥ N′, S_α is RH N-invariant (i.e. S_α is an invariant set under the closed-loop dynamics using a receding horizon controller of horizon length N).
Apart from being an important result in its own right, the previous theorem is a critical step toward the goal of establishing the stability of finite receding horizon policies. By choosing a sufficiently long horizon, it allows us to restrict our attention to the sub-level sets S_α of J*(x), where infinite horizon feasibility is guaranteed and stability analysis is tractable. This task is undertaken in the next section.
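Before moving on, note that the invariance asserted by Theorem 3 can be probed numerically for a concrete system: sample states whose (approximate) infinite horizon cost is at most α, apply one receding horizon step, and check that the successor still has cost at most α. The sketch below is only an illustration of that check, not the paper's procedure; cvxpy, the long-horizon approximation of J*, the sampling box, and all function names are assumptions.

import numpy as np
import cvxpy as cp

def finite_horizon(A, B, E, F, psi, Q, R, N, x0):
    """Return (J_N(x0), u(0)); (inf, None) if the horizon-N problem is infeasible. P0 = Q."""
    n, m = B.shape
    x, u = cp.Variable((n, N + 1)), cp.Variable((m, N))
    cost = cp.quad_form(x[:, N], Q)                        # terminal weight P0 = Q
    constr = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   E @ x[:, k] + F @ u[:, k] <= psi]
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return np.inf, None
    return prob.value, u[:, 0].value

def probe_invariance(A, B, E, F, psi, Q, R, alpha, N, n_samples=200, M=60, seed=0):
    """Count sampled states with J_M(x) <= alpha whose horizon-N RH successor violates that bound."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    violations = 0
    for _ in range(n_samples):
        x0 = rng.uniform(-3, 3, size=n)                     # crude sampling box
        J0, _ = finite_horizon(A, B, E, F, psi, Q, R, M, x0)   # J_M approximates J*
        if J0 > alpha:                                      # not (approximately) in S_alpha
            continue
        _, u0 = finite_horizon(A, B, E, F, psi, Q, R, N, x0)
        x1 = A @ x0 + B @ u0                                # one receding horizon step
        J1, _ = finite_horizon(A, B, E, F, psi, Q, R, M, x1)
        if J1 > alpha:
            violations += 1
    return violations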
3 Main Result

In this section, we prove the main result concerning the stability of constrained finite receding horizon policies. Our approach is standard in that we consider a finite horizon cost (J_{N−1}) as a Lyapunov function. This converts the question of stability to a question about the convergence properties of the finite horizon costs. Hence, below we establish the required convergence properties as preliminaries. With these, the proof of stability over any compact set of infinite horizon feasible initial conditions by a sufficiently long finite horizon policy (Thm. 7) follows directly.
3.1 Preliminaries
Our approach to stability will rely on convergence properties of the finite horizon costs J_N. In particular, the following theorem from real analysis will be a key tool:
Theorem 4 (Dini) Let φ_n be a sequence of upper semi-continuous real-valued functions on a compact set X, and suppose that for each x ∈ X the sequence φ_n(x) decreases monotonically to zero. Then φ_n converges to zero uniformly (cf. [Royden, 1988]).
By our choice of terminal weight P_0 = Q (assumption (iii)), the finite horizon costs J_N(x) increase monotonically in N to J*(x) on any compact set contained in I_∞. Recognizing this, one may apply Dini's theorem to the functions J*(x) − J_N(x) to show that J_N(x) − J_{N−1}(x) converges uniformly to zero. Unfortunately, we will need a stronger convergence result, as given in the following lemma:
Lemma 5 Let I be a compact subset of I_∞. Define

φ_N(x) = (J*(x) − J_N(x)) / (x^T Q x),   ∀x ≠ 0, x ∈ I,

and φ_N(0) = lim sup_{x→0} φ_N(x). Then φ_N converges uniformly to zero in I.
PROOF. Each φ_N is upper semi-continuous by definition, and it is clear that for x ≠ 0, φ_N(x) → 0 as N → ∞. We must still verify the behaviour at the origin, i.e. that φ_N(0) → 0 as N → ∞. By assumption (iv), in a small enough neighborhood of the origin the constraints will not be active for the optimal unconstrained control action. Hence J*(x) = x^T P x and J_N(x) = x^T P_N x, where P is the solution to the algebraic Riccati equation and P_N solves the Riccati difference equation [Bertsekas, 1976]. So

φ_N(x) = x^T (P − P_N) x / (x^T Q x) ≤ σ_max(P − P_N) ‖x‖² / (σ_min(Q) ‖x‖²) = σ_max(P − P_N) / σ_min(Q) → 0

since P_N → P from standard LQ theory. So φ_N is a sequence of upper semi-continuous functions that converge pointwise and monotonically to zero on a compact set. Hence by Dini's theorem, the convergence is uniform.

Finally, we establish a technical result concerning the compactness of the sub-level sets of J*(x).
Lemma 6 Let S_α be the sub-level set (5) of the infinite horizon cost J*; then S_α is compact.
PROOF. Since Q > 0 and J*(x) ≥ x^T Q x, it is clear that S_α is a bounded set. Hence we only need to show that it is closed. Let x_k be a sequence in S_α with limit x_∞. Then either J*(x_∞) = lim J*(x_k) ≤ α, which shows that x_∞ ∈ S_α, or x_∞ ∉ I_∞. If x_∞ ∉ I_∞, then there must exist an N such that J_N(x_∞) > α. If J_N(x_∞) = ∞, then by Lemma 10 there exists a neighborhood of x_∞ which is infeasible for J_N, contradicting the fact that x_k converges to x_∞ (since each x_k ∈ I_∞). On the other hand, if J_N(x_∞) is finite with J_N(x_∞) > α, then by the continuity of J_N there exists an x_k close enough to x_∞ so that J_N(x_k) > α, contradicting x_k ∈ S_α. Hence x_∞ ∈ S_α, showing that S_α is closed.

3.2 Stability
Given the preliminaries of the preceding section, we now establish the main result, which states that given a compact set of initial conditions, there exists a horizon length (independent of the initial condition) which will stabilize every initial condition using a receding horizon policy.
Theorem 7 Let I be a compact subset of I_∞. Then there exists an N* such that for all N ≥ N*, the receding horizon policy û_N is asymptotically stabilizing for any initial condition in I.
PROOF. Define α = max_{x∈I} J*(x). Once again let S_α denote the sub-level set (5) of J*(x). We only consider N ≥ N′, where N′ is as given in Theorem 3; hence we know that S_α is RH N-invariant. We would like to use J_{N−1} as a Lyapunov function for the receding horizon controller û_N. Consider the following equation:
J_{N−1}(x(k)) − J_{N−1}(x(k+1)) = [x^T(k) Q x(k) − (J_N(x(k)) − J_{N−1}(x(k)))] + û_N^T(x(k)) R û_N(x(k)),    (6)
which follows from the principle of optimality [Bertsekas, 1976]. It is clear that if x^T(k) Q x(k) − (J_N(x(k)) − J_{N−1}(x(k))) > 0 for all nonzero x(k) ∈ S_α, then the right hand side of (6) is positive and J_{N−1} is a Lyapunov function, proving stability. Now consider the sequence of functions φ_N(x) = (J*(x) − J_N(x)) / (x^T Q x). By Lemma 5, this sequence converges uniformly to zero on S_α. So there exists an N* ≥ N′ such that for all N ≥ N* and all nonzero x ∈ S_α, φ_{N−1}(x) < 1, or equivalently

x^T Q x > J*(x) − J_{N−1}(x) ≥ J_N(x) − J_{N−1}(x),

which is the desired inequality. By standard Lyapunov theorems [Vidyasagar, 1993], J_{N−1} is a continuous, positive definite Lyapunov function and this proves asymptotic stability.
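The argument above ultimately rests on the convergence fact used in Lemma 5: near the origin J_N(x) = x^T P_N x, where P_N is generated by the Riccati difference equation started at P_0 = Q, and P_N → P, the solution of the algebraic Riccati equation. That convergence is easy to check numerically; the sketch below is purely illustrative and assumes scipy is available.

import numpy as np
from scipy.linalg import solve_discrete_are

def riccati_gap(A, B, Q, R, N_max):
    """Return sigma_max(P - P_N) for N = 0..N_max, with P_0 = Q (Q, R as arrays)."""
    P_inf = solve_discrete_are(A, B, Q, R)                  # algebraic Riccati solution P
    gaps, P = [], Q.copy()
    for _ in range(N_max + 1):
        gaps.append(np.linalg.norm(P_inf - P, 2))           # spectral norm of the gap
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # (R + B'PB)^{-1} B'PA
        P = Q + A.T @ P @ A - A.T @ P @ B @ K               # Riccati difference step
    return gaps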
Remark 8 End constraints represent the upper extreme for the possible choices of terminal weights (heuristically they may be thought of as the terminal weight P_0 = ∞ (cf. (4))). So far, we have limited the discussion to the terminal weight P_0 = Q (assumption (iii)). It should be mentioned that simple modifications of the above results hold for arbitrary positive definite P_0 between Q and ∞. These follow from basic sandwiching arguments (see [Primbs and Nevistic, 97] for details).
Remark 9 It can be shown that a bound for the infinite horizon performance of the finite receding horizon controller is obtainable through only finite horizon computations involving J_N. Furthermore, this establishes that the performance of the finite horizon controller converges to the performance of the optimal infinite horizon controller as the horizon N tends to infinity (see [Shamma and Xiong, 1997] for details). These calculations are demonstrated on a simple example in the following section.
4 Example: An open-loop stable system

In this section, we present one approach to the actual computation of a sufficient horizon for stability. Consider the following stable dynamics
x(k+1) = [ 4/3  −2/3 ; 1  0 ] x(k) + [ 1 ; 0 ] u(k)    (7)
subject to the saturation constraint |u| ≤ μ. We choose:
Q = [ 1  0 ; 0  1 ],   R = 1    (8)
for cost parameters in the receding horizon scheme corresponding to the finite horizon cost in (4), and consider the set of initial conditions
I = {(x_1, x_2) : |x_1| ≤ 2, |x_2| ≤ 2}.
By Theorem 7 we know that there exists a finite horizon length which will stabilize all initial conditions in I. We compute such a horizon length as follows. Let x^T P̄ x denote the infinite horizon cost of the open-loop (u ≡ 0) system, where P̄ is obtained from the discrete Lyapunov equation A^T P̄ A − P̄ + Q = 0. Then x^T P̄ x is an upper bound for J*(x) over the set I. If α = max_{x∈I} x^T P̄ x, then from any initial condition in I the state must be contained in the set W_α = {x : x^T Q x ≤ α} at the next step under a receding horizon policy. Furthermore, if the system is stable from any initial condition in I with horizon length N, then there will exist a subset of W_α which contains I and is RH N-invariant. Now, consider the following parameters:
γ_N = max_{x ∈ W_α − {0}} J_{N+1}(x) / J_N(x)   and   β_N = max_{x(0) ∈ W_α − {0}} J_N(x(1)) / J_N(x(0)),

where x(1) = Ax(0) + B û_N(x(0)) denotes the receding horizon successor of x(0).
If β_N < 1, then the receding horizon controller of horizon N is stable for all initial conditions in I. Additionally, with γ_{N−1} and β_N one can bound the performance of the receding horizon controller over the initial conditions in I. It is proved in [Shamma and Xiong, 1997] that J_N(x) ≤ J*(x) ≤ J_{û_N}(x) ≤ ζ_N J_N(x) with
ζ_N = 1 + ( (γ_{N−1} − 1) / γ_{N−1} ) ( β_N / (1 − β_N) ),
where J_{û_N} represents the infinite horizon cost of the implemented receding horizon controller.
In this example, computations were carried out for three different levels of saturation: μ = 1, μ = 0.5, and μ = 0.1. Specifically, γ_N and β_N were calculated using the nonlinear programming software NPSOL. Those calculations indicated that for all three levels of saturation, a horizon of N = 3 was sufficient for stability. To ensure performance within 5% of optimal, horizons of 7, 9, and 10 were found to be sufficient for the saturation levels μ = 1, μ = 0.5, and μ = 0.1, respectively.
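A rough version of these numbers can be reproduced without NPSOL by sampling W_α and solving the finite horizon problems with a convex solver. The sketch below is only an illustration: cvxpy and scipy are assumed available, the sampling scheme and sample sizes are arbitrary, and the sampled maxima are lower bounds on the true γ_N and β_N, so they can suggest, but not certify, a sufficient horizon.

import numpy as np
import cvxpy as cp
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[4/3, -2/3], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
Q = np.eye(2)
R = np.array([[1.0]])
mu = 1.0                                             # saturation level |u| <= mu

def J_N(x0, N):
    """Finite horizon cost (4) with P0 = Q and |u| <= mu; returns (value, u(0))."""
    x, u = cp.Variable((2, N + 1)), cp.Variable((1, N))
    cost = cp.quad_form(x[:, N], Q)
    constr = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[0, k]) <= mu]
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    return prob.value, u[:, 0].value

# alpha = max over I = {|x1|,|x2| <= 2} of x^T Pbar x, with A^T Pbar A - Pbar + Q = 0
# (discrete Lyapunov equation; A is open-loop stable).
Pbar = solve_discrete_lyapunov(A.T, Q)
corners = np.array([[s1 * 2.0, s2 * 2.0] for s1 in (-1, 1) for s2 in (-1, 1)])
alpha = max(c @ Pbar @ c for c in corners)           # a convex function on a box peaks at a corner

def estimate(N, n_samples=100, seed=0):
    """Sampled estimates of gamma_N and beta_N over W_alpha (lower bounds of the true maxima)."""
    rng = np.random.default_rng(seed)
    gamma, beta = 1.0, 0.0
    for _ in range(n_samples):
        x0 = rng.uniform(-1, 1, 2)
        x0 *= np.sqrt(alpha / (x0 @ Q @ x0)) * rng.uniform(0.05, 1.0)   # stay in W_alpha, avoid 0
        JN, u0 = J_N(x0, N)
        JN1, _ = J_N(x0, N + 1)
        x1 = A @ x0 + B @ u0                         # receding horizon successor
        JN_next, _ = J_N(x1, N)
        gamma = max(gamma, JN1 / JN)
        beta = max(beta, JN_next / JN)
    return gamma, beta

Estimates of γ_{N−1} and β_N obtained this way can be substituted into the expression for ζ_N above to gauge how close a given horizon comes to a desired performance margin.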
5 Concluding Remarks

It was proven that for any specified compact set of infinite horizon feasible initial conditions, a sufficiently long horizon will guarantee feasibility and stability of a constrained receding horizon policy, with the finite horizon cost acting as a Lyapunov function. The underlying concepts used to prove this result are all transferable to more general systems. In particular, it should be clear that the feasibility and stability theorems (Thms. 3 and 7) remain valid for nonlinear systems and general costs under appropriate modifications of the assumptions.
A Proof of Theorems 2 and 3

Proof of Theorem 2. Assume x ∈ I_∞; then there exists a K such that x ∈ I_K. Hence, by construction of the sets I_k, there exists a sequence of controls u(0), ..., u(K−1), satisfying the constraints, that brings the state to the origin, x(K) = 0. Applying u ≡ 0 thereafter, we have J*(x) ≤ Σ_{i=0}^{K−1} [x^T(i) Q x(i) + u^T(i) R u(i)] < ∞.

Now assume x ∉ I_∞. Since [A, B] is controllable, for n equal to the dimension of the state and from any point in a small enough neighborhood of the origin, we may perform deadbeat control (unsaturated, by assumption (iv)) that takes the state to the origin in at most n steps. Hence I_n contains a neighborhood of the origin. Now, since x ∉ I_∞, no sequence of controls exists such that the state ever enters I_n. Hence for all k, the state x(k) lies outside this neighborhood of the origin. Let δ > 0 be the minimum of x^T Q x outside of this neighborhood. Then J*(x) ≥ Σ_{k=0}^{∞} δ = ∞.
Before establishing the feasibility theorem, we state three preliminary results which are required for its proof.
Lemma 10 For any finite horizon length N, the constraints (2) define a closed set of feasible initial conditions in state space.
The next two lemmas concern where the state can lie after a single step of a RH policy. The first shows that it must lie in a bounded set, while the second establishes that for a long enough horizon it must lie in the feasible set I_∞.
Lemma 11 Let W_α = {x : x^T Q x ≤ α}. Then x(k) ∈ S_α ⇒ x(k+1) ∈ W_α under a RH policy of any horizon length N.
PROOF. Note that without loss of generality, we may assume x(k) = x(0). So for x(0) ∈ S_α, we have the following chain of inequalities:

α ≥ J*(x(0)) ≥ J_N(x(0)) ≥ x^T(0) Q x(0) + J_{N−1}(x(1)) ≥ J_{N−1}(x(1)) ≥ x^T(1) Q x(1),

which implies that x(1) ∈ W_α.
Lemma 12 There exists a finite horizon length N̄ such that, under a RH policy of any horizon N ≥ N̄, x(k) ∈ S_α implies x(k+1) ∈ I_∞.

PROOF. We recall that for x(0) ∉ I_∞ there exists a neighborhood of the origin which the state may never enter (cf. Theorem 2). Outside of this neighborhood, the minimum of x^T Q x is some δ > 0. Then clearly for x ∉ I_∞ we have the lower bound J_N(x) ≥ Nδ. Now, by merely choosing N̄ such that (N̄ − 1)δ > α, it is clear that for x(0) ∈ S_α and N ≥ N̄ we must have x(1) ∈ I_∞, since otherwise α ≥ J_N(x(0)) ≥ J_{N−1}(x(1)) ≥ (N−1)δ > α.
Proof of Theorem 3. The proof is divided into two steps. The first establishes that there exists a neighborhood of the origin from which the state will not leave S_α by the next step. By "removing" the origin in this fashion, a positive lower bound for x^T Q x can be obtained for the rest of S_α. This lower bound is used in the second portion of the proof, which uses the compactness of the set W_α and argues by contradiction to prove the result.
Step 1: First we show that there exists a smaller sub-level set contained in S_α, which we will denote S_ε with ε < α, such that if x(0) ∈ S_ε, then x(1) ∈ S_α. In other words, if the state is in S_ε, then there is no possibility of it leaving S_α on the next step.

Let ε be the largest number such that {x : x^T Q x ≤ ε} ⊆ S_α. To show that x(0) ∈ S_ε ⇒ x(1) ∈ S_α, we note that for x(0) ∈ S_ε:

ε ≥ J*(x(0)) ≥ J_N(x(0)) ≥ J_{N−1}(x(1)) ≥ x^T(1) Q x(1).

From the definition of ε this implies x(1) ∈ S_α.
Step 2: We are left to consider x(0) ∈ S_α − S_ε° and prove that a sufficiently long horizon will render S_α RH N-invariant. The proof proceeds by contradiction. Suppose there does not exist a horizon length N that makes S_α RH N-invariant. Then for every j, there exists a horizon length N_j ≥ j and an x_j(0) ∈ S_α − S_ε° such that x_j(1) = Ax_j(0) + B û_{N_j}(x_j(0)) ∉ S_α (i.e. x_j(1) ∈ W_α − S_α). For notational convenience, call x_j = x_j(1). Then x_j is a sequence in the compact set W_α (cf. Lemma 11). Therefore x_j has a convergent subsequence x_k → x_∞. Note the following properties of x_∞:
(a) J*(x_∞) ≥ α. Either x_∞ ∉ I_∞, so that J*(x_∞) = ∞, or J*(x_∞) = lim J*(x_k) ≥ α, since each x_k ∉ S_α. (Recall from Lemma 12 that for k large enough, so that N_k ≥ N̄, all the x_k lie in I_∞ and hence have well defined infinite horizon costs J*(x_k); so lim J*(x_k) is well defined.)

(b) For any Δ > 0, there exists an N such that ∞ > J_N(x_∞) > α − Δ. For assume this is not true. Then either (1) J_N(x_∞) ≤ α − Δ for all N, in which case J*(x_∞) ≤ α − Δ, which is not possible by (a), or (2) there exists a finite N for which x_∞ is not feasible for J_N. Then by Lemma 10 the constraints define a closed set of feasible initial conditions over any finite horizon N, and x_∞ is not in this closed set. Hence there exists an open neighborhood of x_∞ which is not feasible for horizons greater than or equal to N. This implies that x_k does not converge to x_∞, which is also a contradiction.
In particular, take Δ = min_{x ∈ S_α − S_ε°} x^T Q x > 0. Then, by property (b), there exists an N such that ∞ > J_N(x_∞) > α − Δ/2. By the continuity of J_N, we can choose k large enough so that N_k > N and J_{N_k−1}(x_k) ≥ J_N(x_k) > α − Δ/2. Recalling that x_k = x_k(1) = Ax_k(0) + B û_{N_k}(x_k(0)), and that x_k(0) ∈ S_α − S_ε°, we have the following:

α ≥ J*(x_k(0)) ≥ J_{N_k}(x_k(0)) = x_k^T(0) Q x_k(0) + û_{N_k}^T(x_k(0)) R û_{N_k}(x_k(0)) + J_{N_k−1}(x_k(1)) ≥ Δ + (α − Δ/2) = α + Δ/2,
which is a contradiction. This proves the theorem.
References

[Alamir and Bornard, 1995] M. Alamir and G. Bornard, "Stability of a truncated infinite constrained receding horizon scheme: The general discrete nonlinear case", Automatica, Vol. 31, No. 9, pp. 1353-1356, 1995.
[Bertsekas, 1976] D.P. Bertsekas, Dynamic Programming and Stochastic Control, Academic Press, New York, 1976.
[Bitmead et al., 1989] R.R. Bitmead, M. Gevers, and V. Wertz, "Optimal control redesign of generalized predictive control", in Proc. IFAC Symp. on Adaptive Systems in Control and Signal Processing, pp. 129-134, 1989.
[Chen and Allgower, 1996] H. Chen and F. Allgower, "A quasi-infinite horizon nonlinear model predictive control scheme for constrained nonlinear systems", in Proc. of the 16th Chinese Control Conference, Qindao, China, pp. 309-316, 1996.
[Chen and Shaw, 1982] C.C. Chen and L. Shaw, "On Receding Horizon Feedback Control", Automatica, Vol. 18, No. 3, pp. 349-352, 1982.
[Chmielewski and Manousiouthakis, 1996] D. Chmielewski and V. Manousiouthakis, "On Constrained Infinite-Time Linear Quadratic Optimal Control", Systems and Control Letters, Vol. 29, pp. 121-129, 1996.
[De Nicolao et al., 1998] G. De Nicolao, L. Magni, and R. Scattolini, "Stabilizing Receding-Horizon Control of Nonlinear Time-Varying Systems", IEEE Trans. on Automatic Control, 43(7): 1030-1036, 1998.
[Garcia et al., 1989] C.E. Garcia, D.M. Prett, and M. Morari, "Model Predictive Control: Theory and Practice - A Survey", Automatica, Vol. 25, No. 3, May 1989.
[Gilbert and Tan, 1991] E.G. Gilbert and K.T. Tan, "Linear systems with state and control constraints: The theory and application of maximal output admissible sets", IEEE Trans. on Automatic Control, 36(9): 1008-1020, 1991.
[Keerthi and Gilbert, 1988] S.S. Keerthi and E.G. Gilbert, "Optimal Infinite-Horizon Feedback Laws for a General Class of Constrained Discrete-Time Systems: Stability and Moving-Horizon Approximations", Journal of Optimization Theory and Applications, Vol. 57, No. 2, May 1988.
[Kwon and Pearson, 1977] W.H. Kwon and A.E. Pearson, "A modified quadratic cost problem and feedback stabilization of a linear system", IEEE Trans. Automatic Control, AC-22: 838-842, 1977.
[Mayne and Michalska, 1990] D. Mayne and H. Michalska, "Receding horizon control of nonlinear systems", IEEE Trans. Automatic Control, 35: 814-824, 1990.
[Primbs and Nevistic, 97] J. Primbs and V. Nevistic, "Constrained finite receding horizon linear quadratic control", CDS Technical Memo #CIT-CDS 97-002, January 1997.
[Rawlings and Muske, 1993] J.B. Rawlings and K.R. Muske, "The stability of constrained receding horizon control", IEEE Trans. Automatic Control, Vol. 38, No. 10: 1512-1516, 1993.
[Royden, 1988] H.L. Royden, Real Analysis, third edition, Macmillan, New York, 1988.
[Scokaert and Rawlings, 1996] P. Scokaert and J. Rawlings, "Infinite horizon linear quadratic control with constraints", in Proc. of the 13th IFAC World Congress, San Francisco, California, July 1996.
[Shamma and Xiong, 1997] J.S. Shamma and D. Xiong, "Linear Non-Quadratic Optimal Control", IEEE Trans. Automatic Control, Vol. 42, No. 6: 875-879, 1997.
[Vidyasagar, 1993] M. Vidyasagar, Nonlinear System Analysis, Prentice Hall, 1993.
[Zafiriou, 1990] E. Zafiriou, "Robust model predictive control of processes with hard constraints", Computers and Chemical Engineering, Vol. 14, No. 4/5: 359-371, 1990.
[Zheng and Morari, 1995] A. Zheng and M. Morari, "Control of Linear Unstable Systems with Constraints", in Proceedings of the American Control Conference, Seattle, Washington, pp. 3704-3708, June 1995.