“TAYLOR NOTES”
Math 126: Taylor Polynomials and Infinite Series
(An Epilogue to Single Variable Calculus)
K. P. Bube
University of Washington, Department of Mathematics, Seattle, WA 98195
March 2005
Introduction

In this “Epilogue to Single Variable Calculus,” we introduce two very useful concepts that are closely related to topics we have studied in Math 124 and Math 125. These concepts are used widely throughout mathematics, the sciences, and engineering. They are discussed in more detail in Chapter 11 of our Math 126 text (Calculus: Early Transcendentals, 5th Edition by James Stewart), and they are covered in depth in Math 327 and Math 328. In Math 126, we will not go into as much detail as Chapter 11 of Stewart does. The goal of these notes is to give you an introduction to these concepts, with an emphasis on aspects that are directly related to the calculus you have already studied.

The first topic is an introduction to Taylor polynomials. These are an extension of the concept of a tangent line. Taylor polynomials are higher-degree polynomials that give very good approximations to a function near a point x = a.

The second topic is infinite series. Infinite series are closely related to improper integrals. Recall that the value of an improper integral is defined by

∫_a^∞ f(x) dx = lim_{t→∞} ∫_a^t f(x) dx,

the limit of definite integrals on finite intervals. Similarly, we define the value of an “infinite series” to mean the limit of sums with a finite number of terms:

∑_{k=1}^∞ a_k = lim_{n→∞} ( ∑_{k=1}^n a_k ).
Not only is the definition of infinite series similar to the definition of improper integrals, but also one of the important tools for understanding infinite series, called the Integral Test, involves comparing infinite series with improper integrals. Part 3 of these notes brings these two topics together and considers the limit of the Taylor polynomials as the degree n goes to infinity.
Preliminary: Sigma Notation

A useful shorthand for writing sums, called sigma notation, uses Σ, a capital Greek letter sigma (corresponding to “S” for sum). For integers m and n with m ≤ n, the notation ∑_{k=m}^n a_k represents the sum

∑_{k=m}^n a_k = a_m + a_{m+1} + a_{m+2} + · · · + a_{n−1} + a_n.
For example,

∑_{k=2}^7 k^2 = 2^2 + 3^2 + 4^2 + 5^2 + 6^2 + 7^2 = 139.
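A sum like this is also easy to check numerically; the snippet below is a small sanity check of our own (not part of the original notes).

```python
# Verify the example: the sum of k^2 for k = 2, 3, ..., 7.
total = sum(k**2 for k in range(2, 8))   # range(2, 8) runs k = 2 through 7
print(total)                             # prints 139
```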
To know what the sum ∑_{k=m}^n a_k is, we need to know the value of each term a_k for all integers k satisfying m ≤ k ≤ n: if we know all of these values, we could then list them (a_m, a_{m+1}, a_{m+2}, . . . , a_{n−1}, a_n) and add them up.

Notice the great similarities between sums using the sigma notation and definite integrals ∫_a^b f(x) dx: the dummy index k in the sum plays the role of the dummy variable x in the definite integral. The lower index m plays the role of a and the upper index n plays the role of b. For the sum, the dummy variable k ranges over all integer values satisfying m ≤ k ≤ n; for the integral, the dummy variable x ranges over all real values satisfying a ≤ x ≤ b.
When we studied Riemann sums in Math 125, the sigma notation was useful in representing the Riemann sums. Although we will not really use Riemann sums in these notes (Riemann sums are, however, closely related to Figures 4 and 5), you may want to review briefly what you studied in Math 125 on Riemann sums to re-familiarize yourself with sigma notation.
Part 1    Epilogue to Tangent Lines: Taylor Polynomials

1.1  Taylor Polynomials
The tangent line to the graph of y = f(x) at the point x = a is the line going through the point (a, f(a)) that has slope f′(a). By the point-slope form of the equation of a line, its equation is

y − f(a) = f′(a)(x − a),   or   y = f(a) + f′(a)(x − a).
As you have seen in Math 124, the tangent line is a very good approximation to y = f(x) near x = a, as shown in Figure 1.

Figure 1. The line y = f(a) + f′(a)(x − a) tangent to the graph of y = f(x) at the point (a, f(a)).

We will give a name, T1(x), to the function corresponding to the tangent line:

T1(x) = f(a) + f′(a)(x − a).

For x near x = a, we have f(x) ≈ T1(x).
The tangent line function T1(x) is called the Taylor polynomial of degree one for f(x), centered at x = a. Notice that it satisfies the two conditions

T1(a) = f(a)   and   T1′(a) = f′(a).

In other words, T1(x) is the polynomial of degree one that has the same function value at x = a and the same first derivative value at x = a as the original function f(x).

We can get a better approximation T2(x) near x = a using a parabola (a polynomial of degree two). The formula for T2(x) is

T2(x) = f(a) + f′(a)(x − a) + (f″(a)/2)(x − a)^2.

T2(x) is called the Taylor polynomial of degree two for f(x), centered at x = a. Since T2′(x) = f′(a) + f″(a)(x − a) and T2″(x) = f″(a), T2 satisfies the three conditions

T2(a) = f(a),   T2′(a) = f′(a),   and   T2″(a) = f″(a).
In other words, T2(x) is the polynomial of degree two that has the same function value at x = a, the same first derivative value at x = a, and the same second derivative value at x = a as the original function f(x).

Example 1.1 Find the Taylor polynomials of degrees one and two for f(x) = e^x, centered at x = 0.

Solution: Since f(x) = f′(x) = f″(x) = e^x, we have f(0) = f′(0) = f″(0) = e^0 = 1, so the Taylor polynomial of degree one (the tangent line to y = e^x at the point (0, 1)) is

T1(x) = f(0) + f′(0)(x − 0) = 1 + x.

The Taylor polynomial of degree two (the parabola that best fits y = e^x near x = 0) is

T2(x) = f(0) + f′(0)(x − 0) + (f″(0)/2)(x − 0)^2 = 1 + x + x^2/2.
Figure 2. The Taylor polynomials T1(x) and T2(x) for f(x) = e^x, centered at x = 0. Notice that T2(x) does a better job of matching f(x) near x = 0.
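A quick numerical comparison (our own sketch, not part of the original notes) makes the improvement from T1 to T2 concrete:

```python
import math

def T1(x):
    return 1 + x                  # degree-one Taylor polynomial of e^x centered at 0

def T2(x):
    return 1 + x + x**2 / 2       # degree-two Taylor polynomial of e^x centered at 0

for x in (0.5, 0.1, 0.01):
    print(x, math.exp(x) - T1(x), math.exp(x) - T2(x))   # the T2 error is much smaller
```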
We can get an even better approximation T3(x) near x = a using a cubic (a polynomial of degree three). The formula for T3(x) is

T3(x) = f(a) + f′(a)(x − a) + (f″(a)/2)(x − a)^2 + (f‴(a)/6)(x − a)^3.

T3(x) is called the Taylor polynomial of degree three for f(x), centered at x = a. A short computation (Homework 1, problem 3) shows that T3 satisfies the four conditions

T3(a) = f(a),   T3′(a) = f′(a),   T3″(a) = f″(a),   and   T3‴(a) = f‴(a).

In other words, T3(x) is the polynomial of degree three that has the same function value at x = a, the same first derivative value at x = a, the same second derivative value at x = a, and the same third derivative value at x = a as the original function f(x).

Example 1.2 Find the Taylor polynomial of degree three for f(x) = sin x, centered at x = 5π/6.

Solution:

f(x) = sin x,      f(5π/6) = 1/2,
f′(x) = cos x,     f′(5π/6) = −√3/2,
f″(x) = −sin x,    f″(5π/6) = −1/2,
f‴(x) = −cos x,    f‴(5π/6) = √3/2.

The Taylor polynomial of degree three (the cubic that best fits y = sin x near x = 5π/6) is

T3(x) = f(5π/6) + f′(5π/6)(x − 5π/6) + (f″(5π/6)/2)(x − 5π/6)^2 + (f‴(5π/6)/6)(x − 5π/6)^3
      = 1/2 − (√3/2)(x − 5π/6) − (1/4)(x − 5π/6)^2 + (√3/12)(x − 5π/6)^3.
Figure 3. The Taylor polynomial T3(x) for f(x) = sin x, centered at x = 5π/6.
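As a rough check (again our own sketch, not from the notes), one can compare T3 against sin x near the center and watch the error shrink:

```python
import math

a = 5 * math.pi / 6

def T3(x):
    # degree-three Taylor polynomial of sin x centered at a = 5*pi/6
    h = x - a
    return 0.5 - (math.sqrt(3) / 2) * h - 0.25 * h**2 + (math.sqrt(3) / 12) * h**3

for h in (0.5, 0.1, 0.01):
    print(h, math.sin(a + h) - T3(a + h))   # the error shrinks roughly like h**4
```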
In general, the Taylor polynomial of degree n for f(x), centered at x = a, is

Tn(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)^2 + (f‴(a)/3!)(x − a)^3 + · · ·
        + (f^(n−1)(a)/(n − 1)!)(x − a)^(n−1) + (f^(n)(a)/n!)(x − a)^n,

where n! (read “n factorial”) is the product of the first n positive integers: n! = 1 · 2 · 3 · · · (n − 1) · n. For convenience, we define 0! = 1; then the formula for Tn(x) can be written using summation notation:

Tn(x) = ∑_{k=0}^n (f^(k)(a)/k!)(x − a)^k

(where f^(0) just means the function f itself). By a similar computation to that for T2(x) or for T3(x), it can be shown that Tn satisfies the n + 1 conditions

Tn(a) = f(a),   Tn′(a) = f′(a),   Tn″(a) = f″(a),   . . . ,   Tn^(n−1)(a) = f^(n−1)(a),   Tn^(n)(a) = f^(n)(a).

In other words, Tn(x) is the polynomial of degree n that has the same function value, first derivative value, second derivative value, etc., and nth derivative value at x = a as the original function f(x).

Example 1.3 Find the Taylor polynomial of degree n for f(x) = 1/(1 − x), centered at x = 0.

Solution:

f(x) = (1 − x)^(−1),               f(0) = 1,
f′(x) = 1 · (1 − x)^(−2),          f′(0) = 1,
f″(x) = 2 · 1 · (1 − x)^(−3),      f″(0) = 2!,
f‴(x) = 3 · 2 · 1 · (1 − x)^(−4),  f‴(0) = 3!,
etc.
f^(n)(x) = n! · (1 − x)^(−(n+1)),  f^(n)(0) = n!.

The Taylor polynomial of degree n for f(x) = 1/(1 − x), centered at x = 0, is

Tn(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 + · · · + (f^(n)(0)/n!)x^n
      = 1 + x + x^2 + x^3 + · · · + x^n.
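The summation formula translates directly into code. The sketch below is ours (the helper name taylor_poly is hypothetical, not from the notes); it evaluates Tn(x) given a function returning the k-th derivative at the center. For f(x) = 1/(1 − x) centered at 0, f^(k)(0) = k!.

```python
import math

def taylor_poly(deriv_at_a, a, n, x):
    # Evaluate T_n(x) = sum over k = 0..n of f^(k)(a)/k! * (x - a)^k
    return sum(deriv_at_a(k) / math.factorial(k) * (x - a)**k for k in range(n + 1))

# f(x) = 1/(1 - x) centered at a = 0: f^(k)(0) = k!, so T_n(x) = 1 + x + ... + x^n.
x = 0.5
print(taylor_poly(lambda k: math.factorial(k), 0, 10, x), 1 / (1 - x))  # 1.99902..., 2.0
```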
1.2  Taylor’s Inequality
In this section, we estimate the “remainder term” Rn(x) = f(x) − Tn(x), which is the difference between f(x) and the Taylor polynomial Tn(x) for f(x), centered at x = a. The size of Rn(x) tells us how good an approximation Tn(x) is to f(x): the smaller Rn(x) is, the closer Tn(x) is to f(x). To estimate the remainder term Rn(x), we need a formula for Rn(x). The formula we need is derived in the Appendix to these notes, using integration by parts repeatedly. A consequence of this formula is

Taylor’s Inequality
Suppose that f(x) has n + 1 continuous derivatives on an interval containing both a and x, and that |f^(n+1)(t)| ≤ M_{n+1} for all values of t between a and x. Then

|Rn(x)| ≤ (M_{n+1}/(n + 1)!) |x − a|^(n+1),

where Rn(x) = f(x) − Tn(x) is the remainder term and Tn(x) is the Taylor polynomial of degree n for f(x), centered at x = a.

The form of this upper bound for |Rn(x)| is easy to remember: if we were to add one more term to Tn(x) to get T_{n+1}(x) (both centered at x = a), this last term would be

(f^(n+1)(a)/(n + 1)!)(x − a)^(n+1);

if we were to take absolute values and replace f^(n+1)(a) by M_{n+1}, we would get the expression for the upper bound for |Rn(x)|.

Caution: This mnemonic device for remembering the form of the upper bound for |Rn(x)| does not mean that Rn(x) equals (f^(n+1)(a)/(n + 1)!)(x − a)^(n+1). The exact value of Rn(x) is (f^(n+1)(c)/(n + 1)!)(x − a)^(n+1) for some value of c between a and x; see Taylor’s Theorem with Remainder in the Appendix to these notes for details.
Notice that when x is very close to a, |x − a| is very small, so higher powers of |x − a| are much smaller than lower powers. For example, when |x − a| = 0.01, then |x − a|^2 = 0.0001 and |x − a|^3 = 0.000001. The [absolute value of the] remainder term is bounded by a constant times |x − a|^(n+1), which is one higher power of |x − a| than the degree n. Among all polynomials of degree n, the Taylor polynomial Tn(x) is the only one that has this property: any other polynomial differs from f(x) (for x very close to a) by at least a constant times |x − a|^n, which is bigger than a constant times |x − a|^(n+1). This explains why we call Tn(x) the polynomial of degree n that best fits f(x) near x = a.
Example 1.4 The line tangent to the curve y = sin x at (0, 0) is the line y = x. Give an upper bound for the difference between the curve and this tangent line for −0.1 ≤ x ≤ 0.1.

Solution: The line y = x is the Taylor polynomial T1(x) of degree one for the function f(x) = sin x, centered at x = 0. The difference between the curve and its tangent line is R1(x) = f(x) − T1(x). Since f″(x) = −sin x, we have |f″(x)| ≤ 1 for all x, so we can take M2 = 1. Using the upper bound for |Rn(x)| given above, we obtain for |x| ≤ 0.1

|sin x − x| = |f(x) − T1(x)| = |R1(x)| ≤ (M2/2!)|x − 0|^2 = |x|^2/2 ≤ (0.1)^2/2 = 0.005.
Example 1.5 Give an upper bound for the difference between the function f(x) = e^x and its Taylor polynomial T2(x) = 1 + x + x^2/2 (centered at x = 0) for 0 ≤ x ≤ 0.1 (see Figure 2).

Solution: Here, a = 0 and x > a and f‴(x) = e^x (which is positive and increasing for all x), so the maximum value of |f‴(x)| on the interval 0 ≤ x ≤ 0.1 is taken on at the right endpoint: M3 = max_{0 ≤ x ≤ 0.1} |f‴(x)| = e^0.1. Using the upper bound for |Rn(x)| given above, we obtain for 0 ≤ x ≤ 0.1

|f(x) − T2(x)| = |R2(x)| ≤ (M3/3!)|x − 0|^3 = (e^0.1 · |x|^3)/6 ≤ (e^0.1 · 0.001)/6 < 0.0001842.
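As an illustration (our own check, not part of the notes), one can compare the actual worst-case error on [0, 0.1] against this bound:

```python
import math

def T2(x):
    return 1 + x + x**2 / 2

M3 = math.exp(0.1)                          # max of |f'''(x)| = e^x on [0, 0.1]
bound = M3 * 0.1**3 / math.factorial(3)     # Taylor's Inequality bound for |R2(x)|
worst = max(abs(math.exp(0.1 * i / 1000) - T2(0.1 * i / 1000)) for i in range(1001))
print(worst, bound)                         # about 0.000171 versus about 0.000184
```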
Part 2    Epilogue to Improper Integrals: Infinite Series
Our second topic is infinite series. Infinite series are closely related to improper integrals. The value of an improper integral is the limit of definite integrals on finite intervals:

∫_a^∞ f(x) dx = lim_{t→∞} ∫_a^t f(x) dx.

Similarly, we define the value of an “infinite series” to mean the limit of the “partial sums” (sums with a finite number of terms):

∑_{k=1}^∞ a_k = lim_{n→∞} ( ∑_{k=1}^n a_k ).

We give a name, s_n, to the “partial sum”

s_n = ∑_{k=1}^n a_k = a_1 + a_2 + a_3 + · · · + a_{n−1} + a_n.

Then the value of the infinite series ∑_{k=1}^∞ a_k is the limit of the partial sums:

∑_{k=1}^∞ a_k = lim_{n→∞} s_n.
Not only is the definition of infinite series similar to the definition of improper integrals, but also many of the properties of infinite series are similar to corresponding properties of improper integrals. In addition, one of the important tools for understanding infinite series, called the Integral Test, involves comparing infinite series with improper integrals.

Remark on the word “convergent”: Throughout these notes, when we say that an improper integral or an infinite series is convergent or that it converges, we include the condition that the limit is finite. Sometimes in calculus, we use a notation such as

lim_{x→∞} e^x/x^2 = ∞

that makes it look like ∞ is a valid limit, and indeed this notation has a specific meaning (e^x/x^2 grows arbitrarily large without bound as x → ∞), but we will not use the word “convergent” in this context. Whenever an improper integral or infinite series has a limit and that limit is finite, we will say that the improper integral or infinite series is convergent or that it converges. If an improper integral or infinite series either does not have a limit, or if it has an infinite limit, we will say that the improper integral or infinite series is divergent or that it diverges. When we use the word “convergent,” even though we do not officially have to, we will often add the parenthetical phrase “[to a finite number]” to remind us that converging to a finite limit is part of what we mean by the word “convergent.”
2.1  Improper Integrals Revisited
In this section we review improper integrals, and then learn a little more about them. You may want to read through section 7.8 (on improper integrals) of the Stewart calculus text to refresh your memory on improper integrals. We will focus on improper integrals of continuous functions on an interval a ≤ x < ∞ because these improper integrals are similar to infinite series. Similar statements to those we make below hold for other improper integrals (on intervals going to −∞ or of functions with discontinuous integrands).

Example 2.1 Suppose 0 < r < 1. Show that the improper integral ∫_0^∞ r^x dx is convergent, and find its value.

Solution:

∫_0^t r^x dx = [r^x/ln r]_0^t = (r^t − 1)/ln r,

so since 0 < r < 1 implies r^t → 0 as t → ∞, the improper integral is convergent, and

∫_0^∞ r^x dx = lim_{t→∞} ∫_0^t r^x dx = lim_{t→∞} (r^t − 1)/ln r = −1/ln r = 1/ln(1/r).

(Note that since 0 < r < 1, then ln r < 0 and ln(1/r) > 0.)
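A numerical check of this value (our own sketch; it uses scipy, which the notes themselves do not assume):

```python
import math
from scipy.integrate import quad

r = 0.5
value, _ = quad(lambda x: r**x, 0, math.inf)   # numerically integrate r^x over [0, infinity)
print(value, 1 / math.log(1 / r))              # both are about 1.4427
```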
Example 2.2 Show that the p-integral ∫_1^∞ (1/x^p) dx is convergent if p > 1 and divergent if p ≤ 1.
Solution: For p = 1,

∫_1^t (1/x) dx = [ln |x|]_1^t = ln t − ln 1 → ∞   as t → ∞,

so the improper integral ∫_1^∞ (1/x^p) dx is divergent for p = 1. For p ≠ 1,

∫_1^t (1/x^p) dx = [x^(1−p)/(1 − p)]_1^t = (t^(1−p) − 1)/(1 − p).

For p < 1, (t^(1−p) − 1)/(1 − p) → ∞ as t → ∞, so the improper integral ∫_1^∞ (1/x^p) dx is divergent if p < 1.

For p > 1, (t^(1−p) − 1)/(1 − p) → 1/(p − 1) as t → ∞, so the improper integral ∫_1^∞ (1/x^p) dx is convergent if p > 1, and its value is

∫_1^∞ (1/x^p) dx = 1/(p − 1)   for p > 1.

2.1.1  The Comparison Test for Improper Integrals
Sometimes, we cannot find an antiderivative for f(x), so we cannot show directly that ∫_a^t f(x) dx has a finite limit as t → ∞ by just evaluating the integral ∫_a^t f(x) dx exactly and taking its limit as t → ∞. There are, however, ways that we can show that this limit exists and is finite, even when we cannot find its exact value. The first of these methods, which applies to nonnegative integrands, is the Comparison Test for Improper Integrals.

The Comparison Test for Improper Integrals [with Nonnegative Integrands]
Suppose that f(x) (the function we want to know about) and h(x) (the function we already know about and want to compare f(x) to) are continuous nonnegative functions for x ≥ a.

(a) If 0 ≤ f(x) ≤ h(x) for x ≥ a and ∫_a^∞ h(x) dx is convergent, then ∫_a^∞ f(x) dx is also convergent.

(b) If 0 ≤ h(x) ≤ f(x) for x ≥ a and ∫_a^∞ h(x) dx is divergent, then ∫_a^∞ f(x) dx is also divergent.
Show that the improper integral
∞
1
cos2 x dx is convergent. 1 + x6
2
Solution:
Let f (x) =
cos x 1 and h(x) = 6 . Then for x ≥ 1, 6 1+x x
0 ≤ f (x) =
1 1 cos2 x ≤ ≤ 6 = h(x). 6 6 1+x 1+x x
Since we already know that the improper integral parison Test, the improper integral
Z
∞ 1
Z
∞ 1
1 dx is convergent, then by the Comx6
cos2 x dx is also convergent. 1 + x6
Remark: In using the Comparison Test, we only need to show in part (a) that for some N ≥ a, 0 ≤ f (x) ≤ h(x) for all x ≥ N [or in part (b), that 0 ≤ h(x) ≤ f (x) for all x ≥ N ]. 2.1.2
Absolutely Convergent Improper Integrals
Fact: Absolute Convergence Implies Convergence for Improper Integrals When the improper integral for the absolute value of f (x) is convergent, i.e. Z
∞ a
|f (x)| dx = lim
Z
t
t→∞ a
11
|f (x)| dx
where the limit exists and is finite, then it is automatically guaranteed that also the improper integral for f (x) is convergent, i.e. Z
∞
f (x) dx = lim
Z
t
t→∞ a
a
f (x) dx
where the limit exists and is finite. When the improper integral for |f (x)| is convergent, we say that the improper integral for f (x) is absolutely convergent. Using this terminology, this Fact stated briefly says that absolute convergence implies convergence. Loosely, the reason that absolute convergence implies convergence is similar to the reason that the Comparison Test works: absolute convergence guarantees that there is finite area trapped between the graph of y = f (x) and the x-axis for x ≥ a (both above the x-axis and below the graph, and below the x-axis and above the graph), and this forces the integral for f (x) to be convergent. A more rigorous justification depends on the Completeness Axiom. Absolute convergence is a stronger property than convergence. Whenever an improper integral converges absolutely, it must automatically converge, too. However, it is possible for an improper integral to converge even though it does not converge absolutely. For example, it can be shown (we will not go into the details here) that the improper integral Z ∞ sin x dx converges [to a finite number]. However, the improper integral for the absolute xZ 1 ∞ sin x dx does not converge [to a finite number] (again, we will not go into the value x 1 details here). When an improper integral converges, but does not converge absolutely, we say that it is conditionally convergent. Example 2.4 Show that the improper integral
Z
1
∞
sin(x3 ) dx is absolutely convergent. x2
sin(x3 ) 1 Solution: For x ≥ 1, ≤ 2 so by the Comparison Test, the improper integral x2 x Z ∞ Z ∞ 3 sin(x3 ) sin(x ) dx is convergent. We conclude that the improper integral dx is x2 x2 1 1
absolutely convergent, and therefore it is also convergent.
Notice that there are two main issues regarding improper integrals. The first is to determine whether or not they converge. This first issue has been our focus here. The second issue is to actually find the value of the limit. In many situations, it is important to show that an improper integral converges, even when we cannot find the exact value of the limit.
12
2.2
Introduction to Infinite Series
The results of section 2.1 for improper integrals carry over almost verbatim to infinite series. Remark on indexing: Like variables of integration in definite integrals, the index k in the infinite series ∞ X
∞ X
ak is a “dummy variable.” For example, we could have used n instead of k as the index:
k=1
an and
n=1
∞ X
ak mean the same thing and represent the same number. Other letters commonly
k=1
used for the index in infinite series are i, j, `, and m. We will usually use k as the index for terms ak in an infinite series and n as the index for the partial sums sn . Most of our series will start at k = 1, but occasionally it will be useful to start at k = 0 or some other initial value of k.
Example 2.5: Geometric Series
Show that the infinite series
∞ X
k=0
is convergent if |r| < 1 and is divergent if |r| ≥ 1. Solution: For r = 1, the partial sum sn = as n → ∞, so the infinite series
∞ X
k=0
n X
k=0
rk = 1+r+r 2 +r3 +· · ·
rk = 1 + r + r 2 + · · · + r n = n + 1, so sn → ∞
rk = 1 + r + r 2 + r3 + · · · is divergent for r = 1.
For r 6= 1, (1−r)sn = (1−r)(1+r +r 2 +· · ·+r n ) = (1+r +r 2 +· · ·+r n )−(r +r 2 +· · ·+r n+1 ) = 1−r n+1 , so sn =
1 − rn+1 . 1−r
For |r| > 1, |r n+1 | → ∞ as n → ∞, so sn does not converge to a finite limit. For r = −1, sn oscillates back and forth between sn = 0 for odd n and sn = 1 for even n, so sn does not converge. So the infinite series
∞ X
k=0
rk = 1 + r + r 2 + r3 + · · · is divergent if |r| ≥ 1.
For |r| < 1, r n+1 → 0 as n → ∞, so sn = series
∞ X
k=0
1 − rn+1 1 → as n → ∞, so the infinite 1−r 1−r
rk = 1 + r + r 2 + r3 + · · · is convergent if |r| < 1, and its value is ∞ X
k=0
rk =
1 1−r
for |r| < 1.
Remark: We will discuss p-series, the infinite series analogue to the p-integrals in Example 2.2, in subsection 2.2.3 using the Integral Test.
13
2.2.1
The Comparison Test for Infinite Series
Sometimes, we cannot get a simple expression for the partial sums sn , so we cannot show directly that sn has a finite limit as n → ∞ by just simplifying sn and taking its limit as n → ∞, as we were able to do for geometric series. There are, however, ways that we can show that this limit exists and is finite, even when we cannot find its exact value. The first of these methods, which applies to infinite series
∞ X
k=1
the Comparison Test for Infinite Series.
ak with nonnegative terms ak ≥ 0, is
The Comparison Test for Infinite Series [with Nonnegative Terms] Suppose that
∞ X
ak (the series we want to know about) and
k=1
about and want to compare
∞ X
∞ X
ck (the series we already know
k=1
ak to) are series with nonnegative terms.
k=1 ∞ X
(a) If 0 ≤ ak ≤ ck for k ≥ 1 and
ck is convergent, then
k=1
(b) If 0 ≤ ck ≤ ak for k ≥ 1 and
∞ X
∞ X
ak is also convergent.
k=1
ck is divergent, then
∞ X
ak is also divergent.
k=1
k=1
The Comparison Test for infinite series is another consequence of the Completeness Axiom. Example 2.6 Show that the infinite series Solution:
Let ak =
2k
∞ X
1 is convergent. k +1 2 k=0
1 1 and ck = k . Then for k ≥ 0, +1 2 0 ≤ ak =
2k
1 1 ≤ k = ck . +1 2
Since we already know that the infinite series
∞ X 1
k=0
2k
is convergent (it is a geometric series
with r = 1/2), then by the Comparison Test, the infinite series
∞ X
k=0
2k
1 is also convergent. +1
Remark: In using the Comparison Test, we only need to show in part (a) that for some N ≥ 1, 0 ≤ ak ≤ ck for all k ≥ N [or in part (b), that 0 ≤ ck ≤ ak for all k ≥ N ].
14
2.2.2
Absolutely Convergent Infinite Series
Fact: Absolute Convergence Implies Convergence for Infinite Series When the infinite series for the absolute value of ak is convergent, i.e. ∞ X
k=1
|ak | = n→∞ lim
n X
|ak |
n X
ak
k=1
where the limit exists and is finite, then it is automatically guaranteed that also the infinite series for ak converges, i.e. ∞ X
k=1
where the limit exists and is finite.
ak = n→∞ lim
k=1
When the infinite series for |ak | is convergent, we say that the infinite series for ak is absolutely convergent. Using this terminology, the Fact above stated briefly says that absolute convergence implies convergence. This Fact is again a consequence of the Completeness Axiom for the real number system. Absolute convergence is a stronger property than convergence. Whenever an infinite series converges absolutely, it must automatically converge, too. However, it is possible for an infinite series to converge even though it does not converge absolutely. For example, it can be shown (we will not go into the details here) that the infinite series ∞ X (−1)k+1
k=1
k
=1−
1 1 1 1 1 + − + − + ... 2 3 4 5 6
converges to ln 2 ≈ 0.693. However, the infinite series for the absolute value ∞ X 1 , k k=1
which is called the harmonic series, does not converge; we will show this in the next subsection. When an infinite series converges, but does not converge absolutely, we say that it is conditionally convergent. We will not study convergence tests for conditionally convergent infinite series (for example, the alternating series test) in these notes. Example 2.7 Show that the infinite series
∞ X (−1)k
k=0
3k + k 2
is absolutely convergent.
k (−1)k 1 1 1 , so by the Comparison Test, the = ≤ = Solution: For k ≥ 0, k 2 k 2 k 3 + k 3 +k 3 3 ∞ ∞ k X X (−1)k (−1) is convergent. We conclude that the infinite series k infinite series is 3 + k2 3k + k 2 k=0
k=0
absolutely convergent, and therefore it is also convergent. 15
Notice that, as with improper integrals, there are two main issues regarding infinite series. The first is to determine whether or not they converge. This first issue has been our focus here. The second issue is to actually find the value of the limit. In many situations, it is important to show that an infinite series converges, even when we cannot find the exact value of the limit. 2.2.3
The Integral Test
An important test for determining whether or not an infinite series converges, called the Integral Test, involves comparing infinite series with improper integrals. The Integral Test Suppose f (x) is positive, continuous, and decreasing for x ≥ 1, and let ak = f (k) for k ≥ 1. Z
(i) If the improper integral convergent. (See Figure 4). (ii) If the improper integral divergent. (See Figure 5).
∞
f (x) dx is convergent, then the infinite series
1
Z
∞ X
ak is also
∞ X
ak is also
k=1
∞
f (x) dx is divergent, then the infinite series
1
k=1
y
Figure 4. The Integral Test, part (i). Each rectangle is labeled by its area. The area under the curve y = f(x) (outlined in bold) is finite, so the sum of the areas of all the rectangles is finite, so the infinite series ∑_{k=1}^∞ a_k is convergent.
y
Figure 5. The Integral Test, part (ii). Each rectangle is labeled by its area. The area under the curve y = f(x) (outlined in bold) is infinite, so the sum of the areas of all the rectangles is infinite, so the infinite series ∑_{k=1}^∞ a_k is divergent.
∞ X 1
k=1
kp
is convergent if p > 1 and divergent if p ≤ 1.
1 is positive, continuous, and decreasing for xp Z ∞ Z ∞ 1 x ≥ 1, so by the Integral Test, the improper integral f (x) dx = dx and the infinite xp 1 1 ∞ ∞ X 1 X 1 f (k) = either are both convergent, or are both divergent. (For p ≤ 0, ≥1 series p kp k=1 k k=1 ∞ X 1 for all k ≥ 1, so the infinite series is divergent.) p k=1 k Solution: For p > 0, the function f (x) =
Note in particular that when p = 1, the harmonic series
∞ X 1
k=1
k
is divergent.
Remark: We can apply the Integral Test when the interval on which f (x) is positive, continuous, and decreasing, the interval on which we integrate f (x), and the sum in the infinite series all start at some nonnegative integer N instead of at 1. Example 2.9 Show that the infinite series
∞ X
1 is convergent. 2 k=2 k(ln k) 17
1 . Then f (x) is positive, continuous, and decreasing for x ≥ 2. x(ln x)2 Z t Z t −1 t 1 1 1 1 dx = = − → as t → ∞, so the f (x) dx = For t ≥ 2, 2 ln x 2 ln 2 ln t ln 2 2 x(ln x) 2 Z ∞ Z ∞ 1 improper integral f (x) dx = dx is convergent. Then by the Integral Test, x(ln x)2 2 2 ∞ ∞ X X 1 is also convergent. f (k) = the infinite series 2 k=2 k(ln k) k=2
Solution: Let f (x) =
Example 2.10 Suppose 0 < r < 1. Use the Integral Test to show that the geometric series ∞ X
k=0
rk = 1 + r + r 2 + r3 + · · · is convergent. (We already know this is true by Example 2.5.)
Solution: For 0 < r < 1, the function f (x) = rZx is positive, continuous, and decreasing for Z ∞ ∞ x ≥ 0. By Example 2.1, the improper integral f (x) dx = rx dx is convergent, so by the Integral Test, the infinite series
∞ X
0
f (k) =
∞ X
0
k
r is also convergent.
k=0
k=0
Example 2.10 shows some of the advantages and disadvantages of convergence tests like the Integral Test. Compare the solution of Example 2.10 with the solution of Example 2.5. The advantage of the Integral Test is that it is relatively easy to use (provided we can show that the corresponding integral is convergent) to show that an infinite series is convergent, as in Example 2.10. The disadvantage of the Integral Test is that it never gives us the value of the infinite sum; we had to work harder in Example 2.5 to evaluate the infinite sum. We conclude this subsection with a basic fact about convergent infinite series. We waited to discuss this fact until after we had shown that the harmonic series is divergent to help clarify what this basic fact does say and what it does not say. Fact: Whenever the infinite series
∞ X
k=1
ak is convergent, then we must have ak → 0 as k → ∞.
(Why? If the infinite series is convergent, then its partial sums sn converge to a finite limit L. But then also sn−1 → L as n → ∞, so an = sn − sn−1 → L − L = 0 as n → ∞.) Be careful not to misinterpret this Fact. The converse of this Fact is not true: the harmonic series shows that it is possible for ak → 0 as k → ∞ and still have the series be divergent. This Fact, however, can be restated in the following form: Fact: If ak does not converge to 0 as k → ∞, then the infinite series 18
∞ X
k=1
∞ X
k=1
ak is divergent.
ak
The Comparison Test for infinite series and the Integral Test can be used to show that an infinite series is convergent, but they do not help us find the value of the limit. In Part 3 of these notes, we will see some series that we can evaluate exactly. Remark: A number of other tests, including the ratio test, root test, and alternating series test, are covered in depth in Math 327 and Math 328. In Math 126, we only cover the comparison and integral tests. If you are interested in reading more, see Chapter 11 in the Stewart calculus text.
Part 3
Taylor Series
This third part of these notes brings together the two topics of Taylor polynomials and infinite series, and considers the limit of the Taylor polynomials as the degree n goes to infinity.
3.1
Taylor Series and Maclaurin Series
We start our discussion with a useful limit. Example 3.1 xn → 0 as n → ∞. n! 1 |x| ≤ , so for n ≥ N (with both x and Solution: Choose an integer N ≥ 2|x|. Then N 2 N being held fixed ), Show that for any fixed real number x,
|x|n |x|N |x| |x| |x| |x| = · · · ··· n! N! N + 1 N + 2 N + 3 n n−N |x|N 1 1 1 1 |x|N 1 → 0 ≤ · · · ··· = · N! 2 2 2 2 N! 2
as
n → ∞.
Example 3.2 Find the Taylor polynomial Tn (x) of degree n for f (x) = ex , centered at x = 0. Then show that for each x, Tn (x) → f (x) as n → ∞.
Solution: Since f (x) = f 0 (x) = f 00 (x) = f 000 (x) = · · · = f (n) (x) = ex , we have f (0) = f 0 (0) = f 00 (0) = f 000 (0) = · · · = f (n) (0) = e0 = 1, so the Taylor polynomial of degree n is f 00 (0) f 000 (0) 2 Tn (x) = f (0) + f (0)(x − 0) + (x − 0) + (x − 0)3 + · · · 2! 3! (n) f (0) f (n−1) (0) + (x − 0)(n−1) + (x − 0)n (n − 1)! n! 0
19
= 1+x+
xn−1 xn x2 x3 + + ··· + + . 2! 3! (n − 1)! n!
The (n + 1)st derivative of f (x) = ex is again f (n+1) (x) = ex . t For each fixed x ≥ 0, the maximum value of |f (n+1) (t)| = e on the interval 0 ≤ t ≤ x (n+1) is taken on at the right endpoint: Mn+1 = max f (t) = ex . Using the upper bound for 0≤t≤x
|Rn (x)| given in Taylor’s Inequality, we obtain |f (x) − Tn (x)| = |Rn (x)| ≤
ex xn+1 Mn+1 |x − 0|n+1 = →0 (n + 1)! (n + 1)!
as n → ∞ by Example 3.1 (because here, x is held fixed, so ex is also held fixed as n → ∞). We conclude that Tn (x) → f (x) as n → ∞ for x ≥ 0.
For each fixed x < 0, the maximum value of |f (n+1) (t)| = et on the interval x ≤ t ≤ 0 is taken on at the right endpoint: Mn+1 = max f (n+1) (t) = e0 = 1. Using the upper bound for x≤t≤0
|Rn (x)| given in Taylor’s Inequality, we obtain |f (x) − Tn (x)| = |Rn (x)| ≤
Mn+1 |x|n+1 |x − 0|n+1 = →0 (n + 1)! (n + 1)!
as n → ∞ by Example 3.1. We conclude that Tn (x) → f (x) as n → ∞ for x < 0, too. For each fixed value of x, the Taylor polynomial Tn for f (x) = ex (centered at x = 0) evaluated at this x Tn (x) =
n X xk
k=0
x2 x3 xn−1 xn =1+x+ + + ··· + + k! 2! 3! (n − 1)! n!
is the nth partial sum of the infinite series ∞ X xk
k=0
k!
=1+x+
x2 x3 + + ···. 2! 3!
This infinite series is called the Taylor series for f (x) = ex , centered at x = 0. Example 3.2 shows that for each real x, this series actually converges to f (x) = ex : ∞ X xk
k=0
k!
=1+x+
x2 x3 + + · · · = ex 2! 3!
for all x.
This series is a little different from the series in Part 2 because the partial sums and the whole infinite series are functions of x instead of just numbers. However, treating each fixed value of x separately, we can analyze the partial sums and the infinite series just like we did in Part 2. 20
Taylor Series and Maclaurin Series The Taylor series for a function f (x), centered at x = a, is the infinite series ∞ X f (k) (a)
k=0
k!
(x − a)k = f (a) + f 0 (a)(x − a) +
f 00 (a) f 000 (a) (x − a)2 + (x − a)3 + · · · . 2! 3!
When a = 0, the Taylor series for f (x), centered at x = 0, is often called the Maclaurin series for f (x). Example 3.3 Find the Maclaurin series for f (x) = sin x and show that it converges to f (x) = sin x for all x. f (x) = sin x, f 0 (x) = cos x, f 00 (x) = − sin x, f 000 (x) = − cos x, f 0000 (x) = sin x,
Solution:
f (0) = 0, f 0 (0) = 1, f 00 (0) = 0, f 000 (0) = −1, f 0000 (0) = 0,
and the values of f (k) (0) for k = 0, 1, 2, 3, . . . repeat in cycles of length four: 0, 1, 0, −1, 0, 1, 0, −1, etc. The Maclaurin series for f (x) = sin x is ∞ X f (k) (0)
k=0
k!
(x)k = 0 + x + 0 − = x−
x3 x5 x7 x9 +0+ +0− +0+ − ··· 3! 5! 7! 9!
∞ X x2j+1 x3 x5 x7 x9 (−1)j + − + − ··· = . 3! 5! 7! 9! (2j + 1)! j=0
For each fixed x, to show that the Maclaurin series for f (x) = sin x converges to f (x) = sin x, we must show that |Rn (x)| → 0 as n → ∞. Since the derivatives of sin x are either ± cos x or ± sin x, we can take Mn+1 = 1 for all n and all x. Using the upper bound for |Rn (x)| given in Taylor’s Inequality, we obtain |f (x) − Tn (x)| = |Rn (x)| ≤
|x|n+1 Mn+1 |x − 0|n+1 = →0 (n + 1)! (n + 1)!
as n → ∞ by Example 3.1. We conclude that Tn (x) → f (x) as n → ∞ for each fixed x, so the Maclaurin series for f (x) = sin x actually converges to f (x) = sin x : ∞ X
j=0
(−1)j
x3 x5 x7 x9 x2j+1 =x− + − + − · · · = sin x (2j + 1)! 3! 5! 7! 9!
21
for all x.
T11
T7
T3
T13
T9
T5
T1
1 T
0
15
T15
−1 T
T
13
T
9
−6
T
T
5
3
1
−4
−2
0
2
T
7
4
T
11
6
Figure 6. The Taylor polynomials T1 (x), T3 (x), T5 (x), T7 (x), T9 (x), T11 (x), T13 (x), and T15 (x) for f (x) = sin x, centered at x = 0. Figure 6 illustrates the convergence of the Taylor series for f (x) = sin x (centered at x = 0) to sin x. The partial sums of the Taylor series are the Taylor polynomials. Example 3.4 Find the Maclaurin series for f (x) = cos x and show that it converges to f (x) = cos x for all x. f (x) = cos x, f (x) = − sin x, f 00 (x) = − cos x, f 000 (x) = sin x, f 0000 (x) = cos x,
Solution:
0
f (0) = 1, f 0 (0) = 0, f 00 (0) = −1, f 000 (0) = 0, f 0000 (0) = 1,
and the values of f (k) (0) for k = 0, 1, 2, 3, . . . repeat in cycles of length four: 1, 0, −1, 0, 1, 0, −1, 0, etc. The Maclaurin series for f (x) = cos x is ∞ X f (k) (0)
k=0
k!
(x)k = 1 + 0 − = 1−
x2 x4 x6 x8 +0+ +0− +0+ − ··· 2! 4! 6! 8!
∞ X x2 x4 x6 x8 x2j (−1)j + − + − ··· = . 2! 4! 6! 8! (2j)! j=0
For each fixed x, to show that the Maclaurin series for f (x) = cos x converges to f (x) = cos x, we must show that |Rn (x)| → 0 as n → ∞. Since the derivatives of cos x are either ± sin x or ± cos x, we can take Mn+1 = 1 for all n and all x. Using the upper bound for |Rn (x)| given in Taylor’s Inequality, we obtain |f (x) − Tn (x)| = |Rn (x)| ≤
|x|n+1 Mn+1 |x − 0|n+1 = →0 (n + 1)! (n + 1)! 22
as n → ∞ by Example 3.1. We conclude that Tn (x) → f (x) as n → ∞ for each fixed x, so the Maclaurin series for f (x) = cos x actually converges to f (x) = cos x : x2 x4 x6 x8 x2j =1− + − + − · · · = cos x (−1) (2j)! 2! 4! 6! 8! j=0 ∞ X
j
for all x.
Remark: To show that the Taylor series for f (x) actually converges to f (x), we need to show that Rn (x) → 0 as n → ∞. One way to show this is by using Taylor’s Inequality. For some functions f (x), as in the following example, we can show directly that the Taylor series converges to f (x) without using Taylor’s Inequality to estimate the remainder term.
Example 3.5 1 1 , show that it converges to f (x) = for 1−x 1−x |x| < 1, and show that it diverges for |x| ≥ 1. Find the Maclaurin series for f (x) =
Solution: By Example 1.3, the Maclaurin series for f (x) = ∞ X
k=0
1 is 1−x
xk = 1 + x + x 2 + x 3 + · · · .
This is just the geometric series with r = x. By Example 2.5, this series is convergent for |x| < 1, and ∞ X 1 for |x| < 1. xk = 1 + x + x 2 + x 3 + · · · = 1−x k=0 Also by Example 2.5, this series is divergent for |x| ≥ 1.
1 Figure 7 illustrates the convergence of the Taylor series for f (x) = (centered at 1−x 1 x = 0) to f (x) = for |x| < 1, and the divergence for |x| ≥ 1. The partial sums of the 1−x Taylor series are the Taylor polynomials. Figure 7 shows only T1 (x), T4 (x), T7 (x), T10 (x), and T13 (x) to avoid crowding.
The vertical asymptote at x = 1 clearly poses a problem for the convergence of this Taylor series for x ≥ 1. Notice, however, that there is also a problem in convergence for x ≤ −1, even though f (x) doesn’t have a vertical asymptote at x = −1. As we will see in the next section, it is impossible for a Taylor series centered at x = 0 to diverge for x > 1 and also converge for x < −1. So in some sense it is true that the vertical asymptote at x = 1 not only causes divergence of this Taylor series for x > 1, it also causes divergence of this Taylor series for x < −1.
23
T4
8
T7 T4
T10
6 4
T1
2 0 −2
T1
−4 −6 −8 T7
−3
−2
T13
−1
0
1
2
3
Figure 7. The Taylor polynomials T1 (x), T4 (x), T7 (x), T10 (x), and T13 (x) 1 for f (x) = , centered at x = 0. 1−x
3.2
Radius of Convergence
As we have seen in Example 3.5, Taylor series do not always converge for all x. The behavior that we saw in Example 3.5 (the series converges for |x| < 1 and diverges for |x| > 1) is surprisingly general for all Taylor series: Fact: Radius of Convergence For the Taylor series
∞ X f (k) (a)
k=0
k!
(x − a)k for a function f (x), centered at x = a, there are
only three possibilities:
(i) The series converges only at x = a. (In this case, we say R = 0.) (ii) The series converges absolutely for all x. (In this case, we say R = ∞.) (iii) There is a positive number R for which the series converges absolutely whenever |x − a| < R and diverges whenever |x − a| > R. The number R (which may be 0 or ∞) is called the radius of convergence for the Taylor series for f (x), centered at x = a. 24
Remark: Notice that there is no conclusion about convergence when |x − a| = R, i.e., when x = a − R or x = a + R. It is possible that the series could converge at both, one, or neither of these points. We will not study what happens when |x − a| = R in these notes.
Example 3.6 1 1−x is R = 1. Examples 3.2, 3.3, and 3.4 show that the radius of convergence of the Maclaurin series for each of (a) ex , (b) sin x, and (c) cos x is R = ∞. Example 3.5 shows that the radius of convergence of the Maclaurin series for f (x) =
3.3
Operations with Taylor Series
We conclude our discussion of Taylor series by introducing some operations that can be done with Taylor series. The first of these operations is simple substitution. 1 Example 3.7 Find the Maclaurin series for f (x) = , find its radius of convergence, 1 + 4x2 1 whenever it converges. and show that it converges to f (x) = 1 + 4x2 1 is Solution: By Example 1.3, the Maclaurin series for g(u) = 1−u 1 + u + u 2 + u3 + · · · .
Since f (x) = g(−4x2 ), we substitute −4x2 in for u in the Maclaurin series for g(u) to obtain the Maclaurin series for f (x): 1 + (−4x2 ) + (−4x2 )2 + (−4x2 )3 + · · · =
∞ X
(−4x2 )j =
∞ X
(−4)j x2j .
j=0
j=0
1 is a geometric series 1 + 4x2 with r = −4x2 . By Example 2.5, this series is convergent for |r| < 1 and divergent for |r| ≥ 1. Since
To find the limit, note that the Maclaurin series for f (x) =
|r| < 1
⇔
| − 4x2 | < 1
⇔
|x2 |
a (or the interval [x, a] when x < a) is a closed and bounded interval, there is a finite number Mn+1 for which (n+1) f (t) ≤ Mn+1
for all values of t between a and x. Since c in the derivative form of the remainder term is between a and x, f (n+1) (c) Mn+1 n+1 (x − a) ≤ |x − a|n , |Rn (x)| = (n + 1)! (n + 1)!
which is Taylor’s Inequality (see section 1.2).
31