Question

Math 104A Homework #3
Instructor: Lihui Chai

General Instructions: Please write your homework papers neatly. You need to turn in both full printouts of your codes and the appropriate runs you made. Write your own code, individually. Do not copy codes!

1. (a) Let $f \in C^2[x_0, x_1]$ and let $P_1$ be its linear interpolation polynomial at $x_0$ and $x_1$. Prove that

$$\|f - P_1\|_\infty \le \frac{1}{8}(x_1 - x_0)^2 M_2, \tag{1}$$

where $|f''(x)| \le M_2$ for all $x \in [x_0, x_1]$ and $\|f - P_1\|_\infty = \max_{x \in [x_0, x_1]} |f(x) - P_1(x)|$.

(b) Let $P_1(x)$ be the linear polynomial that interpolates $f(x) = \sin x$ at $x_0 = 0$ and $x_1 = \pi/2$. Using (a), find a bound for the maximum error $\|f - P_1\|_\infty$ and compare this bound with the actual error at $x = \pi/4$.

2. (a) Equating the leading coefficient of the Lagrange form of the interpolation polynomial $P_n(x)$ with that of Newton's form, deduce that

$$f[x_0, x_1, \ldots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{\substack{k=0 \\ k \ne j}}^{n} (x_j - x_k)}. \tag{2}$$

(b) Use (a) to conclude that divided differences are symmetric functions of their arguments, i.e. any permutation of $x_0, x_1, \ldots, x_n$ leaves the corresponding divided difference unchanged.

3. In Newton's form of the interpolation polynomial we need to compute the coefficients $c_0 = f[x_0]$, $c_1 = f[x_0, x_1]$, ..., $c_n = f[x_0, x_1, \ldots, x_n]$. In the table of divided differences we proceed column by column, and the needed coefficients are in the uppermost diagonal. A simple 1D array $c$ of size $n+1$ can be used to store and compute these values. We just have to compute them from bottom to top to avoid overwriting values we still need. The following pseudocode does precisely this:

    for j = 0, 1, ..., n do
        c_j = f_j
    end for
    for k = 1, ..., n do
        for j = n, n-1, ..., k do
            c_j = (c_j - c_{j-1}) / (x_j - x_{j-k})
        end for
    end for

The evaluation of the interpolation polynomial in Newton's form can then be done with the Horner-like scheme:

    p = c_n
    for j = n-1, n-2, ..., 0 do
        p = c_j + (x - x_j) p
    end for

(a) Write computer codes to compute the coefficients $c_0, c_1, \ldots, c_n$ and to evaluate the corresponding interpolation polynomial at an arbitrary point $x$. Test your codes and turn in a run of your test.

(b) Consider the function $f(x) = e^x$ for $x \in [-1, 1]$ and the nodes $x_j = -1 + j(2/10)$, $j = 0, 1, \ldots, 10$. Use your code in (a) to evaluate $P_{10}(x)$ at the points $\bar{x}_j = -1 + j(2/100)$, $j = 0, 1, \ldots, 100$, and plot the error $f(\bar{x}_j) - P_{10}(\bar{x}_j)$.
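For reference, the pseudocode in problem 3 translates almost line for line into Python. The following is a minimal sketch (the function names and the NumPy usage are my own choices, not part of the assignment):

    import numpy as np

    def newton_coefficients(x, f):
        # Divided-difference coefficients c_0 = f[x_0], ..., c_n = f[x_0, ..., x_n],
        # computed in place from bottom to top, exactly as in the pseudocode above.
        c = np.array(f, dtype=float)            # c_j = f_j
        n = len(x) - 1
        for k in range(1, n + 1):
            for j in range(n, k - 1, -1):       # j = n, n-1, ..., k
                c[j] = (c[j] - c[j - 1]) / (x[j] - x[j - k])
        return c

    def newton_eval(x, c, t):
        # Horner-like evaluation of the Newton form at the point t.
        n = len(c) - 1
        p = c[n]
        for j in range(n - 1, -1, -1):          # j = n-1, n-2, ..., 0
            p = c[j] + (t - x[j]) * p
        return p

    # Quick test on the nodes of problem 3(b):
    x = np.linspace(-1.0, 1.0, 11)              # x_j = -1 + j(2/10)
    c = newton_coefficients(x, np.exp(x))
    print(newton_eval(x, c, 0.3), np.exp(0.3))  # the two values should be close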
4. Inverse Interpolation. Suppose that we want to solve the equation $f(x) = 0$ for some function $f$ which has an inverse $f^{-1}$. If we have two approximations $x_0$ and $x_1$ of a zero $\bar{x}$ of $f$, then we can use interpolation to find a better approximation to $\bar{x} = f^{-1}(0)$, as follows. Let $y_0 = f(x_0)$ and $y_1 = f(x_1)$, and build the divided-difference table for $f^{-1}$:

Table 1:

$$\begin{array}{l|ll}
y_j = f(x_j) & x_j & \\
\hline
y_0 & x_0 & \\
y_1 & x_1 & f^{-1}[y_0, y_1]
\end{array}$$

Then

$$P_1(0) = x_0 + f^{-1}[y_0, y_1](0 - y_0) = x_0 - y_0 f^{-1}[y_0, y_1].$$

We could now define $x_2 = P_1(0)$, evaluate $f$ at this point to get $y_2 = f(x_2)$, and then add one more row to Table 1 to get $f^{-1}[y_0, y_1, y_2]$. Once this is computed we can evaluate $P_2(0)$ to get an improved approximation of $\bar{x}$, etc.

Let $f(x) = x - e^{-x}$. Using the values $f(0.5) = -0.106530659712633$ and $f(0.6) = 0.051188363905973$, find an approximate value for the zero $\bar{x}$ of $f$ by evaluating $P_1(0)$.

5. Obtain the Hermite interpolation polynomial corresponding to the data $f(0) = 0$, $f'(0) = 0$, $f(1) = 2$, $f'(1) = 3$.

Hermite Interpolation
Hector D. Ceniceros

Let $P_k$ be the interpolation polynomial of degree at most $k$ which interpolates $f$ at the nodes $x_0, x_1, \ldots, x_k$. Consider the remainder, or error,

$$r(x) = f(x) - P_k(x). \tag{1}$$

It has $k+1$ zeros ($x_0, x_1, \ldots, x_k$), so by repeated application of Rolle's theorem there is a point $\xi$ between $\min\{x_0, \ldots, x_k\}$ and $\max\{x_0, \ldots, x_k\}$ such that

$$0 = r^{(k)}(\xi) = f^{(k)}(\xi) - P_k^{(k)}(\xi). \tag{2}$$

But $P_k^{(k)}$ is constant and equal to $k!$ times the leading coefficient of $P_k$, i.e.

$$P_k^{(k)}(\xi) = k!\, f[x_0, x_1, \ldots, x_k]. \tag{3}$$

Combining (2) and (3) we get

$$f[x_0, x_1, \ldots, x_k] = \frac{1}{k!} f^{(k)}(\xi) \tag{4}$$

for some $\xi$ between $\min\{x_0, \ldots, x_k\}$ and $\max\{x_0, \ldots, x_k\}$. The divided differences of $f$ are thus related to the derivatives of $f$ as indicated in (4). If we now let the points $x_1, \ldots, x_k$ all approach $x_0$, then $\xi$ also tends to $x_0$; consequently,

$$\lim_{x_1, \ldots, x_k \to x_0} f[x_0, x_1, \ldots, x_k] = \frac{1}{k!} f^{(k)}(x_0). \tag{5}$$

Then we can define

$$f[x_0, x_0] = f'(x_0), \qquad f[x_0, x_0, x_0] = \frac{1}{2!} f''(x_0), \qquad f[x_0, x_0, x_0, x_0] = \frac{1}{3!} f'''(x_0), \quad \text{etc.} \tag{6}$$

This is going to be very handy for the following interpolation problem.

Hermite Interpolation: Find a polynomial $P$ of lowest degree that interpolates $f$ and some of its derivatives at a set of given nodes $x_0, x_1, \ldots, x_k$. $P$ is called the Hermite interpolation polynomial.

For example, suppose we look for a polynomial $P$ of lowest degree which satisfies the interpolation conditions

$$P(x_0) = f(x_0), \quad P'(x_0) = f'(x_0), \quad P(x_1) = f(x_1), \quad P'(x_1) = f'(x_1).$$

We can view this problem as a limiting case of polynomial interpolation of $f$ at two pairs of coincident nodes, $x_0, x_0, x_1, x_1$, and we can use Newton's interpolation form to obtain $P$. The table of divided differences, in view of (6), is

$$\begin{array}{l|llll}
x_0 & f(x_0) & & & \\
x_0 & f(x_0) & f'(x_0) & & \\
x_1 & f(x_1) & f[x_0, x_1] & f[x_0, x_0, x_1] & \\
x_1 & f(x_1) & f'(x_1) & f[x_0, x_1, x_1] & f[x_0, x_0, x_1, x_1]
\end{array} \tag{7}$$

and

$$P(x) = f(x_0) + f'(x_0)(x - x_0) + f[x_0, x_0, x_1](x - x_0)^2 + f[x_0, x_0, x_1, x_1](x - x_0)^2 (x - x_1).$$

Example 1. Let $f(0) = 1$, $f'(0) = 0$ and $f(1) = \sqrt{2}$. Find the Hermite interpolation polynomial. We construct the table of divided differences as follows:

$$\begin{array}{l|lll}
0 & 1 & & \\
0 & 1 & 0 & \\
1 & \sqrt{2} & \sqrt{2} - 1 & \sqrt{2} - 1
\end{array} \tag{8}$$

and using Newton's interpolation form we have that

$$P(x) = 1 + 0\,(x - 0) + (\sqrt{2} - 1)(x - 0)^2 = 1 + (\sqrt{2} - 1)x^2.$$
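As a numerical check of Example 1, the table (8) can be reproduced by running the usual bottom-to-top divided-difference update on the repeated nodes 0, 0, 1, substituting $f'(0)$ for the undefined quotient at the coincident pair, per (6). A small sketch (the script and its variable names are mine, not part of the notes):

    import math

    x = [0.0, 0.0, 1.0]                   # coincident pair x0, x0, then x1
    c = [1.0, 1.0, math.sqrt(2.0)]        # f(0), f(0), f(1)

    # First column of divided differences (bottom to top):
    c[2] = (c[2] - c[1]) / (x[2] - x[1])  # f[x0, x1] = sqrt(2) - 1
    c[1] = 0.0                            # f[x0, x0] = f'(0), by (6)
    # Second column:
    c[2] = (c[2] - c[1]) / (x[2] - x[0])  # f[x0, x0, x1] = sqrt(2) - 1

    # Newton form: P(t) = c0 + c1 (t - x0) + c2 (t - x0)(t - x0)
    P = lambda t: c[0] + c[1] * (t - x[0]) + c[2] * (t - x[0]) * (t - x[1])
    print(P(1.0), math.sqrt(2.0))         # P(1) reproduces f(1)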
Interpolation
Hector D. Ceniceros

1 Approximation Theory

Given $f \in C[a, b]$, we would like to find a "good" approximation to it by "simpler functions".

Introduction to Numerical Analysis
Hector D. Ceniceros

1 What is Numerical Analysis?

This is an introductory course of Numerical Analysis, which comprises the design, analysis, and implementation of constructive methods for the solution of mathematical problems. Numerical Analysis has vast applications both in Mathematics and in modern Science and Technology.

In the areas of the Physical and Life Sciences, Numerical Analysis plays the role of a virtual laboratory by providing accurate solutions to the mathematical models representing a given physical or biological system, in which the system's parameters can be varied at will, in a controlled way. The applications of Numerical Analysis also extend to more modern areas such as data analysis, web search engines, social networks, and basically anything where computation is involved.

2 An Illustrative Example: Approximating an Integral

The main principles and objectives of Numerical Analysis are better illustrated with concrete examples, and this is the purpose of this chapter. Consider the problem of calculating a definite integral

$$I[f] = \int_a^b f(x)\,dx. \tag{1}$$

In most cases we cannot find an exact value of $I[f]$, and very often we only know the integrand $f$ at a finite number of points in $[a, b]$. The problem is then to produce an approximation to $I[f]$ as accurate as we need and at a reasonable computational cost.

2.1 An Approximation Principle and the Trapezoidal Rule

One of the central ideas in Numerical Analysis is to approximate a given function by simpler functions which we can integrate, differentiate, etc., analytically. For example, we can approximate the integrand $f(x)$ in $[a, b]$ by the segment of the straight line, a linear polynomial $P_1(x)$, that passes through $(a, f(a))$ and $(b, f(b))$. That is,

$$f(x) \approx P_1(x) = f(a) + \frac{f(b) - f(a)}{b - a}(x - a), \tag{2}$$

and

$$\int_a^b f(x)\,dx \approx \int_a^b P_1(x)\,dx = f(a)(b - a) + \frac{1}{2}[f(b) - f(a)](b - a) = \frac{1}{2}[f(a) + f(b)](b - a). \tag{3}$$

That is,

$$\int_a^b f(x)\,dx \approx \frac{b - a}{2}[f(a) + f(b)]. \tag{4}$$

The right-hand side is known as the simple Trapezoidal Rule quadrature. A quadrature is a method to approximate an integral.
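As a sanity check of (4), here is a one-line implementation together with the worked case $f(x) = x^2$ on $[0, 1]$ (a sketch; the function name is mine):

    def trapezoid_simple(f, a, b):
        # Simple Trapezoidal Rule (4): (b - a)/2 * [f(a) + f(b)]
        return 0.5 * (b - a) * (f(a) + f(b))

    # For f(x) = x^2 on [0, 1] the rule gives 1/2, while the exact integral
    # is 1/3. The defect 1/2 - 1/3 = 1/6 is exactly (1/12) f''(eta) (b - a)^3
    # with f'' = 2, anticipating the error formula (15) derived below.
    print(trapezoid_simple(lambda x: x * x, 0.0, 1.0))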
How accurate is this approximation? Clearly, if $f$ is a linear polynomial then the Trapezoidal Rule gives us the exact value of the integral, i.e. it is exact. The underlying question is: how well does a linear polynomial $P_1$, satisfying

$$P_1(a) = f(a), \tag{5}$$
$$P_1(b) = f(b), \tag{6}$$

approximate $f$ on the interval $[a, b]$? We can almost guess the answer. The approximation is exact at $x = a$ and $x = b$ because of (5)-(6), and it is exact for all polynomials of degree at most 1. This suggests that

$$f(x) - P_1(x) = C f''(\xi)(x - a)(x - b),$$

where $C$ is a constant. But where is $f''$ evaluated? It cannot be at $x$, for if it were, $f$ would be the solution of a second-order ODE, whereas $f$ is an arbitrary (but sufficiently smooth, i.e. $C^2[a, b]$) function; so it has to be at some undetermined point $\xi(x)$ in $(a, b)$. Now, if we take the particular case $f(x) = x^2$ on $[0, 1]$, then $P_1(x) = x$, $f(x) - P_1(x) = x(x - 1)$, and $f''(x) = 2$, which implies that $C$ would have to be $1/2$. So our conjecture is

$$f(x) - P_1(x) = \frac{1}{2} f''(\xi(x))(x - a)(x - b). \tag{7}$$

There is a beautiful 19th-century proof of this result by A. Cauchy. It goes as follows. If $x = a$ or $x = b$, (7) holds trivially. So let us take $x$ in $(a, b)$ and define the following function of a new variable $t$:

$$\phi(t) = f(t) - P_1(t) - [f(x) - P_1(x)] \frac{(t - a)(t - b)}{(x - a)(x - b)}. \tag{8}$$

Then $\phi$, as a function of $t$, is $C^2[a, b]$ and $\phi(a) = \phi(b) = \phi(x) = 0$. Since $\phi(a) = \phi(x) = 0$, by Rolle's theorem there is $\xi_1 \in (a, x)$ such that $\phi'(\xi_1) = 0$, and similarly there is $\xi_2 \in (x, b)$ such that $\phi'(\xi_2) = 0$. Because $\phi$ is $C^2[a, b]$, we can apply Rolle's theorem one more time, observing that $\phi'(\xi_1) = \phi'(\xi_2) = 0$, to conclude that there is a point $\xi(x)$ between $\xi_1$ and $\xi_2$ such that $\phi''(\xi(x)) = 0$. Consequently,

$$0 = \phi''(\xi(x)) = f''(\xi(x)) - [f(x) - P_1(x)] \frac{2}{(x - a)(x - b)}, \tag{9}$$

and so

$$f(x) - P_1(x) = \frac{1}{2} f''(\xi(x))(x - a)(x - b), \qquad \xi(x) \in (a, b). \tag{10}$$

Now we can use (10) to find the accuracy of the simple Trapezoidal Rule. Assuming the integrand $f$ is $C^2[a, b]$,

$$\int_a^b f(x)\,dx = \int_a^b P_1(x)\,dx + \frac{1}{2} \int_a^b f''(\xi(x))(x - a)(x - b)\,dx. \tag{11}$$

Now, $(x - a)(x - b)$ does not change sign in $[a, b]$ and $f''$ is continuous, so by the Weighted Mean Value Theorem for integrals there is $\eta \in (a, b)$ such that

$$\int_a^b f''(\xi(x))(x - a)(x - b)\,dx = f''(\eta) \int_a^b (x - a)(x - b)\,dx. \tag{12}$$

The last integral can be easily evaluated if we shift to the midpoint, i.e., changing variables to $x = y + \frac{1}{2}(a + b)$:

$$\int_a^b (x - a)(x - b)\,dx = \int_{-\frac{b-a}{2}}^{\frac{b-a}{2}} \left[ y^2 - \left( \frac{b - a}{2} \right)^2 \right] dy = -\frac{1}{6}(b - a)^3. \tag{13}$$

Collecting (11)-(13) we get

$$\int_a^b f(x)\,dx = \frac{b - a}{2}[f(a) + f(b)] - \frac{1}{12} f''(\eta)(b - a)^3, \tag{14}$$

where $\eta$ is some point in $(a, b)$. So in the approximation

$$\int_a^b f(x)\,dx \approx \frac{b - a}{2}[f(a) + f(b)]$$

we make the error

$$E[f] = -\frac{1}{12} f''(\eta)(b - a)^3. \tag{15}$$

2.2 Divide and Conquer and the Composite Trapezoidal Rule

The error (15) of the simple Trapezoidal Rule grows cubically with the length of the interval of integration, so it is natural to divide $[a, b]$ into smaller subintervals, apply the Trapezoidal Rule on each of them, and sum up the results. Let us divide $[a, b]$ into $N$ subintervals of equal length $h = \frac{1}{N}(b - a)$, determined by the points $x_0 = a$, $x_1 = x_0 + h$, $x_2 = x_0 + 2h$, ..., $x_N = x_0 + Nh = b$. Then

$$\int_a^b f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \ldots + \int_{x_{N-1}}^{x_N} f(x)\,dx = \sum_{j=0}^{N-1} \int_{x_j}^{x_{j+1}} f(x)\,dx. \tag{16}$$

But we know that

$$\int_{x_j}^{x_{j+1}} f(x)\,dx = \frac{1}{2}[f(x_j) + f(x_{j+1})]h - \frac{1}{12} f''(\xi_j) h^3 \tag{17}$$

for some $\xi_j \in (x_j, x_{j+1})$. Therefore, we get

$$\int_a^b f(x)\,dx = h\left[ \frac{1}{2} f(x_0) + f(x_1) + \ldots + f(x_{N-1}) + \frac{1}{2} f(x_N) \right] - \frac{1}{12} h^3 \sum_{j=0}^{N-1} f''(\xi_j).$$

The first term on the right-hand side is called the Composite Trapezoidal Rule quadrature (CTR):

$$T_h[f] := h\left[ \frac{1}{2} f(x_0) + f(x_1) + \ldots + f(x_{N-1}) + \frac{1}{2} f(x_N) \right]. \tag{18}$$

The error term is

$$E_h[f] = -\frac{1}{12} h^3 \sum_{j=0}^{N-1} f''(\xi_j) = -\frac{1}{12}(b - a) h^2 \left[ \frac{1}{N} \sum_{j=0}^{N-1} f''(\xi_j) \right], \tag{19}$$

where we have used that $h = (b - a)/N$. The term in brackets is a mean value of $f''$ (it is easy to prove that it lies between the maximum and the minimum of $f''$). Since $f''$ is assumed continuous ($f \in C^2[a, b]$), by the Intermediate Value Theorem there is a point $\eta \in (a, b)$ such that $f''(\eta)$ is equal to the quantity in brackets, so we obtain

$$E_h[f] = -\frac{1}{12}(b - a) h^2 f''(\eta) \tag{20}$$

for some $\eta \in (a, b)$.
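A vectorized sketch of (18), used again in the examples that follow (the function name and the NumPy usage are mine):

    import numpy as np

    def composite_trapezoid(f, a, b, N):
        # Composite Trapezoidal Rule (18) on N subintervals of width h = (b - a)/N:
        # T_h[f] = h [ f(x_0)/2 + f(x_1) + ... + f(x_{N-1}) + f(x_N)/2 ]
        h = (b - a) / N
        fx = f(a + h * np.arange(N + 1))
        return h * (0.5 * fx[0] + fx[1:-1].sum() + 0.5 * fx[-1])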
2.3 Convergence and Rate of Convergence

We do not know what the point $\eta$ in (20) is. If we knew, the error could be evaluated and we would know the integral exactly, at least in principle, because

$$I[f] = T_h[f] + E_h[f]. \tag{21}$$

But (20) gives us two important properties of the approximation method in question. First, (20) tells us that $E_h[f] \to 0$ as $h \to 0$. That is, the quadrature rule $T_h[f]$ converges to the exact value of the integral as $h \to 0$ (neglecting round-off errors introduced by finite-precision number representation and computer arithmetic). Recall $h = (b - a)/N$, so as we increase $N$ our approximation to the integral gets better and better. Second, (20) tells us how fast the approximation converges, namely quadratically in $h$. This is its rate of convergence. If we double $N$ (or equivalently halve $h$), the error decreases by a factor of 4. We also say that the error is of order $h^2$ and write $E_h[f] = O(h^2)$. The big-'O' notation is used frequently in Numerical Analysis. We say that $g(h)$ is of order $h^\alpha$, and write $g(h) = O(h^\alpha)$, if there are a constant $C$ and an $h_0$ such that $|g(h)| \le C h^\alpha$ for $h \le h_0$, i.e. for sufficiently small $h$.

Example 1. Let's check the Trapezoidal Rule approximation for an integral we can compute exactly. Take $f(x) = e^x$ in $[0, 1]$. The exact value of the integral is of course $e - 1$. Observe in Table 1 how the error $|I[e^x] - T_{1/N}[e^x]|$ decreases by a factor of (approximately) 4 as $N$ is doubled, in accordance with (20).

Table 1: Composite Trapezoidal Rule for $f(x) = e^x$ in $[0, 1]$.

    N    | T_{1/N}[e^x]      | |I[e^x] - T_{1/N}[e^x]|
    16   | 1.718841128579994 | 5.593001209489579e-04
    32   | 1.718421660316327 | 1.398318572816137e-04
    64   | 1.718316786850094 | 3.495839104861176e-05
    128  | 1.718290568083478 | 8.739624432374526e-06

2.4 Error Bounds and Computable Error Estimates

We can get an upper bound for the error using (20) and the fact that $f''$ is bounded in $[a, b]$, i.e. $|f''(x)| \le M_2$ for all $x \in [a, b]$ for some constant $M_2$. Then

$$|E_h[f]| \le \frac{1}{12}(b - a) h^2 M_2. \tag{22}$$

However, this error bound does not in general provide an accurate estimate of the error; it could grossly overestimate it. This can be seen from (19). As $N \to \infty$ the term in brackets converges to a mean value of $f''$, i.e.

$$\frac{1}{N} \sum_{j=0}^{N-1} f''(\xi_j) \longrightarrow \frac{1}{b - a} \int_a^b f''(x)\,dx \tag{23}$$

as $N \to \infty$, and this mean value could be significantly smaller than $\max |f''|$. Take for example $f(x) = e^{100x}$ on $[0, 1]$. Then $\max |f''| = 10000\,e^{100}$, whereas the mean value (23) is equal to $100(e^{100} - 1)$, so the error bound (22) overestimates the actual error by two orders of magnitude. Thus, (22) is of little practical use.

Equations (19) and (23) suggest that asymptotically, that is for sufficiently small $h$,

$$E_h[f] = C_2 h^2 + R(h), \tag{24}$$

where $C_2$ is a constant and $R(h)$ goes to zero faster than $h^2$ as $h \to 0$, i.e.

$$\lim_{h \to 0} \frac{R(h)}{h^2} = 0. \tag{25}$$

We say that $R(h) = o(h^2)$ (little-'o' of $h^2$; more generally, $g(h) = o(h^\alpha)$ if $\lim_{h \to 0} g(h)/h^\alpha = 0$). Thus, we have

$$I[f] = T_h[f] + C_2 h^2 + R(h), \tag{26}$$

and for sufficiently small $h$, $C_2 h^2$ is an approximation of the error. Let us now use (26) with $h/2$:

$$I[f] = T_{h/2}[f] + \frac{1}{4} C_2 h^2 + R(h/2). \tag{27}$$

Subtracting (27) from (26) we get

$$C_2 h^2 = \frac{4}{3}\left( T_{h/2}[f] - T_h[f] \right) + \frac{4}{3}\left( R(h/2) - R(h) \right). \tag{28}$$

The last term on the right-hand side is $o(h^2)$. Hence, for $h$ sufficiently small, we have

$$C_2 h^2 \approx \frac{4}{3}\left( T_{h/2}[f] - T_h[f] \right), \tag{29}$$

and this provides a good, computable estimate for the error, i.e.

$$E_h[f] \approx \frac{4}{3}\left( T_{h/2}[f] - T_h[f] \right). \tag{30}$$

The key here is that $h$ has to be sufficiently small for the asymptotic approximation (29) to be valid. We can check this by working backwards. If $h$ is sufficiently small, then evaluating (29) at $h/2$ we get

$$\frac{4}{3}\left( T_{h/4}[f] - T_{h/2}[f] \right) \approx C_2 \left( \frac{h}{2} \right)^2, \tag{31}$$

and consequently the ratio

$$q(h) = \frac{T_{h/2}[f] - T_h[f]}{T_{h/4}[f] - T_{h/2}[f]} \tag{32}$$

should be approximately 4. Thus, $q(h)$ offers a reliable, computable indicator of whether or not $h$ is sufficiently small for (30) to be an accurate estimate of the error.
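Table 1 and the indicator (32) are easy to reproduce with the composite_trapezoid sketch from Section 2.2 above (the checks below are mine, not part of the notes):

    import numpy as np

    # Assumes composite_trapezoid from the earlier sketch is in scope.
    exact = np.e - 1.0
    for N in (16, 32, 64, 128):
        T = composite_trapezoid(np.exp, 0.0, 1.0, N)
        print(N, T, abs(exact - T))       # errors drop by ~4 per doubling of N

    def q(f, a, b, N):
        # Ratio (32): a value close to 4 indicates h is small enough
        # for the computable error estimate (30) to be trusted.
        Th  = composite_trapezoid(f, a, b, N)
        Th2 = composite_trapezoid(f, a, b, 2 * N)
        Th4 = composite_trapezoid(f, a, b, 4 * N)
        return (Th2 - Th) / (Th4 - Th2)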
2.5 Building More Accurate Approximations: Error Correction and Richardson Extrapolation

If $h$ is sufficiently small, as explained above, we can use (30) to improve the accuracy of $T_h[f]$ with the error-correction approximation

$$I[f] \approx T_h[f] + \frac{4}{3}\left( T_{h/2}[f] - T_h[f] \right).$$

We can view this error-correction procedure as a way to eliminate the leading-order (in $h$) contribution to the error. Define (the symbol $:=$ means "equal by definition")

$$S_h[f] := T_h[f] + \frac{4}{3}\left( T_{h/2}[f] - T_h[f] \right) = \frac{4 T_{h/2}[f] - T_h[f]}{3}. \tag{33}$$

Multiplying (27) by 4, subtracting (26) from the result, and dividing by 3, we get

$$I[f] = S_h[f] + \frac{4 R(h/2) - R(h)}{3}. \tag{34}$$

Note that $S_h[f]$ is exactly the first term on the right-hand side of (34), and that the last term converges to zero faster than $h^2$. This very useful and general process, in which the leading-order component of the asymptotic form of the error is eliminated by a combination of two computations performed with two different values of $h$, is called Richardson extrapolation.

Example 2. Consider again $f(x) = e^x$ in $[0, 1]$. With $h = 1/16$ we get

$$q\left( \frac{1}{16} \right) = \frac{T_{1/32} - T_{1/16}}{T_{1/64} - T_{1/32}} = 3.9998, \tag{35}$$

and the improved approximation is

$$S_{1/16}[e^x] = T_{1/16}[e^x] + \frac{4}{3}\left( T_{1/32}[e^x] - T_{1/16}[e^x] \right) = 1.718281837561771, \tag{36}$$

which gives us nearly 8 digits of accuracy (the actual error is $|E_{1/16}[e^x]| \approx 9.1 \times 10^{-9}$). Computing $S_{1/32}$ we see that the error $|E_{1/32}[e^x]| \approx 5.7 \times 10^{-10}$: it decreased by approximately a factor of 16. This would correspond to a fourth-order rate of convergence. We will see in Chapter ?? that indeed this is the case.

It appears that $S_h[f]$ gives us superior accuracy to that of $T_h[f]$, but at roughly twice the computational cost. If we group together the common terms in $T_h[f]$ and $T_{h/2}[f]$, we can compute $S_h[f]$ at about the same computational cost as $T_{h/2}[f]$:

$$4 T_{h/2}[f] - T_h[f] = 4 \cdot \frac{h}{2}\left[ \frac{1}{2} f(a) + \sum_{j=1}^{2N-1} f(a + jh/2) + \frac{1}{2} f(b) \right] - h\left[ \frac{1}{2} f(a) + \sum_{j=1}^{N-1} f(a + jh) + \frac{1}{2} f(b) \right]$$
$$= \frac{h}{2}\left[ f(a) + f(b) + 2 \sum_{k=1}^{N-1} f(a + kh) + 4 \sum_{k=1}^{N} f(a + (2k - 1)h/2) \right].$$

Therefore,

$$S_h[f] = \frac{h}{6}\left[ f(a) + 2 \sum_{k=1}^{N-1} f(a + kh) + 4 \sum_{k=1}^{N} f(a + (2k - 1)h/2) + f(b) \right]. \tag{37}$$

The resulting quadrature formula $S_h[f]$ is known as the Composite Simpson's Rule and, as we will see in Chapter ??, can also be derived by approximating the integrand by quadratic polynomials. Thus, based on cost and accuracy, the Composite Simpson's Rule would be preferable to the Composite Trapezoidal Rule, with one important exception: smooth periodic integrands integrated over their period.

Example 3. Consider the integral

$$I[1/(2 + \sin x)] = \int_0^{2\pi} \frac{dx}{2 + \sin x}. \tag{38}$$

Using complex-variables techniques (residues), the exact integral can be computed: $I[1/(2 + \sin x)] = 2\pi/\sqrt{3}$. Note that the integrand is smooth (it has an infinite number of continuous derivatives) and periodic in $[0, 2\pi]$. If we use the Composite Trapezoidal Rule to approximate this integral, we obtain the results shown in Table 2.

Table 2: Composite Trapezoidal Rule for $f(x) = 1/(2 + \sin x)$ in $[0, 2\pi]$.

    N   | T_{2pi/N}[1/(2 + sin x)] | |I - T_{2pi/N}|
    8   | 3.627791516645356        | 1.927881769203665e-04
    16  | 3.627598733591013        | 5.122577029226250e-09
    32  | 3.627598728468435        | 4.440892098500626e-16

The approximations converge amazingly fast: with $N = 32$ we have already reached machine precision (with double precision we get about 16 digits).
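In code, the extrapolated rule (33) needs only two trapezoidal evaluations. A sketch reusing composite_trapezoid from Section 2.2 (the function name is mine):

    import numpy as np

    def simpson_via_extrapolation(f, a, b, N):
        # S_h[f] = (4 T_{h/2}[f] - T_h[f]) / 3, per (33); equivalent to the
        # Composite Simpson's Rule (37) on the same nodes.
        Th  = composite_trapezoid(f, a, b, N)
        Th2 = composite_trapezoid(f, a, b, 2 * N)
        return (4.0 * Th2 - Th) / 3.0

    # Example 2 revisited: S_{1/16}[e^x] should agree with e - 1 to ~8 digits.
    print(abs(simpson_via_extrapolation(np.exp, 0.0, 1.0, 16) - (np.e - 1.0)))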
3 Super-Algebraic Accuracy of the CTR for Smooth Periodic Integrands

Integrals of periodic integrands appear in many applications, most notably in Fourier analysis. Consider the definite integral

$$I[f] = \int_0^{2\pi} f(x)\,dx,$$

where the integrand $f$ is smooth and periodic in $[0, 2\pi]$, i.e. $f$ has an infinite number of continuous derivatives in $[0, 2\pi]$ ($f \in C^\infty[0, 2\pi]$) and $f(x + 2\pi) = f(x)$ for all $x$. Due to periodicity we can work in any interval of length $2\pi$, and if the function has a different period, a simple change of variables reduces the problem to one in $[0, 2\pi]$.

Consider the equally spaced points in $[0, 2\pi]$, $x_j = jh$ for $j = 0, 1, \ldots, N$, with $h = 2\pi/N$. Because $f$ is periodic, $f(x_0 = 0) = f(x_N = 2\pi)$, and the CTR becomes

$$T_h[f] = h\left[ f(x_0) + f(x_1) + \ldots + f(x_{N-1}) \right] = h \sum_{j=0}^{N-1} f(x_j). \tag{39}$$

Being smooth and periodic in $[0, 2\pi]$, $f$ has a uniformly convergent Fourier series:

$$f(x) = \sum_{k=-\infty}^{\infty} c_k e^{ikx}, \tag{40}$$

where

$$c_k = \frac{1}{2\pi} \int_0^{2\pi} f(x) e^{-ikx}\,dx. \tag{41}$$

Here we are assuming that $f$ is real-valued, so that the complex Fourier coefficients satisfy $c_{-k} = \bar{c}_k$, where $\bar{c}_k$ is the complex conjugate of $c_k$ (recall $e^{ikx} = \cos(kx) + i \sin(kx)$, $i^2 = -1$, and if $c_k = a_k + i b_k$ with $a_k, b_k \in \mathbb{R}$, then $\bar{c}_k = a_k - i b_k$). Using (40) in (39) we get

$$T_h[f] = h \sum_{j=0}^{N-1} \left( \sum_{k=-\infty}^{\infty} c_k e^{ikx_j} \right). \tag{42}$$

Justified by the uniform convergence of the series, we can exchange the finite and the infinite sums to get

$$T_h[f] = \frac{2\pi}{N} \sum_{k=-\infty}^{\infty} c_k \sum_{j=0}^{N-1} e^{ik\frac{2\pi}{N}j}. \tag{43}$$

But

$$\sum_{j=0}^{N-1} e^{ik\frac{2\pi}{N}j} = \sum_{j=0}^{N-1} \left( e^{ik\frac{2\pi}{N}} \right)^j. \tag{44}$$

Note that $e^{ik\frac{2\pi}{N}} = 1$ precisely when $k$ is an integer multiple of $N$, i.e. $k = lN$, $l \in \mathbb{Z}$, and if so

$$\sum_{j=0}^{N-1} \left( e^{ik\frac{2\pi}{N}} \right)^j = N \qquad \text{for } k = lN. \tag{45}$$

Otherwise, if $k \ne lN$, the geometric sum gives

$$\sum_{j=0}^{N-1} \left( e^{ik\frac{2\pi}{N}} \right)^j = \frac{\left( e^{ik\frac{2\pi}{N}} \right)^N - 1}{e^{ik\frac{2\pi}{N}} - 1} = 0 \qquad \text{for } k \ne lN. \tag{46}$$

Using (45) and (46) we thus get

$$T_h[f] = 2\pi \sum_{l=-\infty}^{\infty} c_{lN}. \tag{47}$$

On the other hand,

$$c_0 = \frac{1}{2\pi} \int_0^{2\pi} f(x)\,dx = \frac{1}{2\pi} I[f]. \tag{48}$$

Therefore,

$$T_h[f] = I[f] + 2\pi \left[ c_N + c_{-N} + c_{2N} + c_{-2N} + \ldots \right], \tag{49}$$

that is,

$$|T_h[f] - I[f]| \le 2\pi \left[ |c_N| + |c_{-N}| + |c_{2N}| + |c_{-2N}| + \ldots \right]. \tag{50}$$

So now the relevant question is how fast the Fourier coefficients $c_{lN}$ of $f$ decay with $N$. The answer is tied to the smoothness of $f$. Integrating by parts in the formula (41) for the Fourier coefficients of $f$, we have, for $k \ne 0$,

$$c_k = \frac{1}{2\pi} \frac{1}{ik} \left[ \int_0^{2\pi} f'(x) e^{-ikx}\,dx - f(x) e^{-ikx} \Big|_0^{2\pi} \right], \tag{51}$$

and the last term vanishes due to the periodicity of $f(x) e^{-ikx}$. Hence,

$$c_k = \frac{1}{2\pi} \frac{1}{ik} \int_0^{2\pi} f'(x) e^{-ikx}\,dx, \qquad k \ne 0. \tag{52}$$

Integrating by parts $m$ times we obtain

$$c_k = \frac{1}{2\pi} \left( \frac{1}{ik} \right)^m \int_0^{2\pi} f^{(m)}(x) e^{-ikx}\,dx, \qquad k \ne 0, \tag{53}$$

where $f^{(m)}$ is the $m$-th derivative of $f$. Therefore, for $f \in C^m[0, 2\pi]$ and periodic,

$$|c_k| \le \frac{A_m}{|k|^m}, \tag{54}$$

where $A_m$ is a constant (depending only on $m$). Using this in (50) we get

$$|T_h[f] - I[f]| \le 2\pi \cdot 2 A_m \left[ \frac{1}{N^m} + \frac{1}{(2N)^m} + \frac{1}{(3N)^m} + \ldots \right] = \frac{4\pi A_m}{N^m} \left[ 1 + \frac{1}{2^m} + \frac{1}{3^m} + \ldots \right], \tag{55}$$

and so for $m > 1$ we can conclude that

$$|T_h[f] - I[f]| \le \frac{C_m}{N^m}. \tag{56}$$

Thus we have proved that for $f \in C^\infty[0, 2\pi]$ and periodic, the Composite Trapezoidal Rule at equally spaced points converges to the exact integral at a rate faster than any power of $1/N$ (or $h$)! This is called super-algebraic convergence.
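The super-algebraic convergence is easy to observe numerically. A sketch reproducing the behavior of Table 2 (the script and its names are mine):

    import numpy as np

    def ctr_periodic(f, N):
        # CTR (39) for a 2*pi-periodic integrand: h * sum_{j=0}^{N-1} f(j h)
        h = 2.0 * np.pi / N
        return h * f(h * np.arange(N)).sum()

    f = lambda x: 1.0 / (2.0 + np.sin(x))
    exact = 2.0 * np.pi / np.sqrt(3.0)
    for N in (8, 16, 32):
        print(N, abs(ctr_periodic(f, N) - exact))   # errors as in Table 2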
