Question:

(Q1) Let A be an $n \times n$ matrix with real entries and distinct eigenvalues. There are a few ways to measure the condition of A. In general, the condition number is given by $\kappa(A) = \|A\|\,\|A^{-1}\|$. It indicates how much the output changes if we slightly change some entries of A. The output could be the solution of a linear system, an eigenvalue problem, or some general problem involving the matrix A. A large condition number means that a slight change in the entries of A can cause an unreasonably large change in the output.

1. In the $L_2$ norm, the condition number is
$$\kappa(A) = \frac{\max \sqrt{\lambda(AA^T)}}{\min \sqrt{\lambda(AA^T)}}, \qquad (1)$$
$$\kappa(A) = \frac{\max \sqrt{\lambda(A^T A)}}{\min \sqrt{\lambda(A^T A)}}, \qquad (2)$$
$$\kappa(A) = \frac{\max \sigma(A)}{\min \sigma(A)} \geq 1, \qquad (3)$$
where $\lambda(C)$ denotes the set of eigenvalues of the matrix C (think about why $\lambda(AA^T) = \lambda(A^T A)$; this won't be graded) and $\sigma(A)$ is the set of singular values of A, given by $\sigma(A) = \sqrt{\lambda(AA^T)} = \sqrt{\lambda(A^T A)}$. A bad condition number usually makes solving linear systems numerically challenging.

2. Another way of studying matrices is to consider the condition number of each eigenvalue. Recall that $\lambda$ is an eigenvalue of A associated with a non-zero eigenvector v iff $Av = \lambda v$. In a general setting, we may also consider the following eigenvalue problem: $u^T A = \lambda u^T$, $u \neq 0$. Then v is a right eigenvector and $u^T$ is a left eigenvector. The sensitivity or condition of an eigenvalue then depends on the relation between its left and right eigenvectors $u^T$ and v. The condition number of an eigenvalue is
$$\kappa(\lambda) = \frac{\|u\|\,\|v\|}{|u^T v|} = \frac{1}{\cos \theta(u^T, v)} \geq 1,$$
where $\theta(u^T, v)$ is the angle between u and v. You get one value per eigenvalue. Note that if $A^T A = A A^T$ (A is normal), then $\kappa(\lambda) = 1$ for every eigenvalue. This means that for this type of matrix, eigenvalue problems are always well conditioned, no matter the value of $\kappa(A)$.
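As a concrete illustration of $\kappa(\lambda)$ (not required by the assignment), the following MATLAB sketch computes the eigenvalue condition numbers of a small non-normal matrix using the left and right eigenvectors returned by eig; the example matrix is an arbitrary choice.

A = [1 100; 0 2];                        % illustrative non-normal matrix
[V, D, W] = eig(A);                      % V: right eigenvectors, W: left eigenvectors (W'*A = D*W')
lam = diag(D);
for j = 1:numel(lam)
    v = V(:, j);  u = W(:, j);
    kappa_j = norm(u)*norm(v)/abs(u'*v); % kappa(lambda_j) = ||u|| ||v|| / |u^T v|
    fprintf('lambda = %g has kappa(lambda) = %g\n', lam(j), kappa_j)
end
% for a normal matrix, e.g. A = [2 1; 1 2], every kappa(lambda) equals 1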

For a few matrices, your task is to:

- compute $AA^T$, $A^T A$, $\kappa(A)$ and $\kappa(\lambda)$ (return only the min and max values and indicate the corresponding eigenvalue number; see the code regarding sorting);
- compute $\kappa(A)$ in 3 different ways: perform computation i using Equation (i), i = 1, 2, 3. You may use the eig command to obtain the eigenvectors and eigenvalues. Matlab implements Equation (3) in cond. Do all the computations return the same values? If not, explain why (write your reasons overall, below the table);
- also check whether A is normal. If so, do the values $\kappa(\lambda)$ agree with what is expected?

Put your observations in a table with the following structure:

A | $\kappa(A)$ [3 values from different methods] | $\kappa(\lambda)$ [vector, 1 per eigenvalue] | $AA^T = A^T A$ ? | discuss
... | ... | ... | ... | ...

In the discussion column, indicate whether the matrix is better suited for linear systems or eigenvalue problems or neither (you can also check by inverting the matrix) and explain. In case there is a discrepancy in the 3 methods you used to compute $\kappa(A)$, use the value returned by cond. The matrices are:

1. A1 = diag(10.^(-3:3)).
2. Vandermonde matrices V, which are known for being ill-conditioned,
$$V_n = \begin{pmatrix} 1 & x_0 & \cdots & x_0^n \\ 1 & x_1 & \cdots & x_1^n \\ \vdots & \vdots & & \vdots \\ 1 & x_m & \cdots & x_m^n \end{pmatrix},$$
where the $x_i$ are data points through which we wish to fit a polynomial of degree n (see the next question). For simplicity, we take m = n. Using the vander command in Matlab, construct the Vandermonde matrix for Xk = (1 : 0.5 : k), k = 3, 6, 9. Look up the documentation to convert what Matlab returns to this form.
3. Matlab has many sample matrices that can be accessed via the gallery command. Try "chebspec", n = 3, and "grcar", n = 35.

(Q2) Given data points or nodes $(x_0, x_1, \dots, x_n)$, we want to fit a degree-n polynomial of the form
$$p_n(x) = \sum_{i=0}^{n} a_i x^i,$$
where the goal is to find the vector of coefficients $a = (a_0, a_1, \dots, a_n)$. For nodes Xdata = (1, 10.^(-6:-1), 0.2:0.1:0.9) with p(Xdata) = sin(Xdata), set this up as a matrix system and solve it for $\{a_i\}$ using

1. the standard inv in Matlab,
2. LU decomposition.

Let $p_{\mathrm{inv}}$ be the polynomial generated by using inv and $p_{\mathrm{lu}}$ the polynomial generated by using lu. Report the relative error in the inf norm,
$$\frac{\|p_k(X) - p_n(X)\|_\infty}{\|p_n(X)\|_\infty}, \qquad k = \mathrm{inv}, \mathrm{lu},$$
where $p_n(X)$ is the true solution, over the interpolated values (see the extra points X in the code).
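A minimal sketch of one way to set up and solve the (Q2) system in MATLAB is shown below. It assumes the ascending-power convention used in the demo code further down (fliplr(vander(...))); the variable names mirror that code but are otherwise illustrative.

nodes = [1 10.^(-6:-1) 0.2:0.1:0.9];   % the nodes Xdata
rhs   = sin(nodes)';                   % right-hand side p(Xdata) = sin(Xdata)
V     = fliplr(vander(nodes));         % columns 1, x, x.^2, ..., x.^n (ascending powers)
a1    = inv(V)*rhs;                    % coefficients via inv
[L, U, P] = lu(V);                     % LU decomposition with partial pivoting: P*V = L*U
a     = U\(L\(P*rhs));                 % coefficients via lu (forward and back substitution)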

(Q3) In the previous question, the data points or nodes were chosen somewhat arbitrarily. It is therefore natural to wonder whether there is a certain way to place the nodes so that we improve the conditioning of the Vandermonde matrix $V_n$. We will try 3 different types of node placements, vary the matrix size, and compare the results with the expected asymptotic condition number, i.e. the condition number as $n \to \infty$.

1. Harmonic nodes: $x_i = 1/i$, $i = 1, \dots, n$. The expected rate is at least $R := \lim_{n \to \infty} \kappa(V_n) > n^{n+1}$.
2. Equidistant nodes on [0, 1]: $x_i = (i-1)/(n-1)$, $i = 1, \dots, n$. Here $R \approx \frac{\sqrt{2}}{4}\, 8^n$.
3. Chebyshev nodes on (-1, 1): $x_i = \cos\!\left(\frac{(2i-1)\pi}{2n}\right)$, $i = 1, \dots, n$. Here $R \approx \frac{3^{3/4}}{4}\,(1+\sqrt{2})^n$.

You must report the first value of n for which the numerically computed condition number satisfies the expected rate, or for which numerical errors make the computation of the condition number unreliable. Support your arguments with plots. In the provided code file, part 1 is shown; modify it for the other parts (possible node and rate functions for parts 2 and 3 are sketched right after this question). Finally, comment on which of the 3 node placements you would use in general. Is this a fair comparison?
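The demo code below implements part 1 (harmonic nodes). One possible sketch of node and rate functions for parts 2 and 3, with the constants taken from the rates quoted above, is the following; these can be substituted for nodes1 and rate1 in the Conds/Rates lines of the demo code (use N starting from 2, since the equidistant formula is undefined for n = 1).

nodes2 = @(n) (0:n-1)/(n-1);                  % equidistant nodes on [0,1]
rate2  = @(n) sqrt(2)/4 * 8.^n;               % expected asymptotic rate
nodes3 = @(n) cos((2*(1:n)-1)*pi/(2*n));      % Chebyshev nodes on (-1,1)
rate3  = @(n) 3^(3/4)/4 * (1 + sqrt(2)).^n;   % expected asymptotic rate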

For your own amusement, in (Q2) take one of these matrices with some bad $\kappa(\lambda)$. Change an element slightly, relative to its size, and see whether that particular eigenvalue has changed or not. If some $\kappa(\lambda)$ has a better value, did this change in your matrix affect it?
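One possible way to run this experiment in MATLAB is sketched below; the choice of matrix, entry, and perturbation size is arbitrary, and pairing sorted eigenvalues is only a rough heuristic.

A  = gallery('grcar', 35);             % a matrix from (Q1) with some badly conditioned eigenvalues
Ap = A;
Ap(1, 2) = Ap(1, 2) + 1e-8*norm(A);    % perturb one entry slightly, relative to the size of A
shift = abs(sort(eig(A)) - sort(eig(Ap)));   % how far each (sorted) eigenvalue moved
[min(shift) max(shift)]                % compare with the kappa(lambda) values from (Q1)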

Demo code:

close all; clear

%% Q1 ----------

A = diag(10.^(-3:3)); imagesc(A); colorbar % sub in other matrices

% method 1 to compute kappa(A)

B = A*A'; eb = eig(B); s = sort(sqrt(abs(eb)));

kappa_A1 = max(s)/min(s);

% method 2 to compute kappa(A): by eigenvalues of A'A, complete this part
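% one possible completion (sketch): same idea as method 1, but with A'*A
C = A'*A; ec = eig(C); s2 = sort(sqrt(abs(ec)));
kappa_A2 = max(s2)/min(s2);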

% method 3 to compute kappa(A): by cond, complete this part
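% one possible completion (sketch): cond defaults to the 2-norm, i.e. Equation (3)
kappa_A3 = cond(A);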

% compute eigenvalues and vectors of A, in v,e,u.
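% one possible completion (sketch): the third output of eig holds the left eigenvectors
[v, e, u] = eig(A);   % right eigenvectors v, eigenvalue matrix e, left eigenvectors u (u'*A = e*u')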

% sorting

[e, ind] = sort(diag(e)); v = v(:,ind); u = u(:,ind);

% use this function to compute kappa(lambda), v is the right evector and u

% is the left. look up the documentation for eig

kappa_lambda = @(v,u) (abs(vecnorm(v,2,1).*vecnorm(u,2,1))./abs(sum(transpose(u').*v)));

kappa_lambda(v,u)

%% Q2 ---------

nodes = [1 10.^(-6:-1) 0.2:0.1:0.9];

rhs = sin(nodes)';

% create the Vandermonde matrix for these nodes

% compute the coefficients using lu and inv. compare them and the

% polynomials they create

% store coefficients from "inv" into "a1"

% store coefficients from "lu" into "a"

% plot the polynomials and data ------

X = sort(unique([nodes 0.1:1e-2:1])); % extra points

p_inv = @(x) sum(a1'.*x.^(0:(length(a1)-1)));

p_inv_values = arrayfun(@(x) p_inv(x), X);

p_lu = @(x) sum(a'.*x.^(0:(length(a)-1)));

p_lu_values = arrayfun(@(x) p_lu(x), X);

figure()

semilogy(X,p_inv_values,'--*'); hold on

semilogy(X,p_lu_values,'--*',LineWidth=2.5)

semilogy(X,sin(X),'--*')

semilogy(nodes,rhs,'k*',LineWidth=3)

legend("inv","lu","true function","initial data")

%% Q3 Vandermonde node placement and expected conditioning --------

N = 1:5; % you may want to increase the end value

nodes1 = @(n) 1./(1:n); % this is for harmonic nodes, change / add extra functions for the others

rate1 = @(n) n.^(n+1); % this is for harmonic nodes, change / add extra functions for the others

% apply the node1 and rate1 function to varying values of N

Conds = arrayfun(@(x) cond(fliplr(vander(nodes1(x)))),N);

Rates = rate1(N);

% plot

clf

semilogy(N,Conds,'--*')

hold on

semilogy(N,Rates,'--*')

legend(["Numerical","Expected"])
