Questions and Answers of Applied Linear Algebra
Answer Exercise 3.3.1 for (a), (b), (c). Data From Exercise 3.3.1: Compute the 1, 2, 3, and ∞ norms of the given vectors. Verify the triangle inequality in each case.
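As a quick numerical sketch (not part of the original exercise — the vectors printed in Exercise 3.3.1 did not survive extraction, so the vectors below are stand-ins), NumPy can compute all four norms and check the triangle inequality:

```python
import numpy as np

# Stand-in vectors; the exercise's actual vectors are not recoverable here.
u = np.array([2.0, -1.0])
v = np.array([-4.0, 3.0])

for p in (1, 2, 3, np.inf):
    nu = np.linalg.norm(u, ord=p)
    nv = np.linalg.norm(v, ord=p)
    nuv = np.linalg.norm(u + v, ord=p)
    # Triangle inequality: ||u + v||_p <= ||u||_p + ||v||_p
    assert nuv <= nu + nv + 1e-12
    print(f"p = {p}: ||u|| = {nu:.4f}, ||v|| = {nv:.4f}, ||u + v|| = {nuv:.4f}")
```

The same loop works for any vectors; `np.linalg.norm` accepts an arbitrary order `p ≥ 1` for vectors.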
Let L: U → V be a linear function between inner product spaces. Prove that u ∈ Rn solves the inhomogeneous linear system L[u] = f if and only if Explain why Exercise 3.1.11 is a special case
Let W, Z be complementary subspaces of a vector space V, as in Exercise 2.2.24. Let V/W denote the quotient vector space, as defined in Exercise 2.2.29. Show that the map L:Z → V/W that maps L[z] =
Let L: V → W be a linear map. (a) Suppose V,W are finite-dimensional vector spaces, and let A be a matrix representative of L. Explain why we can identify coker A ≃ W/img A and coimg A =
Answer Exercise 9.4.1 when (a), (b), (c). Data From Exercise 9.4.1:
Let A be a square matrix. Prove that its maximal eigenvalue is smaller than its maximal singular value: max |λi| ≤ max σi.
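A quick numerical check of this inequality (an illustration on a random matrix, not part of the original exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))  # an arbitrary square matrix

spectral_radius = max(abs(np.linalg.eigvals(A)))      # max |lambda_i|
sigma_max = np.linalg.svd(A, compute_uv=False).max()  # max sigma_i

# The spectral radius never exceeds the largest singular value.
assert spectral_radius <= sigma_max + 1e-12
print(spectral_radius, sigma_max)
```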
True or false: If w is a generalized eigenvector of index k of A, then w is an ordinary eigenvector of Ak.
Test the noise removal features of the Haar wavelets by adding random noise to one of the functions in Exercises 9.7.1 and 9.7.2, computing the wavelet series, and then setting the high
Prove formula (1.55): (AB)ᵀ = BᵀAᵀ. (1.55)
Find the mean, the variance, and the standard deviation of the data sets {f(x) | x = i/10, i = −10, . . . , 10} associated with the following functions f(x): (a) 3x + 1, (b) x², (c) x³ −
Which of the indicated maps F(x, y) define isometries of the Euclidean plane?(a)(b)(c)(d)(e)
Which of the following sets of vectors span conjugated subspaces of C3?(a)(b)(c)(d)(e)
Find all solutions u = f(r) of the three-dimensional Laplace equation that depend only on the radial coordinate. Do these solutions form a vector space? If so, what is its dimension?
Find all invariant subspaces of the following matrices:(a)(b)(c)(d)(e)(f)
Describe the image of the triangle with vertices (−1, 0), (1, 0), (0, 2) under the indicated affine transformation.
Given that x² + y² solves the Poisson equation ∆u = 4, while x⁴ + y⁴ solves another Poisson equation, write down a solution to the indicated equation.
Find the minimum distance between the point (1, 0, 0)ᵀ and the plane x + y − z = 0 when distance is measured in (a) The Euclidean norm; (b) The weighted norm ‖w‖ = (c) The norm based on
Find all complex invariant subspaces and all real invariant subspaces of(a)(b)(c)(d)
Minimize the function p(u) = ½ uᵀK u − uᵀf for u ∈ R³, with the indicated matrix K and vector f.
True or false: If F[x] is an affine transformation on Rn, then the equation F[x] = c defines a linear system.
(a) Let F:Rn → Rn be an affine transformation. Let L1, L2 ⊂ Rn be two parallel lines. Prove that F[L1] and F[L2] are also parallel lines.(b) Is the converse valid: if F: Rn → Rn maps
Prove that (a) (L + M)∗ = L∗ + M∗, (b) (cL)∗ = cL∗ for c ∈ R, (c) (L∗)∗ = L, (d) (L⁻¹)∗ = (L∗)⁻¹.
True or false: The identity transformation is self-adjoint for an arbitrary inner product on the underlying vector space.
Find all (real) solutions to the two-dimensional Laplace equation of the form u = log p(x, y), where p(x, y) is a quadratic polynomial. Do these solutions form a vector space? If so, what is its
Find all solutions u = f(r) of the two-dimensional Laplace equation that depend only on the radial coordinate r = √(x² + y²). Do these solutions form a vector space? If so, what is its dimension?
Show that the free space Schrödinger equation is not a real linear system by constructing a complex quadratic polynomial solution and verifying that its real and imaginary parts are not solutions.
Answer Exercise 8.2.2a for the reflection matrix. Data From Exercise 8.2.2a: (a) Find the eigenvalues of the rotation matrix Rθ = ( cos θ, −sin θ ; sin θ, cos θ ). For what values of θ are the eigenvalues real?
Minimize ‖(2x − y, x + y)ᵀ‖² − 6x over all x, y, where ‖ · ‖ denotes the Euclidean norm on R2.
Consider the differential equation u′′ + xu = 2. Suppose you know solutions to the two boundary value problems u(0) = 1, u(1) = 0 and u(0) = 0, u(1) = 1. List all possible boundary value problems
Consider the differential equation xu′′ − (x + 1)u′ + u = 0. Suppose we know the solution to the initial value problem u(1) = 2, u′(1) = 1 is u(x) = x + 1, while the solution to the initial
Consider the differential equation 4xu′′ + 2u′ + u = 0. Given that cos √x solves the boundary value problem u(π²/4) = 0, u(π²) = −1, and sin √x
Prove that if L[u] = f is a real inhomogeneous linear system with real right-hand side f, and u = v + i w is a complex solution, then its real part v is a solution to the system, L[v] = f, while its
Find all invariant subspaces W ⊂ R3 of the following linear transformations L: R3 → R3:(a) The scaling transformation (2x, 3y, 4z)T;(b) The shear (x + 3y, y, z)T;
Find the mean, the variance, and the standard deviation of the following data sets:(a) 1.1, 1.3, 1.5, 1.55, 1.6, 1.9, 2, 2.1; (b) 2., .9, .7, 1.5, 2.6, .3, .8, 1.4; (c) −2.9,−.5,
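A minimal sketch for data set (a), assuming the population convention (divide by m); adjust `ddof` if the text divides by m − 1 instead:

```python
import numpy as np

# Data set (a) from the exercise.
data = np.array([1.1, 1.3, 1.5, 1.55, 1.6, 1.9, 2.0, 2.1])

mean = data.mean()
variance = data.var(ddof=0)   # ddof=0: divide by m (population convention)
std_dev = data.std(ddof=0)

print(mean, variance, std_dev)
```

Data sets (b) and (c) are handled by swapping in their values.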
Determine the variance and standard deviation of the normally distributed data points {e^(−x²/σ) | x = i/10, i = −10, . . . , 10} for σ = 1, 2, and 10.
Prove that if W is an invariant subspace for A, then it is also invariant for A². Is the converse to this statement valid?
Prove that σxy = x̅y̅ − x̅ · y̅, where x̅ and y̅ are the means of {xi} and {yi}, respectively, while x̅y̅ denotes the mean of the product variable {xi yi}.
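A numerical sanity check of this identity on arbitrary sample data (illustration only, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = 0.5 * x + rng.standard_normal(200)

# Covariance by definition vs. the identity: mean(x*y) - mean(x)*mean(y).
cov_def = np.mean((x - x.mean()) * (y - y.mean()))
cov_id = np.mean(x * y) - x.mean() * y.mean()

assert abs(cov_def - cov_id) < 1e-10
print(cov_def, cov_id)
```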
Show that one can compute the variance of a set of measurements without reference to the mean by the following formula: σ² = (1 / (2m²)) Σ_{i=1}^{m} Σ_{j=1}^{m} (xi − xj)².
Explain why the singular values of A are the same as the nonzero eigenvalues of the positive definite square root matrix S = √ATA , defined in Exercise 8.5.27.Data From Exercise 8.5.27(a) Prove
Write down(a) A 2 × 2 matrix that has 0 as one of its eigenvalues and (1, 2)T as a corresponding eigenvector;(b) A 3 × 3 matrix that has (1, 2, 3)T as an eigenvector for the eigenvalue
Let V ⊂ Rn be an invariant subspace for the n × n matrix A. Explain why every eigenvalue and eigenvector of the linear map obtained by restricting A to V are also eigenvalues and eigenvectors of A
For each of the following subsets S ⊂ R3,(i) Compute a fairly dense sample of data points zi ∈ S; (ii) Find the principal components of your data set, using μ = .95 in the criterion in
Deer in northern Minnesota reproduce according to the linear differential equation du/dt = .27u where t is measured in years. If the initial population is u(0) = 5,000 and the environment can sustain
True or false: If V and W are invariant subspaces for the matrix A, then so is(a) V + W;(b) V ∩ W;(c) V ∪ W;(d) V \W.
Using the Euclidean norm, compute a fairly dense sample of points on the unit sphere S = {x ∈ R3 | ‖x‖ = 1}. (a) Set μ = .95 in (8.78), and then find the principal components of your data
Let xi = (xi, yi), i = 1, . . . , m, be a set of data points in the plane. Suppose L∗ ⊂ R2 is the line that minimizes the sums of the squares of the distances from the data points to it, i.e.,
Find all invariant subspaces of the following matrices: (a)(b)(c)(d)(e)(f)
True or false:(a) Every diagonal matrix is complete.(b) Every upper triangular matrix is complete.
True or false: If V is an invariant subspace for the n × n matrix A and W is an invariant subspace for the n × n matrix B, then V + W is an invariant subspace for the matrix A + B.
Given a singular value decomposition A = PΣQᵀ, prove that if the columns of A have zero mean, then so do the columns of PΣ.
Prove that if A is a complete matrix, then so is c A + d I , where c, d are any scalars.
True or false: If W is an invariant subspace of the matrix A, then it is also an invariant subspace of AT.
Construct the 5 × 5 covariance matrix for the 5 data sets in Exercise 8.8.1 and find its principal variances and principal directions. What do you think is the dimension of the subspace the data
True or false: If W is an invariant subspace of the nonsingular matrix A, then it is also an invariant subspace of A−1.
Which 2 × 2 orthogonal matrices have a nontrivial real invariant subspace?
Determine the spectrum for the graphs in Exercise 2.6.3.Data From Exercise 2.6.3Write out the incidence matrix of the following digraphs.(a)(b)(c)(d)(e)(f)
True or false: If Q ≠ ± I is a 4 × 4 orthogonal matrix, then Q has no real invariant subspaces.
Show that the first principal direction q1 can be characterized as the direction of the line that minimizes the sums of the squares of its distances to the data points.
True or false: If A is complete, every generalized eigenvector is an ordinary eigenvector.
Show that if A is complete, then every similar matrix B = S−1AS is also complete.
(a) Let A be an n × n symmetric matrix, and let v be an eigenvector. Prove that its orthogonal complement under the dot product, namely, V⊥ = {w ∈ Rn | v · w = 0}, is an invariant
Given a solid body spinning around its center of mass, the eigenvectors of its positive definite inertia tensor prescribe three mutually orthogonal principal directions of rotation, while the
True or false: If v is a real eigenvector of a real matrix A, then a nonzero complex multiple w = cv for c ∈ C is a complex eigenvector of A.
The method for constructing a Jordan basis in Example 8.52 is simplified due to the fact that each eigenvalue admits only one Jordan block. On the other hand, the method used in the proof of Theorem
Determine the spectrum for the trees in Exercise 2.6.9. Can you make any conjectures about the nature of the spectrum of a graph that is a tree?Data From Exercise 2.6.9A connected graph is called a
True or false: Every generalized eigenvector belongs to a Jordan chain.
True or false: If w is a generalized eigenvector of A, then w is a generalized eigenvector of every power Aj, for j ∈ N.
(a) True or false: If λ1, v1 and λ2, v2 solve the eigenvalue equation (8.12) for a given matrix A, so does λ1 + λ2, v1 + v2.(b) Explain what this has to do with linearity. Av = λv. (8.12)
True or false: If U is an upper triangular matrix whose diagonal entries are all positive, then its singular values are the same as its diagonal entries.
Under the assumptions of Theorem 8.74, show that ‖A − B‖₂ is also minimized when B = Ak among all matrices with rank B ≤ k. Theorem 8.74 says that, in the latter cases, the
Let A be a nonsingular square matrix. Prove the following formulas for its condition number: (a) (b) κ(A) = max{ ‖Au‖ | ‖u‖ = 1 } / min{ ‖Au‖ | ‖u‖ = 1 }
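As a sketch of the ratio formula (the matrix below is an arbitrary example, not from the text): the SVD value σmax/σmin can be compared with a Monte Carlo estimate of max ‖Au‖ / min ‖Au‖ over unit vectors, which approaches κ(A) from below:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# Condition number via singular values: sigma_max / sigma_min.
sigmas = np.linalg.svd(A, compute_uv=False)
kappa = sigmas.max() / sigmas.min()

# Sample many random unit vectors u and take the ratio of extreme ||Au||.
rng = np.random.default_rng(2)
U = rng.standard_normal((2, 50000))
U /= np.linalg.norm(U, axis=0)          # normalize each column to a unit vector
lengths = np.linalg.norm(A @ U, axis=0)
ratio = lengths.max() / lengths.min()

assert ratio <= kappa + 1e-9            # the sampled ratio never exceeds kappa
print(kappa, ratio)
```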
Suppose that λ is an eigenvalue of A.(a) Prove that c λ is an eigenvalue of the scalar multiple c A. (b) Prove that λ + d is an eigenvalue of A + d I.(c) More generally, c λ + d is an
Suppose A is an m × n matrix of rank r < n. Prove that there exist arbitrarily close matrices of maximal rank, that is, for every ε > 0 there exists an m × n matrix B with rank B = n such
Let A be an m × n matrix with singular values σ1, . . . , σr. Prove that Σ_{i=1}^{r} σi² = Σ_{i=1}^{m} Σ_{j=1}^{n} aij².
Show that if λ is an eigenvalue of A, then λ² is an eigenvalue of A².
True or false: If det A > 1, then A is not ill-conditioned.
True or false: If λ is an eigenvalue of A and μ is an eigenvalue of B, then λμ is an eigenvalue of the matrix product C = AB.
How many different diagonal forms does an n × n diagonalizable matrix have?
Find a matrix A whose Euclidean matrix norm satisfies ‖A²‖ ≠ ‖A‖².
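One candidate (a sketch, not necessarily the text's intended answer): a nonzero nilpotent matrix, whose square is the zero matrix while its norm is nonzero:

```python
import numpy as np

# Shear matrix N: N @ N = 0, yet ||N|| = 1, so ||N^2|| = 0 != 1 = ||N||^2.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

norm_N = np.linalg.norm(N, 2)        # spectral norm = largest singular value
norm_N2 = np.linalg.norm(N @ N, 2)   # N @ N is the zero matrix

assert abs(norm_N - 1.0) < 1e-12
assert norm_N2 == 0.0
```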
Which of the following systems have a strictly diagonally dominant coefficient matrix? (a) 5x − y = 1, −x + 3y = −1; (b)(c)(d)
Prove that if v is an eigenvector of A with eigenvalue λ and w is an eigenvector of AT with a different eigenvalue μ ≠ λ, then v and w are orthogonal vectors with respect to the dot product.
Find the minimum and maximum values of the quadratic form 2x² + xy + 2xz + 2y² + 2z² where x, y, z are required to satisfy x² + y² + z² = 1.
Given an idempotent matrix, so that P² = P, find all its eigenvalues and eigenvectors.
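A concrete instance to experiment with (an orthogonal projection, chosen for illustration): P = v vᵀ / (vᵀv) is idempotent, and its eigenvalues come out as 0 and 1:

```python
import numpy as np

# Orthogonal projection onto the line spanned by v.
v = np.array([1.0, 2.0])
P = np.outer(v, v) / np.dot(v, v)

assert np.allclose(P @ P, P)                  # idempotent: P^2 = P
eigvals = np.sort(np.linalg.eigvals(P).real)
assert np.allclose(eigvals, [0.0, 1.0])       # eigenvalues of a projection
print(eigvals)
```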
Under the set-up of Theorem 8.42, explain why Theorem 8.42 immediately implies that qj is a unit eigenvector of K associated with its jth largest eigenvalue λj = σj², which therefore is, up to a
True or false: All the eigenvalues of an n × n permutation matrix are real.
Determine the spectrum of a graph given by the edges of (i) A triangle; (ii) A square; (iii) A pentagon. Can you determine the formula for the spectrum of the graph given by an n-sided
Let A be an n × n matrix with eigenvalues λ1, . . . , λk, and B an m × m matrix with eigenvalues μ1, . . . , μl. Show that the (m + n) × (m + n) block diagonal matrix has eigenvalues λ1, . . . ,
Suppose K > 0. What is the maximum value of q(x) = xᵀKx when x is constrained to a sphere of radius ‖x‖ = r?
(a) Show that if all the row sums of A are equal to 1, then A has 1 as an eigenvalue.(b) Suppose all the column sums of A are equal to 1. Does the same result hold?
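The key observation for part (a) can be seen numerically (the matrix below is an arbitrary example with unit row sums): A applied to the all-ones vector returns the all-ones vector, so 1 is an eigenvalue.

```python
import numpy as np

# Every row of A sums to 1, so A @ (1,...,1)^T = (1,...,1)^T.
A = np.array([[0.25, 0.5,   0.25],
              [0.5,  0.25,  0.25],
              [0.125, 0.375, 0.5]])

ones = np.ones(3)
assert np.allclose(A @ ones, ones)                       # eigenvector (1,1,1)^T
assert min(abs(np.linalg.eigvals(A) - 1.0)) < 1e-9       # eigenvalue 1

# Part (b): unit column sums give A^T the eigenvalue 1; since A and A^T share
# eigenvalues, 1 is still an eigenvalue of A, but the eigenvector changes.
```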
In this exercise, we investigate the compression capabilities of the Haar wavelets. Let the given function represent a signal defined on 0 ≤ x ≤ 1. Let sr(x) denote the rth partial sum, from j = 0 to r, of the Haar
Let f(x) = x.(a) Determine its Haar wavelet coefficients cj,k. (b) Graph the partial sums sr(x) of the Haar wavelet series (9.136) where j goes from 0 to r = 2, 5, and 10. Compare your graphs with
Let A be a complete square matrix, not necessarily symmetric, with all positive eigenvalues. Is the associated quadratic form q(x) = xT A x > 0 for all x ≠ 0?
Prove that (9.92, 93, 94) give the same Arnoldi vectors uk and the same coefficients hjk when computed exactly. h_{k+1,k} u_{k+1} = A uk − Σ_{j=1}^{k} hjk uj, where hjk = ujᵀ A uk. (9.92)
Answer Exercises 9.7.1 and 9.7.2 using the Daubechies wavelets instead of the Haar wavelets. Do you see any improvement in your approximations? Discuss the advantages and disadvantages of both in
Prove that the invertibility of the coefficient matrix SᵀAS in (9.104) depends only on the subspace V and not on the choice of basis thereof. (SᵀAS) y = Sᵀ b. (9.104)
True or false: The Gershgorin domain of the transpose of a matrix AT is the same as the Gershgorin domain of the matrix A, that is, DAT = DA.
True or false:(a) A positive definite matrix is strictly diagonally dominant.(b) A strictly diagonally dominant matrix is positive definite.
(a) Write down an invertible matrix A whose Gershgorin domain contains 0.(b) Can you find an example that is also strictly diagonally dominant?
Answer Exercise 9.7.3 using the Daubechies wavelets to compress the data. Compare your results.Data From Exercise 9.7.3In this exercise, we investigate the compression capabilities of the Haar
(a) Explain why the wavelet expansion (9.136) defines a linear transformation on Rn that takes a wavelet coefficient vector c = (c0, c1, . . . , cn−1)T to the corresponding sample vector f = (f0,
Showing 1 - 100 of 333