Questions and Answers of Linear State Space Systems
2.12. Given the autocorrelation function for the random phase sinusoid in the previous Problem 2.11, compute the 2 × 2 autocorrelation matrix.
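A numerical sketch for 2.12. It assumes the standard result that a random-phase sinusoid x[n] = A sin(nω0 + φ), with φ uniform on [−π, π], has autocorrelation rx(k) = (A²/2) cos(kω0); the amplitude and frequency used below are illustrative, not taken from Problem 2.11.

```python
import numpy as np

def sinusoid_autocorr_matrix(A, omega0, p):
    # r_x(k) = (A^2 / 2) * cos(k * omega0) for a random-phase sinusoid
    # (standard result; A and omega0 are illustrative parameters)
    k = np.arange(p)
    r = (A**2 / 2) * np.cos(k * omega0)
    # Build the symmetric Toeplitz autocorrelation matrix from r(0..p-1)
    return np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])

R = sinusoid_autocorr_matrix(A=1.0, omega0=np.pi / 4, p=2)
```

The 2 × 2 case is just [[r(0), r(1)], [r(1), r(0)]]; the helper generalises to any order p.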
2.1. Does the equation of a straight line y = α x + β, where α and β are constants, represent a linear system? Show the proof.
10.2. What is the main disadvantage of the adaptive Volterra filter?
6.4. Can the exponentially weighted RLS algorithm (refer to Chapter 8) be considered to be a special case of the Kalman filter?
5.2. In Problem 5.1 the optimum first-order FIR Wiener filter for σv² = 1 and α = 0.8 is W(z) = 0.4048 + 0.2381 z⁻¹. What is the signal-to-noise ratio (S/N) improvement, computed in dB, achieved …
12.6. The three-layer Perceptron network shown in Figure 12.11, when properly trained, should respond with a desired output d = 0.9 at y1 to the augmented input vector x = [1, x1, x2]^T = [1, 1, 3]^T. The …
12.5. Specify some possible applications for ANNs.
12.4. Can ANNs be seen as magic boxes which can be applied to virtually any problem?
12.3. What are the main types of ANN?
12.2. Name some general characteristics of ANNs.
12.1. What are the two most important aspects of artificial neural network technology?
10.4. What would the LMS weight update equations be for Problem 10.3?
10.3. Compute the gradients of the mean square error function E{e²[n]} with respect to h0, a and B, for the second-order Volterra filter in the case when the data are complex.
2.13. The autocorrelation sequence of a zero mean white noise process is rv(k) = σv² δ(k) and the power spectrum is Pv(e^jθ) = σv², where σv² is the variance of the process. For the …
9.5. What is the main purpose of the so-called overlap-save and overlap-add methods in frequency domain filtering?
7.5. For a total data length N, what can you say about the relationship between the resolution and variance as a function of K, the number of nonoverlapping data sections, for Bartlett's spectral …
7.4. Does the variance of the periodogram spectral estimate of white noise reduce as the data length N increases?
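A Monte Carlo sketch of the point behind 7.4: the per-bin variance of the raw periodogram of unit-variance white noise stays near σ⁴ = 1 however long the record gets. The record lengths and trial count below are illustrative choices, not part of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def periodogram(x):
    # Periodogram estimate: |DFT(x)|^2 / N
    N = len(x)
    return np.abs(np.fft.fft(x))**2 / N

def mean_bin_variance(N, trials=200):
    # Variance of each periodogram bin across independent realisations of
    # unit-variance white Gaussian noise, averaged over the interior bins
    # (the DC bin is skipped).
    P = np.array([periodogram(rng.standard_normal(N)) for _ in range(trials)])
    return P[:, 1:N // 2].var(axis=0).mean()

v_small = mean_bin_variance(256)
v_large = mean_bin_variance(4096)
# Both stay near 1: averaging over a longer record does not reduce the
# per-bin variance of the raw periodogram.
```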
7.3. Assume that a random process can be described by two equal amplitude sinusoids in unit random variance white noise as defined by the following equation, x[n] = A sin(nθ1 + φ1) + A sin(nθ2 + φ2) …
7.2. What is the power spectrum of white noise having a variance of σx²?
7.1. What are the two main approaches to spectral estimation and in what way do they differ?
9.6 What is the disadvantage of the circular convolution method?
6.3. Use a Kalman filter to estimate the first-order AR process defined by x[n] = 0.5 x[n−1] + w[n], where w[n] is zero mean white noise with a variance σw² = 0.64. The noisy measurements of x[n] …
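A minimal scalar-Kalman-filter sketch for 6.3. The AR model and σw² = 0.64 come from the statement above; the measurement model y[n] = x[n] + v[n] with σv² = 1.0 is an assumption, since the statement is cut off before the measurement noise is specified.

```python
import numpy as np

rng = np.random.default_rng(1)

# State model x[n] = 0.5 x[n-1] + w[n], sigma_w^2 = 0.64 (from 6.3).
# Measurement model y[n] = x[n] + v[n], sigma_v^2 = 1.0 (assumed).
a, q, r = 0.5, 0.64, 1.0
N = 2000

x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(N)

xhat, P = 0.0, 1.0
est = np.zeros(N)
for n in range(N):
    xhat, P = a * xhat, a * a * P + q        # predict
    K = P / (P + r)                          # Kalman gain
    xhat = xhat + K * (y[n] - xhat)          # update with measurement
    P = (1.0 - K) * P
    est[n] = xhat

mse_raw = np.mean((y - x) ** 2)   # raw measurements vs. truth
mse_kf = np.mean((est - x) ** 2)  # Kalman estimates vs. truth
```

The filter's mean square error settles well below that of the raw measurements, consistent with the steady-state Riccati solution for these parameters.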
6.2. Develop a Kalman filter to estimate the value of an unknown scalar constant x given measurements that are corrupted by an uncorrelated, zero mean white noise v[n] that has a variance of σv².
6.1. Is the Kalman filter useful for filtering nonstationary and nonlinear processes?
5.8. Find the optimum causal and noncausal IIR Wiener filters for estimating a zero-mean signal s[n] from a noisy real zero-mean signal x[n] = s[n] + v[n], where v[n] is a unit-variance white noise …
7.6. By looking at the performance comparisons of the various nonparametric spectral estimation methods what general conclusions can be drawn?
7.7. Compute the optimum filter for estimating the Minimum Variance (MV) power spectrum of white noise having a variance of σx².
7.9. Compute the qth order Maximum Entropy (ME) spectral estimate for Problem 7.8.
9.4. For frequency domain filtering define the inverse of the p x p DFT matrix F.
9.3. What computational savings can be had by using a radix-2 decimation-in-time FFT to perform the DFTs required in Problem 9.2?
9.2. Show how linear convolutions on sequences can be done by using frequency domain operations.
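A sketch of the technique 9.2 asks for: a linear convolution computed with frequency domain operations, zero-padding both sequences to at least len(x) + len(h) − 1 so the circular convolution implied by the DFT coincides with the linear one. The power-of-two FFT size is a convenience choice.

```python
import numpy as np

def fft_linear_conv(x, h):
    # Linear convolution via the DFT: zero-pad to at least
    # len(x) + len(h) - 1 so circular convolution equals linear convolution.
    n = len(x) + len(h) - 1
    N = 1 << (n - 1).bit_length()        # next power of two, FFT-friendly
    X, H = np.fft.fft(x, N), np.fft.fft(h, N)
    return np.real(np.fft.ifft(X * H))[:n]

y = fft_linear_conv([1, 2, 3], [1, 1])
```

The result matches direct time domain convolution, e.g. np.convolve, up to floating point rounding.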
9.1. Compute how many multiplication operations are saved in the coefficient update equation for a time domain BLMS algorithm per block of L input points. Is it worth the trouble?
8.6. For the exponentially weighted RLS algorithm prove that Φ(n)w[n] = θ(n) by setting the derivative of the weighted error ξ(n) with respect to w*[n] to zero, given that Φ(n) = Σₖ₌₁ⁿ λⁿ⁻ᵏ x[k] xᵀ[k] …
8.5. Compute and sketch a graph of the expectation of the squared error as a function of the single real-valued filter coefficient for a simple zero-order MA(0) process.
8.4. If the process and the FIR filter coefficients are complex valued, show that the adaptive LMS update equation is w[k+1] = w[k] + μ e[k] x*[k].
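The complex LMS update in 8.4 can be exercised in a small system-identification sketch. The update rule is the one stated in the problem; the 2-tap system h, the step size and the signal length below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def lms_identify(x, d, p, mu):
    # Complex LMS: w[k+1] = w[k] + mu * e[k] * conj(x[k]), run over the
    # input x and desired output d to identify a p-tap FIR system.
    w = np.zeros(p, dtype=complex)
    for k in range(p - 1, len(x)):
        xk = x[k - p + 1:k + 1][::-1]    # [x[k], x[k-1], ..., x[k-p+1]]
        e = d[k] - w @ xk                # a-priori error
        w = w + mu * e * np.conj(xk)
    return w

# Identify a known 2-tap complex system (illustrative target)
h = np.array([0.5 + 0.5j, -0.25j])
x = rng.standard_normal(5000) + 1j * rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]           # noise-free desired signal
w = lms_identify(x, d, p=2, mu=0.01)
```

With a noise-free desired signal the weights converge to h; the step size satisfies the usual bound μ < 2/(p·E{|x|²}).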
8.3. Use the normalised LMS algorithm to derive the FIR filter coefficient update equations for the second-order AR(2) linear prediction equation of Problem 8.1.
8.2. In Problem 8.1, if the autocorrelation sequence rx(k) is given as rx(0) = 5.7523 and rx(1) = 4.0450, find the maximum bound for the step size in the LMS algorithm.
8.1. Assume x[n] is a second-order autoregressive process defined by the difference equation x[n] = 1.2728 x[n−1] − 0.81 x[n−2] + v[n], where v[n] is unit variance white noise, the optimum …
5.7. Compute the minimum mean square error for Problem 5.6 for 1 to 5 step predictors. What do you find odd about the sequence of errors from step 1 through to step 5?
5.6. Consider a random process whose autocorrelation sequence is defined by rx(k) = δ(k) + (0.9)^k cos(πk/4). The first six autocorrelation values are rx = [2.0 0.6364 0 −0.5155 …]^T
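Reading the garbled autocorrelation sequence in 5.6 as rx(k) = δ(k) + (0.9)^k cos(πk/4), the listed values check out numerically:

```python
import numpy as np

# r_x(k) = delta(k) + (0.9)^k * cos(pi k / 4), as read from Problem 5.6
def r_x(k):
    return (1.0 if k == 0 else 0.0) + 0.9**k * np.cos(np.pi * k / 4)

vals = [round(r_x(k), 4) for k in range(6)]
# First four values reproduce the listing: 2.0, 0.6364, 0.0, -0.5155
```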
4.5. Explain the main idea behind the Padé approximation method and why it is likely to be problematic in practice.
4.4. Explain why the Padé “approximation” method is badly named.
4.3. Show that partial differentiation of Equation 4.5 with respect to the variables ak* results in Equation 4.6.
4.2. Which models, MA, AR or ARMA, are the easiest to solve for their unknown parameters?
4.1. Under what conditions are Equations 4.1 and 4.2 referred to as a Moving-Average (MA), Autoregressive (AR) and Autoregressive Moving-Average(ARMA) model?
3.4. In the Acoustic Positioning System example, why were the modified model equations (in terms of G0, H0 and T) used instead of the original model equations (in terms of Tcc, Tc0, Tc1, Tc2 and Tc3) to solve the LSE problem?
3.3. Assume that you have three independent measurements, y1, y2 and y3, with a measurement error variance of σ², of the volume of water in the tray of optimum volume (the tray of Problem 3.1). Show how …
3.2. Show how you could use the LSE equations to solve Problem 3.1.
3.1. Assume that you have a square piece of sheet steel which you wish to bend up into a square open tray. The sheet is 6 units by 6 units, and the bend lines are x units in from the edge as shown …
2.15. If x[n] is a zero mean wide-sense stationary white noise process and y[n] is formed by filtering x[n] with a stable LSI filter h[n], then is it true that σy² = σx² Σₙ |h[n]|², where σx² and σy² are the variances of x[n] and y[n]?
4.6. Given the signal x = [1 1.5 0.75]^T, find the Padé approximation model for:
a. p = 2 and q = 0
b. p = 0 and q = 2
c. p = q = 1
4.7. Given the signal x[n] = 1 for n = 0, 1, …, 20 and x[n] = 0 otherwise, use Prony's method to model x[n] with an ARMA, pole-zero, model with p = q = 1.
4.8. Find:
a. the first-order, and
b. the second-order
all-pole models for the signal x[n] = δ[n] − δ[n−1].
5.5. Show that the solution to Problem 5.4 approaches the solution to Problem 5.3 as σv² → 0.
5.4. Reconsider Problem 5.3 when the measurement of x[n] is contaminated with zero-mean white noise having a variance of σv², i.e., y[n] = x[n] + v[n]. Find the optimum first-order linear …
5.3. Find the optimum first-order linear predictor having the form x̂[n+1] = w[0] x[n] + w[1] x[n−1], for a first-order AR process x[n] that has an autocorrelation sequence defined by rx(k) = α^|k| …
10.1. What is a key feature of the Volterra filter that allows for the use of optimum filter theory?
5.1. Find the optimum first-order FIR Wiener filter for estimating a signal s[n] from a noisy real signal x[n] = s[n] + v[n], where v[n] is a white noise process with a variance σv² that is …
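A design sketch for 5.1. The signal model is inferred rather than stated: taking s[n] as AR(1) with rs(k) = α^|k| (unit signal variance), α = 0.8 and σv² = 1 reproduces the coefficients W(z) = 0.4048 + 0.2381 z⁻¹ quoted in 5.2, so those assumed values are used here.

```python
import numpy as np

# Assumed signal model (see lead-in): r_s(k) = alpha^|k|, alpha = 0.8,
# observed as x[n] = s[n] + v[n] with white-noise variance sigma_v^2 = 1.
alpha, sigma_v2 = 0.8, 1.0
r_s = np.array([1.0, alpha])                     # r_s(0), r_s(1)
R_x = np.array([[r_s[0] + sigma_v2, r_s[1]],
                [r_s[1], r_s[0] + sigma_v2]])    # R_x = R_s + sigma_v^2 I
r_dx = r_s                    # cross-correlation E{s[n] x[n-k]} = r_s(k)
w = np.linalg.solve(R_x, r_dx)                   # Wiener-Hopf equations
```

Solving the 2 × 2 Wiener-Hopf normal equations gives w ≈ [0.4048, 0.2381], matching the filter quoted in 5.2.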
4.12. For Problem 4.11 compute the next two values in the autocorrelation sequence, i.e., rx(4) and rx(5).
4.11. Solve the autocorrelation normal equations using the Levinson-Durbin recursion to find a third-order all-pole model for a signal having the following autocorrelation values: rx(0) = 1, rx(1) = …
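A sketch of the recursion 4.11 calls for. The autocorrelation values in the listing are cut off, so the r vector below is made up for demonstration; the function itself is a standard Levinson-Durbin implementation.

```python
import numpy as np

def levinson_durbin(r, p):
    # Levinson-Durbin recursion: solves the order-p autocorrelation normal
    # equations in O(p^2) operations instead of O(p^3). Returns the
    # all-pole coefficient vector a (with a[0] = 1) and the model error eps.
    a = np.array([1.0])
    eps = r[0]
    for j in range(p):
        gamma = -(a @ r[1:j + 2][::-1]) / eps    # reflection coefficient
        a_ext = np.concatenate([a, [0.0]])
        a = a_ext + gamma * a_ext[::-1]          # order-update of a
        eps *= 1.0 - gamma**2                    # error update
    return a, eps

# Illustrative third-order example (not the values from 4.11):
r = np.array([1.0, 0.5, 0.25, 0.125])
a, eps = levinson_durbin(r, 3)
```

A quick self-check: the solution must satisfy R a = [eps, 0, 0, 0]^T for the Toeplitz autocorrelation matrix R built from r.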
4.10. Use the modified Yule-Walker equations to find the first-order ARMA model (p = q = 1) of a real valued stochastic process having the following autocorrelation values: rx(0) = 26, rx(1) = 7, rx(2) = 7/2 …
4.9. Find the second-order all-pole model of a signal x[n] whose first N = 20 values are x = [1,−1,1,−1,...,1,−1]T by using,a. the autocorrelation method, and,b. the covariance method.
2.14. Let x[n] be a random process that is generated by filtering white noise w[n] with a first-order LSI filter having a system transfer function of H(z) = 1/(1 − 0.25 z⁻¹). If the variance of …