

1) Problem 17 of Chapter 6 addresses the application of the LMS algorithm to the design of a linear predictor operating on an autoregressive process of order two. Using the RLS algorithm, repeat parts (b) through (e) of the computer experiment described therein. Explain the close relationship between the LMS algorithm and stochastic approximation when the step-size parameter μ decreases with an increasing number of adaptation cycles. Problem 20 of Chapter 6 addresses the application of the LMS algorithm to the study of an MVDR beamformer. Using the RLS algorithm, repeat the computer experiment described therein.
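For reference, a minimal sketch of the exponentially weighted RLS update that would replace the LMS update in such a prediction experiment; the function name, the forgetting factor lam = 0.99, the regularization delta, and the synthetic AR(2) coefficients in the usage example are illustrative choices, not values taken from the problem statement.

    import numpy as np

    def rls_predictor(u, order=2, lam=0.99, delta=0.01):
        # Exponentially weighted RLS used as a one-step linear predictor
        # (real-valued sketch; lam is the forgetting factor, P(0) = I / delta).
        u = np.asarray(u, dtype=float)
        w = np.zeros(order)
        P = np.eye(order) / delta
        weights = np.zeros((len(u), order))
        xi = np.zeros(len(u))                      # a priori prediction errors
        for n in range(order, len(u)):
            x = u[n - order:n][::-1]               # tap inputs [u(n-1), ..., u(n-order)]
            k = P @ x / (lam + x @ P @ x)          # gain vector
            xi[n] = u[n] - w @ x                   # a priori error
            w = w + k * xi[n]                      # weight update
            P = (P - np.outer(k, x @ P)) / lam     # inverse-correlation update
            weights[n] = w
        return weights, xi

    # Example use on a synthetic AR(2) process (coefficients chosen arbitrarily):
    rng = np.random.default_rng(0)
    v = rng.standard_normal(5000)
    u = np.zeros(5000)
    for n in range(2, 5000):
        u[n] = 0.5 * u[n - 1] - 0.3 * u[n - 2] + v[n]
    W, xi = rls_predictor(u)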

2) (a) Explain in detail the procedure for finding the optimum value of the updated weight vector. (b) Explain the procedure for solving a constrained optimization problem. Explain in detail why an affine projection adaptive filter converges at a faster rate than the corresponding normalized LMS algorithm. Illustrate how an adaptive filter performs circular convolution instead of linear convolution.
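To make the comparison concrete, a sketch of a single NLMS step next to a single affine projection step; the function names, step size mu, and regularization eps are placeholders. The affine projection update reuses the K most recent regressors, which is the mechanism behind its faster convergence on correlated inputs.

    import numpy as np

    def nlms_update(w, x, d, mu=0.5, eps=1e-6):
        # One normalized-LMS step; NLMS is the affine projection algorithm with K = 1.
        e = d - w @ x
        return w + mu * e * x / (x @ x + eps), e

    def apa_update(w, U, d_vec, mu=0.5, eps=1e-6):
        # One affine projection step over the K most recent regressors (rows of U)
        # and the corresponding K desired responses d_vec.
        e = d_vec - U @ w                                         # K a priori errors
        g = np.linalg.solve(U @ U.T + eps * np.eye(len(d_vec)), e)
        return w + mu * U.T @ g, e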

3) What are the equations that define the operation of the canonical model of the complex LMS algorithm? Set up the equations that define the operation of the LMS algorithm used to implement adaptive noise cancelling applied to a sinusoidal interference. Demonstrate that the LMS algorithm acts as a low-pass filter with a low cutoff frequency when the step-size parameter μ is small.
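A small sketch of the LMS recursions specialized to the two-weight sinusoidal noise canceller; the interference frequency f0, sampling rate fs, and step size mu are placeholders, not values from the problem.

    import numpy as np

    def lms_sinusoid_canceller(d, f0, fs, mu=0.01):
        # Two-weight LMS noise canceller for a sinusoidal interference at f0 Hz.
        # Reference inputs are the in-phase and quadrature components of the sinusoid;
        # the LMS recursion is y(n) = w^T u(n), e(n) = d(n) - y(n), w <- w + mu u(n) e(n).
        d = np.asarray(d, dtype=float)
        n = np.arange(len(d))
        ref = np.column_stack((np.cos(2 * np.pi * f0 * n / fs),
                               np.sin(2 * np.pi * f0 * n / fs)))
        w = np.zeros(2)
        e = np.zeros(len(d))
        for i in range(len(d)):
            y = w @ ref[i]              # estimate of the interference
            e[i] = d[i] - y             # canceller output (interference removed)
            w = w + mu * ref[i] * e[i]  # LMS weight update
        return e, w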

4) In Section 3.9, we considered the use of an all-pole lattice filter as the generator of an autoregressive process. This filter may also be used to efficiently compute the autocorrelation sequence r(1), r(2), ..., r(M), normalized with respect to r(0). The procedure involves initializing the states (i.e., unit-delay elements) of the lattice inverse filter to 1, 0, ..., 0 and then allowing the filter to operate with zero input. In effect, this procedure provides a lattice interpretation of Eq. (3.67), which relates the autocorrelation sequence {r(0), r(1), ..., r(M)} to the augmented sequence of reflection coefficients {P0, k1, ..., kM}. Demonstrate the validity of the procedure for the following values of the final order M: (a) M = 1. (b) M = 2. (c) M = 3.
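As a cross-check for this demonstration, a sketch of the Levinson–Durbin relations that map {P0, k1, ..., kM} back to {r(0), r(1), ..., r(M)}; the lattice procedure described above should reproduce these values after normalization by r(0). Real-valued case; the function name is illustrative.

    import numpy as np

    def reflection_to_autocorr(P0, k):
        # Map P0 = r(0) and reflection coefficients k_1, ..., k_M to r(0), ..., r(M)
        # using the Levinson-Durbin recursion solved for r(m) (real-valued sketch).
        M = len(k)
        r = np.zeros(M + 1)
        r[0] = P0
        a = np.zeros(M + 1)            # a[1..m-1] hold the order-(m-1) filter taps
        P = P0                         # prediction-error power P_{m-1}
        for m in range(1, M + 1):
            # k_m = -(r(m) + sum_i a_{m-1,i} r(m-i)) / P_{m-1}, solved here for r(m):
            r[m] = -k[m - 1] * P - sum(a[i] * r[m - i] for i in range(1, m))
            # order-update of the prediction-error filter taps:
            a_next = a.copy()
            a_next[m] = k[m - 1]
            for i in range(1, m):
                a_next[i] = a[i] + k[m - 1] * a[m - i]
            a = a_next
            P *= 1.0 - k[m - 1] ** 2   # P_m = P_{m-1} (1 - k_m^2)
        return r                        # divide by r[0] for the normalized sequence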

5) In a lattice-based structure for joint-process estimation, explain why the backward prediction errors are preferred over forward prediction errors. For m = 2, given P0 and the reflection coefficients k1, k2, and k3, compute the autocorrelation value r(2). Using the inverse Levinson–Durbin recursion on the tap weights a3,1, a3,2, and a3,3 of a prediction-error filter of order 3, together with a4,4, determine the corresponding reflection coefficients k1, k2, k3, and k4 up to order 4.
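A sketch of the inverse (step-down) Levinson–Durbin recursion referred to here: it takes the tap weights of an order-M prediction-error filter and returns the reflection coefficients k1, ..., kM (real-valued case; identifiers are illustrative).

    def inverse_levinson(a):
        # a = [a_{M,1}, ..., a_{M,M}] : tap weights of an order-M prediction-error
        # filter (the leading coefficient a_{M,0} = 1 is omitted).
        # Returns the reflection coefficients [k_1, ..., k_M].
        a = list(a)
        M = len(a)
        k = [0.0] * M
        for m in range(M, 0, -1):
            k[m - 1] = a[m - 1]                    # k_m = a_{m,m}
            if m > 1:
                denom = 1.0 - k[m - 1] ** 2
                # step down: a_{m-1,i} = (a_{m,i} - k_m a_{m,m-i}) / (1 - k_m^2)
                a = [(a[i - 1] - k[m - 1] * a[m - 1 - i]) / denom
                     for i in range(1, m)]
        return k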

6) Determine the joint moment of the complex Gaussian process u(n) for the odd case of N = 5, where u(n) consists of the five samples u1, u2, u3, u4, and u5. Using the Gaussian moment-factoring theorem, determine the joint moment for the even case of N = 2, where u(n) consists of the samples u1 and u2. Prove that a random process {X(t)} is mean-square continuous if its autocorrelation function is continuous.
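A Monte Carlo sketch that checks the Gaussian moment-factoring theorem, E[u1* u2 u3* u4] = E[u1* u2] E[u3* u4] + E[u1* u4] E[u3* u2], for a zero-mean circularly symmetric complex Gaussian vector; the covariance matrix R below is arbitrary and only serves the numerical check.

    import numpy as np

    rng = np.random.default_rng(0)
    N, trials = 4, 200000

    # Arbitrary Hermitian positive-definite covariance for a zero-mean,
    # circularly symmetric complex Gaussian vector u = (u1, ..., u4).
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    R = A @ A.conj().T / N
    L = np.linalg.cholesky(R)
    w = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)
    u = w @ L.T                      # rows are samples with E[u u^H] = R

    lhs = np.mean(np.conj(u[:, 0]) * u[:, 1] * np.conj(u[:, 2]) * u[:, 3])
    rhs = R[1, 0] * R[3, 2] + R[3, 0] * R[1, 2]
    print(lhs, rhs)                  # the two values agree up to Monte Carlo error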

7) Define the estimation error. Apply the generalized Wirtinger calculus to the expected value of the estimation error multiplied by its complex conjugate; this product, in its intact form, represents the cost function. Define the cost function in its expanded form, using the expectation operator, and apply the generalized Wirtinger calculus to the expanded form as well.
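A worked sketch of the chain of steps being asked for, in standard Wiener-filter notation (d(n): desired response, u(n): tap-input vector, w: weight vector); the symbols R, p, and sigma_d^2 below are the usual correlation quantities and are introduced here for illustration, not taken from the problem statement.

    e(n) = d(n) - w^H u(n)                                      (estimation error)
    J(w) = E[e(n) e*(n)]                                        (cost function, intact form)
         = sigma_d^2 - w^H p - p^H w + w^H R w                  (expanded form)
      with R = E[u(n) u^H(n)],  p = E[u(n) d*(n)],  sigma_d^2 = E[|d(n)|^2]
    Generalized Wirtinger (conjugate) gradient:  dJ/dw* = -p + R w
    Setting the gradient to zero gives the optimum weight vector  w_o = R^{-1} p.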

8) A data stream consisting of i.i.d. symbols is applied to a binary phase-shift keying (PSK) system. The resulting modulated signal, x(n), is applied to a linear channel of unknown impulse response. For blind equalization of the channel, the constant-modulus algorithm (CMA, i.e., the Godard algorithm with p = 2) is used. (a) Plot the error signal e(n) versus the decoded sequence y(n). (b) Assuming the use of the signed-error version of the CMA, plot the piecewise approximation of e(n). (c) Formulate the CMA and its signed-error version. You may refer to footnote 7 for a brief description of the signed-error CMA.
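A sketch of the CMA and its signed-error variant for the real-valued BPSK case; the filter length, step size mu, and dispersion constant R2 = 1 (unit-amplitude BPSK) are illustrative assumptions.

    import numpy as np

    def cma_equalizer(x, taps=11, mu=1e-3, R2=1.0, signed=False):
        # Constant-modulus (Godard, p = 2) blind equalizer, real-valued BPSK sketch.
        # R2 = E[|a|^4] / E[|a|^2] is the dispersion constant (= 1 for unit-amplitude BPSK);
        # signed=True uses the signed-error version, replacing e(n) with sign(e(n)).
        x = np.asarray(x, dtype=float)
        w = np.zeros(taps)
        w[taps // 2] = 1.0                         # centre-spike initialization
        y = np.zeros(len(x))
        for n in range(taps - 1, len(x)):
            u = x[n - taps + 1:n + 1][::-1]        # regressor [x(n), ..., x(n-taps+1)]
            y[n] = w @ u                           # equalizer output
            e = y[n] * (R2 - y[n] ** 2)            # CMA error signal e(n)
            if signed:
                e = np.sign(e)                     # piecewise (signed-error) approximation
            w = w + mu * e * u                     # stochastic-gradient update
        return y, w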

9) Change the problem description minimally, keeping the IDBD parameters unchanged, so that the IDBD algorithm diverges. For example, you may increase the value of the input by 10-fold or increase the variances of the white noise by 100-fold. Then, for each example case, answer the following two questions: (i) Why does the IDBD algorithm diverge? (ii) What is the minimal change required in the IDBD parameters (e.g., the initial step-size parameter, μ(0), or the meta-step-size parameter, κ) so that the divergence is prevented?
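For reference, a sketch of the Incremental Delta-Bar-Delta (IDBD) recursions following Sutton's formulation; the parameter names mu0 (for μ(0)) and kappa (for the meta-step-size κ) and their default values are placeholders to experiment with.

    import numpy as np

    def idbd(X, d, mu0=0.01, kappa=0.01):
        # Incremental Delta-Bar-Delta (IDBD), real-valued sketch.
        # Each weight carries its own step size mu_i = exp(beta_i); kappa is the
        # meta-step-size that adapts beta_i, and mu0 sets the initial step size mu(0).
        X = np.asarray(X, dtype=float)
        d = np.asarray(d, dtype=float)
        N, M = X.shape
        w = np.zeros(M)
        beta = np.full(M, np.log(mu0))
        h = np.zeros(M)                            # per-weight memory traces
        err = np.zeros(N)
        for n in range(N):
            x = X[n]
            err[n] = d[n] - w @ x                  # a priori error
            beta += kappa * err[n] * x * h         # meta-level (step-size) update
            mu = np.exp(beta)
            w = w + mu * err[n] * x                # base-level weight update
            h = h * np.maximum(0.0, 1.0 - mu * x * x) + mu * err[n] * x
        return w, err

Running this sketch with inputs or noise variances scaled up as suggested in the problem, and watching err and beta, is one way to observe the divergence being asked about.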
