
Question

In this question we will deal with a distributed optimization problem.
Consider the following network of nodes, as shown in the attached image.
Also, consider that node i has the local cost function

f_i(x) = x^T A_i x + b_i^T x,

where

A_i = L^(i) D^(i) (L^(i))^(-1),  D^(i) = diag(4i, 12i),  b_i = L^(i) [8, -10]^T,  i = 1, 2, 3, 4.
Network nodes need to work together to solve the problem
x* = argmin_{x ∈ R^2} Σ_{i=1}^{4} f_i(x)
For the above scenario, the following are requested:
1. Compute the optimal solution to the above problem, assuming that all node cost functions are known at a central node.
2. Assuming that all edges adjacent to a node i have equal, positive weights that sum to unity, i.e.
W_ij = 1/|n(i)|, i = 1, 2, 3, 4 and j ∈ n(i),
where n(i) is the set of neighboring nodes of node i, compute the weight matrix W , the weighted edge-vertex interface matrix B and the Laplacian matrix L.
3. Based on the above, write the cost function corresponding to the previous problem using the primal relaxation / penalty function method.
4. Write the equations for updating the estimates of the parameter x, according to the distributed gradient descent algorithm, at all nodes of the network. Implement the distributed gradient algorithm accordingly, and provide illustrative plots of the error versus the number of iterations of the algorithm at all nodes of the network. Determine the range of values of the step-size parameter α for which satisfactory agreement between the nodes is achieved.
5. Based on the above, write the Lagrangian function corresponding to the previous problem.
6. For the problem we are studying, give the form of the distributed dual ascent algorithm, where the sub-problem of minimizing the Lagrangian function (at each node) is solved by the gradient descent algorithm with a suitable termination criterion (can we do better in the case of the problem we are studying?). Implement this algorithm and provide illustrative plots of the error versus the number of iterations of the algorithm, across all nodes in the network.
Please address each question individually.
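For question 1, the centralized optimum follows from setting the gradient of the aggregate cost F(x) = Σ_i f_i(x) to zero: ∇F(x) = Σ_i (A_i + A_i^T) x + Σ_i b_i = 0. A minimal numerical sketch, using placeholder A_i and b_i since the exercise's L^(i) matrices are only given in the missing image:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder quadratic data: the exercise's A_i and b_i depend on the
# matrices L^(i) shown only in the missing image, so random symmetric
# positive-definite A_i are used here purely to illustrate the computation.
A = [(lambda M: M @ M.T + 2 * np.eye(2))(rng.standard_normal((2, 2)))
     for _ in range(4)]
b = [rng.standard_normal(2) for _ in range(4)]

# F(x) = sum_i x^T A_i x + b_i^T x  =>  grad F(x) = sum_i (A_i + A_i^T) x + b_i.
# Setting the gradient to zero gives the centralized optimum.
H = sum(Ai + Ai.T for Ai in A)      # Hessian of F
g = sum(b)
x_star = np.linalg.solve(H, -g)
print(x_star)
```

With the actual A_i and b_i from the exercise, only the two placeholder lists change; the solve itself is identical.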
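For question 2, the matrices depend on the topology in the missing image; purely as an illustration, here is the construction for an assumed 4-node ring, where every node has two neighbours and hence W_ij = 1/2. Scaling each row of the edge-vertex incidence matrix by the square root of its edge weight gives L = B^T B, which for these weights equals I - W:

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # assumed ring; the real graph is in the image
w = 1 / 2                                   # W_ij = 1/|n(i)| with |n(i)| = 2

W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = w

# Weighted edge-vertex incidence matrix: row e has +sqrt(w) and -sqrt(w)
# at the two endpoints of edge e.
B = np.zeros((len(edges), n))
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = np.sqrt(w), -np.sqrt(w)

L = B.T @ B                                 # Laplacian; here L = I - W
print(L)
```

The rows of L sum to zero, as they must for a graph Laplacian; for a different topology only the `edges` list and the degrees change.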
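For questions 3 and 4, the penalty relaxation gives each node its own copy x_i and adds the quadratic disagreement term (ρ/2) Σ_i Σ_{j ∈ n(i)} W_ij ||x_i - x_j||^2 to Σ_i f_i(x_i); gradient descent on the relaxed cost yields the "mix, then step" update x_i ← Σ_j W_ij x_j - α ∇f_i(x_i). A sketch with placeholder data and an assumed ring topology; note it uses the lazy mixing matrix (I + W)/2 rather than the exercise's zero-diagonal W, because a bipartite ring with pure neighbour averaging oscillates instead of reaching consensus:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 2
# Placeholder quadratics (the exercise's A_i, b_i depend on the image data).
A = [(lambda M: M @ M.T + 2 * np.eye(d))(rng.standard_normal((d, d)))
     for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

# Assumed ring with W_ij = 1/2; the lazy version (I + W)/2 keeps a self
# weight so the iteration does not oscillate on this bipartite graph.
W = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 0.5
W = (np.eye(n) + W) / 2

H = sum(Ai + Ai.T for Ai in A)
x_star = np.linalg.solve(H, -sum(b))    # centralized optimum for the error curves

alpha = 0.02                            # step size alpha from question 4
X = np.zeros((n, d))                    # row i is node i's current estimate
errors = []
for k in range(3000):
    grads = np.array([(A[i] + A[i].T) @ X[i] + b[i] for i in range(n)])
    X = W @ X - alpha * grads           # consensus mixing + local gradient step
    errors.append(np.linalg.norm(X - x_star, axis=1))
```

With a constant step size the iterates converge only to an O(α) neighbourhood of x*; plotting `errors` against k at each node shows this, and sweeping α exposes the range of satisfactory agreement the question asks for.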
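For questions 5 and 6, writing the problem with one copy x_i per node and the edge constraints x_i = x_j gives the Lagrangian L(x, λ) = Σ_i f_i(x_i) + Σ_e λ_e^T (Bx)_e, and dual ascent alternates a primal minimisation of L over x with a gradient ascent step on λ. Because each f_i is quadratic, the inner minimisation has a closed form, which is one answer to the "can we do better" aside. A sketch under the same assumed ring and placeholder data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 2
# Placeholder quadratics (the exercise's A_i, b_i depend on the image data).
A = [(lambda M: M @ M.T + 2 * np.eye(d))(rng.standard_normal((d, d)))
     for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

# Assumed ring topology; B is the (unweighted) edge-vertex incidence matrix.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
B = np.zeros((len(edges), n))
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = 1, -1

H = sum(Ai + Ai.T for Ai in A)
x_star = np.linalg.solve(H, -sum(b))     # centralized optimum for the error curves

beta = 0.5                               # dual step size
lam = np.zeros((len(edges), d))          # one multiplier per edge, in R^2
X = np.zeros((n, d))
for k in range(500):
    # Primal step: minimize L(x, lam) over each x_i. Because f_i is quadratic,
    # the inner gradient-descent loop can be replaced by this closed-form solve.
    for i in range(n):
        X[i] = np.linalg.solve(A[i] + A[i].T, -(b[i] + B[:, i] @ lam))
    # Dual ascent step on the edge constraints B x = 0.
    lam += beta * (B @ X)
```

Unlike the penalty method, dual ascent drives the disagreement B x to zero exactly, so the per-node error curves decay to machine precision rather than to an O(α) floor.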
