
Question



1. (40 pts) Implement the DFS algorithm. Use a text file (called input.txt) that contains an adjacency matrix to read in the data and create a directed graph to be searched. The matrix can be of any size. Row elements in the input.txt file must be separated with commas. The output of your program should provide the order of visited nodes. In addition to source and documentation, provide a text file, called REPORT.TXT, briefly describing how (i.e., data structures) and what you have implemented (i.e., what works, what doesn't). Tip: follow the suggested DFS data structure for the algorithm; don't improvise.

2. (60 pts) Iterative Deepening was developed in the 1970s and combines the positive elements of DFS and BFS. It is a simple but somewhat counter-intuitive idea: perform repeated depth-limited depth-first searches, using an increasing depth limit, until a solution is found. It is counter-intuitive because each repetition of a depth-limited DFS wastefully repeats all the work done by previous repetitions. However, in typical trees (not those that are very unbalanced), most of the nodes are in the bottom level, so it does not matter much if the upper levels are visited multiple times, since the number of nodes at depth k exceeds the number of nodes at depth k-1 or less. This means that Iterative Deepening simulates BFS with linear space complexity. Assuming a problem with branching factor b and the goal node at depth k, the time complexity is O(b^k) and the space complexity is O(bk). Overall, the disadvantage of iterative deepening search is the painfully redundant rechecking of every node it has already checked on each new iteration. Implement Iterative Deepening. Provide the source and a separate file for the input data. Use the same input format and reading-in methods as you used for the first question. Using the same text file REPORT.TXT, briefly describe how (i.e., data structures) and what you have implemented (i.e., what works, what doesn't) for this problem.

3. Extra Credit (15 pts; you must have correctly completed the other two problems to receive credit): Attempting to improve Iterative Deepening, one could provide a mechanism to "remember" which nodes have already been seen, but this sacrifices the gained memory efficiency that made the algorithm worthwhile. In addition, with such an improvement, nodes at the maximum level of one iteration still need to be re-accessed and expanded in the following iteration. Implement this "improved" Iterative Deepening. Provide the source and a separate file for the input data.

Step by Step Solution

There are 3 Steps involved in it

Step: 1

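A minimal sketch of one way to approach problem 1, in Python. It assumes input.txt holds a square 0/1 adjacency matrix with comma-separated rows (for example, a row such as 0,1,1,0), that nodes are numbered 0 to n-1, and that the search starts at node 0; the function names (read_adjacency_matrix, dfs) are illustrative and not part of the assignment.

```python
def read_adjacency_matrix(path="input.txt"):
    """Read a comma-separated 0/1 adjacency matrix into a list of lists."""
    with open(path) as f:
        return [[int(x) for x in line.strip().split(",")]
                for line in f if line.strip()]

def dfs(matrix, start=0):
    """Iterative depth-first search; returns the order in which nodes are visited."""
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors in reverse so the lowest-numbered neighbor is explored first.
        for neighbor in range(len(matrix) - 1, -1, -1):
            if matrix[node][neighbor] and neighbor not in visited:
                stack.append(neighbor)
    return order

if __name__ == "__main__":
    graph = read_adjacency_matrix()
    print("DFS visit order:", dfs(graph))
```

An explicit stack is used instead of recursion so the visit order is easy to collect and print, and so very deep graphs cannot overflow the call stack.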


Step: 2

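A sketch of one possible Iterative Deepening implementation for problem 2, reusing the same matrix reader as above. It assumes the caller supplies a goal node and that the depth limit never needs to exceed the number of nodes; depth_limited_dfs and iterative_deepening are illustrative names.

```python
def depth_limited_dfs(matrix, node, goal, limit, order, on_path):
    """Depth-first search that does not expand below the given depth limit."""
    order.append(node)                       # record the visit order for this pass
    if node == goal:
        return True
    if limit == 0:
        return False
    on_path.add(node)                        # avoid cycling back along the current path
    for neighbor in range(len(matrix)):
        if matrix[node][neighbor] and neighbor not in on_path:
            if depth_limited_dfs(matrix, neighbor, goal, limit - 1, order, on_path):
                return True
    on_path.discard(node)
    return False

def iterative_deepening(matrix, start, goal):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(len(matrix) + 1):     # n nodes never need a deeper acyclic path
        order = []
        if depth_limited_dfs(matrix, start, goal, limit, order, set()):
            return limit, order              # depth of the goal and this pass's visit order
    return None, []                          # goal not reachable from start
```

Each pass with a larger limit deliberately repeats the shallower work, which is exactly the redundancy the problem statement describes.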

Step: 3

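For the extra credit, one reasonable reading of the "improved" version is to remember every node already seen across iterations and, in each new iteration, re-access and expand only the nodes that sat at the previous iteration's maximum depth. The sketch below follows that interpretation; the names and the layer-by-layer bookkeeping are my assumptions, not something fixed by the assignment.

```python
def improved_iterative_deepening(matrix, start, goal):
    """Iterative deepening that remembers seen nodes between iterations.

    Only the frontier (nodes first reached at the previous maximum depth)
    is re-accessed and expanded when the depth limit grows by one.
    """
    seen = {start}                   # every node generated so far (the memory cost)
    frontier = [start]               # nodes at the current maximum depth
    order = [start]                  # overall visit order
    if start == goal:
        return 0, order
    for depth in range(1, len(matrix) + 1):
        next_frontier = []
        for node in frontier:        # re-expand last iteration's deepest nodes
            for nb in range(len(matrix)):
                if matrix[node][nb] and nb not in seen:
                    seen.add(nb)
                    order.append(nb)
                    if nb == goal:
                        return depth, order
                    next_frontier.append(nb)
        if not next_frontier:        # no new nodes were generated: goal unreachable
            return None, order
        frontier = next_frontier
    return None, order
```

Under this reading each node is expanded at most once but every generated node is stored, so the search behaves much like BFS and gives up the linear space bound, which is exactly the trade-off the prompt warns about.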


