Question

Help me with this programming please!

import java.net.URI;
import java.util.Set;

/**
 * A simplified web crawler, specialized to crawl local URIs rather
 * than to retrieve remote documents.
 */
public class UriCrawler {

    /**
     * Instantiates a new UriCrawler. The maximum number of documents a crawler
     * will attempt to visit, ever, is limited to visitQuota.
     *
     * @param visitQuota
     *            the maximum number of documents a crawler will attempt to
     *            visit
     * @throws IllegalArgumentException
     *             if visitQuota is less than one
     */
    public UriCrawler(int visitQuota) throws IllegalArgumentException {
        if (visitQuota < 1) {
            throw new IllegalArgumentException();
        }
        // TODO
    }

    /**
     * Returns the set of URIs that this crawler has attempted to visit
     * (successfully or not).
     *
     * @return the set of URIs that this crawler has attempted to visit
     */
    public Set<URI> getVistedUris() {
        // TODO
        return null;
    }

    /**
     * Returns the set of RetrievedDocuments corresponding to the URIs
     * this crawler has successfully visited.
     *
     * @return the set of RetrievedDocuments corresponding to the URIs
     *         this crawler has successfully visited
     */
    public Set<RetrievedDocument> getVisitedDocuments() {
        // TODO
        return null;
    }

    /**
     * Adds a URI to the collection of URIs that this crawler should attempt to
     * visit. Does not visit the URI.
     *
     * @param uri
     *            the URI to be visited (later!)
     */
    public void addUri(URI uri) {
        // TODO
    }

    /**
     * Attempts to visit a single as-yet unattempted URI in this crawler's
     * collection of to-be-visited URIs.
     *
     * Visiting a document entails parsing the text and links from the URI.
     *
     * If the parse succeeds:
     *
     * - The "file:" links should be added to this crawler's collection of
     *   to-be-visited URIs.
     *
     * - A new RetrievedDocument should be added to this crawler's collection
     *   of successfully visited documents.
     *
     * If the parse fails, this method considers the visit attempted but
     * unsuccessful.
     *
     * @throws MaximumVisitsExceededException
     *             if this crawler has already attempted to visit its quota of
     *             visits
     * @throws NoUnvisitedUrisException
     *             if no unattempted URIs remain in this crawler's collection
     *             of URIs to visit
     */
    public void visitOne() throws MaximumVisitsExceededException, NoUnvisitedUrisException {
        // TODO
    }

    /**
     * Attempts to visit all URIs in this crawler (and any URIs they reference,
     * and so on).
     *
     * This method will not raise a MaximumVisitsExceededException if there are
     * more URIs than can be visited. It will instead stop once the UriCrawler's
     * quota has been reached.
     */
    public void visitAll() {
        // TODO
    }
}

I need to translate these behaviors into code.

Please help, thank you.
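
Here is one way those documented behaviors could be turned into code. This is only a sketch under a few assumptions: it presumes RetrievedDocument, MaximumVisitsExceededException, and NoUnvisitedUrisException exist in your project as the skeleton implies, that RetrievedDocument can be constructed from a URI plus the parsed text and links, and that DocumentParser.parseText / DocumentParser.parseLinks are hypothetical placeholders for whatever parsing utility your assignment actually provides. Swap in the real names before using it.

import java.net.URI;
import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class UriCrawler {

    // Maximum number of visit attempts this crawler will ever make.
    private final int visitQuota;

    // URIs that have been added but not yet attempted.
    private final Set<URI> pendingUris = new LinkedHashSet<>();

    // URIs that have been attempted, whether or not the parse succeeded.
    private final Set<URI> attemptedUris = new HashSet<>();

    // Documents produced by the attempts that succeeded.
    private final Set<RetrievedDocument> visitedDocuments = new HashSet<>();

    public UriCrawler(int visitQuota) {
        if (visitQuota < 1) {
            throw new IllegalArgumentException("visitQuota must be at least 1");
        }
        this.visitQuota = visitQuota;
    }

    public Set<URI> getVistedUris() {
        // Read-only view so callers cannot alter the crawler's state.
        return Collections.unmodifiableSet(attemptedUris);
    }

    public Set<RetrievedDocument> getVisitedDocuments() {
        return Collections.unmodifiableSet(visitedDocuments);
    }

    public void addUri(URI uri) {
        // Only queue URIs that have not already been attempted.
        if (!attemptedUris.contains(uri)) {
            pendingUris.add(uri);
        }
    }

    public void visitOne() throws MaximumVisitsExceededException, NoUnvisitedUrisException {
        if (attemptedUris.size() >= visitQuota) {
            throw new MaximumVisitsExceededException();
        }
        if (pendingUris.isEmpty()) {
            throw new NoUnvisitedUrisException();
        }

        // Take one as-yet unattempted URI; it counts as attempted either way.
        URI uri = pendingUris.iterator().next();
        pendingUris.remove(uri);
        attemptedUris.add(uri);

        try {
            // Hypothetical parser calls -- replace with whatever your
            // assignment provides for reading a local document.
            String text = DocumentParser.parseText(uri);
            List<URI> links = DocumentParser.parseLinks(uri);

            // Record the success (assumes this RetrievedDocument constructor).
            visitedDocuments.add(new RetrievedDocument(uri, text, links));

            // Queue every "file:" link for a later visit.
            for (URI link : links) {
                if ("file".equalsIgnoreCase(link.getScheme())) {
                    addUri(link);
                }
            }
        } catch (Exception e) {
            // Parse failed: the visit was attempted but unsuccessful,
            // so nothing is recorded beyond the attempted-URIs entry.
        }
    }

    public void visitAll() {
        // Keep visiting until the quota is reached or nothing is left to try.
        while (attemptedUris.size() < visitQuota && !pendingUris.isEmpty()) {
            try {
                visitOne();
            } catch (MaximumVisitsExceededException | NoUnvisitedUrisException e) {
                return; // the loop guard should prevent this; stop quietly if not
            }
        }
    }
}

The bookkeeping is the main idea: pendingUris holds URIs added but not yet attempted, attemptedUris records every attempt (successful or not) and drives the quota check, and visitedDocuments holds only the successes. Each method in the skeleton's javadoc then maps onto one or two operations on those three collections.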
