Question
New to Python and need help. Language used: Python 3, with the requests and BeautifulSoup libraries.
Build a web crawler function that starts with a URL representing a topic and outputs a list of at least 15 relevant URLs. The URLs can be pages within the original domain, but a few should be outside it.
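A minimal sketch of such a crawler with requests and BeautifulSoup. The function names (`extract_links`, `crawl`) and the breadth-first strategy are my own choices, not part of the assignment; a real run would also want politeness delays and robots.txt checks.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def extract_links(html, base_url):
    """Return absolute http(s) links found in an HTML page."""
    soup = BeautifulSoup(html, "html.parser")
    links = []
    for a in soup.find_all("a", href=True):
        url = urljoin(base_url, a["href"])  # resolve relative hrefs
        if url.startswith("http"):
            links.append(url)
    return links

def crawl(start_url, limit=15):
    """Breadth-first crawl from start_url, collecting up to `limit` unique URLs."""
    seen, queue, found = {start_url}, [start_url], []
    while queue and len(found) < limit:
        url = queue.pop(0)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip unreachable pages
        for link in extract_links(resp.text, url):
            if link not in seen:
                seen.add(link)
                found.append(link)
                queue.append(link)
    return found[:limit]
```

Because `crawl` follows every link it finds, not just same-domain ones, the result naturally mixes in-domain and out-of-domain pages.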
Write a function to loop through your URLs and scrape all the text off each page. Store each page's text in its own file.
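One way to do this step, again with requests and BeautifulSoup; the helper name `page_text` and the `page_{i}.txt` naming scheme are assumptions for illustration.

```python
import os
import requests
from bs4 import BeautifulSoup

def page_text(html):
    """Strip tags and return only the visible text of an HTML page."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):  # drop non-visible content
        tag.decompose()
    return soup.get_text(separator=" ")

def scrape_pages(urls, out_dir="raw"):
    """Fetch each URL and write its visible text to its own file."""
    os.makedirs(out_dir, exist_ok=True)
    for i, url in enumerate(urls):
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        path = os.path.join(out_dir, f"page_{i}.txt")
        with open(path, "w", encoding="utf-8") as f:
            f.write(page_text(resp.text))
```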
Write a function to clean up the text. You might need to delete newlines and tabs. Extract sentences with NLTK's sentence tokenizer. Write the sentences for each file to a new file; that is, if you have 15 files in, you have 15 files out.
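A sketch of the cleaning step. Collapsing all whitespace with a regex handles newlines and tabs in one pass; `sent_tokenize` is NLTK's sentence tokenizer, which needs its punkt model downloaded once.

```python
import re

def clean_text(raw):
    """Collapse newlines, tabs, and repeated spaces into single spaces."""
    return re.sub(r"\s+", " ", raw).strip()

def split_sentences(text):
    """Sentence-split with NLTK's tokenizer."""
    # imported here so clean_text works even without NLTK installed
    import nltk
    nltk.download("punkt", quiet=True)
    nltk.download("punkt_tab", quiet=True)  # model name used by newer NLTK releases
    return nltk.sent_tokenize(text)

def clean_file(in_path, out_path):
    """Read one raw text file, write one sentence-per-line file."""
    with open(in_path, encoding="utf-8") as f:
        sents = split_sentences(clean_text(f.read()))
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(sents))
```

Calling `clean_file` once per scraped file keeps the one-file-in, one-file-out mapping the assignment asks for.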
You may need to edit the cleaned-up files manually to delete irrelevant material.
Write a function to extract at least 10 important terms from the pages using an importance measure such as term frequency. First, it's a good idea to lower-case everything and remove stopwords and punctuation. Then build a vocabulary of unique terms. Create a dictionary of unique terms where the key is the token and the value is the count across all documents. Print the top 25-40 terms.
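A stdlib-only sketch of the counting step using `collections.Counter`, which is exactly the token-to-count dictionary described. In practice you would pass in NLTK's stopword list (`nltk.corpus.stopwords.words("english")`) and could swap `str.split` for `nltk.word_tokenize`; the stopword set is a parameter here to keep the sketch self-contained.

```python
import string
from collections import Counter

def term_counts(texts, stopwords):
    """Count lower-cased alphabetic tokens across all documents,
    skipping stopwords and stripping surrounding punctuation."""
    counts = Counter()
    for text in texts:
        for tok in text.lower().split():
            tok = tok.strip(string.punctuation)
            if tok.isalpha() and tok not in stopwords:
                counts[tok] += 1
    return counts
```

With the counts in hand, `term_counts(docs, stops).most_common(40)` prints the top 25-40 terms the assignment asks for.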
Manually determine the top 10 terms based on your domain knowledge.
Build a searchable knowledge base of facts that your bot can share related to the 10 terms.
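One simple shape for the knowledge base is a dict mapping each of the 10 terms to the sentences that mention it; the bot then answers a query by looking the term up. The function names here are assumptions, and the sentences would come from the cleaned files produced earlier.

```python
def build_kb(terms, sentences):
    """Map each important term to every sentence that mentions it."""
    kb = {t.lower(): [] for t in terms}
    for sent in sentences:
        low = sent.lower()
        for term in kb:
            if term in low:
                kb[term].append(sent)
    return kb

def lookup(kb, term):
    """Return the facts stored for a term (empty list if unknown)."""
    return kb.get(term.lower(), [])
```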