Question
Part 3 (20 points): Word frequencies. One of the primary domains we'll work with this semester is text. The most common approach to dealing with large bodies of text is statistical, which requires counting the number of words in a document. wc.py gives you a starting point for a program that reads words from a file and constructs a frequency distribution. Please add the following features (you may need to explore the Python documentation a little to figure these out):

1. The way we're processing command-line args is icky. Please replace it with the argparse module.
2. The current wc.py will only process a single file. Fix it so it can take a directory name or path as input and run the word count on all files in that directory and its subdirectories. (Look at os.walk to see how to do this.)
3. Add a '--hist=picklefile' argument. If it is present, you should instead read in a previously pickled frequency distribution and print out the words and counts. If the '-s' argument is also present, print the words in order of increasing frequency.
Step by Step Solution
The solution involves 3 steps, one per requested feature.
Step: 1
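Replace the ad-hoc sys.argv handling with argparse. Since the starter wc.py isn't shown here, the parser below is only a sketch of how it could look; the function name build_parser and the positional argument name path are assumptions, while --hist and -s come straight from the assignment.

```python
import argparse

def build_parser():
    # Hypothetical CLI for wc.py: one positional file/directory path,
    # plus the two optional flags the assignment asks for.
    parser = argparse.ArgumentParser(
        description="Count word frequencies in text files.")
    parser.add_argument("path", nargs="?",
                        help="file or directory to process")
    parser.add_argument("-s", dest="sort", action="store_true",
                        help="print words in order of increasing frequency")
    parser.add_argument("--hist", metavar="PICKLEFILE",
                        help="load a pickled frequency distribution "
                             "instead of counting words")
    return parser
```

With argparse, both `--hist=freqs.pkl` and `--hist freqs.pkl` work, and `-h` gives a usage message for free, which is the main win over manual sys.argv parsing.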
Step: 2
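Use os.walk to handle a directory argument. One possible shape, assuming a helper (walk_files is a name invented here) that yields every file path so the existing single-file counting code can be called unchanged on each one:

```python
import os

def walk_files(root):
    # If root is itself a regular file, yield just that file;
    # otherwise recurse through root and all its subdirectories.
    if os.path.isfile(root):
        yield root
        return
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            yield os.path.join(dirpath, name)
```

The main program can then loop `for path in walk_files(args.path): ...` and accumulate one frequency distribution across all files.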
Step: 3
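Handle --hist with the pickle module. The sketch below assumes the frequency distribution was pickled as a plain dict of word-to-count pairs (the assignment doesn't specify the format), and honors -s by sorting on the counts:

```python
import pickle

def print_hist(pickle_path, sort_increasing=False):
    # Load a previously pickled {word: count} distribution and
    # print each word with its count, one per line.
    with open(pickle_path, "rb") as f:
        freqs = pickle.load(f)
    items = freqs.items()
    if sort_increasing:
        # -s flag: least frequent words first
        items = sorted(items, key=lambda kv: kv[1])
    for word, count in items:
        print(word, count)
```

In main, this branch runs instead of the normal counting path whenever args.hist is set.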