Question
This is a simple worst-case complexity problem, but I'm having trouble grasping the concept.
The following code counts how often different words occur in a text file.
    char toLC (char c)
    {
        return (c >= 'A' && c <= 'Z') ? c - 'A' + 'a' : c;
    }

    void countWordsInFile (istream& inFile, map<string, int>& counts)
    {
        string word;
        while (inFile >> word) {
            // strip away any trailing non-alphabetic characters
            // (wordCharacters is assumed to be declared elsewhere as the
            //  set of characters that may appear in a word)
            string::size_type n = word.find_first_not_of (wordCharacters);
            if (n != string::npos)
                word = word.substr (0, n);
            if (word.length() > 0) {
                // if there's anything left, lower-case it and count it
                transform (word.begin(), word.end(), word.begin(), toLC);
                // increment the appropriate counter in counts
                map<string, int>::iterator pos = counts.find (word);
                if (pos == counts.end())
                    // this is the 1st time we've seen this word
                    counts.insert (map<string, int>::value_type(word, 1));
                else
                    // we've seen this word before
                    ++(*pos).second;
            }
        }
    }
Let W denote the number of words being processed. In a typical document, many of these words will be duplicates. Let D denote the number of distinct (non-duplicate) words, so D <= W. Assume the length of any individual word is bounded by some constant (e.g., no single word is longer than "antidisestablishmentarianism"). What is the worst-case complexity of countWordsInFile?
Step by Step Solution
There are 3 Steps involved in it

Step: 1
Each iteration of the while loop processes one word. Because the length of any individual word is bounded by a constant, the per-word string operations (reading the word, find_first_not_of, substr, and transform) each take O(1) time.

Step: 2
counts is a map<string, int>, which the C++ library implements as a balanced binary search tree. Since counts never holds more than D entries, each counts.find and counts.insert call takes O(log D) time. (Each comparison between two keys is O(1), again because word lengths are bounded by a constant.)

Step: 3
The loop body therefore costs O(log D) per word, and the loop executes W times, for a worst-case total of O(W log D). In the worst case every word is distinct (D = W), so this is at most O(W log W).