Question
When maintaining software, an ideal situation is to be able to fix bugs at a higher rate than they are discovered. Of course, this cannot be the case all the time, but we might be interested in finding the best period during which this happened, so that we can learn from our successes and try to repeat them. To this end, we look at some data from a defect-tracking tool: when a bug is found (opened), it is entered in the database with the date it was reported; when it is fixed (closed), the fix date is entered as well. From this, we can extract the number of bugs opened and the number of bugs closed every week. Here is an example of such data for 7 weeks:

\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
Week: & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline
\# bugs opened: & 2 & 14 & 0 & 10 & 9 & 6 & 4 \\ \hline
\# bugs closed: & 12 & 2 & 7 & 14 & 7 & 9 & 3 \\ \hline
Difference (opened $-$ closed): & $-10$ & $+12$ & $-7$ & $-4$ & $+2$ & $-3$ & $+1$ \\ \hline
\end{tabular}

What we want is to find the period of time whose total difference is the smallest, i.e., the period during which we had the best situation of bugs being fixed at a higher rate than they were discovered. For the example above, that period is weeks 3 to 6 inclusive (total difference $-12$). Give an efficient divide-and-conquer algorithm to solve this problem, one that is faster than the brute-force solution of taking each possible time interval and adding up the differences over that interval.
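This is the minimum-sum contiguous interval problem on the weekly differences (opened minus closed), the mirror image of the classic maximum-subarray problem. Below is a minimal sketch of the standard divide-and-conquer approach under that reading: split the weeks in half, recurse on each half, and compute the best interval that crosses the midpoint in linear time. The function names (min_interval, min_crossing), the Python setting, and the 0-indexed return convention are illustrative choices, not part of the original question.

def min_crossing(d, lo, mid, hi):
    # Smallest-sum interval that straddles the midpoint: best suffix of the
    # left half plus best prefix of the right half.
    best_left, s, left_start = float("inf"), 0, mid
    for i in range(mid, lo - 1, -1):
        s += d[i]
        if s < best_left:
            best_left, left_start = s, i
    best_right, s, right_end = float("inf"), 0, mid + 1
    for j in range(mid + 1, hi + 1):
        s += d[j]
        if s < best_right:
            best_right, right_end = s, j
    return left_start, right_end, best_left + best_right

def min_interval(d, lo=0, hi=None):
    # Returns (start, end, total) of the contiguous interval with the smallest sum.
    if hi is None:
        hi = len(d) - 1
    if lo == hi:                               # base case: a single week
        return lo, lo, d[lo]
    mid = (lo + hi) // 2
    left = min_interval(d, lo, mid)            # best interval entirely in the left half
    right = min_interval(d, mid + 1, hi)       # best interval entirely in the right half
    cross = min_crossing(d, lo, mid, hi)       # best interval crossing the midpoint
    return min(left, right, cross, key=lambda t: t[2])

# Weekly differences (opened - closed) from the example table.
diffs = [-10, 12, -7, -4, 2, -3, 1]
print(min_interval(diffs))                     # (2, 5, -12): weeks 3 to 6, total -12

On the example data this prints (2, 5, -12), i.e. weeks 3 to 6 with a total of -12, matching the period stated in the question. The recurrence T(n) = 2T(n/2) + O(n) gives an O(n log n) running time, compared with the quadratic-or-worse brute force that sums the differences over every possible interval.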