Question
2) Consider a Hadoop job that processes an input data file spanning 189 disk blocks (189 distinct blocks, not counting the HDFS replication factor). The mapper in this job requires 1 minute to read and fully process a single block of data. The reducer requires 1 second (not minute) to produce the answer for one key's worth of values, and there are 2,000 distinct keys in total (mappers generate far more key-value pairs, but the keys only occur in the 1-2000 range, for 2,000 unique entries). Assume that each node runs a reducer and that the keys are distributed evenly. The total cost consists of the time to perform the Map phase plus the time to perform the Reduce phase. Assume that only one mapper and only one reducer are created on each node.
a) How long will it take to complete the job with only one Hadoop worker node?
b) 30 Hadoop worker nodes?
c) 60 Hadoop worker nodes?
d) 100 Hadoop worker nodes?
e) Would changing the replication factor have any effect on your answers to a-d?
Step by Step Solution
The solution has 3 steps.
Step 1: Map phase. With N worker nodes, each running a single mapper, the 189 blocks are processed in ceil(189 / N) waves of 1 minute each, so the Map phase takes ceil(189 / N) minutes.
Step 2: Reduce phase. The 2,000 keys are spread evenly across the N reducers, so the busiest reducer handles ceil(2000 / N) keys at 1 second each; the Reduce phase takes ceil(2000 / N) seconds.
Step 3: Adding the two phases:
a) 1 node: 189 min + 2,000 s = 222 min 20 s.
b) 30 nodes: ceil(189/30) = 7 min, plus ceil(2000/30) = 67 s, i.e. 8 min 7 s.
c) 60 nodes: ceil(189/60) = 4 min, plus ceil(2000/60) = 34 s, i.e. 4 min 34 s.
d) 100 nodes: ceil(189/100) = 2 min, plus ceil(2000/100) = 20 s, i.e. 2 min 20 s.
e) No. The replication factor affects fault tolerance and data locality, not the number of blocks that must be mapped or keys that must be reduced, so the times in a-d are unchanged.
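The per-node arithmetic above can be sketched as a small Python helper. This is a minimal illustration of the cost model stated in the question (one mapper and one reducer per node, keys spread evenly); the function and constant names are assumptions, not part of the original problem statement.

```python
import math

MAP_BLOCKS = 189        # input file spans 189 HDFS blocks (from the question)
MAP_SEC_PER_BLOCK = 60  # mapper: 1 minute per block
KEYS = 2000             # distinct keys seen by the reducers
REDUCE_SEC_PER_KEY = 1  # reducer: 1 second per key

def job_time_seconds(nodes: int) -> int:
    """Total job time under the question's cost model.

    Map phase: the slowest node processes ceil(blocks / nodes) blocks,
    one after another, at 1 minute each.
    Reduce phase: the busiest reducer handles ceil(keys / nodes) keys
    at 1 second each.
    """
    map_sec = math.ceil(MAP_BLOCKS / nodes) * MAP_SEC_PER_BLOCK
    reduce_sec = math.ceil(KEYS / nodes) * REDUCE_SEC_PER_KEY
    return map_sec + reduce_sec

for n in (1, 30, 60, 100):
    total = job_time_seconds(n)
    print(f"{n:3d} node(s): {total} s = {total // 60} min {total % 60} s")
```

Running it reproduces the answers in Step 3 (e.g. 13,340 s = 222 min 20 s for one node). Note that speedup is not linear: with 100 nodes the map phase still needs 2 one-minute waves because 189 blocks do not divide evenly among 100 mappers.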