Question
28. Which of the following is not a core Hadoop concept?
a. Developers need to worry about network programming, temporal dependencies or low-level infrastructure
b. Developers should not write code which communicates between nodes
c. Data is spread among machines in advance
d. Data is replicated multiple times on the system for increased availability and reliability
29. In Hadoop, data is loaded into the system as blocks. What is the typical block size?
a. 16MB or 32MB
b. 32MB or 48MB
c. 48MB or 96MB
d. 64MB or 128MB
30. Which statement about Hadoop fault tolerance is not correct?
a. If a node fails, the master will detect that failure and re-assign the work to a different node on the system
b. Restarting a task requires communication with nodes working on other portions of the data
c. If a failed node restarts, it is automatically added back to the system and assigned new tasks
d. If a node appears to be running slowly, the master can redundantly execute another instance of the same task
31. Which statement about Hadoop is not true?
a. Two core components in Hadoop are HDFS and MapReduce
b. Hadoop Ecosystem includes Pig, Hive, HBase, Flume, Oozie, Sqoop, etc
c. A Hadoop cluster can have only one node
d. Hadoop is an open-source project overseen by the Apache Software Foundation
32. Which of the following statements about MapReduce is not true?
a. MapReduce is the system used to store data in the Hadoop cluster
b. It consists of two phases: Map, and then Reduce
c. Between Map and Reduce, there is a stage known as the shuffle and sort
d. Each Map task operates on a discrete portion of the overall dataset, typically one HDFS block of data
33. Which of the following is not true about the basic concepts of HDFS?
a. HDFS is a file system written in Java that sits on top of a native file system such as ext3, ext4, or xfs
b. Provides redundant storage for massive amounts of data using expensive, specialized computers
c. HDFS performs best with a modest number of large files, each file typically 100MB or more
d. HDFS is optimized for large, streaming reads of files rather than random reads
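The Map → shuffle-and-sort → Reduce flow referenced in question 32 (options b–d) can be illustrated with a minimal in-memory word count. This is a plain-Python sketch of the three phases for intuition only, not Hadoop's actual Java API:

```python
from collections import defaultdict

def map_phase(records):
    """Map: each record is processed independently, emitting (key, value)
    pairs -- here, (word, 1) for a word count."""
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle_and_sort(pairs):
    """Shuffle and sort: group all values by key and order the keys,
    so each reducer sees one key with all of its values."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reduce_phase(grouped):
    """Reduce: aggregate the grouped values for each key."""
    return {key: sum(values) for key, values in grouped}

records = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle_and_sort(map_phase(records)))
print(counts["the"])  # -> 2
```

In real Hadoop, each Map task would run on a worker node against one HDFS block (per option d), and the framework, not user code, performs the shuffle and sort between the phases (per option c).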