Hadoop interview questions part 47

Take as many assessments as you can to validate and improve your skill rating.

Total Questions: 20

1. Sqoop can also import the data into Hive by generating and executing a ____________ statement to define the data’s layout in Hive.

Correct Answer is : CREATE TABLE
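For context, when run with `--hive-import`, Sqoop inspects the source table's column metadata and generates Hive DDL of roughly this shape. The sketch below builds a comparable statement in plain Python; the table name, columns, and delimiter are hypothetical examples, not output copied from Sqoop:

```python
# Sketch of the kind of CREATE TABLE statement Sqoop generates for Hive.
# The table/column names and the field delimiter below are hypothetical.
def hive_create_table(table, columns, field_delim="\\001"):
    """Build a CREATE TABLE statement resembling Sqoop's generated Hive DDL."""
    cols = ", ".join(f"`{name}` {htype}" for name, htype in columns)
    return (
        f"CREATE TABLE IF NOT EXISTS `{table}` ({cols}) "
        f"ROW FORMAT DELIMITED FIELDS TERMINATED BY '{field_delim}' "
        f"STORED AS TEXTFILE"
    )

print(hive_create_table("employees", [("id", "INT"), ("name", "STRING")]))
```

Sqoop maps each RDBMS column type to a Hive type when emitting the column list, which is why the statement defines the data's layout rather than copying the source schema verbatim.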

2. The __________ tool imports a set of tables from an RDBMS to HDFS.

Correct Answer is : import-all-tables

3. Which of the following arguments is not supported by the import-all-tables tool?

Correct Answer is : –class-name

4. ____________ is a distributed real-time computation system for processing large volumes of high-velocity data.

Correct Answer is : Storm

5. Point out the correct statement:

Correct Answer is : All of the mentioned

6. Storm integrates with __________ via Apache Slider

Correct Answer is : YARN

7. For Apache __________ users, Storm utilizes the same ODBC interface.

Correct Answer is : Hive

8. Point out the wrong statement:

Correct Answer is : Storm is difficult and can be used with only Java

9. Storm is benchmarked as processing one million _______ byte messages per second per node

Correct Answer is : 100
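As a quick back-of-the-envelope check, that benchmark figure works out to about 100 MB of message payload per second per node:

```python
# Back-of-the-envelope check of the Storm benchmark figure:
# one million 100-byte messages per second per node.
messages_per_sec = 1_000_000
message_size_bytes = 100

throughput_bytes = messages_per_sec * message_size_bytes
print(throughput_bytes)              # bytes of payload per second per node
print(throughput_bytes / 1_000_000)  # roughly 100 MB/s per node
```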

10. Apache Storm added open source, stream data processing to _________ Data Platform

Correct Answer is : Hortonworks

11. How many types of nodes are present in a Storm cluster?

Correct Answer is : 2

12. __________ node distributes code across the cluster.

Correct Answer is : Nimbus

13. ____________ communicates with Nimbus through Zookeeper, and starts and stops worker processes according to signals from Nimbus.

Correct Answer is : Supervisor

14. Which of the following nodes is responsible for executing a task assigned to it by the JobTracker?

Correct Answer is : TaskTracker

15. Point out the correct statement:

Correct Answer is : MapReduce tries to place the data and the compute as close as possible

16. The ___________ part of MapReduce is responsible for processing one or more chunks of data and producing the output results.

Correct Answer is : MapTask

17. _________ function is responsible for consolidating the results produced by each of the Map() functions/tasks.

Correct Answer is : Reduce
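The split between the two answers above can be sketched in plain Python with a toy word-count job (the chunk contents below are made-up examples): each map task processes one chunk and emits intermediate (key, value) pairs, and the reduce step consolidates the results from all map tasks.

```python
from collections import defaultdict

def map_task(chunk):
    """Process one chunk of input, emitting intermediate (word, 1) pairs."""
    return [(word, 1) for word in chunk.split()]

def reduce_task(pairs):
    """Consolidate the results produced by each of the map tasks."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Two input chunks, as if split across two map tasks.
chunks = ["hadoop storm hadoop", "storm hadoop"]
intermediate = [pair for chunk in chunks for pair in map_task(chunk)]
print(reduce_task(intermediate))  # {'hadoop': 3, 'storm': 2}
```

In a real cluster the framework shuffles the intermediate pairs so that all values for one key reach the same reducer; here the consolidation happens in a single dictionary for brevity.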

18. Point out the wrong statement:

Correct Answer is : None of the mentioned

19. Although the Hadoop framework is implemented in Java, MapReduce applications need not be written in:

Correct Answer is : Java

20. ________ is a utility which allows users to create and run jobs with any executable as the mapper and/or the reducer.

Correct Answer is : Hadoop Streaming
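With Hadoop Streaming, the mapper and reducer are ordinary executables that read lines from stdin and write tab-separated key/value lines to stdout. The pair below sketches a word-count job in that style, simulated locally as generator functions rather than as stdin/stdout scripts; the input lines are made-up examples:

```python
# Word-count mapper/reducer in the Hadoop Streaming style: each stage
# consumes lines of text and emits "key\tvalue" lines. Simulated locally
# here; on a cluster each would be a standalone executable.

def mapper(lines):
    """Emit one 'word\t1' line per word."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_lines):
    """Sum counts for each key, relying on the input being sorted by key."""
    current, count = None, 0
    for line in sorted_lines:
        key, value = line.split("\t")
        if key != current:
            if current is not None:
                yield f"{current}\t{count}"
            current, count = key, 0
        count += int(value)
    if current is not None:
        yield f"{current}\t{count}"

# The framework sorts mapper output by key before the reduce phase.
mapped = sorted(mapper(["storm hadoop", "hadoop hadoop"]))
print(list(reducer(mapped)))  # ['hadoop\t3', 'storm\t1']
```

On a real cluster, scripts like these are passed to the streaming jar via its `-mapper` and `-reducer` options, and the sort between the phases is performed by the framework rather than by `sorted()`.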
