Hadoop interview questions part 11

Take as many assessments as you can to improve and validate your skill rating.

Total Questions: 20

1. ____________ is used when you want the sink to be the input source for another operation.

Correct Answer is : Agent Tier Event

2. ___________ is where you would land a flow (or possibly multiple flows joined together) into an HDFS-formatted file system.

Correct Answer is : Collector Tier Event

3. ____________ sink can be a text file, the console display, a simple HDFS path, or a null bucket where the data is simply deleted.

Correct Answer is : Basic

4. Flume deploys as one or more agents, each contained within its own instance of:

Correct Answer is : JVM

5. _________ is the name of the archive you would like to create.

Correct Answer is : archiveName

6. Point out the correct statement:

Correct Answer is : All of the mentioned

7. Using Hadoop Archives in __________ is as easy as specifying a different input filesystem than the default file system.

Correct Answer is : MapReduce

8. The __________ guarantees that excess resources taken from a queue will be restored to it within N minutes of its need for them.

Correct Answer is : scheduler

9. Point out the wrong statement:

Correct Answer is : None of the mentioned

10. _________ is a pluggable Map/Reduce scheduler for Hadoop which provides a way to share large clusters.

Correct Answer is : Capacity Scheduler

11. Which of the following parameters describes the destination directory that would contain the archive?

Correct Answer is :

12. _________ identifies filesystem path names which work as usual with regular expressions.

Correct Answer is : none of the mentioned

13. __________ is the parent argument used to specify the relative path to which the files should be archived.

Correct Answer is : -p
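Questions 5, 11, and 13 all refer to arguments of the `hadoop archive` command, whose general form is `hadoop archive -archiveName <name> -p <parent> <src>* <dest>`. As a rough sketch, the invocation can be assembled like this (the paths and archive name below are hypothetical, for illustration only):

```python
def archive_command(name, parent, sources, dest):
    """Build a `hadoop archive` command line:
    -archiveName is the name of the archive to create,
    -p is the parent path the source dirs are relative to,
    and the final argument is the destination directory
    that will contain the resulting .har file."""
    return (["hadoop", "archive", "-archiveName", name, "-p", parent]
            + list(sources) + [dest])

# Hypothetical paths for illustration:
cmd = archive_command("logs.har", "/user/hadoop", ["dir1", "dir2"], "/user/out")
print(" ".join(cmd))
# → hadoop archive -archiveName logs.har -p /user/hadoop dir1 dir2 /user/out
```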

14. Mapper implementations are passed the JobConf for the job via the ________ method.

Correct Answer is : JobConfigurable.configure

15. Input to the _______ is the sorted output of the mappers.

Correct Answer is : Reducer
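The point behind question 15, that each Reducer consumes the mappers' output already grouped and sorted by key, can be sketched with a Hadoop Streaming style reducer in Python (a sketch of the contract, not the Java API itself; the tab-separated key/value line format is the Streaming default):

```python
import itertools

def streaming_reduce(lines):
    """Consume sorted `key<TAB>value` lines, as the framework delivers
    them to a reducer after the shuffle/sort phase, and sum the values
    per key. Correctness relies on equal keys being adjacent, which is
    exactly what the sort guarantee provides."""
    pairs = (line.rstrip("\n").split("\t", 1) for line in lines)
    for key, group in itertools.groupby(pairs, key=lambda kv: kv[0]):
        yield key, sum(int(v) for _, v in group)

# Sorted mapper output, as the framework would present it:
sorted_output = ["a\t1", "a\t2", "b\t5"]
print(list(streaming_reduce(sorted_output)))  # → [('a', 3), ('b', 5)]
```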

16. The right number of reduces seems to be:

Correct Answer is : 0.95
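The guidance behind question 16 is to set the number of reduces to 0.95 (or 1.75) times the number of nodes multiplied by the maximum reduce slots per node; with 0.95, all reduces can launch immediately once the maps finish. A quick arithmetic sketch (the cluster sizes below are made-up example values):

```python
def suggested_reduces(nodes, max_reduce_slots_per_node, factor=0.95):
    """0.95 * total reduce slots lets every reduce launch at once as
    the maps finish; 1.75 instead runs a second wave of reduces,
    trading startup overhead for better load balancing."""
    return int(factor * nodes * max_reduce_slots_per_node)

# Hypothetical 10-node cluster, 2 reduce slots per node:
print(suggested_reduces(10, 2))        # → 19
print(suggested_reduces(10, 2, 1.75))  # → 35
```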

17. Point out the wrong statement:

Correct Answer is : Reducer has 2 primary phases

18. The output of the _______ is not sorted in the MapReduce framework for Hadoop.

Correct Answer is : None of the mentioned

19. Which of the following phases occur simultaneously?

Correct Answer is : Shuffle and Sort

20. Mapper and Reducer implementations can use the ________ to report progress or just indicate that they are alive.

Correct Answer is : Reporter
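In the Java API of question 20, tasks call methods on the Reporter object (such as setStatus and incrCounter) to report progress and signal liveness. Hadoop Streaming exposes the same facility to scripts by having them write specially formatted lines to stderr. A minimal sketch of those helpers (the function names here are my own; the `reporter:status:` and `reporter:counter:` line formats are the documented Streaming convention):

```python
import sys

def report_status(message, stream=sys.stderr):
    """Streaming equivalent of Reporter.setStatus(): a line of the form
    `reporter:status:<message>` on stderr updates the task's status
    string and tells the framework the task is still alive."""
    stream.write("reporter:status:%s\n" % message)

def increment_counter(group, counter, amount=1, stream=sys.stderr):
    """Streaming equivalent of Reporter.incrCounter(): increments the
    named counter in the given group by `amount`."""
    stream.write("reporter:counter:%s,%s,%d\n" % (group, counter, amount))
```

A long-running mapper might call `report_status("processing split 3")` periodically so the framework does not time the task out for inactivity.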
