Hadoop Learning

This blog is for those who want to learn the concepts of Big Data and Hadoop. Big Data is one of the emerging technologies and is very interesting to learn. There is broad scope in the field of Big Data, and it will only grow as time progresses. It is a challenging as well as an interesting field.

I will take you from the very basics to the advanced concepts of Hadoop and its ecosystem, which will help you grow in the field of Big Data.

Let's walk through Hadoop. Hadoop is open-source software for the distributed storage and processing of very large data sets on clusters built from commodity hardware. Commodity hardware means low-cost computer machines that are easily available. Hadoop processes data by splitting it into multiple small data sets, processing each piece independently, and combining the results.
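To build intuition for that split-process-combine idea, here is a tiny Python sketch. This is only an illustration of the pattern, not Hadoop itself: the function names (`split_into_chunks`, `process_chunk`, `merge_counts`) and the toy word-count task are my own choices for the example.

```python
from collections import Counter
from functools import reduce

# Toy illustration of Hadoop's idea (not Hadoop itself):
# split a large input into small chunks, process each chunk
# independently, then merge the partial results.

def split_into_chunks(lines, chunk_size):
    """Divide the data set into small fixed-size chunks."""
    return [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]

def process_chunk(chunk):
    """Count words in one chunk (the per-chunk processing step)."""
    return Counter(word for line in chunk for word in line.split())

def merge_counts(partials):
    """Combine the per-chunk results into one final answer."""
    return reduce(lambda a, b: a + b, partials, Counter())

lines = ["big data hadoop", "hadoop stores big data", "data is big"]
chunks = split_into_chunks(lines, 2)
totals = merge_counts(process_chunk(c) for c in chunks)
print(totals["big"])   # 3
print(totals["data"])  # 3
```

In real Hadoop the chunks live on different machines and are processed in parallel, but the overall shape of the computation is the same.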

The Hadoop ecosystem includes components such as HDFS, MapReduce (MR), Hive, Impala, Oozie, Kafka, Flume, Spark, and Pig. We will learn about these components one by one.

In the coming tutorials, we will learn the basic concepts of Big Data and Hadoop and why they are required.
