Apache Hadoop is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.
In this guide, we will learn how to run our first project using Hadoop and Docker. I will leave the example code at the end of the guide. So, let’s start!
Set Up the Environment
First, we need to prepare our environment. Make sure you have Docker and Docker Compose installed on your machine.
- Clone this repository: git clone https://github.com/big-data-europe/docker-hadoop.
- Build and run containers: docker-compose up -d. The -d flag is used to run the containers in the background.
- To check if the containers are running: docker ps.
To the Container World
Now, we want to move our files into the container and configure everything there. Of course, you could configure a volume mount instead, and then there would be no need to copy any files. Here, I am assuming you have no prior knowledge of docker-compose and will copy the files into the container.
- Create a directory on your machine for the input files and the code: mkdir <project-dir>.
- Copy the input files into the container: docker cp <input-file> namenode:/tmp/. Note that docker cp copies into the container's local filesystem; the files go into HDFS in a later step.
- Copy the code into the container: docker cp <class-name>.java namenode:/tmp/.
- Open a shell in the namenode container: docker exec -it namenode /bin/bash. You can now run HDFS commands.
- Navigate to the directory where the code is located: cd /tmp/. For organization purposes, you can create a directory for the code and move the code into it: mkdir <project-dir> and mv <class-name>.java <project-dir>. You can also create an output directory for the generated classes: mkdir classes.
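Putting the steps above together, here is a sketch of the whole sequence. The names wordcount, WordCount.java, and input.txt are my assumptions for this walkthrough (they match the appendix); any names work:

```shell
# Assumed names for this walkthrough; adjust to your own project
PROJECT=wordcount
CODE=WordCount.java
INPUT=input.txt

# Copy the code and the input file into the namenode container
docker cp "$INPUT" namenode:/tmp/
docker cp "$CODE" namenode:/tmp/

# Open a shell inside the container; the commands in the next
# sections are run from this shell
docker exec -it namenode /bin/bash
```

Inside the container you would then create /tmp/wordcount, move the files there, and create the classes directory, as described above.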
To the Hadoop World
Next, inside the container, we export the Hadoop classpath (needed later to compile the code) and create the required HDFS directories for the project and the input files.
- Export the HADOOP_CLASSPATH: export HADOOP_CLASSPATH=$(hadoop classpath).
- Create the required directories in the HDFS:
1. Create the root directory for this project: hadoop fs -mkdir /<project-name>.
2. Create the directory for the input files: hadoop fs -mkdir /<project-name>/Input.
3. Copy the input files into HDFS: hadoop fs -put /tmp/<input-file> /<project-name>/Input.
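As a concrete example, assume the project name wordcount and the input.txt file from the appendix (both names are my choice, not something the image requires). Inside the namenode container, the HDFS setup looks like:

```shell
PROJECT=wordcount   # assumed project name

export HADOOP_CLASSPATH=$(hadoop classpath)
hadoop fs -mkdir /$PROJECT
hadoop fs -mkdir /$PROJECT/Input
hadoop fs -put /tmp/$PROJECT/input.txt /$PROJECT/Input
hadoop fs -ls /$PROJECT/Input   # verify the file landed in HDFS
```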
You can open the HDFS web UI at http://localhost:9870.
Compile your Code
You don’t have to install Java locally; the Docker image already has it installed, and we will use it only inside the container.
- Compile the code: javac -classpath $HADOOP_CLASSPATH -d ./classes/ ./<class-name>.java. The -d flag specifies the directory where the compiled classes will be placed.
- Put the compiled code in a jar file: jar -cvf <jar-name>.jar -C ./classes . The -C flag makes jar change into the ./classes directory, and the trailing . includes everything in it.
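Concretely, for the WordCount.java from the appendix, the compile-and-package step inside the container looks like this (the /tmp/wordcount layout and jar name are my assumed choices from earlier):

```shell
JAR=WordCount.jar   # assumed jar name

cd /tmp/wordcount
export HADOOP_CLASSPATH=$(hadoop classpath)
javac -classpath "$HADOOP_CLASSPATH" -d ./classes/ ./WordCount.java
jar -cvf "$JAR" -C ./classes .
jar -tf "$JAR"   # list the jar contents to confirm the classes were packaged
```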
Run your First Job
To run the job: hadoop jar <jar-name>.jar <class-name> /<project-name>/Input /<project-name>/Output. The last two arguments are the HDFS input and output directories. The output directory must not already exist; Hadoop creates it and fails if it does.
You can find your HDFS files at http://localhost:9870/explorer.html#/.
Check the Output
- Check if the job was successful: hadoop job -list all (deprecated in recent versions in favor of mapred job -list all). The output lists each job’s ID and its state.
- To check the output: hadoop fs -cat /<project-name>/Output/*. This will display the output in the terminal. To save the output to a file: hadoop fs -cat /<project-name>/Output/* > <output-file>.
- To retrieve the output on your host machine, first copy it out of HDFS inside the container: hadoop fs -get /<project-name>/Output /tmp/, then, from the host: docker cp namenode:/tmp/Output .
Appendix
You can use the following files for testing.
- Create a file called WordCount.java with the following content:
// Java imports
import java.io.IOException;
import java.util.StringTokenizer;
// Hadoop imports
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
// WordCount class
public class WordCount {
    // The mapper: emits (word, 1) for every token in the input
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();
        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }
    // The reducer: sums the counts emitted for each word
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
    // The driver: configures and submits the job
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
- Create a file called input.txt with the following content:
Mostafa
Wael
Mostafa
Kamal
Wael
Mohammed
Mohammed
Mostafa
Kamal
Wael
Mostafa
Mostafa
This example counts the occurrences of each word in input.txt.
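As a quick sanity check, you can compute the expected counts on your host with standard Unix tools. This is just a local equivalent of what the MapReduce job produces (Hadoop's default text output is also tab-separated), not part of the Hadoop workflow:

```shell
# Recreate the input.txt from the appendix
cat > input.txt <<'EOF'
Mostafa
Wael
Mostafa
Kamal
Wael
Mohammed
Mohammed
Mostafa
Kamal
Wael
Mostafa
Mostafa
EOF

# Count each distinct word, formatted like the WordCount output
sort input.txt | uniq -c | awk '{print $2 "\t" $1}'
# Kamal    2
# Mohammed 2
# Mostafa  5
# Wael     3
```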
Run your first Big Data project using Hadoop and Docker in less than 10 Minutes! was originally published in FAUN Publication on Medium, where people are continuing the conversation by highlighting and responding to this story.