This article explains how to build hadoop and hbase clusters in docker. The content is straightforward and easy to follow; please read along with the editor to learn how to build hadoop and hbase clusters in docker.
To build a cluster with docker, you first need to construct the docker images the cluster requires. One way to build an image is to start from an existing image, such as a minimal linux system image, run a container from it, manually install and configure the software the cluster needs inside the container, and then commit the container as a new image. The other way is to use a Dockerfile to automate the construction of the image.
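For concreteness, the first, manual approach looks roughly like the following sketch (the container and image names here are illustrative, not from this article):

# Sketch of the manual approach: run a bare ubuntu container,
# install software in it by hand, then snapshot it as a new image.
docker run -it --name base-container ubuntu:14.04 /bin/bash
#   ...inside the container: apt-get install and configure what the cluster needs, then exit...
docker commit base-container my-cluster-base:v1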
The second approach is used below.
1. Create a ubuntu14.04 system image with an ssh service
Hadoop and hbase will be installed on a ubuntu14 system, and because the machines in a hadoop cluster communicate over ssh, the ssh service must be installed in the ubuntu14 image.
Write the Dockerfile as follows:
# version: debugman007/ssh:v1
# desc: ubuntu14.04 with ssh installed

# inherit from the official ubuntu14.04 image
FROM ubuntu:14.04

# basic information about the creator
MAINTAINER debugman007 (skc361@163.com)

RUN rm -vf /var/lib/apt/lists/*
RUN apt-get update
RUN apt-get install -y openssh-server openssh-client vim wget curl sudo

# add user "test", set the password to "test", and grant sudo permission
RUN useradd -m test
RUN echo "test:test" | chpasswd
RUN cd /etc/sudoers.d && touch nopasswdsudo && echo "test ALL=(ALL) ALL" >> nopasswdsudo

# change the shell of user "test" to bash; otherwise an ssh login to the
# ubuntu server shows no user name and directory on the command line
RUN usermod -s /bin/bash test

RUN echo "root:root" | chpasswd

# configure ssh
RUN mkdir /var/run/sshd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22

USER test
RUN ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
RUN cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

USER root
RUN ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
RUN cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
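Assuming this Dockerfile is saved in an empty directory, the image can then be built with a command along these lines (the tag matches the header comment above; the same pattern applies to the images in the next two steps):

docker build -t debugman007/ssh:v1 .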
The created system image is located at: https://hub.docker.com/r/debugman007/ubt14-ssh/
Dockerfile is located at: https://github.com/gtarcoder/dockerfiles/blob/master/ubt14-ssh/Dockerfile
2. Create the hadoop and hbase base image
Write the Dockerfile as follows:
# version: debugman007/ubt14-hadoop-hbase:v1
# desc: ubuntu with ssh, java, hadoop and hbase installed

FROM debugman007/ubt14-ssh:v1

# basic information about the creator
MAINTAINER debugman007 (skc361@163.com)

# provide dns service for the hadoop cluster
RUN sudo apt-get -y install dnsmasq

# install and configure the java environment
# RUN yum -y install java-1.7.0-openjdk*
ADD http://mirrors.linuxeye.com/jdk/jdk-7u80-linux-x64.tar.gz /usr/local/
RUN cd /usr/local && tar -zxvf jdk-7u80-linux-x64.tar.gz && rm -f jdk-7u80-linux-x64.tar.gz

ENV JAVA_HOME /usr/local/jdk1.7.0_80
ENV CLASSPATH ${JAVA_HOME}/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH $PATH:${JAVA_HOME}/bin

# install and configure hadoop
RUN groupadd hadoop
RUN useradd -m hadoop -g hadoop
RUN echo "hadoop:hadoop" | chpasswd

ADD http://www-eu.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz /usr/local/
RUN cd /usr/local && tar -zxvf hadoop-2.7.3.tar.gz && rm -f hadoop-2.7.3.tar.gz
RUN chown -R hadoop:hadoop /usr/local/hadoop-2.7.3
RUN cd /usr/local && ln -s ./hadoop-2.7.3 hadoop

ENV HADOOP_PREFIX /usr/local/hadoop
ENV HADOOP_HOME /usr/local/hadoop
ENV HADOOP_COMMON_HOME /usr/local/hadoop
ENV HADOOP_HDFS_HOME /usr/local/hadoop
ENV HADOOP_MAPRED_HOME /usr/local/hadoop
ENV HADOOP_YARN_HOME /usr/local/hadoop
ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
ENV PATH ${HADOOP_HOME}/bin:$PATH

# install and configure hbase
ADD http://www-eu.apache.org/dist/hbase/1.2.4/hbase-1.2.4-bin.tar.gz /usr/local/
RUN cd /usr/local && tar -zxvf hbase-1.2.4-bin.tar.gz && rm -f hbase-1.2.4-bin.tar.gz
RUN chown -R hadoop:hadoop /usr/local/hbase-1.2.4
RUN cd /usr/local && ln -s ./hbase-1.2.4 hbase

ENV HBASE_HOME /usr/local/hbase
ENV PATH ${HBASE_HOME}/bin:$PATH

RUN echo "hadoop ALL= NOPASSWD: ALL" >> /etc/sudoers

USER hadoop
RUN ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
RUN cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The created image is located at: https://hub.docker.com/r/debugman007/ubt14-hadoop-hbase/
Dockerfile is located at: https://github.com/gtarcoder/dockerfiles/blob/master/ubt14-hadoop-hbase/Dockerfile
3. Configure the hadoop and hbase image
Dockerfile is as follows:
# version: debugman007/ubt14-hadoop-hbase:master
# desc: ubuntu with ssh, java, hadoop, hbase installed

FROM debugman007/ubt14-hadoop-hbase:base

# basic information about the creator
MAINTAINER debugman007 (skc361@163.com)

ADD hadoop-env.sh $HADOOP_HOME/etc/hadoop/
ADD mapred-env.sh $HADOOP_HOME/etc/hadoop/
ADD yarn-env.sh $HADOOP_HOME/etc/hadoop/
ADD core-site.xml $HADOOP_HOME/etc/hadoop/
ADD hdfs-site.xml $HADOOP_HOME/etc/hadoop/
ADD mapred-site.xml $HADOOP_HOME/etc/hadoop/
ADD yarn-site.xml $HADOOP_HOME/etc/hadoop/
ADD slaves $HADOOP_HOME/etc/hadoop/

ADD hbase-env.sh $HBASE_HOME/conf/
ADD hbase-site.xml $HBASE_HOME/conf/
ADD regionservers $HBASE_HOME/conf/

USER hadoop
RUN sudo mkdir -p /opt/hadoop/data/zookeeper
RUN sudo chown -R hadoop:hadoop $HADOOP_HOME/etc/hadoop
RUN sudo chown -R hadoop:hadoop $HBASE_HOME/conf
RUN sudo chown -R hadoop:hadoop /opt/hadoop
RUN sudo chown -R hadoop:hadoop /home/hadoop

COPY bootstrap.sh /home/hadoop/
RUN chmod 766 /home/hadoop/bootstrap.sh

ENTRYPOINT ["/home/hadoop/bootstrap.sh"]
CMD ["/bin/bash"]
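The contents of bootstrap.sh are not reproduced in this article; the actual script is in the repository linked below. As a rough, assumed sketch, an entrypoint of this kind only has to start the ssh daemon and then hand control to the CMD:

#!/bin/bash
# start the ssh daemon so the hadoop/hbase scripts can reach this node
# (the hadoop user has passwordless sudo in this image)
sudo /usr/sbin/sshd
# run whatever command was passed in (CMD is /bin/bash by default),
# so the container stays alive and usable
exec "$@"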
In addition to the Dockerfile, you also need the hadoop and hbase configuration files, including core-site.xml, hadoop-env.sh, hbase-env.sh, hbase-site.xml, hdfs-site.xml, mapred-env.sh, regionservers, slaves, yarn-env.sh, yarn-site.xml, etc.; a minimal illustration follows.
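The exact configuration files are in the repository linked below. As an assumed, minimal illustration of what they contain, core-site.xml mainly points every node at the master's HDFS namenode; a sketch that generates such a file (port 9000 is a common convention, not taken from this article):

# assumed minimal core-site.xml: all nodes resolve HDFS through the
# hadoop-master hostname configured in step 4 below
cat > core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
</configuration>
EOF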
The created image is located at: https://hub.docker.com/r/debugman007/ubt14-hadoop-hbase/
Dockerfile and the configuration file are located at: https://github.com/gtarcoder/dockerfiles/tree/master/ubt14-hadoop-hbase-v1
4. Start
(1) start a container as a hadoop master node:
docker run -it --name hadoop-master -h hadoop-master -d -P -p 50070:50070 -p 8088:8088 debugman007/ubt14-hadoop-hbase:v1

The -p options publish port 50070 (HDFS NameNode web UI) and port 8088 (YARN ResourceManager web UI) to the host.
(2) start three containers as hadoop slave nodes
docker run -it --name hadoop-slave1 -h hadoop-slave1 debugman007/ubt14-hadoop-hbase:v1
The three container names and the host names in the container are set to hadoop-slave1, hadoop-slave2, and hadoop-slave3.
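Starting all three slaves can be scripted, for example (a sketch; -d is added here so each container keeps running in the background, as with the master):

for i in 1 2 3; do
    docker run -itd --name hadoop-slave$i -h hadoop-slave$i debugman007/ubt14-hadoop-hbase:v1
done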
(3) set the /etc/hosts file of each node
Write a run_hosts.sh script to set up name resolution for each node. Suppose the IP address of the hadoop-master container is 10.0.1.2, and the IP addresses of hadoop-slave1/2/3 are 10.0.1.3, 10.0.1.4, and 10.0.1.5, respectively.
#!/bin/bash
echo 10.0.1.2 hadoop-master >> /etc/hosts
echo 10.0.1.3 hadoop-slave1 >> /etc/hosts
echo 10.0.1.4 hadoop-slave2 >> /etc/hosts
echo 10.0.1.5 hadoop-slave3 >> /etc/hosts
echo 10.0.1.3 regionserver1 >> /etc/hosts    # hbase regionserver hosts
echo 10.0.1.4 regionserver2 >> /etc/hosts
Execute the script in the hadoop-master container and in each of the hadoop-slave1/2/3 containers.
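One way to do that from the host, assuming run_hosts.sh is in the current directory, is a sketch like:

for c in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3; do
    docker cp run_hosts.sh $c:/tmp/run_hosts.sh
    # the default user in the image is hadoop, who has passwordless sudo;
    # appending to /etc/hosts requires root
    docker exec $c sudo bash /tmp/run_hosts.sh
done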
(4) enter the hadoop-master container
docker exec -it hadoop-master bash
Then execute the following in the /usr/local/hadoop/sbin directory inside the container:

hdfs namenode -format
./start-dfs.sh
./start-yarn.sh

These commands format the HDFS namenode and start the HDFS and YARN services.
Execute start-hbase.sh within the container to start the hbase service.
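As an optional smoke test (an assumed command, relying on hbase being on the hadoop user's PATH as set in the base image), you can ask the hbase shell for cluster status from the host:

docker exec hadoop-master bash -c 'echo "status" | hbase shell'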
(5) check whether the services have started normally on the hadoop-master and hadoop-slave nodes
Run jps in each node container. On the master node you should see Jps, ResourceManager, NameNode, SecondaryNameNode, HMaster, and related processes; on the slave nodes you should see Jps, DataNode, NodeManager, and HRegionServer. If these processes appear, the cluster has started normally.
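These checks can also be scripted from the host, for instance (jps ships with the JDK installed under JAVA_HOME, which the image already puts on the PATH):

# print the java processes running in each node container
for c in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3; do
    echo "== $c =="
    docker exec $c jps
done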
Thank you for reading. The above is the content of "how to build hadoop and hbase clusters in docker". After studying this article, I believe you have a deeper understanding of how to build hadoop and hbase clusters in docker; the specific steps still need to be verified in practice. The editor will continue to push more articles on related knowledge points; welcome to follow!