2025-01-16 Update From: SLTechnology News&Howtos
Shulou (Shulou.com), 06/01 report
This article explains how to install hadoop in a docker container on redhat7.2 running in a vmware virtual machine. It has some reference value; interested readers should find it a useful walkthrough.
IP configuration:
# cat /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=dadee176-cc84-43f4-9ea9-e30a30ca3abf
DEVICE=eno16777736
ONBOOT=yes
# 20160708 add
IPADDR0=192.168.128.130
PREFIX0=24
GATEWAY0=192.168.128.1
#DNS1=
#DNS2=
DNS configuration
# cat /etc/resolv.conf
# Generated by NetworkManager
# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com
nameserver 192.168.128.1
Local yum configuration
# Mount the ISO file
# mkdir -p /media/cdrom
# vi /etc/fstab
/opt/rhel-server-7.2-x86_64-dvd.iso /media/cdrom iso9660 defaults,ro,loop 0 0
# mount -a
# df -lh
/dev/loop0  3.8G  3.8G  0  100%  /media/cdrom
# vi /etc/yum.repos.d/rhel-media.repo
[rhel-media]
name=Red Hat Enterprise Linux 7.2
baseurl=file:///media/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///media/cdrom/RPM-GPG-KEY-redhat-release
# Clean caches
# yum clean all
# Cache package metadata locally to speed up searching for and installing software
# yum makecache
Hostname modification
hostnamectl --static set-hostname rhels7-docker
1. Install docker
Because access to the official docker site is slow from inside China, the domestic mirror daocloud.io is used here.
curl -sSL https://get.daocloud.io/docker | sh
The installation process creates a docker user group.
View docker version
docker version
Start docker and check the status
systemctl start docker.service
systemctl status docker.service
Display system information (prerequisite: the docker service is running)
docker info
2. Pull centos image
docker pull daocloud.io/library/centos:centos7
3. Start the image
1. Check the local image first
docker images
As follows:
Note: centos is the image saved after installing hadoop; daocloud.io/library/centos is the image that was just pulled. The following operations are all based on it.
2. Start
docker run -h master --dns=192.168.128.1 -it daocloud.io/library/centos:centos7
Description:
-h master              # specify the hostname
--dns=192.168.128.1    # varies per environment; a wrong value will break later software installation
-it                    # start in interactive mode
See docker run --help for details.
4. Install the necessary software and configuration
1. Install the basic software
yum install -y wget vim openssh-server openssh-clients net-tools
Note: the netstat and ifconfig commands are provided by the net-tools package.
The sshd service is not started automatically after installation. The container is managed by docker and some system commands are unavailable, so to start sshd you need to execute the following command:
/usr/sbin/sshd -D &
Note: the sshd service is required by hadoop. Here the container will be started via a script that runs sshd.
vi /root/run.sh, with the following content:
#!/bin/bash
/usr/sbin/sshd -D
Grant execute permission: chmod +x /root/run.sh
2. Network configuration
The docker container communicates with the outside world through a bridge. To avoid having to specify dns every time a container starts, modify the default dns:
2.1 modify the host configuration file / etc/default/docker
DOCKER_NETWORK_OPTIONS="--dns=192.168.128.1"
2.2 modify the host configuration file / lib/systemd/system/docker.service
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/bin/docker daemon -H fd:// $OPTIONS $DOCKER_NETWORK_OPTIONS
For details, see: http://docs.master.dockerproject.org/engine/admin/systemd/
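As an alternative to editing the packaged unit file in place, systemd also supports drop-in overrides, which survive package upgrades. A sketch, assuming the same docker daemon invocation as above; the drop-in file name is arbitrary:

```ini
# /etc/systemd/system/docker.service.d/dns.conf  (hypothetical path and name)
[Service]
EnvironmentFile=-/etc/default/docker
# Clear the packaged ExecStart, then redefine it with the extra options
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// $OPTIONS $DOCKER_NETWORK_OPTIONS
```

Either way, systemd must reload its configuration afterwards, as shown next.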
Restart the host docker service
systemctl daemon-reload
systemctl restart docker.service
# check whether the new docker startup options took effect
ps -ef | grep docker
root  2415  1     0 14:41 ?  00:00:10 /usr/bin/docker daemon -H fd:// --dns=192.168.128.1
root  2419  2415  0 14:41 ?  00:00:01 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc --start-timeout 2m
3. Install jdk8
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.tar.gz
mkdir /usr/java
tar zxf jdk-8u91-linux-x64.tar.gz -C /usr/java
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_91' >> /etc/bashrc
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/bashrc
echo 'export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> /etc/bashrc
source /etc/bashrc
4. Install hadoop
4.1 install hadoop and configure environment variables
wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
mkdir /usr/local/hadoop
tar zxf hadoop-2.7.2.tar.gz -C /usr/local/hadoop
echo 'export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.2' >> /etc/bashrc
echo 'export HADOOP_CONFIG_HOME=$HADOOP_HOME/etc/hadoop' >> /etc/bashrc
echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/bashrc
source /etc/bashrc
4.2 configure hadoop
Create the following directories under the HADOOP_HOME directory:
tmp: temporary directory
namenode: NameNode storage directory
datanode: DataNode storage directory
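The directories above can be created in one step. A minimal sketch, assuming HADOOP_HOME is set as in section 4.1 (for a safe dry run, it falls back to a scratch directory when unset):

```shell
# Create the three working directories referenced by the hadoop configs.
# Assumption: HADOOP_HOME points at the hadoop install, e.g.
# /usr/local/hadoop/hadoop-2.7.2; we fall back to a temp dir if unset.
HADOOP_HOME="${HADOOP_HOME:-$(mktemp -d)/hadoop-2.7.2}"
mkdir -p "$HADOOP_HOME/tmp" "$HADOOP_HOME/namenode" "$HADOOP_HOME/datanode"
ls "$HADOOP_HOME"
```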
Change to the HADOOP_CONFIG_HOME directory:
cd $HADOOP_CONFIG_HOME
cp mapred-site.xml.template mapred-site.xml
Configure core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/hadoop-2.7.2/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
    <final>true</final>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
Configure hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <final>true</final>
    <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.</description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hadoop-2.7.2/namenode</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hadoop-2.7.2/datanode</value>
    <final>true</final>
  </property>
</configuration>
Configure mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
    <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
  </property>
</configuration>
4.3 configure ssh password-free login
ssh-keygen -q -t rsa -b 2048 -f /etc/ssh/ssh_host_rsa_key -N ''
ssh-keygen -q -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N ''
ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
Then modify the master container / etc/ssh/sshd_config file
Change UsePAM yes to UsePAM no
Change UsePrivilegeSeparation sandbox to UsePrivilegeSeparation no
[root@b5926410fe60 /]# sed -i "s/#UsePrivilegeSeparation.*/UsePrivilegeSeparation no/g" /etc/ssh/sshd_config
[root@b5926410fe60 /]# sed -i "s/UsePAM.*/UsePAM no/g" /etc/ssh/sshd_config
After the modification, restart sshd:
[root@b5926410fe60 /]# /usr/sbin/sshd -D
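The two sed substitutions above can be sanity-checked without touching a real system. A self-contained sketch that applies them to a scratch copy of the relevant sshd_config lines (the default values shown are assumptions matching the article):

```shell
# Dry-run the sshd_config edits against a scratch file.
cfg=$(mktemp)
printf '%s\n' '#UsePrivilegeSeparation sandbox' 'UsePAM yes' > "$cfg"
# Same substitutions as on the real /etc/ssh/sshd_config
sed -i "s/#UsePrivilegeSeparation.*/UsePrivilegeSeparation no/g" "$cfg"
sed -i "s/UsePAM.*/UsePAM no/g" "$cfg"
cat "$cfg"
```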
4.4 modify the container root password
passwd root
5. Save the docker container
docker commit -m "hadoop installed" 690a57e02578 centos:hadoop
Delete redundant containers
docker rm
Note: 690a57e02578 is the container_id, which will differ in your environment. You can find it with docker ps.
After saving, you can view the local image through docker images
REPOSITORY                   TAG       IMAGE ID       CREATED          SIZE
centos                       hadoop    b01079411e19   45 seconds ago   1.434 GB
daocloud.io/library/centos   centos7   ea08fb8c4ba5   7 days ago       196.8 MB
6. Start hadoop
Note: the key prerequisites are 1) the sshd service, and 2) an /etc/hosts entry mapping the master node.
Modify the container's /root/run.sh. Container IPs are allocated starting from 172.17.0.2 by default. There are three nodes and the master node is started last, so the master IP is 172.17.0.4.
#!/bin/bash
echo '172.17.0.4 master' >> /etc/hosts
/usr/sbin/sshd -D
In addition, on the host you can use docker inspect with a container_id to view container details (output as JSON), such as its ip, mac, and hostname.
docker inspect -f '{{.NetworkSettings.IPAddress}}' 690a57e02578
docker inspect -f '{{.NetworkSettings.MacAddress}}' 690a57e02578
docker inspect -f '{{.Config.Hostname}}' 690a57e02578
1. Start 3 containers based on the new image (centos:hadoop)
docker run -d -p 10012:22 --name slave1 centos:hadoop /root/run.sh
docker run -d -p 10022:22 --name slave2 centos:hadoop /root/run.sh
docker run -d -p 10002:22 --name master -h master -P --link slave1:slave1 --link slave2:slave2 centos:hadoop /root/run.sh
Note: the -p parameter maps port 22 of each container to a host port, so the containers can be reached from the host via ssh; ports 10002, 10012, and 10022 connect to the three containers respectively.
2. Start hadoop
2.1 Connect to the master container
docker exec -it 175c3129e021 /bin/bash
2.2 formatting namenode
hdfs namenode -format
The following output indicates that the format succeeded:
16/07/09 08:12:36 INFO common.Storage: Storage directory /usr/local/hadoop/hadoop-2.7.2/namenode has been successfully formatted.
16/07/09 08:12:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/07/09 08:12:36 INFO util.ExitUtil: Exiting with status 0
2.3 start hadoop
Since the environment variable has been configured, you can run it directly after entering the container.
start-all.sh
Use jps to view processes
# jps
163 NameNode
675 NodeManager
1316 Jps
581 ResourceManager
279 DataNode
429 SecondaryNameNode
7. Configure iptables on the host for port forwarding
Forwarding direction: host port 50070 -> container port 50070
Execute on the host
iptables -t nat -A PREROUTING -d 192.168.128.130 -p tcp --dport 50070 -j DNAT --to-destination 172.17.0.4:50070
Note: 192.168.128.130 is the host; 172.17.0.4 is the master container.
At this point, the hadoop cluster in the container on the virtual machine (or host) can be accessed locally.
Thank you for reading this article carefully. I hope this walkthrough of installing hadoop in a docker container on a vmware redhat7.2 virtual machine is helpful to everyone.