2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
The three servers are configured with the following IP addresses:
192.168.11.131
192.168.11.132
192.168.11.133
Configure the hostname on each server.
Master:
# hostnamectl set-hostname master
The other two are configured as slave1 and slave2 respectively.
Each server disables SELinux and the firewall:
# vi /etc/sysconfig/selinux
SELINUX=enforcing --> SELINUX=disabled
# setenforce 0
# systemctl stop firewalld
# systemctl disable firewalld
Replace the Yum source:
[root@master ~]# mkdir apps
Upload the package wget-1.14-15.el7.x86_64.rpm, then install it:
[root@master apps]# rpm -ivh wget-1.14-15.el7.x86_64.rpm
[root@master apps]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master yum.repos.d]# mv Centos-7.repo CentOS-Base.repo
[root@master yum.repos.d]# scp CentOS-Base.repo root@192.168.11.132:/etc/yum.repos.d/
[root@master yum.repos.d]# scp CentOS-Base.repo root@192.168.11.133:/etc/yum.repos.d/
Then run on each server:
# yum clean all
# yum makecache
# yum update
NTP time synchronization:
master acts as the NTP server and is configured as follows:
# yum install -y ntp
NTP server:
Set the time on master, the NTP master server:
# date -s "2018-05-27 23:03:30"
# vi /etc/ntp.conf
Add two lines under this comment:
# restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 11
Comment out the four default pool server lines:
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
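After these edits, the relevant portion of master's /etc/ntp.conf should read roughly as follows (a sketch): `server 127.127.1.0` points ntpd at the local clock driver, so master serves its own time at stratum 11 even with the upstream pool servers commented out.

```
# restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 11

# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
```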
# systemctl start ntpd.service
# systemctl enable ntpd.service
Slave1 and slave2, as NTP clients, are configured as follows:
# vi /etc/ntp.conf
Add two lines under the same comment:
# restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server 192.168.11.131
fudge 127.127.1.0 stratum 11
Comment out the same four default pool server lines:
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
# systemctl start ntpd.service
# systemctl enable ntpd.service
Synchronizing the time manually may fail with an error:
# ntpdate 192.168.11.131
25 Jun 07:39:15 ntpdate[25429]: the NTP socket is in use, exiting
Resolve:
# lsof -i:123
-bash: lsof: command not found
# yum install -y lsof
# lsof -i:123
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
ntpd    1819  ntp   16u  IPv4  33404      0t0  UDP *:ntp
ntpd    1819  ntp   17u  IPv6  33405      0t0  UDP *:ntp
ntpd    1819  ntp   18u  IPv4  33410      0t0  UDP localhost:ntp
ntpd    1819  ntp   19u  IPv4  33411      0t0  UDP slave1:ntp
ntpd    1819  ntp   20u  IPv6  33412      0t0  UDP localhost:ntp
ntpd    1819  ntp   21u  IPv6  33413      0t0  UDP slave1:ntp
# kill -9 1819
Update the time again:
# ntpdate 192.168.11.131
24 Jun 23:37:27 ntpdate[1848]: step time server 192.168.11.131 offset -28828.363808 sec
# date
Sun Jun 24 23:37:32 CST 2018
Create the user:
# groupadd hduser
# useradd -g hduser hduser
# passwd hduser
SSH passwordless authentication:
All nodes generate authorized_keys:
# su hduser
$ cd
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:KfyLZTsN3U89CbFAoOsrkI9YRz3rdKR4vr/75R1A7eE hduser@master
$ cd .ssh/
$ cp id_rsa.pub authorized_keys
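If key-based login later still prompts for a password, a common cause is file permissions: sshd ignores authorized_keys when the .ssh directory or the file itself is group- or world-writable. A quick check, assuming the default home layout:

```shell
# Run as hduser on every node; sshd expects these modes (or stricter)
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```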
All nodes authenticate each other:
Master:
[hduser@master .ssh]$ ssh-copy-id -i id_rsa.pub hduser@slave1
[hduser@master .ssh]$ ssh-copy-id -i id_rsa.pub hduser@slave2
Verify:
[hduser@master .ssh]$ ssh slave1
Last failed login: Wed Jun 27 04:55:44 CST 2018 from 192.168.11.131 on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Wed Jun 27 04:50:05 2018
[hduser@slave1 ~]$ exit
logout
Connection to slave1 closed.
[hduser@master .ssh]$ ssh slave2
Last login: Wed Jun 27 04:51:53 2018
[hduser@slave2 ~]$
Slave1:
[hduser@slave1 .ssh]$ ssh-copy-id -i id_rsa.pub hduser@master
[hduser@slave1 .ssh]$ ssh-copy-id -i id_rsa.pub hduser@slave2
Slave2:
[hduser@slave2 .ssh]$ ssh-copy-id -i id_rsa.pub hduser@master
[hduser@slave2 .ssh]$ ssh-copy-id -i id_rsa.pub hduser@slave1
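With all keys exchanged, the mesh can be verified non-interactively from any node. This sketch uses the hostnames from this setup; BatchMode=yes makes ssh fail instead of prompting when key authentication is broken:

```shell
# Verify passwordless SSH to every node; prints the remote hostname on success
for host in master slave1 slave2; do
  ssh -o BatchMode=yes "$host" hostname || echo "key auth FAILED for $host"
done
```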
Upload the packages:
[hduser@master ~]$ cd src
[hduser@master src]$ ll
total 356128
-rw-r--r-- 1 root root  44575568 Jun 16 17:24 hadoop-0.20.2.tar.gz
-rw-r--r-- 1 root root 288430080 Mar 16  2016 jdk1.7.0_79.tar
Configure the JDK:
[hduser@master src]$ tar -xf jdk1.7.0_79.tar -C ..
[hduser@master src]$ cd ..
[hduser@master ~]$ vi .bashrc
Add:
export JAVA_HOME=/home/hduser/jdk1.7.0_79
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin
[hduser@master ~]$ source .bashrc
[hduser@master ~]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
The configuration of the other two nodes is the same as above
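One way to repeat the JDK setup on the slaves without retyping (a sketch; it assumes the slaves use the same ~/src layout as master):

```shell
# Push the JDK tarball and shell profile to each slave, then unpack and verify
for host in slave1 slave2; do
  scp ~/src/jdk1.7.0_79.tar "$host":~/src/
  scp ~/.bashrc "$host":~/
  ssh "$host" 'tar -xf ~/src/jdk1.7.0_79.tar -C ~ && . ~/.bashrc && java -version'
done
```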
Configure hadoop:
Unpack it on each node:
$ tar -zxf hadoop-0.20.2.tar.gz -C ..
Master:
[hduser@master conf]$ pwd
/home/hduser/hadoop-0.20.2/conf
[hduser@master conf]$ vi hadoop-env.sh
export JAVA_HOME=/home/hduser/jdk1.7.0_79
[hduser@master conf]$ vi core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
[hduser@master conf]$ vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
[hduser@master conf]$ vi mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
[hduser@master conf]$ vi masters
#localhost
master
[hduser@master conf]$ vi slaves
#localhost
slave1
slave2
Copy the configuration files to the other two nodes:
[hduser@master conf]$ scp hadoop-env.sh slave1:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp core-site.xml slave1:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp hdfs-site.xml slave1:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp mapred-site.xml slave1:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp masters slave1:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp slaves slave1:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp hadoop-env.sh slave2:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp core-site.xml slave2:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp hdfs-site.xml slave2:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp mapred-site.xml slave2:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp masters slave2:~/hadoop-0.20.2/conf/
[hduser@master conf]$ scp slaves slave2:~/hadoop-0.20.2/conf/
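The twelve copies above can also be done in one loop (a sketch assuming the same file names and target directory):

```shell
# Copy all six config files to both slaves
for host in slave1 slave2; do
  for f in hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml masters slaves; do
    scp "$f" "$host":~/hadoop-0.20.2/conf/
  done
done
```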
Format the file system:
[hduser@master conf]$ cd ../bin
[hduser@master bin]$ ./hadoop namenode -format
Start the services:
[hduser@master bin]$ ./start-all.sh
[hduser@master bin]$ jps
1681 JobTracker
1780 Jps
1618 SecondaryNameNode
1480 NameNode
[hduser@slave1 conf]$ jps
1544 Jps
1403 DataNode
1483 TaskTracker
[hduser@slave2 conf]$ jps
1494 TaskTracker
1414 DataNode
1555 Jps
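With the expected daemons showing in jps on every node, a final sanity check (a sketch; run from master with the paths used above) is to ask HDFS for a datanode report and exercise MapReduce end to end with the example jar shipped in 0.20.2:

```shell
cd ~/hadoop-0.20.2
# Should report 2 live datanodes once slave1 and slave2 have registered
bin/hadoop dfsadmin -report | grep -i 'datanodes available'
# Run the bundled pi estimator: 2 maps, 10 samples each
bin/hadoop jar hadoop-0.20.2-examples.jar pi 2 10
```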