Shulou (Shulou.com) 06/03 Report, SLTechnology News & Howtos, updated 2025-04-02
Configuring a fully distributed Hadoop cluster
1. Modify the static IP and hostname
① Plan the IP-to-hostname mapping
② Related files:
/etc/hostname
/etc/hosts
/etc/resolv.conf
/etc/sysconfig/network-scripts/ifcfg-ens3
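As a sketch of the mapping, assume three nodes named s201, s202, and s212 (the hostnames used later in this article) on a private 192.168.1.x network; the actual addresses, netmask, and gateway are site-specific placeholders here:

```ini
# /etc/hosts (example; IPs are placeholders)
127.0.0.1      localhost
192.168.1.201  s201
192.168.1.202  s202
192.168.1.212  s212

# /etc/sysconfig/network-scripts/ifcfg-ens3 (static IP, example values)
BOOTPROTO=static
IPADDR=192.168.1.201
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
```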
2. Configure ssh
① Delete the ~/.ssh directory on each node
② Create a ~/.ssh directory on each node with the right permissions
mkdir -m 700 ~/.ssh
③ Generate a public/private key pair on the master node
// generate the key pair
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys    // authorized_keys needs 644 permissions
// distribute the public key
scp ~/.ssh/authorized_keys centos@s202:/home/centos/.ssh/
// or, equivalently:
ssh-copy-id centos@s202
④ Distribute the key to the other nodes
scp ~/.ssh/authorized_keys centos@s212:~/.ssh/
⑤ Test that ssh works without a password
ssh s212
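The key-generation step can be tried locally without touching a real cluster. This sketch writes into a temporary directory instead of ~/.ssh, so it is safe to run anywhere; the use of mktemp is illustrative, not part of the original procedure:

```shell
# Generate an RSA key pair with an empty passphrase, then build an
# authorized_keys file with 644 permissions -- the same steps the
# article applies to ~/.ssh on the master node.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -P '' -f "$tmp/id_rsa"
cp "$tmp/id_rsa.pub" "$tmp/authorized_keys"
chmod 644 "$tmp/authorized_keys"
ls "$tmp"
```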
3. Modify the configuration files and distribute them to each node
① core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://s201</value>
</property>
② hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/centos/hadoop/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/centos/hadoop/hdfs/data</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/home/centos/hadoop/hdfs/namesecondary</value>
</property>
③ mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
④ yarn-site.xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>s201</value>
</property>
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/home/centos/hadoop/nm-local-dir</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
⑤ Distribute the configuration directory
rsync -r etc centos@s212:/soft/hadoop/
4. Specify the NameNode and DataNodes
The NameNode is set by fs.defaultFS above; the DataNodes are listed in the slaves file, one hostname per line.
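A minimal slaves file for this layout, assuming s202 and s212 are the DataNodes (s201 serves as the NameNode per the configuration above):

```ini
# /soft/hadoop/etc/hadoop/slaves -- one DataNode hostname per line
s202
s212
```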
5. Clone the host and modify the IP and hostname of each node
The method is the same as in step 1.
6. Format the file system
hdfs namenode -format
7. Start the cluster and observe the processes
① Start hdfs
start-dfs.sh
// processes involved (with their web UI ports):
NameNode           50070  (metadata)
DataNode           50075  (data)
SecondaryNameNode  50090
② Start yarn
start-yarn.sh
// processes involved:
ResourceManager
NodeManager
Scripts involved:
1. xcall.sh
#!/bin/bash
for host in `cat /soft/hadoop/etc/hadoop/slaves`; do
  echo = $host =
  ssh $host "$@"
done
2. xsync.sh
#!/bin/bash
param=$1
dir=`dirname $param`
filename=`basename $param`
user=`whoami`
cd $dir
fullpath=`pwd -P`    # must run after cd, so it reflects the file's directory
for host in `cat /soft/hadoop/etc/hadoop/slaves`; do
  echo = $host =
  rsync -lr $filename $user@$host:$fullpath
done
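The dirname/basename/`pwd -P` dance in xsync.sh is easy to get wrong: `pwd -P` must run after the cd, or it reports the caller's working directory instead of the file's. The sketch below exercises that resolution locally with a hypothetical file; no remote copy is performed:

```shell
# Resolve a file argument into (directory, filename, physical path)
# the same way xsync.sh does before handing them to rsync.
param=$(mktemp -d)/conf/slaves        # hypothetical argument
mkdir -p "$(dirname "$param")"
echo s202 > "$param"
dir=$(dirname "$param")
filename=$(basename "$param")
cd "$dir"
fullpath=$(pwd -P)                    # physical path, symlinks resolved
echo "$fullpath/$filename"
```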