This article explains in detail, with screenshots, how to install and configure Hadoop on Linux 7 (CentOS 7). Xiaobian thinks it is quite practical, so it is shared here for your reference; I hope you gain something after reading it.
Prepare the packages shown in the picture above. (PS: hadoop-3.1.2-src was changed to hadoop-3.1.2; -src means the source-only package. Either way it has been changed, so watch out for the later screenshots that still show the wrong name; I'll correct them when I have time.)
Install CentOS 7
Right-click the desktop to open a terminal, run ifconfig, and note the IP of ens33. Then open Xftp 6 and click New to create a connection.
Select the packages, right-click, and transfer them; the intranet transfer speed is neither fast nor slow.
Perfect.
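For reference, a minimal sketch of the IP lookup step; the ip command fallback is my addition, since minimal CentOS 7 installs may lack ifconfig:

ifconfig ens33          # note the inet address; this is the IP you give Xftp and Xshell
ip addr show ens33      # fallback if ifconfig is not installed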
Unzip the Hadoop installation package: tar -zxvf hadoop-3.1.2.tar.gz (per the note above, this should be the binary package, not -src)
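A sketch of the unpacking step, assuming the /usr/hadoop install path used later in the environment variables:

mkdir -p /usr/hadoop                            # matches HADOOP_HOME=/usr/hadoop/hadoop-3.1.2 below
tar -zxvf hadoop-3.1.2.tar.gz -C /usr/hadoop    # -C extracts into the target directory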
After reinstalling CentOS 7 and decompressing, the files were sorted into folders, prepared as shown above.
Open Xshell and click New.
Enter the host IP, and fill in your username and password under User Authentication.
Yes, that's it. Then all three machines need to be renamed.
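A minimal sketch of the renaming step on CentOS 7, assuming the hostnames master, slave1 and slave2 used throughout the rest of this article:

hostnamectl set-hostname master    # run this on the master machine
hostnamectl set-hostname slave1    # run this on the first slave
hostnamectl set-hostname slave2    # run this on the second slave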
Time synchronization: the time zones must be consistent. To keep the host times accurate, every machine must be in the same time zone; in this experiment we synchronize network time, so first pick the same time zone everywhere, otherwise there will be offsets after synchronization. You can use the date command to check your machine time. Select the time zone with: tzselect
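As a sketch, run on all three machines:

date        # check the current machine time
tzselect    # pick the same time zone on every machine; it prints a TZ value at the end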
1. Turn off the firewall
Close the firewall: systemctl stop firewalld. Check the status: systemctl status firewalld. When the state shows dead, the firewall is off.
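The firewall step as a runnable sketch; the disable line is my addition, to keep it off after a reboot:

systemctl stop firewalld       # stop the firewall now
systemctl status firewalld     # "Active: inactive (dead)" means it is off
systemctl disable firewalld    # optional: keep it off across reboots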
2. Configure the hosts file (three machines) as shown below, entering the IP of each node.
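A sketch of /etc/hosts on all three machines; the 192.168.x.x addresses are placeholders, substitute the ens33 IPs you noted earlier:

192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2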
3. Master acts as the NTP server; modify the NTP configuration file. (Run on master.)
vi /etc/ntp.conf
server 127.127.1.0    # local clock
fudge 127.127.1.0 stratum 10    # stratum can also be set to other values, in the range 0~15
Restart the ntp service: /bin/systemctl restart ntpd.service
Synchronize the other machines (slave1, slave2): wait about five minutes, then sync each slave to the master server's time: ntpdate master
If the platform has no external network connection, you can set a uniform time on all three machines instead. Enter the command: date -s 10:00 (your chosen time)
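A quick check, my addition rather than part of the original steps, that master's ntpd is up before syncing the slaves:

ntpq -p    # on master: lists time sources; a LOCAL(0) line confirms the local clock is being served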
Finally getting to the point! Don't panic. Here we go.
1. SSH
(1) Each node generates a public and private key separately:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa (three machines)
The keys are generated in the .ssh directory under the user's home directory. Enter that directory to view them:
cd .ssh/
(2) id_dsa.pub is the public key and id_dsa is the private key. Append the public key to the authorized_keys file (master only):
cat id_dsa.pub >> authorized_keys (note: run this inside the .ssh/ directory)
Connecting to yourself on the host is also called an ssh loopback:
ssh master
(3) Allow the master node to log in to the slave nodes via SSH without a password. (Run on the slaves.)
To achieve this, the authorized_keys file on each of the two slave nodes must contain the master node's public key; then the master can access both slave nodes without a password.
On slave1, use the scp command to copy the master node's public key file from the master to the current directory, renaming it master_das.pub; this step requires password verification.
scp master:~/.ssh/id_dsa.pub ./master_das.pub
Append the master node's public key file to the authorized_keys file:
cat master_das.pub >> authorized_keys
At this point, master can connect to slave1. On the first connection to slave1 you must type "yes" to confirm, which means the master has to be confirmed manually that first time and cannot connect automatically. After entering yes, the connection succeeds; then log out and exit back to the master node.
The same is true for slave2.
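Putting the verification together, a short sketch run from master:

ssh slave1    # first connection asks for yes; afterwards no password is needed
exit          # log out, back to master
ssh slave2
exit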
The JDK was installed earlier, so we will configure the environment directly, just like setting environment variables on Windows (three machines).
Modify the environment variables: vi /etc/profile and add the following:
export JAVA_HOME=/usr/java/jdk1.8.0_241
export CLASSPATH=$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
Make the environment variables take effect: source /etc/profile
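A quick sanity check, my addition:

java -version    # should report version 1.8.0_241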
Here's a tip for scp
scp /etc/profile slave1:/etc/profile ## sends the profile to slave1; run the same command with slave2 for the other node
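So as a sketch, one command per slave:

scp /etc/profile slave1:/etc/profile    # copy the profile to slave1
scp /etc/profile slave2:/etc/profile    # copy the profile to slave2
# then run: source /etc/profile on each slave to load it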
Finally, Hadoop! Congratulations!
Configure environment variables: vi /etc/profile
export HADOOP_HOME=/usr/hadoop/hadoop-3.1.2
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib
export PATH=$PATH:$HADOOP_HOME/bin
A step I always forget, shout it out to me!
Use the following command to make the profile effective: source /etc/profile
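A quick check, my addition, that the Hadoop binaries are now on the PATH:

hadoop version    # should report Hadoop 3.1.2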
Tips: the following is the content of the configuration files. This article will not explain their contents for now, but I have prepared standard configuration files for everyone.
Edit the Hadoop environment configuration file hadoop-env.sh:
export JAVA_HOME=/usr/java/jdk1.8.0_241
This file contains many commented-out lines; find the template line you want to configure, delete the leading # and complete it.
And now I'm going to be lazy! I have uploaded a few ready-made configuration files; copy them into this folder, and when prompted whether to overwrite, enter y.
core-site.xml yarn-site.xml hdfs-site.xml mapred-site.xml
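The author's ready-made files are not reproduced here. Purely as an illustration, a minimal core-site.xml sketch with commonly used values; hdfs://master:9000 and the tmp directory are typical choices, not taken from this article:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
    <!-- NameNode address; port 9000 is a common choice -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/hadoop-3.1.2/tmp</value>
    <!-- assumed path under the install directory -->
  </property>
</configuration>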
You also need to write a slaves file containing slave1 and slave2, and a master file; see the sketch below.
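A sketch of those two files, one hostname per line. One caveat from my side: Hadoop 3.x normally reads etc/hadoop/workers rather than slaves, so if the slaves file is ignored, put the same names in workers:

# slaves
slave1
slave2
# master
master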
(9) Distribute hadoop:
scp -r /usr/hadoop root@slave1:/usr/
scp -r /usr/hadoop root@slave2:/usr/
Format hadoop on master: hadoop namenode -format. If it reports an error, check whether it matches the error in the following link, which contains a solution.
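As a sketch; hdfs namenode -format is the non-deprecated form of the same command in Hadoop 3.x:

hdfs namenode -format    # run once, on master only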
About "hadoop how to install configuration based on Linux7" This article is shared here, I hope the above content can be of some help to everyone, so that you can learn more knowledge, if you think the article is good, please share it for more people to see.