Before installing Hadoop, install the Java development environment and configure the JDK.
First, create a hadoop user.
Then cd into that user's home directory.
Create an apps directory there; from now on, everything will be extracted into apps.
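A minimal shell sketch of these steps (the tarball name, JDK version, and distro-specific user commands are assumptions; adjust them to your own setup):

    # as root: create the hadoop user
    useradd -m hadoop
    passwd hadoop

    # as the hadoop user: create apps and unpack the Hadoop tarball into it
    su - hadoop
    mkdir -p ~/apps
    tar -zxvf hadoop-2.6.4.tar.gz -C ~/apps/    # tarball name is an assumption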
The contents of the extracted directory are as follows.
lib holds the native libraries.
bin contains Hadoop's own operating commands.
sbin contains the start and stop scripts.
etc holds the configuration files.
include contains the header files for the native libraries.
The jar packages are in the share directory.
The following figure shows the directory structure inside the share package.
Of the configuration files in the etc/hadoop/ directory mentioned above, the ones marked with red lines are the ones we need to modify.
We configure hadoop-env.sh first.
We set JAVA_HOME to an explicit value, as shown in the following figure; because the daemons are launched over ssh, the original ${JAVA_HOME} reference will not be resolved, so it has to be replaced with the real JDK path.
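For example, in etc/hadoop/hadoop-env.sh (the JDK path below is an assumption; use the path of your own installation):

    # hard-code the JDK path; ${JAVA_HOME} is not picked up over ssh
    export JAVA_HOME=/usr/local/jdk1.8.0_77    # assumed install path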
Second, configure the core-site.xml file.
The first property in the figure above specifies Hadoop's default file system, and the second specifies the working data directory used by the daemons on each host in the cluster.
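A sketch of those two properties, assuming the NameNode runs on mini1 at port 9000 and the data directory sits under the extracted Hadoop tree (both values are assumptions that match this setup):

    <configuration>
      <!-- default file system: HDFS, with the NameNode on mini1 (assumed host/port) -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mini1:9000</value>
      </property>
      <!-- working data directory for the daemons on every host (assumed path) -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/apps/hadoop-2.6.4/tmp</value>
      </property>
    </configuration>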
Third, modify hdfs-site.xml.
Here we keep two copies of the data, so there is always a backup.
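The replication factor is controlled by dfs.replication; a minimal sketch:

    <configuration>
      <!-- keep two replicas of every block -->
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>
    </configuration>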
Then deal with mapred-site.xml.template.
Here the platform that MapReduce runs on is set to yarn; otherwise it defaults to local.
Don't forget to run this command as well.
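A sketch of this step, assuming the command in the figure is the usual rename of the template (Hadoop only reads the file under the name mapred-site.xml):

    mv mapred-site.xml.template mapred-site.xml

and then, inside mapred-site.xml, the property that puts MapReduce on YARN:

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>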
Fourth, configure yarn-site.xml next.
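A typical yarn-site.xml for this layout, as a sketch (taking mini1 as the ResourceManager host is an assumption, since mini1 is the master node here; mapreduce_shuffle is the standard auxiliary service for the MapReduce shuffle):

    <configuration>
      <!-- host that runs the ResourceManager (assumed to be the master, mini1) -->
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>mini1</value>
      </property>
      <!-- auxiliary service the NodeManagers provide for the MapReduce shuffle -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
    </configuration>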
After that, set Hadoop's environment variables.
The two exports shown in the figure are the main ones.
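Presumably these are HADOOP_HOME and PATH; a sketch of the /etc/profile entries (the install path is an assumption):

    # in /etc/profile (assumed install path under the hadoop user's apps directory)
    export HADOOP_HOME=/home/hadoop/apps/hadoop-2.6.4
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

    # reload the profile so the current shell picks up the new variables
    source /etc/profile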
Then use the scp command to copy the entire apps directory to the other hosts.
Also copy /etc/profile over.
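For example, run something like this from mini1 (the mini2/mini3 hostnames and home directory follow this setup; writing /etc/profile on the target needs root there):

    # copy the whole apps directory to the other nodes
    scp -r ~/apps hadoop@mini2:/home/hadoop/
    scp -r ~/apps hadoop@mini3:/home/hadoop/

    # copy the profile as well (a root login on the target is assumed)
    scp /etc/profile root@mini2:/etc/profile
    scp /etc/profile root@mini3:/etc/profile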
Then use the two commands shown in the figure: one starts the namenode, and the other lists the Java processes, so we can check whether the namenode is running.
If the namenode process appears, it has been started.
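Presumably these are hadoop-daemon.sh and jps; a sketch:

    # start the NameNode daemon on mini1
    hadoop-daemon.sh start namenode

    # list the running Java processes; a NameNode entry means it started
    jps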
Then we can view the file system in a browser using the NameNode's IP address:
http://192.168.150.129:50070/dfshealth.html#tab-overview
Here 192.168.150.129 is the IP of my mini1 virtual machine, which hosts the namenode.
In the overview we may see that the space used is 0.
This is because no datanode has been started yet, so we start the datanode on any one of the hosts.
Be sure to run this as the hadoop user (su hadoop first).
Use the command hadoop-daemon.sh start datanode.
Then use the jps command to check whether the DataNode process has started.
If jps shows no DataNode, you can check the /home/..../hadoop-hadoop-datanode-mini2.log file mentioned above.
Note that the only difference is the extension: this file ends in .log, while the one in the image above ends in .out.
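If the DataNode did not come up, its log is usually under $HADOOP_HOME/logs, with the user and hostname in the file name (the exact path is an assumption); for example:

    # inspect the DataNode log for errors (default log directory assumed)
    tail -n 100 $HADOOP_HOME/logs/hadoop-hadoop-datanode-mini2.log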
If you want to stop the datanode, you can use the command shown in the figure.
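Presumably this is the matching stop command:

    # stop the DataNode daemon on this host
    hadoop-daemon.sh stop datanode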
Next, start all the datanodes and the namenode with a script.
Here you need to modify the slaves configuration file.
Edit it as shown in the figure.
The script then starts mini2 and mini3 as datanodes and mini1 as the namenode (the script is run on mini1).
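A sketch of etc/hadoop/slaves for this layout, with one datanode hostname per line:

    mini2
    mini3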
But the script keeps prompting for passwords, so we set up passwordless login instead.
The situation is: from mini1 I want to log in to mini2 and mini3 without entering a password.
We can run ssh-keygen on mini1 to generate a key pair,
then copy the public key to the other hosts in turn.
After that, it works normally.
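A sketch of these steps on mini1 (ssh-copy-id is the usual way to install the public key; the hostnames follow this setup):

    ssh-keygen -t rsa            # generate a key pair; accept the defaults
    ssh-copy-id hadoop@mini2     # install the public key on mini2
    ssh-copy-id hadoop@mini3     # install the public key on mini3

    ssh hadoop@mini2             # should now log in without a password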
Here we can see that ssh now logs in directly without asking for a password at all.
Once the configuration is complete, we can run the start script directly; it starts everything listed in slaves.
We can see that no password is asked for.
Of course, we can also stop everything directly with the stop-dfs.sh command.
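On mini1 (assuming the start script in the figures is start-dfs.sh, the counterpart of stop-dfs.sh; both live in sbin, which is already on PATH after the profile change):

    start-dfs.sh     # starts the namenode on mini1 and the datanodes listed in slaves
    stop-dfs.sh      # stops them all again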