VMware virtual machine environment:
192.168.60.128 master
192.168.60.129 node129
192.168.60.130 node130
1. Modify the /etc/sysconfig/network and /etc/hosts of each virtual machine
# modify hostname:
vim /etc/sysconfig/network
# modify hosts as follows:
vim /etc/hosts
192.168.60.128 master
192.168.60.129 node129
192.168.60.130 node130
2. Configure the three machines to trust each other over SSH (taking the 128 machine as an example):
2.1 ssh-keygen -t rsa
2.2 ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.60.129
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.60.130
2.3 Repeat the above operation on each remaining machine (a quick check follows below).
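Once every machine has exchanged keys, a minimal check of passwordless login (assuming the hadoop user shown in the ssh-copy-id commands above):
ssh hadoop@192.168.60.129 hostname    # should print node129 without prompting for a password
ssh hadoop@192.168.60.130 hostname    # should print node130 without prompting for a password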
3. Install jdk and configure environment variables
Install jdk on each host and configure the environment variables. (If you find it troublesome, you can install jdk before cloning the virtual machines.)
1) download the jdk installation package (search Baidu for it) and drag the package into the virtual machine
2) enter the directory containing the installation package with the cd command, and extract it using the following command.
tar -zxvf jdk... (installation package name)
3) use the following command to move the extracted folder to the /usr directory
# note: after the move there will be no /usr/jdk1.8... directory; everything in that folder is moved into /usr/java
mv jdk1.8... (folder name) /usr/java
4) configure environment variables
sudo vim /etc/profile
Add the following lines at the end:
# java
export JAVA_HOME=/usr/java
export JRE_HOME=/usr/java/jre
export CLASSPATH=$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Enter the following command to make the configuration take effect: source /etc/profile
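A quick check that the JDK configuration works (assuming the jdk was moved to /usr/java as above):
java -version      # should print the version of the installed jdk
echo $JAVA_HOME    # should print /usr/java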
4. Configure hadoop on master, then transfer the hadoop files from master to the node machines
1) unpack and move
# decompress the hadoop package
tar -zxvf hadoop...
# move the unpacked directory to /home/hadoop/hadoop
mv hadoop... /home/hadoop/hadoop
2) create new folders
# create the following directories under /home/hadoop
mkdir dfs
mkdir dfs/name
mkdir dfs/data
mkdir tmp
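These directories are referenced by the configuration files below: dfs/name holds the NameNode metadata (dfs.namenode.name.dir), dfs/data holds the DataNode blocks (dfs.datanode.data.dir), and tmp is hadoop.tmp.dir. An equivalent one-liner, assuming you are already in /home/hadoop:
mkdir -p dfs/name dfs/data tmp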
3) configuration file: hadoop-env.sh
Modify the JAVA_HOME value (export JAVA_HOME=/usr/java)
4) configuration file: yarn-env.sh
Modify the JAVA_HOME value (export JAVA_HOME=/usr/java)
5) configuration file: slaves
Modify the content to:
node129
node130
6) configuration file: core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>
7) configuration file: hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
8) configuration file: mapred-site.xml
Create and then edit
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
9) configuration file: yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
10) transfer hadoop to the /home/hadoop directory of node129 and node130
scp -r /home/hadoop/hadoop hadoop@node129:/home/hadoop
scp -r /home/hadoop/hadoop hadoop@node130:/home/hadoop
5. Configure environment variables, and start hadoop to check whether the installation is successful
1) configure environment variables
# edit /etc/profile, which already contains the java environment variables added above
sudo vim /etc/profile
# then append the following lines at the end
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/bin
Execute source /etc/profile to make the configuration take effect.
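A quick check that the new PATH entries work (this article uses the 2.7.3 package, as seen in the jar name later):
hadoop version    # should print the Hadoop version, e.g. Hadoop 2.7.3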
2) start hadoop: enter the hadoop installation directory and run
bin/hdfs namenode -format
sbin/start-all.sh
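A side note: on Hadoop 2.x, start-all.sh still works but is deprecated; the equivalent explicit form is:
sbin/start-dfs.sh
sbin/start-yarn.sh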
3) run jps on master and on each node to view the processes after startup
If jps shows the following processes, the startup was successful.
Master: NameNode, SecondaryNameNode, ResourceManager (plus the Jps process itself)
Node: DataNode, NodeManager (plus the Jps process itself)
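An additional, optional check that both datanodes have registered with the namenode:
bin/hdfs dfsadmin -report    # run on master; should report 2 live datanodes (node129 and node130)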
6. Submit the first mapreduce task (wordcount) to the hadoop cluster
1. hdfs dfs -mkdir -p /data/input   creates a test directory /data/input on the distributed file system
2. hdfs dfs -put README.txt /data/input   copies the README.txt file from the current directory to the distributed file system
3. hdfs dfs -ls /data/input   checks whether the file we copied exists in the file system
4. Run the following command to submit the wordcount task to hadoop
Go to the directory containing the example jar and execute the following instruction.
hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /data/input /data/output/result
Check the result; the output is in part-r-00000 under the result directory.
hdfs dfs -cat /data/output/result/part-r-00000
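Each output line is a word followed by a tab and its count. A hypothetical illustration of the format only (the real words and counts come from README.txt):
hadoop    3
license   2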
At this point, the hadoop cluster has been built successfully!