Machine distribution
hadoop1 192.168.56.121
hadoop2 192.168.56.122
hadoop3 192.168.56.123
Prepare to install the package
jdk-7u71-linux-x64.tar.gz
zookeeper-3.4.9.tar.gz
hadoop-2.9.2.tar.gz
Upload the installation packages to the /usr/local directory on all three machines and extract them.
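A minimal sketch of the copy and extraction (assuming the packages sit in the current directory and are copied as root; repeat the scp for hadoop2 and hadoop3):

scp jdk-7u71-linux-x64.tar.gz zookeeper-3.4.9.tar.gz hadoop-2.9.2.tar.gz root@hadoop1:/usr/local/
# then, on each machine:
cd /usr/local
tar -zxvf jdk-7u71-linux-x64.tar.gz
tar -zxvf zookeeper-3.4.9.tar.gz
tar -zxvf hadoop-2.9.2.tar.gz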
Configure hosts
Echo "192.168.56.121 hadoop1" > > / etc/hostsecho "192.168.56.122 hadoop2" > > / etc/hostsecho "192.168.56.123 hadoop3" > > / etc/hosts
Configure environment variables
/etc/profile
export HADOOP_PREFIX=/usr/local/hadoop-2.9.2
export JAVA_HOME=/usr/local/jdk1.7.0_71
Deploy zookeeper
Create a zoo user
useradd zoo
passwd zoo
Change the owner of the zookeeper directory to zoo
chown -R zoo:zoo /usr/local/zookeeper-3.4.9
Modify zookeeper configuration file
Go to the /usr/local/zookeeper-3.4.9/conf directory.
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.9
clientPort=2181
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
Create a myid file in the /usr/local/zookeeper-3.4.9 directory. The myid file holds only a number from 1 to 255, and it must match the id in the corresponding server.id line of zoo.cfg:
myid is 1 on hadoop1
myid is 2 on hadoop2
myid is 3 on hadoop3
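A minimal sketch of creating the myid files (one file per machine, placed in the dataDir configured above):

echo 1 > /usr/local/zookeeper-3.4.9/myid   # on hadoop1
echo 2 > /usr/local/zookeeper-3.4.9/myid   # on hadoop2
echo 3 > /usr/local/zookeeper-3.4.9/myid   # on hadoop3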
Start the zookeeper service on three machines
[zoo@hadoop1 zookeeper-3.4.9]$ bin/zkServer.sh start
Verify zookeeper
[zoo@hadoop1 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
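Run the same status command on hadoop2 and hadoop3 as well; in a healthy three-node ensemble, two nodes report Mode: follower and one reports Mode: leader.

[zoo@hadoop2 zookeeper-3.4.9]$ bin/zkServer.sh status
[zoo@hadoop3 zookeeper-3.4.9]$ bin/zkServer.sh status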
Configure Hadoop
Create a user
useradd hadoop
passwd hadoop
Change the owner of the hadoop directory to hadoop
chown -R hadoop:hadoop /usr/local/hadoop-2.9.2
Create a directory
mkdir /hadoop1 /hadoop2 /hadoop3
chown hadoop:hadoop /hadoop1
chown hadoop:hadoop /hadoop2
chown hadoop:hadoop /hadoop3
Configure mutual trust
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop3
# test the trust with the following commands
ssh hadoop1 date
ssh hadoop2 date
ssh hadoop3 date
Configure environment variables
/home/hadoop/.bash_profile
export PATH=$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$PATH
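Re-read the profiles (or log in again) so the new variables take effect before running any hadoop commands; a quick sketch:

source /etc/profile
source /home/hadoop/.bash_profile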
Configuration parameters
etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.7.0_71
etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.9.2/temp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
</configuration>
etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns.nn1</name>
    <value>hadoop1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns.nn1</name>
    <value>hadoop1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns.nn2</name>
    <value>hadoop2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns.nn2</name>
    <value>hadoop2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/ns</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/hadoop1/hdfs/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop1/hdfs/name,file:/hadoop2/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop1/hdfs/data,file:/hadoop2/hdfs/data,file:/hadoop3/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/local/hadoop-2.9.2/etc/hadoop/excludes</value>
  </property>
</configuration>
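Note that dfs.hosts.exclude points to an excludes file that the installation does not create by itself. As a precaution (this step is an assumption, not part of the original walkthrough), you can create an empty file so the configured path exists:

# hypothetical: create the (empty) excludes file referenced above
touch /usr/local/hadoop-2.9.2/etc/hadoop/excludes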
etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop1</value>
  </property>
</configuration>
etc/hadoop/slaves
hadoop1
hadoop2
hadoop3
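The configuration files must be identical on all three machines. One way to do that (a sketch, assuming the files were edited on hadoop1) is to copy the whole etc/hadoop directory to the other nodes:

scp -r /usr/local/hadoop-2.9.2/etc/hadoop hadoop@hadoop2:/usr/local/hadoop-2.9.2/etc/
scp -r /usr/local/hadoop-2.9.2/etc/hadoop hadoop@hadoop3:/usr/local/hadoop-2.9.2/etc/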
Commands for the first startup
1. Start ZooKeeper on every node first; execute on each node:
bin/zkServer.sh start
2. On one of the namenode nodes, create the namespace in ZooKeeper:
hdfs zkfc -formatZK
3. Start the journalnode on every journalnode node:
sbin/hadoop-daemon.sh start journalnode
4. On the primary namenode node, format the namenode and journalnode directories:
hdfs namenode -format ns
5. On the primary namenode node, start the namenode process:
sbin/hadoop-daemon.sh start namenode
6. On the standby namenode node, run the first command below. It formats the standby namenode's directory and copies the metadata over from the primary namenode, without formatting the journalnode directory again. Then start the standby namenode process with the second command:
hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
7. On both namenode nodes, execute:
sbin/hadoop-daemon.sh start zkfc
8. On all datanode nodes, start the datanode:
sbin/hadoop-daemon.sh start datanode
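Once everything is up, a quick way to confirm the HA state (using the nn1/nn2 ids defined in hdfs-site.xml) is:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

One namenode should report active and the other standby.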
Daily start and stop commands
# start script, starts the services on all nodes
sbin/start-dfs.sh
# stop script, stops the services on all nodes
sbin/stop-dfs.sh

Verification
Check the processes with jps.
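A rough sketch of what jps might show on hadoop1 with this layout (process IDs omitted; the exact set depends on the node's roles, and ZooKeeper's QuorumPeerMain shows up under the zoo user rather than hadoop):

[hadoop@hadoop1 ~]$ jps
NameNode
JournalNode
DataNode
DFSZKFailoverController
Jps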
http://192.168.56.122:50070 (NameNode web UI on hadoop2)
http://192.168.56.121:50070 (NameNode web UI on hadoop1)
Upload and download test files
# create a directory
[hadoop@hadoop1 ~]$ hadoop fs -mkdir /test
# verify
[hadoop@hadoop1 ~]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-04-12 12:16 /test
# upload a file
[hadoop@hadoop1 ~]$ hadoop fs -put /usr/local/hadoop-2.9.2/LICENSE.txt /test
# verify
[hadoop@hadoop1 ~]$ hadoop fs -ls /test
Found 1 items
-rw-r--r--   2 hadoop supergroup     106210 2019-04-12 12:17 /test/LICENSE.txt
# download the file to /tmp
[hadoop@hadoop1 ~]$ hadoop fs -get /test/LICENSE.txt /tmp
# verify
[hadoop@hadoop1 ~]$ ls -l /tmp/LICENSE.txt
-rw-r--r--. 1 hadoop hadoop 106210 Apr 12 12:19 /tmp/LICENSE.txt
Reference: https://blog.csdn.net/Trigl/article/details/55101826