Version selection
Choosing the Hadoop version is critical to an HBase deployment. The following table shows which Hadoop versions are supported by each HBase release; based on your HBase version, choose an appropriate Hadoop version.
                  HBase-0.92.x   HBase-0.94.x   HBase-0.96
Hadoop-0.20.205        S              X              X
Hadoop-0.22.x          S              X              X
Hadoop-1.0.x           S              S              S
Hadoop-1.2.x           NT             S              S
Hadoop-0.23.x          X              S              NT
Hadoop-2.x             X              S              S
S = supported and tested
X = not supported
NT = not tested enough; it can run, but the tests are insufficient.
One. Preparatory work
1. Choose appropriate supporting software. The packages used in this article are:
hadoop-1.2.1-bin.tar.gz
hbase-0.94.10.tar.gz
jdk-6u20-linux-i586.bin
zookeeper-3.4.6.tar.gz
2. Environment preparation. This experiment uses three machines as a cluster: one master and two slaves.
1) Install openssh and rsync on each machine.
2) Create a user hadoop on each machine, set the hostname in /etc/sysconfig/network (a sketch follows the list below), and add the following mappings to /etc/hosts:
192.168.10.1 master
192.168.10.2 slave1
192.168.10.3 slave2
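As a reference, a minimal sketch of the hostname change, assuming a RHEL/CentOS-style /etc/sysconfig/network; on master it would read (each slave sets its own HOSTNAME, e.g. slave1):
NETWORKING=yes
HOSTNAME=master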
Note: the three hosts mapped here must not have any other DNS names. Otherwise, once the hbase cluster is set up and you try to create a table, it will report a very strange error:
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
which causes table creation and writes to fail.
3) Install JDK.
Create a folder /usr/java, move jdk-6u20-linux-i586.bin into it, and execute it there, as sketched below.
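A minimal sketch of this step; the Sun JDK 6 .bin is a self-extracting installer (the exact file name must match your download):
cd /usr/java
chmod +x jdk-6u20-linux-i586.bin
./jdk-6u20-linux-i586.bin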
Add the java paths to /etc/profile:
export JAVA_HOME=/usr/java/jdk1.6.0_45
export JRE_HOME=/usr/java/jdk1.6.0_45/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
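To verify the JDK setup (assuming the lines above were added to /etc/profile):
source /etc/profile
java -version    # should print the installed 1.6.0 version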
Two. Install hadoop
1. Establish ssh password-less login between master and the two slave machines (for security, it is best to do this as the hadoop user).
1) First switch to the hadoop user and run
ssh-keygen -t rsa
This generates a key pair, id_rsa and id_rsa.pub, under ~/.ssh/ in the hadoop user's home directory.
2) Then append the contents of id_rsa.pub to the authorized-keys file ~/.ssh/authorized_keys on the master machine (create it if it does not exist):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
This enables ssh login without a password. You can try the command ssh localhost to see whether a password is still required.
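If ssh still asks for a password, the usual cause is file permissions: sshd refuses to use authorized_keys when it or ~/.ssh is writable by others. A quick fix, assuming default sshd settings:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys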
3) Transfer the file id_rsa.pub to slave1 and slave2 with the scp command:
scp ~/.ssh/id_rsa.pub hadoop@192.168.10.2:~
scp ~/.ssh/id_rsa.pub hadoop@192.168.10.3:~
Then, on each slave, append its contents to authorized_keys as well:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
In this way, the master machine can log in to the two slaves without a password.
4) In the same way, append the id_rsa.pub contents of slave1 and slave2 to authorized_keys on master, so that master and the slaves can ssh to each other without a password.
5) Be sure to log in to each machine by hostname at least once, as sketched below; otherwise there will be ssh errors when hadoop is started later, causing startup to fail.
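For example, run these once from each machine so the host keys get accepted and no password is requested:
ssh master hostname
ssh slave1 hostname
ssh slave2 hostname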
2. Unpack the hadoop package into the /usr/local/hadoop directory, add a tmp subdirectory, and change the owner of the whole tree to hadoop:
tar zxvf hadoop-1.2.1-bin.tar.gz
mv hadoop-1.2.1/ /usr/local/hadoop
mkdir /usr/local/hadoop/tmp
chown -R hadoop:hadoop /usr/local/hadoop
3. Modify hadoop's configuration files. Generally, for hadoop to start normally, at least four configuration files need to be modified. They are:
1) Modify the configuration file /usr/local/hadoop/conf/hadoop-env.sh, adding at the end of the file:
export JAVA_HOME=/usr/java/jdk1.6.0_45
2) Modify the configuration file /usr/local/hadoop/conf/core-site.xml as follows:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
    <!-- storage directory for temporary files -->
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
    <!-- address and port number of HDFS -->
  </property>
</configuration>
3) Modify the configuration file /usr/local/hadoop/conf/hdfs-site.xml as follows:
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>${hadoop.tmp.dir}/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
4. Modify the configuration file /usr/local/hadoop/conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
    <!-- JobTracker address: hostname of the master node -->
  </property>
</configuration>
5. Add the hadoop path in /etc/profile:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
Make it take effect:
source /etc/profile
With that, the hadoop on master is installed.
6. Use scp to copy the folder /usr/local/hadoop to the same path on both slave machines:
scp -r /usr/local/hadoop root@192.168.10.2:/usr/local
scp -r /usr/local/hadoop root@192.168.10.3:/usr/local
On each slave, change the owner:
chown -R hadoop:hadoop /usr/local/hadoop
and add the following variables to /etc/profile:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
Make them take effect:
source /etc/profile
Three. Start the distributed file system hadoop
First, for the first start, su to the hadoop user on master and execute the following command:
hadoop namenode -format
The appearance of "successfully formatted" indicates that the formatting succeeded.
Then, as the hadoop user, run the script start-all.sh to start the cluster, as sketched below.
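A minimal sketch, assuming the PATH additions above (otherwise use the full paths under /usr/local/hadoop/bin):
start-all.sh    # starts the HDFS and MapReduce daemons on master and the slaves
stop-all.sh     # stops them again when needed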
Four. Verify hadoop
After startup, use the jps command to view the processes.
Generally speaking, a normal master has the following processes:
JobTracker
NameNode
SecondaryNameNode
Under a normal slave, there are two:
DataNode
TaskTracker
In this way, hadoop is installed successfully.
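As a quick functional check, a sketch assuming the hadoop command is on PATH: create a directory in HDFS and list it back.
hadoop fs -mkdir /test
hadoop fs -ls /    # /test should appear in the listing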