
How to Build an HBase Cluster on Hadoop


This article mainly shows "how to build an hbase cluster with hadoop". The content is easy to understand and clearly organized; I hope it helps resolve your doubts. Below, the editor will lead you through the steps.

One: uninstall the default JDK of the Red Hat operating system

1: find the default installed JDK

rpm -qa | grep java

2: remove the JDK

rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.21.b17.el6.x86_64

Two: install the JDK

1: install using root account

2: create a directory: /usr/java

3: download the JDK and store it in the /usr/java directory: jdk-6u43-linux-x64.bin

4: add execution permissions to the installation file:

chmod +x jdk-6u43-linux-x64.bin

5: execute the jdk installation package

./jdk-6u43-linux-x64.bin

6: add environment variables to the /etc/profile file

export JAVA_HOME=/usr/java/jdk1.6.0_43

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar

export PATH=$PATH:$JAVA_HOME/bin

7: to make the configuration take effect, execute the following command

source /etc/profile
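You can quickly confirm that the new JDK is the one on the PATH; the standard command below simply prints the installed version:

java -version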

8: set special system parameters

Set the maximum number of files each process can open and the maximum number of processes that can run at the same time.

vi /etc/security/limits.conf

hadoop soft nofile 65535

hadoop hard nofile 65535

hadoop soft nproc 32000

hadoop hard nproc 32000

Echo "session required pam_limits.so" > > / etc/pam.d/common-session

9: set the vm.swappiness parameter to reduce how actively memory pages are swapped to disk.

Echo "vm.swappiness = 0" > > / etc/sysctl.conf

Three: host assignment; add the following four lines to the /etc/hosts file on each machine

192.168.205.23 inm1

192.168.205.24 inm2

192.168.205.25 inm3

192.168.205.26 inm4

Four: turn off the firewall on all machines

chkconfig iptables off

service iptables stop

Five: create the hadoop group and hadoop user on each machine

1: create a user group: groupadd hadoop

2: create the user: useradd -g hadoop hadoop

3: change the password: passwd hadoop

Six: configure SSH on the master.hadoop machine

[hadoop@master ~]$ ssh-keygen -t rsa -P ""

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): /home/hadoop/.ssh/id_rsa

[hadoop@master ~]$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

[hadoop@master ~]$ chmod 700 .ssh/

[hadoop@master ~]$ chmod 600 .ssh/authorized_keys

Verification

[hadoop@master ~]$ ssh localhost

[hadoop@master ~]$ ssh inm1

Copy the SSH configuration to the other machines

[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm2

[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm3

[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm4

Seven: zookeeper three-node cluster installation

1: install zookeeper on three servers, under the hadoop user

192.168.205.24, 192.168.205.25, 192.168.205.26

2: use the Cloudera version of zookeeper: zookeeper-3.4.5-cdh5.4.0.tar.gz

3: extract and modify the directory name

tar -zxf zookeeper-3.4.5-cdh5.4.0.tar.gz

4: configure zookeeper: create a zoo.cfg file in the conf directory and add the following content

tickTime=2000

initLimit=5

syncLimit=2

dataDir=/home/hadoop/storage/zookeeper/data

dataLogDir=/home/hadoop/storage/zookeeper/logs

clientPort=2181

server.1=inm2:2888:3888

server.2=inm3:2888:3888

server.3=inm4:2888:3888

maxClientCnxns=60

5: set the JVM parameter and add the following to the conf/java.env file

export JVMFLAGS="-Xms1g -Xmx1g $JVMFLAGS"

6: create a directory for zookeeper data files and logs

/home/hadoop/storage/zookeeper/data

/home/hadoop/storage/zookeeper/logs

Create the file myid in the /home/hadoop/storage/zookeeper/data directory and add the value 1, for example:
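A minimal sketch of creating the directories and the myid file on this node, using the paths given above:

mkdir -p /home/hadoop/storage/zookeeper/data /home/hadoop/storage/zookeeper/logs

echo "1" > /home/hadoop/storage/zookeeper/data/myid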

7: copy the installed zookeeper and storage directories to inm3 and inm4 machines.

scp -r zookeeper inm3:/home/hadoop

scp -r zookeeper inm4:/home/hadoop

scp -r storage inm3:/home/hadoop

scp -r storage inm4:/home/hadoop

Modify the value of the myid file on the inm3 machine to 2

Modify the value of the myid file on the inm4 machine to 3

8: start the server on each of the three zookeeper machines

./bin/zkServer.sh start
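Once all three nodes are started, you can check each node's role (leader or follower) with the status sub-command shipped with zookeeper:

./bin/zkServer.sh status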

9: verify the installation

./bin/zkCli.sh -server inm3:2181

Eight: install HDFS, hadoop-2.0.0-cdh5.4.0

Log in to the system as the hadoop user.

1: decompress: tar -xvzf hadoop-2.0.0-cdh5.4.0.tar.gz

2: configure Hadoop environment variables: edit ~/.bashrc (vi ~/.bashrc) and add the following configuration at the end of the file:

export HADOOP_HOME="/home/hadoop/hadoop-2.0.0-cdh5.4.0"

export HADOOP_MAPRED_HOME="/home/hadoop/hadoop-2.0.0-mr1-cdh5.4.0"

export HBASE_HOME="/home/hadoop/hbase-0.94.6-cdh5.4.0"

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin

# prevent the error about the native lib not being found when starting hdfs

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/

4: enter the hadoop-2.0.0-cdh5.4.0/src directory and build the hadoop native lib: libhadoop.so

mvn package -Pnative -DskipTests -Dbundle.snappy=true -Dsnappy.prefix=/usr/local/lib

Then refer to "hadoop2.0 lzo installation" to build the lzo native lib, and put the relevant native libs into the $HADOOP_HOME/lib/native/ directory.

5: make the configuration effective

source ~/.bashrc

6: modify the masters and slaves files in the HADOOP_HOME/etc/hadoop directory

Contents of masters file:

inm1

Contents of slaves file:

inm2

inm3

inm4

7: modify the HADOOP_HOME/etc/hadoop/core-site.xml configuration, along the lines of the sketch below
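The original does not show the file contents. A minimal sketch, assuming the NameNode runs on inm1 and listens on port 9000, with temporary data under /home/hadoop/storage (both the port and the path are assumptions; adjust them to your layout):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://inm1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/storage/hadoop/tmp</value>
  </property>
</configuration>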

8: modify the HADOOP_HOME/etc/hadoop/hdfs-site.xml configuration, along the lines of the sketch below
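Again the contents are not shown in the original. A minimal sketch with assumed storage directories and a replication factor of 3 (one replica per datanode):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/storage/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/storage/hdfs/data</value>
  </property>
</configuration>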

9: synchronize the hadoop project to the inm2, inm3, and inm4 machines

scp -r hadoop-2.0.0-cdh5.4.0 inm2:/home/hadoop

scp -r hadoop-2.0.0-cdh5.4.0 inm3:/home/hadoop

scp -r hadoop-2.0.0-cdh5.4.0 inm4:/home/hadoop

10: format file system

hadoop namenode -format

11: start hdfs and yarn; the startup scripts are in the HADOOP_HOME/sbin directory

./start-dfs.sh
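To confirm the daemons came up, you can run the JDK's jps tool on each machine; inm1 should list a NameNode process, while inm2, inm3, and inm4 should list DataNode processes:

jps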

Nine: install MapReduce v1, hadoop-2.0.0-mr1-cdh5.4.0

1: decompress: tar -xvzf hadoop-2.0.0-mr1-cdh5.4.0.tar.gz

2: copy the files under $HADOOP_HOME/lib/native/ to HADOOP_MAPRED_HOME/lib/native/Linux-amd64-64

3: modify the masters and slaves files in the HADOOP_MAPRED_HOME/conf directory

Contents of masters file:

inm1

Contents of slaves file:

inm2

inm3

inm4

4: modify the HADOOP_MAPRED_HOME/etc/hadoop/core-site.xml configuration (see the sketch below)
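The original does not show the contents. A minimal sketch, assuming the MR1 installation points at the HDFS cluster configured above and the JobTracker runs on inm1 (port 9001 is an assumption; the JobTracker address conventionally lives in mapred-site.xml in the same configuration directory):

core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://inm1:9000</value>
</property>

mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>inm1:9001</value>
</property>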

5: synchronize the hadoop-mr1 project to the inm2, inm3, and inm4 machines

scp -r hadoop-2.0.0-mr1-cdh5.4.0 inm2:/home/hadoop

scp -r hadoop-2.0.0-mr1-cdh5.4.0 inm3:/home/hadoop

scp -r hadoop-2.0.0-mr1-cdh5.4.0 inm4:/home/hadoop

6: start mapreduce; the startup script is in the HADOOP_MAPRED_HOME/bin directory

./start-mapred.sh

Ten: install hbase-0.94.6-cdh5.4.0

1: decompress: tar -xvzf hbase-0.94.6-cdh5.4.0.tar.gz

2: copy the files under $HADOOP_HOME/lib/native/ to HBASE_HOME/lib/native/Linux-amd64-64

3: modify the HBASE_HOME/conf/regionservers file to add the names of the machines running the HRegionServer process.

inm2

inm3

inm4

4: modify the HBASE_HOME/conf/hbase-site.xml file (a sketch follows below)
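A minimal sketch of a fully distributed hbase-site.xml, assuming HDFS at hdfs://inm1:9000 and the zookeeper quorum on inm2, inm3, and inm4 with the clientPort 2181 configured earlier (the values are assumptions; adjust them to your cluster):

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://inm1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>inm2,inm3,inm4</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>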

5: synchronize the hbase project to the inm2, inm3, and inm4 machines

scp -r hbase-0.94.6-cdh5.4.0 inm2:/home/hadoop

scp -r hbase-0.94.6-cdh5.4.0 inm3:/home/hadoop

scp -r hbase-0.94.6-cdh5.4.0 inm4:/home/hadoop

6: start the hbase cluster on inm1

HBASE_HOME/bin/start-hbase.sh

7: execute hbase shell to enter the HBase console, and run the list command to verify the installation. For example:
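For a slightly fuller smoke test than list, you can create a throwaway table, write and read a row, and then remove it (the table and column family names below are only illustrative):

hbase shell

create 'test', 'cf'

put 'test', 'row1', 'cf:a', 'value1'

scan 'test'

list

disable 'test'

drop 'test'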

These are all the contents of the article "how to build an hbase cluster with hadoop". Thank you for reading! I hope the content shared here is helpful; if you want to learn more, welcome to follow the industry information channel.
