
Hadoop+Hbase+Zookeeper cluster configuration

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

System version: CentOS 7.3 (minimal install)

Software versions: Hadoop 2.8.0, HBase 1.3.1, Zookeeper 3.4.9

Cluster planning:

Hostname    IP
hadoop01    192.168.1.61
hadoop02    192.168.1.62
hadoop03    192.168.1.63

I. Initial server configuration (run on all servers)

1. Modify the host name and IP address according to the cluster plan.

2. Turn off the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service

3. Disable SELinux

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/SELINUXTYPE=targeted/#SELINUXTYPE=targeted/g" /etc/selinux/config

4. Install yum repositories and software

yum install epel-release -y
yum install yum-axelget -y
yum install expect wget unzip bash-completion vim* -y
echo "alias vi='vim'" >> /etc/bashrc

5. Add hosts entries

Echo "192.168.1.61 hadoop01192.168.1.62 hadoop02192.168.1.63 hadoop03" > > / etc/hosts

6. Configure password-free login

# On all servers:
ssh-keygen   # press Enter at each prompt
# On hadoop01:
cd /root/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys hadoop02:/root/.ssh
# On hadoop02:
cd /root/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys hadoop03:/root/.ssh
# On hadoop03:
cd /root/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys hadoop01:/root/.ssh
scp authorized_keys hadoop02:/root/.ssh
# Verify: from any server, ssh to the others should log in without a password

7. Install JDK

cd /tmp
# download jdk-8u131-linux-x64.rpm from the official website first
yum install jdk-8u131-linux-x64.rpm -y

8. Add system variables

Echo "export JAVA_HOME=/usr/java/jdk1.8.0_131export PATH=\ $PATH:\ $JAVA_HOME/binexport HADOOP_HOME=/data/hadoopexport PATH=\ $PATH:\ $HADOOP_HOME/binexport ZK_HOME=/data/zkexport PATH=\ $PATH:\ $ZK_HOME/binexport HBASE_HOME=/data/hbaseexport PATH=\ $PATH:\ $HBASE_HOME/bin" > > / etc/profile

9. Upgrade and restart the system

yum update -y
reboot

II. Zookeeper cluster deployment

1. Download and install

# Run on all servers:
mkdir /data
cd /tmp
wget https://archive.apache.org/dist/zookeeper/stable/zookeeper-3.4.9.tar.gz
tar zxvf zookeeper-3.4.9.tar.gz
mv zookeeper-3.4.9 /data/zk
mkdir /data/zk/logs
mkdir /data/zk/data
chown -R root:root /data/zk

2. Add the configuration file

# Run on all servers (dataDir, dataLogDir, client port and server ids follow the cluster plan above):
cat >> /data/zk/conf/zoo.cfg <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zk/data
dataLogDir=/data/zk/logs
clientPort=2181
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
EOF

3. Create the myid file

# On hadoop01:
echo "1" > /data/zk/data/myid
# On hadoop02:
echo "2" > /data/zk/data/myid
# On hadoop03:
echo "3" > /data/zk/data/myid
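Because zoo.cfg already encodes the id-to-host mapping in its server.N lines, each node's myid value can be derived from the file instead of typed by hand. A sketch, where `myid_for_host` is a hypothetical helper that assumes the `server.N=hostname:2888:3888` form used in this guide:

```shell
# Hypothetical helper: look up a host's ZooKeeper id from the
# "server.N=hostname:2888:3888" lines in zoo.cfg.
myid_for_host() {
  local cfg="$1" host="$2"
  sed -n "s/^server\.\([0-9][0-9]*\)=${host}:.*/\1/p" "$cfg"
}

# Usage on each node (writes 1 on hadoop01, 2 on hadoop02, 3 on hadoop03):
# myid_for_host /data/zk/conf/zoo.cfg "$(hostname)" > /data/zk/data/myid
```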

4. Add boot script and configure system service

Echo "[Unit] Description=ZookeeperAfter=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forkingPIDFile=/data/zk/data/zookeeper_server.pidExecStart=/data/zk/bin/zkServer.sh startExecStop=/data/zk/bin/zkServer.sh stop[ install] WantedBy=multi-user.target" > > / usr/lib/systemd/system/zookeeper.servicesystemctl enable zookeeper.servicesystemctl start zookeeper.servicesystemctl status-l zookeeper.service

5. Verify the configuration

# Run on any server:
zkServer.sh status
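In a healthy three-node ensemble, `zkServer.sh status` reports `Mode: leader` on exactly one node and `Mode: follower` on the other two. A small sketch to pull the role out of that output, where `zk_mode` is a hypothetical helper:

```shell
# Hypothetical helper: extract the role from "zkServer.sh status" output,
# which includes a line of the form "Mode: leader" or "Mode: follower".
zk_mode() {
  sed -n 's/^Mode: //p'
}

# Usage (relies on the passwordless ssh configured in step I.6):
# for h in hadoop01 hadoop02 hadoop03; do
#   echo "$h: $(ssh "$h" /data/zk/bin/zkServer.sh status 2>&1 | zk_mode)"
# done
```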

III. Hadoop cluster deployment

1. Download and install

cd /tmp
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz
tar zxvf hadoop-2.8.0.tar.gz
mv hadoop-2.8.0 /data/hadoop
cd /data/hadoop/
mkdir tmp hdfs
mkdir hdfs/name hdfs/tmp hdfs/data
chown -R root:root /data/hadoop/

2. Modify /data/hadoop/etc/hadoop/hadoop-env.sh

# Line 25: set the JDK environment variable
export JAVA_HOME=/usr/java/jdk1.8.0_131
# Line 33: set the configuration file directory
export HADOOP_CONF_DIR=/data/hadoop/etc/hadoop

3. Modify /data/hadoop/etc/hadoop/core-site.xml. The modified file is as follows:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>
    <final>true</final>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.61:9000</value>
    <final>true</final>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>192.168.1.61:2181,192.168.1.62:2181,192.168.1.63:2181</value>
  </property>
</configuration>

4. Modify /data/hadoop/etc/hadoop/hdfs-site.xml. The modified file is as follows:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.1.61:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

5. Copy and modify /data/hadoop/etc/hadoop/mapred-site.xml

cd /data/hadoop/etc/hadoop/
cp mapred-site.xml.template mapred-site.xml

Then add to mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

6. Modify /data/hadoop/etc/hadoop/yarn-site.xml. The modified file is as follows:

<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.1.61:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.1.61:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.1.61:18088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.1.61:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.1.61:18141</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

7. Configure /data/hadoop/etc/hadoop/slaves. The modified file is as follows:

192.168.1.61
192.168.1.62
192.168.1.63

8. Copy the Hadoop installation folder to the other servers

scp -r /data/hadoop hadoop02:/data
scp -r /data/hadoop hadoop03:/data

9. Format the HDFS file system

hadoop namenode -format

10. Start the hadoop cluster

cd /data/hadoop/sbin/
./start-all.sh
# run on hadoop01 only; this starts the daemons on all nodes

11. Verify the configuration

# check cluster status
hadoop dfsadmin -report

Or view the web UI at http://192.168.1.61:50070/dfshealth.html
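For a scripted check, the live-datanode count can be parsed out of the report; Hadoop 2.x prints a line like "Live datanodes (3):". A sketch, where `live_datanodes` is a hypothetical helper and the exact report wording is an assumption worth verifying against your version:

```shell
# Hypothetical helper: extract N from the "Live datanodes (N):" line in
# "hadoop dfsadmin -report" output (Hadoop 2.x wording assumed).
live_datanodes() {
  sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p'
}

# Usage:
# n=$(hadoop dfsadmin -report 2>/dev/null | live_datanodes)
# [ "$n" = "3" ] && echo "all 3 datanodes live" || echo "expected 3, got: $n"
```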

IV. Hbase cluster deployment

1. Download and install

cd /tmp
wget http://apache.fayea.com/hbase/1.3.1/hbase-1.3.1-bin.tar.gz
tar zxvf hbase-1.3.1-bin.tar.gz
mv hbase-1.3.1 /data/hbase
chown -R root:root /data/hbase/

2. Modify / data/hbase/conf/hbase-env.sh. The modified file is as follows:

# Line 27: set the JDK environment variable
export JAVA_HOME=/usr/java/jdk1.8.0_131
# Line 128: disable HBase's bundled Zookeeper
export HBASE_MANAGES_ZK=false

3. Modify / data/hbase/conf/hbase-site.xml. The modified file is as follows:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.1.61:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.1.61:2181,192.168.1.62:2181,192.168.1.63:2181</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>16010</value>
  </property>
</configuration>

4. Modify / data/hbase/conf/regionservers. The modified file is as follows:

192.168.1.61
192.168.1.62
192.168.1.63

5. Copy the Hadoop configuration file to the hbase configuration file directory

cd /data/hbase/conf/
cp /data/hadoop/etc/hadoop/core-site.xml .
cp /data/hadoop/etc/hadoop/hdfs-site.xml .

6. Copy the Hbase installation folder to another server

scp -r /data/hbase hadoop02:/data
scp -r /data/hbase hadoop03:/data

7. Start the Hbase cluster

cd /data/hbase/bin/
./start-hbase.sh
# run on hadoop01 only; this starts the daemons on all nodes

8. Verify the installation

# enter the HBase shell
hbase shell

Or view the web UI at http://192.168.1.61:16010
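As a final smoke test, a few DDL/DML statements can be fed to the shell non-interactively. A sketch, where the table name 'smoke' and column family 'cf' are purely illustrative:

```shell
# Build a small HBase shell script as a string; table/cf names are illustrative.
# "read -d ''" returns non-zero at EOF, hence "|| true".
read -r -d '' HBASE_SMOKE <<'EOF' || true
create 'smoke', 'cf'
put 'smoke', 'row1', 'cf:greeting', 'hello'
scan 'smoke'
disable 'smoke'
drop 'smoke'
EOF

# Usage: pipe the statements into a non-interactive shell session
# echo "$HBASE_SMOKE" | hbase shell
```

If the scan prints the row back and the drop succeeds, HDFS, Zookeeper, and HBase are all cooperating.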

Cluster configuration is complete!
