2025-04-06 Update From: SLTechnology News&Howtos — Database
1. Hadoop installation
Virtual machines (CentOS 7)
Master: 192.168.0.228
Slave: 192.168.0.207
Software
apache-hive-1.2.1-bin.tar.gz
hadoop-2.6.0-cdh6.4.8.tar.gz
jdk-8u65-linux-x64.tar.gz
mysql-connector-java-5.1.31-bin.jar
hbase-0.98.15-hadoop2-bin.tar
zookeeper-3.4.6.tar
1. Turn off the firewall
systemctl disable firewalld.service
systemctl stop firewalld.service
setenforce 0
To disable SELinux permanently, edit the config file:
vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
2. Configure the hostname
192.168.0.228: echo "master" > /etc/hostname
192.168.0.207: echo "slave" > /etc/hostname
3. Inter-host resolution
Add both machines' IP addresses and hostnames to /etc/hosts on each machine.
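As a minimal sketch, the entries to add can be generated from variables (IPs taken from this guide) and previewed before appending on both machines:

```shell
# Sketch: emit the /etc/hosts lines for both nodes (IPs from this guide).
MASTER_IP=192.168.0.228
SLAVE_IP=192.168.0.207

hosts_entries() {
    printf '%s\tmaster\n' "$MASTER_IP"
    printf '%s\tslave\n'  "$SLAVE_IP"
}

hosts_entries                    # preview the lines
# hosts_entries >> /etc/hosts    # append for real (as root, on both machines)
```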
4. Configure SSH mutual trust
Master:
yum -y install sshpass
ssh-keygen   # press Enter through every prompt
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.207
Slave:
yum -y install sshpass
ssh-keygen   # press Enter through every prompt
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.228
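The two steps above can also be scripted non-interactively. This is a sketch (peer IPs assumed from this guide); RUN=echo makes it a dry run that prints the commands instead of executing them:

```shell
# Sketch: non-interactive key generation + copy (dry run; set RUN= to execute).
RUN=echo

setup_trust() {    # usage: setup_trust <peer-ip>
    $RUN ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
    $RUN ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "root@$1"
}

setup_trust 192.168.0.207    # on master; use 192.168.0.228 on slave
```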
(Figure: passwordless login succeeds.)
5. Install JDK
Both machines need this configuration.
tar zxvf jdk-8u65-linux-x64.tar.gz
mv jdk1.8.0_65 /usr/jdk
Set environment variables:
vim /etc/profile
export JAVA_HOME=/usr/jdk
export JRE_HOME=/usr/jdk/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Execute source /etc/profile
Test:
java -version   # output as shown in the figure
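The profile edits above can be kept as a standalone snippet and sanity-checked by sourcing it in place (paths are the ones used in this guide; /tmp/java-env.sh is just a scratch location for the check):

```shell
# Sketch: the JDK exports from above, written to a file and sourced for a quick check.
cat > /tmp/java-env.sh <<'EOF'
export JAVA_HOME=/usr/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
EOF

. /tmp/java-env.sh
echo "$JAVA_HOME"    # → /usr/jdk
```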
6. Install Hadoop
tar zxvf hadoop-2.6.0-cdh6.4.8.tar.gz
mv hadoop-2.6.0-cdh6.4.8 /usr/hadoop
cd /usr/hadoop
mkdir -p dfs/name
mkdir -p dfs/data
mkdir -p tmp
6.1 Edit the configuration files:
slaves
yarn-env.sh
yarn-site.xml
mapred-site.xml
hdfs-site.xml
core-site.xml
hadoop-env.sh
cd /usr/hadoop/etc/hadoop
vim slaves
192.168.0.207   # add the slave IP
vim hadoop-env.sh / vim yarn-env.sh
export JAVA_HOME=/usr/jdk   # add the Java variable
vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.0.228:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/hadoop/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.0.228:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.0.228:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.0.228:19888</value>
  </property>
</configuration>
vim yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.0.228:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.0.228:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.0.228:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.0.228:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.0.228:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>768</value>
  </property>
</configuration>
Copy the directory to the slave machine:
scp -r /usr/hadoop root@192.168.0.207:/usr/
Format the namenode:
./bin/hdfs namenode -format
Start HDFS and YARN:
./sbin/start-dfs.sh
./sbin/start-yarn.sh
Test with jps.
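As a sketch, the jps check can be automated. The daemon names below are assumptions for this master layout (NameNode, SecondaryNameNode, ResourceManager); the demo pipes in sample jps output:

```shell
# Sketch: verify that expected daemons appear in `jps` output (use as: jps | check_daemons).
check_daemons() {
    out=$(cat)
    for d in NameNode SecondaryNameNode ResourceManager; do
        case "$out" in
            *"$d"*) ;;                        # daemon present
            *) echo "missing: $d"; return 1;;
        esac
    done
    echo "all master daemons running"
}

printf '1234 NameNode\n2345 SecondaryNameNode\n3456 ResourceManager\n' | check_daemons
```

Note this is a plain substring check, so "SecondaryNameNode" alone would also satisfy the "NameNode" test; it is only a quick sanity check, not a strict parser.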
Visit 192.168.0.228:50070
and 192.168.0.228:8088 in a browser.
Install MySQL and Hive
Local mode: this mode stores metadata in a local database (usually MySQL) and supports multiple users and concurrent sessions.
MySQL:
wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum -y install mysql-community-server
systemctl start mysqld                    # start the service
mysqladmin -uroot password 'password'     # set the root password
mysql -uroot -ppassword
create database hive;                     -- create the hive database
grant all on hive.* to 'hive'@'localhost' identified by 'hive';   -- grant privileges
Hive:
tar zxf apache-hive-1.2.1-bin.tar.gz
mv apache-hive-1.2.1-bin/ /usr/hadoop/hive
Configure variables:
vim /etc/profile
export HIVE_HOME=/usr/hadoop/hive
export PATH=$HIVE_HOME/bin:$HIVE_HOME/conf:$PATH
Execute source /etc/profile
Copy the JDBC driver into Hive's lib directory:
mv mysql-connector-java-5.1.31-bin.jar /usr/hadoop/hive/lib
cd /usr/hadoop/hive/conf
cp hive-default.xml.template hive-site.xml
vim hive-site.xml   # edit the configuration
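The guide does not show which properties to change in hive-site.xml. For the MySQL metastore created above, a minimal local-mode set usually looks like the following; the connection URL, user, and password here are assumptions based on the earlier grant statement, not values from the original article:

```xml
<!-- Sketch: minimal hive-site.xml entries for a local MySQL metastore (values assumed). -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
```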
cd /usr/hadoop/hive/bin/
Start Hive.
Install ZooKeeper and HBase
1. ZooKeeper
The master configuration is as follows:
tar zxf zookeeper-3.4.6.tar
mv zookeeper-3.4.6 /usr/hadoop/zookeeper
Change the owner of the files:
chown -R root:root /usr/hadoop/zookeeper
cd /usr/hadoop/zookeeper
mkdir data   # create the ZooKeeper data directory
Configure variables: vim /etc/profile
Add export ZOOKEEPER_HOME=/usr/hadoop/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
Execute source /etc/profile
The configuration file lives in the conf/ directory; rename zoo_sample.cfg to zoo.cfg and configure it as follows:
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/hadoop/zookeeper/data
clientPort=2181
Enter the IP address or hostname of master and slave:
server.1=192.168.0.228:2888:3888
server.2=192.168.0.207:2888:3888
Create a myid file in the data directory and put in it the number N from this machine's server.N line in zoo.cfg:
vim data/myid
1
Copy the files to the slave node:
scp -r /usr/hadoop/zookeeper/ root@192.168.0.207:/usr/hadoop/
Slave configuration:
Configure variables: vim /etc/profile
Add export ZOOKEEPER_HOME=/usr/hadoop/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
Execute source /etc/profile
cd /usr/hadoop/zookeeper/data
Create the myid file with this machine's server number from zoo.cfg:
vim myid
2
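To keep master and slave from drifting out of sync, the myid value can be derived from zoo.cfg instead of typed by hand. This is a sketch (file layout assumed from this guide); the demo works on a temp copy of the server lines above:

```shell
# Sketch: print the server number whose zoo.cfg line carries the given IP.
myid_for_ip() {    # usage: myid_for_ip <zoo.cfg> <ip>
    awk -F'[.=:]' -v ip="$2" '/^server\./ && index($0, ip) {print $2}' "$1"
}

# Demo against a temp copy of the server lines from this guide:
cat > /tmp/zoo-demo.cfg <<'EOF'
server.1=192.168.0.228:2888:3888
server.2=192.168.0.207:2888:3888
EOF
myid_for_ip /tmp/zoo-demo.cfg 192.168.0.207    # → 2
# On a real node: myid_for_ip /usr/hadoop/zookeeper/conf/zoo.cfg <this-node-ip> > data/myid
```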
Start (on both nodes):
[root@master bin]# /usr/hadoop/zookeeper/bin/zkServer.sh start
Run jps to verify, as shown in the figure.
Install HBase
1. Unpack the HBase tarball with tar.
2. Configure HBase.
a. conf/hbase-env.sh
export JAVA_HOME=/usr/jdk
export HBASE_MANAGES_ZK=false   # true uses the ZooKeeper bundled with HBase (no separate install needed); set false when ZooKeeper is installed separately
b. conf/hbase-site.xml
This configuration uses the ZooKeeper bundled with HBase:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>slave1,slave2,slave3</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
With a separately installed ZooKeeper, the configuration is as follows:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2,slave3</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zk</value>
  </property>
</configuration>
Note that hbase.rootdir must be consistent with the Hadoop fs.defaultFS configuration.
c. conf/regionservers
slave1
slave2
slave3
The HBase configuration is complete at this point. Copy it to slave1-slave3 with the scp command.
Start HBase:
start-hbase.sh
Use jps to see whether it started properly, or check in a browser at master:60010.