2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
1. What is HBase?

HBase is a distributed, column-oriented database built on top of the Hadoop file system (HDFS). It is an open-source, horizontally scalable project modeled on Google's Bigtable, and it provides fast, random access to large amounts of structured data. As part of the Hadoop ecosystem, HBase leverages the fault tolerance of HDFS and offers random, real-time read/write access to data: data can be stored in HDFS directly or through HBase, and HBase can be used to read and randomly access data already stored in HDFS.

HBase and HDFS compared:

- HDFS is a distributed file system well suited to storing large files; HBase is a database built on top of HDFS.
- HDFS does not support fast lookup of individual records; HBase provides fast lookups in large tables.
- HDFS offers high-latency batch processing; HBase serves single-row reads out of billions of records with low latency (random access).
- HDFS data can only be accessed sequentially; HBase uses hash-table-based indexes internally, so it can locate data in HDFS files quickly and provide random access.

Storage mechanism of HBase
HBase is a column-oriented database in which table rows are kept sorted by row key. A table schema defines only column families; within a column family, data is stored as key-value pairs. A table can have multiple column families, and each column family can hold any number of columns. Column values are stored contiguously on disk, and every cell value in the table carries a timestamp. In short, in HBase:
A table is a collection of rows. A row is a collection of column families. A column family is a collection of columns. A column is a collection of key-value pairs.
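As a rough illustration of this hierarchy, the sketch below models a single HBase table as a Bash associative array in which each cell is addressed by a row key plus a column-family:qualifier pair. All names in it (user_table, info:name, row1, and so on) are invented for the example; this is a conceptual sketch, not the HBase API.

```shell
#!/usr/bin/env bash
# Minimal sketch of HBase's logical model: a cell is addressed by
# (row key, column-family:qualifier) and holds a value.
declare -A user_table

put() { user_table["$1|$2"]=$3; }                   # put <row> <cf:qualifier> <value>
get() { printf '%s\n' "${user_table["$1|$2"]}"; }   # get <row> <cf:qualifier>

put row1 info:name Alice
put row1 info:age  30
put row2 info:name Bob

get row1 info:name    # Alice
get row2 info:name    # Bob
```

The "|" separator in the array key mirrors the idea that a cell's coordinates are the row and the qualified column taken together.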
2. HBase cluster deployment

1) Download and install

# download the installation package
wget http://archive.apache.org/dist/hbase/1.2.6/hbase-1.2.6-bin.tar.gz
# extract the installation package and move it into place
tar xf hbase-1.2.6-bin.tar.gz
mv hbase-1.2.6 /usr/local/hbase
# create the log, pid, and tmp directories
mkdir -p /home/hbase/{log,pid,tmp}
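The final mkdir relies on Bash brace expansion to create the three directories in one step. The throwaway sketch below shows the same expansion against a temporary directory instead of /home/hbase, so it can be run safely anywhere.

```shell
#!/usr/bin/env bash
# Demonstrates the brace expansion used by `mkdir -p /home/hbase/{log,pid,tmp}`
# against a temporary directory instead of the real target path.
base=$(mktemp -d)
mkdir -p "$base"/{log,pid,tmp}
ls "$base"    # lists log, pid, tmp
```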
2) Configure HBase environment variables

Edit the file /etc/profile.d/hbase.sh:
# HBASE ENV
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin
Make the HBase environment variables take effect:
source /etc/profile.d/hbase.sh

3. HBase configuration (namenode01)

1) Configure hbase-env.sh
Edit the file /usr/local/hbase/conf/hbase-env.sh and modify the following settings:
export JAVA_HOME=/usr/java/default
export HBASE_CLASSPATH=/usr/local/hadoop/etc/hadoop
export HBASE_LOG_DIR=/home/hbase/log
export HBASE_PID_DIR=/home/hbase/pid
export HBASE_MANAGES_ZK=false

2) Configure the region servers (regionservers)
Edit the file /usr/local/hbase/conf/regionservers and modify it as follows:
datanode01
datanode02
datanode03

3) Configure hbase-site.xml
Edit the file /usr/local/hbase/conf/hbase-site.xml and modify it as follows:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode01:9000/hbase</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
  <property>
    <name>hbase.regionserver.port</name>
    <value>60020</value>
  </property>
  <property>
    <name>hbase.regionserver.info.port</name>
    <value>60030</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>120000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk01:2181,zk02:2181,zk03:2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.maxClientCnxns</name>
    <value>300</value>
  </property>
</configuration>

4) Copy the configuration files to the other nodes

cd /usr/local/hbase/conf
scp * datanode01:/usr/local/hbase/conf
scp * datanode02:/usr/local/hbase/conf
scp * datanode03:/usr/local/hbase/conf

4. HBase startup

1) Execute start-hbase.sh on namenode01

2) Check the processes on each node

[root@namenode01 conf]# jps
14512 NameNode
14786 ResourceManager
15204 HMaster
15405 Jps

[root@datanode01 ~]# jps
3509 DataNode
3621 NodeManager
3238 HRegionServer
1097 QuorumPeerMain
3839 Jps

[root@datanode02 ~]# jps
3668 Jps
3048 HRegionServer
3322 DataNode
3434 NodeManager
1101 QuorumPeerMain

[root@datanode03 ~]# jps
3922 DataNode
4034 NodeManager
4235 Jps
1102 QuorumPeerMain
3614 HRegionServer
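A quick way to turn the jps checks above into a script is to grep the output for the daemon names a data node is expected to run. The sketch below uses a hard-coded sample of the datanode01 listing so it works without a live cluster; on a real node you would capture `$(jps)` instead.

```shell
#!/usr/bin/env bash
# Check jps-style output for the daemons a data node should run.
# Sample output hard-coded from the listing above; replace with
# jps_output=$(jps) on a live node.
jps_output="3509 DataNode
3621 NodeManager
3238 HRegionServer
1097 QuorumPeerMain"

for daemon in DataNode NodeManager HRegionServer QuorumPeerMain; do
  if grep -q "$daemon" <<<"$jps_output"; then
    echo "$daemon: running"
  else
    echo "$daemon: MISSING"
  fi
done
```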
3) Check the HBase web interface

Visit http://192.168.1.200:60010/master-status
Visit http://192.168.1.201:60030/rs-status
4) Enter the hbase shell to verify

[root@namenode01 ~]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):001:0> list
TABLE
0 row(s) in 0.2210 seconds

=> []
hbase(main):002:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load
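The `status` summary line lends itself to scripted health checks. The sketch below extracts the live and dead server counts from the sample line above with sed; it is a plain-shell sketch, not an HBase API, and a live check would capture the line with something like `echo status | hbase shell`.

```shell
#!/usr/bin/env bash
# Parse the summary line printed by the hbase shell `status` command.
# The line is hard-coded from the session above.
status_line="1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load"
servers=$(sed -n 's/.* \([0-9][0-9]*\) servers.*/\1/p' <<<"$status_line")
dead=$(sed -n 's/.* \([0-9][0-9]*\) dead.*/\1/p' <<<"$status_line")
echo "live servers: $servers, dead: $dead"   # live servers: 3, dead: 0
```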