2025-04-01 Update From: SLTechnology News&Howtos
This article walks through the configuration needed to build an HBase-0.98.9 cluster, along with the errors encountered during installation and operation and how to fix them.
I. Configuration
1.1 hbase-env.sh
Everything else stays at its default; set export HBASE_MANAGES_ZK=false, which means not using the ZooKeeper that ships with HBase but an external ZooKeeper instead (here, the ZooKeeper already configured for the Hadoop cluster).
1.2 hbase-site.xml
The properties set here (everything else left at defaults):
hbase.rootdir = hdfs://master1:8020/hbase   (the directory shared by region servers)
hbase.zookeeper.property.clientPort = 2181  (from ZooKeeper's zoo.cfg; the port at which clients connect)
zookeeper.session.timeout = 120000
hbase.zookeeper.quorum = master1,master2,slave1
hbase.tmp.dir = /root/hbasedata
hbase.cluster.distributed = true
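The property list above can be written out as a complete hbase-site.xml. The sketch below generates a minimal version with a shell heredoc (the hostnames, port, and paths are the ones used in this article; adjust them for your own cluster) and greps it back as a sanity check:

```shell
# Write a minimal hbase-site.xml with the values used in this article
# into a throwaway location (/tmp/hbase-site-demo.xml is a stand-in for
# $HBASE_HOME/conf/hbase-site.xml).
cat > /tmp/hbase-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master1:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>120000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master1,master2,slave1</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/root/hbasedata</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF
# Sanity check: hbase.rootdir must point at the HDFS NameNode address.
grep -A1 'hbase.rootdir' /tmp/hbase-site-demo.xml | grep '<value>'
```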
1.3 regionservers
master1
master2
slave1
1.4 Explanation of some configuration parameters
hbase.zookeeper.property.clientPort: the port on which clients connect to ZooKeeper.
zookeeper.session.timeout: the session timeout between a RegionServer and ZooKeeper. When it expires, ZooKeeper removes that RegionServer from the cluster list; once HMaster receives the removal notification, it rebalances the regions that server was responsible for onto the surviving RegionServers.
hbase.zookeeper.quorum: defaults to localhost; lists the servers in the ZooKeeper ensemble.
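Clients ultimately combine hbase.zookeeper.quorum with the client port into a single ZooKeeper connect string. The small sketch below illustrates the shape of that string (make_zk_connect is a hypothetical helper written for this article, not an HBase tool):

```shell
# Build a ZooKeeper connect string ("host:port,host:port,...") from the
# hbase.zookeeper.quorum list and hbase.zookeeper.property.clientPort.
# make_zk_connect is an illustrative helper, not part of HBase.
make_zk_connect() {
  local quorum="$1" port="$2" out="" host
  IFS=',' read -ra hosts <<< "$quorum"
  for host in "${hosts[@]}"; do
    out="${out:+$out,}$host:$port"     # append host:port, comma-separated
  done
  printf '%s\n' "$out"
}

make_zk_connect "master1,master2,slave1" 2181
# → master1:2181,master2:2181,slave1:2181
```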
2 Start / shut down
Execute on master1
bin/start-hbase.sh
bin/stop-hbase.sh
3 Test
If the web management page opens, HMaster has started successfully: http://master1:60010
Execute on master1
${HBASE_HOME}/bin/hbase shell
Enter the shell command line and check that it works properly by creating tables and performing other operations.
hbase(main):001:0> create 'scores','grade','notes'
The HBase cluster needs to rely on a Zookeeper ensemble. All nodes in the HBase cluster and clients that want to access the HBase need to be able to access the Zookeeper ensemble. HBase comes with Zookeeper, but to make it easier for other applications to use Zookeeper, it's best to use a separately installed Zookeeper ensemble.
In addition, Zookeeper ensemble is generally configured with odd nodes, and Hadoop cluster, Zookeeper ensemble cluster and HBase cluster are three independent clusters that do not need to be deployed on the same physical node. They communicate with each other through the network.
Note that to prevent HBase from starting its bundled ZooKeeper, the export HBASE_MANAGES_ZK=false setting above is not enough by itself: hbase.cluster.distributed in hbase-site.xml must also be true. Otherwise startup fails with a "Could not start ZK at requested port of 2181" error, because HBase tries to start its bundled ZooKeeper while the separately installed ZooKeeper is already running on the default port 2181.
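Since both settings live in different files, it is easy to change one and forget the other. A grep-based sanity check, rehearsed here against a throwaway conf directory (check_external_zk is a hypothetical helper for illustration; point it at your real conf directory, e.g. $HBASE_HOME/conf):

```shell
# Check the two settings that must agree when using an external ZooKeeper:
# HBASE_MANAGES_ZK=false in hbase-env.sh and hbase.cluster.distributed in
# hbase-site.xml. check_external_zk is an illustrative helper, not an HBase tool.
check_external_zk() {
  local conf="$1"
  if ! grep -q '^export HBASE_MANAGES_ZK=false' "$conf/hbase-env.sh"; then
    echo "WARN: HBASE_MANAGES_ZK is not false; HBase will try to start its own ZK"
    return 1
  fi
  if ! grep -q 'hbase.cluster.distributed' "$conf/hbase-site.xml"; then
    echo "WARN: hbase.cluster.distributed not set; expect the 'Could not start ZK' error"
    return 1
  fi
  echo "OK: external ZooKeeper configuration looks consistent"
}

# Rehearsal against a throwaway conf directory:
rm -rf /tmp/hbase-conf-demo
mkdir -p /tmp/hbase-conf-demo
echo 'export HBASE_MANAGES_ZK=false' > /tmp/hbase-conf-demo/hbase-env.sh
printf '<configuration><property><name>hbase.cluster.distributed</name><value>true</value></property></configuration>\n' \
  > /tmp/hbase-conf-demo/hbase-site.xml
check_external_zk /tmp/hbase-conf-demo
# → OK: external ZooKeeper configuration looks consistent
```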
Also, stop-hbase.sh sometimes hangs for a long time; this is usually because ZooKeeper was shut down first.
Finally, HBase does not need MapReduce, so it is enough to start HDFS with start-dfs.sh, then start ZooKeeper on each ZooKeeper node, and finally start HBase with start-hbase.sh.
II. Summary of errors encountered during installation and operation
1. HBase failed to start. Checking the log shows the error:
Failed to start zookeeper BindException: Address already in use
Reason: ZooKeeper was started before HBase (an external zk cluster is in use). The solution is to set export HBASE_MANAGES_ZK=false and make sure the same setting is present in hbase-env.sh.
2. Starting HMaster again, it still does not come up. The log shows the error:
hbase java.lang.RuntimeException: HMaster aborted; the relevant line is: failed on connection exception: java.net.ConnectException: master1 Connection refused
Reason: the master1 node failed to connect to the Hadoop cluster, so the hbase-site.xml configuration is suspect. Its connection to Hadoop was configured as follows:
hbase.rootdir = hdfs://master1:9000/hbase
The port configured here is 9000, while the cluster's fs.defaultFS port is 8020:
After changing the port to 8020 and restarting HBase, everything works.
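This kind of port mismatch can be caught with a grep before starting HBase. The sketch below rehearses the check against throwaway copies of the two files (in a real cluster, point the greps at $HADOOP_CONF_DIR/core-site.xml and $HBASE_HOME/conf/hbase-site.xml):

```shell
# Compare the NameNode port in fs.defaultFS with the port in hbase.rootdir.
# Throwaway copies reproduce the mismatch described in the article.
rm -rf /tmp/conf-demo
mkdir -p /tmp/conf-demo
printf '<value>hdfs://master1:8020</value>\n'       > /tmp/conf-demo/core-site.xml
printf '<value>hdfs://master1:9000/hbase</value>\n' > /tmp/conf-demo/hbase-site.xml

# Extract the port number from each hdfs:// URL.
fs_port=$(grep -o 'hdfs://[^<]*' /tmp/conf-demo/core-site.xml  | sed 's#.*:\([0-9]*\).*#\1#')
hb_port=$(grep -o 'hdfs://[^<]*' /tmp/conf-demo/hbase-site.xml | sed 's#.*:\([0-9]*\).*#\1#')
if [ "$fs_port" != "$hb_port" ]; then
  echo "MISMATCH: fs.defaultFS uses $fs_port but hbase.rootdir uses $hb_port"
fi
# → MISMATCH: fs.defaultFS uses 8020 but hbase.rootdir uses 9000
```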
3. The Hadoop jar packages bundled with HBase differ from the version running on the existing cluster and need to be replaced to ensure stability and consistency.
rm -rf /usr/hbase-0.98.21-hadoop2/lib/hadoop*.jar
find /usr/hadoop/share/hadoop -name "hadoop*.jar" | xargs -I {} cp {} /usr/hbase-0.98.21-hadoop2/lib
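Before running those two commands against a real installation, the same swap can be rehearsed against throwaway directories (the paths and jar names below are scratch stand-ins for the article's /usr/hbase-0.98.21-hadoop2/lib and /usr/hadoop/share/hadoop):

```shell
# Rehearse the jar swap in a scratch area: delete the bundled hadoop jars,
# then copy the cluster's jars in with the same find | xargs pattern.
rm -rf /tmp/jar-demo
mkdir -p /tmp/jar-demo/hbase-lib /tmp/jar-demo/hadoop-share
touch /tmp/jar-demo/hbase-lib/hadoop-common-2.2.0.jar     # stale bundled jar
touch /tmp/jar-demo/hadoop-share/hadoop-common-2.6.3.jar  # cluster's jars
touch /tmp/jar-demo/hadoop-share/hadoop-hdfs-2.6.3.jar

rm -f /tmp/jar-demo/hbase-lib/hadoop*.jar                 # drop the bundled jars
find /tmp/jar-demo/hadoop-share -name "hadoop*.jar" \
  | xargs -I {} cp {} /tmp/jar-demo/hbase-lib             # copy cluster jars in

ls /tmp/jar-demo/hbase-lib
```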
After the replacement and a restart, HMaster again fails to start. The log shows:
Caused by: java.lang.ClassNotFoundException: com.amazonaws.auth.AWSCredentialsProvider
A jar package is missing.
Solution:
Copy hadoop-2.6.3/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar into hbase/lib.
4. After the twists and turns above, it turns out that after rebooting HBase only HMaster starts; the other nodes do not, with the log errors below. After starting a regionserver manually, HMaster becomes blocked again and table operations cannot be performed in the shell console.

2015-07-01 04:39:34,480 WARN [MASTER_META_SERVER_OPERATIONS-master:60000-0] master.AssignmentManager: Can't move 1588230740, there is no destination server available.
2015-07-01 04:39:34,480 WARN [MASTER_META_SERVER_OPERATIONS-master:60000-0] master.AssignmentManager: Unable to determine a plan to assign {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2015-07-01 04:39:35,480 WARN [MASTER_META_SERVER_OPERATIONS-master:60000-0] master.AssignmentManager: Can't move 1588230740, there is no destination server available.
2015-07-01 04:39:35,481 WARN [MASTER_META_SERVER_OPERATIONS-master:60000-0] master.AssignmentManager: Unable to determine a plan to assign {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2015-07-01 04:39:39,379 ERROR [RpcServer.handler=6,port=60000] master.HMaster: Region server server2.corp.gs.com,60020,1435743503791 reported a fatal error:
ABORTING region server server1.corp.gs.com,60020,1435743483790: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing server1.corp.gs.com,60020,1435743483790 as dead server
    at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:339)
    at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:254)
    at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1343)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:5087)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
    at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
Caused by: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing server1.corp.gs.com,60020,1435743483790 as dead server
Reason: most likely the HBase data stored in ZooKeeper is corrupt.
Solution:
1. First stop all HBase processes.
2. Delete the HBase data from zk:
hbase zkcli          # or bin/zkCli.sh, to enter the zk command line
ls /                 # check whether the hbase node exists
rmr /hbase           # delete the hbase data
3. Run hdfs fsck /hbase to make sure the data has no conflicts.
4. Restart with start-hbase.sh.
5. If only the primary node is still running, start regionserver manually
./hbase-daemon.sh start regionserver
6. Run hbase hbck to check whether the data is consistent. If there is a problem, run hbase hbck -fix (or -repair).
* Running hbase hbck -repair can sometimes fail with a connection refused error. This happens because a regionserver is down again, so make sure every regionserver is running.
For more on this problem, see http://apache-hbase.679495.n3.nabble.com/Corrupted-META-td4072787.html.
5. HMaster cannot be started after restarting HBase, with the error org.apache.hadoop.hbase.util.FileSystemVersionException: File system needs to be upgraded. You have version null and I want version 8.
Solution:
The log suggests that the hbase.version file has disappeared. If you are just testing HBase, you can simply delete /hbase in HDFS and restart, but the previous data is then lost.
bin/hadoop fs -ls /hbase
This confirms that /hbase/hbase.version is indeed gone; a copy may turn up in the /lost+found directory.
If you need to retain previous data, you can follow these steps:
bin/hadoop fs -mv /hbase /hbase.bk
Restart HBase, which regenerates the /hbase/hbase.version file, and then:
bin/hadoop fs -cp /hbase/hbase.version /hbase.bk
bin/hadoop fs -rmr /hbase
bin/hadoop fs -mv /hbase.bk /hbase
Restart HBase once more; it starts splitting hlogs and the data is restored.
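The recovery sequence above can be rehearsed on the local filesystem, with plain mv/cp/rm standing in for the hadoop fs commands (the directory names mirror the article's /hbase and /hbase.bk, rooted under /tmp for safety):

```shell
# Local rehearsal of the hbase.version recovery: move the data aside,
# let a "restart" regenerate hbase.version, copy it into the backup,
# then swap the backup back into place.
rm -rf /tmp/hdfs-demo
mkdir -p /tmp/hdfs-demo/hbase/data
echo 'table data' > /tmp/hdfs-demo/hbase/data/region1   # pretend region data

mv /tmp/hdfs-demo/hbase /tmp/hdfs-demo/hbase.bk          # hadoop fs -mv /hbase /hbase.bk
mkdir -p /tmp/hdfs-demo/hbase                            # "restart" recreates /hbase...
echo '8' > /tmp/hdfs-demo/hbase/hbase.version            # ...with a fresh hbase.version
cp /tmp/hdfs-demo/hbase/hbase.version /tmp/hdfs-demo/hbase.bk/  # hadoop fs -cp
rm -r /tmp/hdfs-demo/hbase                               # hadoop fs -rmr /hbase
mv /tmp/hdfs-demo/hbase.bk /tmp/hdfs-demo/hbase          # hadoop fs -mv /hbase.bk /hbase

ls /tmp/hdfs-demo/hbase
```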
6. After upgrading HBase to 1.1.6, web port 60010 can no longer be opened.
Since version 1.0, the HBase master web UI no longer runs on this port by default, so the port has to be configured explicitly, as follows.
Just add a little content to hbase-site.xml
hbase.master.info.port = 60010
In the same way, you can configure regionserver web ports
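For example, a fragment along these lines could pin the regionserver web UI as well (a sketch only: hbase.regionserver.info.port is the analogous property, and 60030 was the pre-1.0 default regionserver info port; verify both against your HBase version's documentation):

```
<!-- Added inside <configuration> in hbase-site.xml, next to the
     hbase.master.info.port property described above. -->
<property>
  <name>hbase.regionserver.info.port</name>
  <value>60030</value>
</property>
```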
These are all the contents of the article "How to build the configuration of HBase-0.98.9". Thank you for reading!