2025-02-27 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article explains how HBase's startup scripts work. The editor thinks it is very practical, so it is shared here for your reference; follow along and have a look.
Common scripts mainly include:
1. $HBASE_HOME/bin/start-hbase.sh
Starts the entire cluster
2. $HBASE_HOME/bin/stop-hbase.sh
Stops the entire cluster
3. $HBASE_HOME/bin/hbase-daemons.sh
Starts or stops all regionservers, zookeepers, or backup masters
4. $HBASE_HOME/bin/hbase-daemon.sh
Starts or stops a single master, regionserver, or zookeeper
5. $HBASE_HOME/bin/hbase
The script that ultimately launches every daemon process
Generally, the HBase cluster is started through start-hbase.sh. The script execution process is as follows:
#!/usr/bin/env bash
# $?  exit code of the last command
# $#  number of arguments passed to the script
# $0  name of the shell script
# $1  first argument of the shell script
# $2  second argument of the shell script
# $@  list of all arguments of the shell script

# Start hadoop hbase daemons.
# Run this on master node.
usage="Usage: start-hbase.sh"

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin">/dev/null; pwd`

# 1. Load the relevant configuration.
. "$bin"/hbase-config.sh

# start hbase daemons
errCode=$?   # exit code
if [ $errCode -ne 0 ]
then
  exit $errCode
fi

# 2. Parse the arguments (0.96 and later can take a single argument,
# autorestart, whose function is to restart)
if [ "$1" = "autorestart" ]   # first argument of start-hbase.sh
then
  commandToRun="autorestart"
else
  commandToRun="start"
fi

# HBASE-6504 - only take the first line of the output in case verbose gc is on
distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1`

# 3. Determine whether hbase runs in distributed mode (configured in
# hbase-site.xml) and call the appropriate startup scripts.
if [ "$distMode" == 'false' ]
then
  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master $@
else
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" $commandToRun zookeeper
  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
    --hosts "${HBASE_REGIONSERVERS}" $commandToRun regionserver
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
    --hosts "${HBASE_BACKUP_MASTERS}" $commandToRun master-backup
fi
The role of hbase-config.sh:
Loads relevant configurations, such as the HBASE_HOME directory, the conf directory (HBASE_CONF_DIR), the regionserver machine list (HBASE_REGIONSERVERS), the JAVA_HOME directory, and the HBASE_BACKUP_MASTERS machine list. It calls $HBASE_HOME/conf/hbase-env.sh.
if [ -z "$HBASE_ENV_INIT" ] && [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then
  . "${HBASE_CONF_DIR}/hbase-env.sh"
  export HBASE_ENV_INIT="true"
fi
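The HBASE_ENV_INIT guard above makes hbase-env.sh get sourced at most once, no matter how many scripts include hbase-config.sh. Below is a minimal, self-contained sketch of that source-once pattern; the file name demo_env.sh, the function load_env, and the variables DEMO_ENV_INIT/DEMO_OPT are illustrative names, not part of HBase:

```shell
#!/usr/bin/env bash
# Sketch: source a config file at most once, guarded by an *_ENV_INIT flag.
# demo_env.sh, DEMO_ENV_INIT and DEMO_OPT are hypothetical names for illustration.

load_env() {
  local conf_dir=$1
  if [ -z "$DEMO_ENV_INIT" ] && [ -f "${conf_dir}/demo_env.sh" ]; then
    . "${conf_dir}/demo_env.sh"       # pull settings into the current shell
    export DEMO_ENV_INIT="true"       # mark it done so repeats are no-ops
    echo "loaded"
  else
    echo "skipped"
  fi
}

# Demo: create a throwaway config file and load it twice.
tmpdir=$(mktemp -d)
echo 'export DEMO_OPT="-Xmx4g"' > "$tmpdir/demo_env.sh"
load_env "$tmpdir"   # first call sources the file
load_env "$tmpdir"   # second call is a no-op thanks to the guard
echo "$DEMO_OPT"
rm -rf "$tmpdir"
```

The export matters: because hbase-config.sh is sourced by child scripts too, the flag must travel through the environment, not just the current shell.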
The role of hbase-env.sh:
It mainly configures the JVM and its GC parameters; it can also configure the log directory and logging parameters, whether HBase should manage ZK itself, the process id directory, and so on.
# export JAVA_HOME=/usr/sca_app/java/jdk1.7.0

# Where log files are stored. $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
# export HBASE_MANAGES_ZK=true
The role of hbase-daemons.sh:
Starts (or stops) the requested processes on the appropriate hosts.
# Run a hbase command on all slave hosts.
# Modelled after $HADOOP_HOME/bin/hadoop-daemons.sh

usage="Usage: hbase-daemons.sh [--config <hbase-confdir>] \
 [--hosts regionserversfile] [start|stop] command args..."

# if no args specified, show usage
if [ $# -le 1 ]; then
  echo $usage
  exit 1
fi

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin">/dev/null; pwd`

. $bin/hbase-config.sh

remote_cmd="cd ${HBASE_HOME}; $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} $@"
args="--hosts ${HBASE_REGIONSERVERS} --config ${HBASE_CONF_DIR} $remote_cmd"

command=$2
case $command in
  (zookeeper)
    exec "$bin/zookeepers.sh" $args
    ;;
  (master-backup)
    exec "$bin/master-backup.sh" $args
    ;;
  (*)
    exec "$bin/regionservers.sh" $args
    ;;
esac
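The heart of hbase-daemons.sh is just a bash case over the command name that execs the matching helper script. A standalone toy version of that dispatch is below; route_daemon is an illustrative name, and it echoes the helper it would run instead of exec'ing it:

```shell
#!/usr/bin/env bash
# Toy version of hbase-daemons.sh's dispatch: pick a helper script based on
# the command name ($2 in the real script too). We echo instead of exec.

route_daemon() {
  local command=$2
  case $command in
    (zookeeper)     echo "zookeepers.sh $*";;
    (master-backup) echo "master-backup.sh $*";;
    (*)             echo "regionservers.sh $*";;
  esac
}

route_daemon start zookeeper      # -> zookeepers.sh start zookeeper
route_daemon start master-backup  # -> master-backup.sh start master-backup
route_daemon stop regionserver    # -> regionservers.sh stop regionserver
```

In the real script each branch execs the helper with $args, which carries the --hosts and --config options along to the fan-out script.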
The role of zookeepers.sh:
If HBASE_MANAGES_ZK="true" in hbase-env.sh, it parses the XML configuration via the ZKServerTool class to obtain the list of ZK nodes (that is, the values of hbase.zookeeper.quorum), then sends remote commands to those nodes over SSH:
cd ${HBASE_HOME}; $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} start/stop zookeeper

if [ "$HBASE_MANAGES_ZK" = "true" ]; then
  hosts=`"$bin"/hbase org.apache.hadoop.hbase.zookeeper.ZKServerTool | grep '^ZK host:' | sed 's,^ZK host:,,'`
  cmd=$"${@// /\\ }"
  for zookeeper in $hosts
  do
    ssh $HBASE_SSH_OPTS $zookeeper $cmd 2>&1 | sed "s/^/$zookeeper: /" &
    if [ "$HBASE_SLAVE_SLEEP" != "" ]; then
      sleep $HBASE_SLAVE_SLEEP
    fi
  done
fi
The role of regionservers.sh:
Similar to zookeepers.sh: it gets the regionserver machine list from the ${HBASE_CONF_DIR}/regionservers configuration file, then sends remote commands to those machines over SSH:
cd ${HBASE_HOME}; $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} start/stop regionserver
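zookeepers.sh, regionservers.sh, and master-backup.sh all share the same fan-out shape: read a host list, then run the remote command on each host over SSH in the background. The sketch below reproduces that loop with echo standing in for ssh so it can run anywhere; fan_out and the host names are illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the regionservers.sh fan-out loop. `echo` stands in for
# `ssh $HBASE_SSH_OPTS $host $cmd 2>&1 | sed "s/^/$host: /" &`
# so the sketch is runnable offline.

fan_out() {
  local hostfile=$1; shift
  local cmd=$*
  while read -r host; do
    echo "$host: $cmd"   # real script runs the command on $host via ssh
  done < "$hostfile"
}

# Demo with a throwaway host list (hypothetical host names).
hostfile=$(mktemp)
printf 'rs1.example.com\nrs2.example.com\n' > "$hostfile"
fan_out "$hostfile" "cd \$HBASE_HOME; bin/hbase-daemon.sh start regionserver"
rm -f "$hostfile"
```

Note that the real loop backgrounds each ssh with &, so all hosts are contacted roughly in parallel, with an optional HBASE_SLAVE_SLEEP pause between launches.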
The role of master-backup.sh:
It gets the backup-master machine list from the ${HBASE_CONF_DIR}/backup-masters configuration file (in the default configuration this file does not exist, so no backup master is started), then sends remote commands to those machines over SSH:
cd ${HBASE_HOME}; $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} start/stop master --backup
The role of hbase-daemon.sh:
Whether it is zookeepers.sh, regionservers.sh or master-backup.sh, the local hbase-daemon.sh will eventually be called, and the execution process is as follows:
1. Runs hbase-config.sh to load the various configurations (Java environment, log configuration, process id directory, etc.)
2. Sets the executable and the log output paths
# get arguments
startStop=$1

JAVA=$JAVA_HOME/bin/java

export HBASE_LOG_PREFIX=hbase-$HBASE_IDENT_STRING-$command-$HOSTNAME
export HBASE_LOGFILE=$HBASE_LOG_PREFIX.log

if [ -z "${HBASE_ROOT_LOGGER}" ]; then
  export HBASE_ROOT_LOGGER=${HBASE_ROOT_LOGGER:-"INFO,RFA"}
fi

if [ -z "${HBASE_SECURITY_LOGGER}" ]; then
  export HBASE_SECURITY_LOGGER=${HBASE_SECURITY_LOGGER:-"INFO,RFAS"}
fi

logout=$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.out
loggc=$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.gc
loglog="${HBASE_LOG_DIR}/${HBASE_LOGFILE}"
pid=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.pid
export HBASE_ZNODE_FILE=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.znode
export HBASE_START_FILE=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.autorestart

# hbase 0.98:
# thiscmd=$0
# hbase 1.0.1 computes the absolute path of hbase-daemon.sh as below,
# but the value obtained is the same:
thiscmd="$bin/$(basename ${BASH_SOURCE-$0})"
args=$@

case $startStop in

(start)
  check_before_start
  hbase_rotate_log $logout
  hbase_rotate_log $loggc
  echo starting $command, logging to $logout
  # the following command passes internal_start back to hbase-daemon.sh as an argument
  nohup $thiscmd --config "${HBASE_CONF_DIR}" \
      internal_start $command $args < /dev/null > ${logout} 2>&1 &
  sleep 1; head "${logout}"
  ;;

(autorestart)
  check_before_start
  hbase_rotate_log $logout
  hbase_rotate_log $loggc
  nohup $thiscmd --config "${HBASE_CONF_DIR}" \
      internal_autorestart $command $args < /dev/null > ${logout} 2>&1 &
  ;;

(internal_start)
  # Add to the command log file vital stats on our environment.
  echo "`date` Starting $command on `hostname`" >> $loglog
  echo "`ulimit -a`" >> $loglog 2>&1
  nice -n $HBASE_NICENESS "$HBASE_HOME"/bin/hbase \
      --config "${HBASE_CONF_DIR}" \
      $command "$@" start >> "$logout" 2>&1 &
  echo $! > $pid
  wait
  cleanZNode
  ;;

(internal_autorestart)
  touch "$HBASE_START_FILE"
  # keep starting the command until asked to stop. Reloop on software crash
  while true
    do
    lastLaunchDate=`date +%s`
    $thiscmd --config "${HBASE_CONF_DIR}" internal_start $command $args

    # if the file does not exist it means that it was not stopped properly by the stop command
    if [ ! -f "$HBASE_START_FILE" ]; then
      exit 1
    fi

    # if the cluster is being stopped then do not restart it again.
    zparent=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.parent`
    if [ "$zparent" == "null" ]; then zparent="/hbase"; fi
    zkrunning=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.state`
    if [ "$zkrunning" == "null" ]; then zkrunning="running"; fi
    zkFullRunning=$zparent/$zkrunning
    $bin/hbase zkcli stat $zkFullRunning 2>&1 | grep "Node does not exist" 1>/dev/null 2>&1

    # grep returns 0 if it found something, 1 otherwise
    if [ $? -eq 0 ]; then
      exit 1
    fi

    # If ZooKeeper cannot be found, then do not restart
    $bin/hbase zkcli stat $zkFullRunning 2>&1 | grep Exception | grep ConnectionLoss 1>/dev/null 2>&1
    if [ $? -eq 0 ]; then
      exit 1
    fi

    # if it was launched less than 5 minutes ago, then wait for 5 minutes before starting it again.
    curDate=`date +%s`
    limitDate=`expr $lastLaunchDate + 300`
    if [ $limitDate -gt $curDate ]; then
      sleep 300
    fi
  done
  ;;

(stop)
  rm -f "$HBASE_START_FILE"
  if [ -f $pid ]; then
    pidToKill=`cat $pid`
    # kill -0 == see if the PID exists
    if kill -0 $pidToKill > /dev/null 2>&1; then
      echo -n stopping $command
      echo "`date` Terminating $command" >> $loglog
      kill $pidToKill > /dev/null 2>&1
      waitForProcessEnd $pidToKill $command
    else
      retval=$?
      echo no $command to stop because kill -0 of pid $pidToKill failed with status $retval
    fi
  else
    echo no $command to stop because no pid file $pid
  fi
  rm -f $pid
  ;;

(restart)
  # stop the command
  $thiscmd --config "${HBASE_CONF_DIR}" stop $command $args &
  wait_until_done $!
  # wait a user-specified sleep period
  sp=${HBASE_RESTART_SLEEP:-3}
  if [ $sp -gt 0 ]; then
    sleep $sp
  fi
  # start the command
  $thiscmd --config "${HBASE_CONF_DIR}" start $command $args &
  wait_until_done $!
  ;;

(*)
  echo $usage
  exit 1
  ;;

esac
3. If the command is start:
Rotate the .out file and the gc log file, then record the startup time and the ulimit -a output in the log file, for example:
"Mon Nov 26 10:31:42 CST 2012 Starting master on dwxx.yy.taobao".. open files (-n) 65536..
4. Call $HBASE_HOME/bin/hbase start master/regionserver/zookeeper
5. Execute wait, waiting for the process started in step 4 to finish.
6. Execute cleanZNode to delete the znode that the regionserver registered in ZK. The purpose: when a regionserver process exits unexpectedly, this avoids the 3-minute ZK heartbeat timeout wait, so the master can begin failure recovery immediately.
7. If the command is stop:
Check whether the process exists based on the pid file; call kill, then wait until the process is gone.
8. If the command is restart:
Call stop, then call start.
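The stop path above can be sketched as a small poll-and-escalate loop, modeled on the waitForProcessEnd helper that hbase-daemon.sh calls: poll with kill -0 until the process disappears, and send SIGKILL if it outlives a timeout. This is a simplified sketch, not the real helper; the 3-second timeout and the demo victim process are illustrative (the real script waits HBASE_STOP_TIMEOUT seconds):

```shell
#!/usr/bin/env bash
# Sketch (not the real helper): poll a pid with kill -0 until it exits,
# escalating to SIGKILL after a timeout, in the spirit of waitForProcessEnd.

wait_for_process_end() {
  local pidKilled=$1
  local timeout=${2:-3}           # demo value; real script uses HBASE_STOP_TIMEOUT
  local startedAt=$(date +%s)
  while kill -0 "$pidKilled" > /dev/null 2>&1; do
    sleep 1
    if [ $(( $(date +%s) - startedAt )) -gt "$timeout" ]; then
      # no mercy: the real script logs this and sends SIGKILL
      kill -9 "$pidKilled" > /dev/null 2>&1
      break
    fi
  done
}

# Demo: stop a background process, reap it, then confirm it is gone.
sleep 30 &
victim=$!
kill "$victim" > /dev/null 2>&1
wait "$victim" 2> /dev/null || true   # reap, so kill -0 no longer sees a zombie
wait_for_process_end "$victim" 3
if kill -0 "$victim" 2> /dev/null; then echo "still running"; else echo "stopped"; fi
```

The kill -0 trick sends no signal at all; it only asks the kernel whether the pid exists, which is why both the stop branch and this loop rely on it.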
The role of $HBASE_HOME/bin/hbase:
Every daemon is ultimately launched by this script.
1. You can view its usage by running $HBASE_HOME/bin/hbase with no arguments:
[mvtech3@cu-dmz3 bin]$ hbase
Usage: hbase [<options>] <command> [<args>]
Options:
  --config DIR     Configuration direction to use. Default: ./conf
  --hosts HOSTS    Override the list in 'regionservers' file

Commands:
Some commands take arguments. Pass no args or -h for usage.
  shell            Run the HBase shell
  hbck             Run the hbase 'fsck' tool
  hlog             Write-ahead-log analyzer
  hfile            Store file analyzer
  zkcli            Run the ZooKeeper shell
  upgrade          Upgrade hbase
  master           Run an HBase HMaster node
  regionserver     Run an HBase HRegionServer node
  zookeeper        Run a Zookeeper server
  rest             Run an HBase REST server
  thrift           Run the HBase Thrift server
  thrift2          Run the HBase Thrift2 server
  clean            Run the HBase clean up script
  classpath        Dump hbase CLASSPATH
  mapredcp         Dump CLASSPATH entries required by mapreduce
  version          Print the version
  CLASSNAME        Run the class named CLASSNAME
2.bin/hbase shell is the commonly used shell tool; both DDL and DML rely on it in day-to-day operations, and its implementation (which calls into hbase) is written in Ruby.
[mvtech3@cu-dmz3 bin]$ hbase shell
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 0.98.1-hadoop2, r1583035, Sat Mar 29 17:19:25 PDT 2014

hbase(main):001:0>
3.bin/hbase hbck
An operations tool commonly used to check the cluster's data consistency; it directly calls the main function of org.apache.hadoop.hbase.util.HBaseFsck.
4.bin/hbase hlog
A write-ahead-log analysis tool; it directly calls the main function of org.apache.hadoop.hbase.wal.WALPrettyPrinter.
5.bin/hbase hfile
An HFile analysis tool; it directly calls the main function of org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.
6.bin/hbase zkcli
A shell tool for viewing and managing ZK; it calls the main function of org.apache.zookeeper.ZooKeeperMain.
7.bin/hbase master / regionserver / zookeeper
Executing $HBASE_HOME/bin/hbase start master/regionserver/zookeeper directly calls the main function of org.apache.hadoop.hbase.master.HMaster, org.apache.hadoop.hbase.regionserver.HRegionServer, or org.apache.hadoop.hbase.zookeeper.HQuorumPeer respectively; each main function creates a new Runnable HMaster/HRegionServer/QuorumPeer, which then keeps running.
8.bin/hbase classpath prints the classpath
9.bin/hbase version prints HBase version information
10.bin/hbase CLASSNAME
Any class that implements a main function can be run through this script. The hlog, hfile, and hbck tools above are essentially shortcut calls to this interface, and classes without a shortcut can be invoked the same way, for example a region merge: $HBASE_HOME/bin/hbase org.apache.hadoop.hbase.util.Merge.
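Under the hood, bin/hbase maps each command name to a Java class and finally execs java with the assembled classpath and that class. The toy resolver below captures only that mapping step; resolve_class is an illustrative name and the list covers just a few commands (the real script also builds CLASSPATH and JVM options before its final exec):

```shell
#!/usr/bin/env bash
# Toy version of bin/hbase's command-to-class resolution. The real script
# ends with something like: exec "$JAVA" ... -classpath "$CLASSPATH" $CLASS "$@"

resolve_class() {
  case $1 in
    (master)       echo org.apache.hadoop.hbase.master.HMaster;;
    (regionserver) echo org.apache.hadoop.hbase.regionserver.HRegionServer;;
    (zookeeper)    echo org.apache.hadoop.hbase.zookeeper.HQuorumPeer;;
    (hbck)         echo org.apache.hadoop.hbase.util.HBaseFsck;;
    (*)            echo "$1";;   # CLASSNAME fallback: run any main class
  esac
}

resolve_class master
resolve_class org.apache.hadoop.hbase.util.Merge   # falls through unchanged
```

The (*) fallback is exactly what makes "bin/hbase CLASSNAME" work: an unrecognized command is treated as a fully qualified class name.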
Summary of script usage:
1. Start the cluster: start-hbase.sh
2. Stop the cluster: stop-hbase.sh
3. Start/stop all regionservers or zookeepers:
hbase-daemons.sh start/stop regionserver/zookeeper
4. Start/stop a single regionserver or zookeeper:
hbase-daemon.sh start/stop regionserver/zookeeper
5. Start/stop a master:
hbase-daemon.sh start/stop master; whether it becomes the active master depends on whether there is currently another active master.
Thank you for reading! This is the end of the article on HBase's startup scripts. I hope the content above is of some help to you; if you think the article is good, please share it for more people to see!