
Delete and Add Hadoop + HBase Nodes


This article is divided into four parts: installing and deploying Hadoop + HBase, basic HBase commands, removing a Hadoop + HBase node, and adding a Hadoop + HBase node.

1. Install and configure Hadoop 1.0.3 + HBase 0.92.1

Environment overview

Hostname                              Roles
sht-sgmhadoopcm-01 (172.16.101.54)    NameNode, ZK, HMaster
sht-sgmhadoopdn-01 (172.16.101.58)    DataNode, ZK, HRegionServer
sht-sgmhadoopdn-02 (172.16.101.59)    DataNode, ZK, HRegionServer
sht-sgmhadoopdn-03 (172.16.101.60)    DataNode, HRegionServer
sht-sgmhadoopdn-04 (172.16.101.66)    DataNode, HRegionServer

All operations use the tnuser user; passwordless SSH trust must be configured between every pair of nodes.
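One common way to establish this trust is with the stock OpenSSH tools; a minimal sketch (these exact commands are not from the original article):

# Run as tnuser on each node: generate a key pair once, then push the
# public key to every node in the cluster (including the local one).
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
for host in sht-sgmhadoopcm-01 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03 sht-sgmhadoopdn-04; do
    ssh-copy-id tnuser@$host    # appends the public key to the remote ~/.ssh/authorized_keys
done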

Each node needs JDK 1.6.0_12 installed, with the environment variables configured:

[tnuser@sht-sgmhadoopcm-01 ~]$ cat .bash_profile
export JAVA_HOME=/usr/java/jdk1.6.0_12
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/usr/local/contentplatform/hadoop-1.0.3
export HBASE_HOME=/usr/local/contentplatform/hbase-0.92.1
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin

[tnuser@sht-sgmhadoopcm-01 ~]$ rsync -avz --progress ~/.bash_profile sht-sgmhadoopdn-01:~/
[tnuser@sht-sgmhadoopcm-01 ~]$ rsync -avz --progress ~/.bash_profile sht-sgmhadoopdn-02:~/
[tnuser@sht-sgmhadoopcm-01 ~]$ rsync -avz --progress ~/.bash_profile sht-sgmhadoopdn-03:~/
[tnuser@sht-sgmhadoopcm-01 ~]$ rsync -avz --progress ~/.bash_profile sht-sgmhadoopdn-04:~/

Create the required directories

[tnuser@sht-sgmhadoopcm-01 contentplatform]$ mkdir -p /usr/local/contentplatform/data/dfs/{name,data}
[tnuser@sht-sgmhadoopcm-01 contentplatform]$ mkdir -p /usr/local/contentplatform/temp
[tnuser@sht-sgmhadoopcm-01 contentplatform]$ mkdir -p /usr/local/contentplatform/logs/{hadoop,hbase}

Modify the relevant configuration files for Hadoop

[tnuser@sht-sgmhadoopcm-01 conf]$ vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_12
export HADOOP_HEAPSIZE=3072
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_LOG_DIR=/usr/local/contentplatform/logs/hadoop

[tnuser@sht-sgmhadoopcm-01 conf]$ cat core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/contentplatform/temp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://sht-sgmhadoopcm-01:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.tnuser.hosts</name>
    <value>sht-sgmhadoopdn-01.telenav.cn</value>
  </property>
  <property>
    <name>hadoop.proxyuser.tnuser.groups</name>
    <value>appuser</value>
  </property>
</configuration>

[tnuser@sht-sgmhadoopcm-01 conf]$ cat hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/contentplatform/data/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/contentplatform/data/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
  <property>
    <name>dfs.datanode.dns.nameserver</name>
    <value>10.224.0.102</value>
  </property>
  <property>
    <name>mapred.min.split.size</name>
    <value>100663296</value>
  </property>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>3000000</value>
  </property>
  <property>
    <name>dfs.socket.timeout</name>
    <value>3000000</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>0.0.0.0:50070</value>
  </property>
</configuration>

[tnuser@sht-sgmhadoopcm-01 conf]$ cat mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>sht-sgmhadoopcm-01:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/usr/local/contentplatform/data/mapred/system/</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/usr/local/contentplatform/data/mapred/local/</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>io.sort.mb</name>
    <value>200</value>
    <final>true</final>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>20</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.task.timeout</name>
    <value>7200000</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
</configuration>

Modify the relevant configuration files for HBase

[tnuser@sht-sgmhadoopcm-01 conf]$ cat hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_12
export HBASE_HEAPSIZE=5120
export HBASE_LOG_DIR=/usr/local/contentplatform/logs/hbase
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_OPTS="-server -Djava.net.preferIPv4Stack=true -XX:+UseParallelGC -XX:ParallelGCThreads=4 -XX:+AggressiveHeap -XX:+HeapDumpOnOutOfMemoryError"
export HBASE_MANAGES_ZK=true    # true: HBase manages its bundled ZooKeeper, so no separate ZK installation is required

[tnuser@sht-sgmhadoopcm-01 conf]$ cat regionservers
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
sht-sgmhadoopdn-04

[tnuser@sht-sgmhadoopcm-01 conf]$ cat hbase-site.xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>sht-sgmhadoopcm-01,sht-sgmhadoopdn-01,sht-sgmhadoopdn-02</value>
  </property>
  <property>
    <name>hbase.zookeeper.dns.nameserver</name>
    <value>10.224.0.102</value>
  </property>
  <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>10.224.0.102</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/contentplatform/data/zookeeper</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://sht-sgmhadoopcm-01:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are false: standalone and pseudo-distributed setups with managed Zookeeper; true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)</description>
  </property>
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>536870912</value>
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.upperLimit</name>
    <value>0.2</value>
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.lowerLimit</name>
    <value>0.1</value>
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.5</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.regionserver.lease.period</name>
    <value>1800000</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>1800000</value>
  </property>
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>40</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>900000</value>
  </property>
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>hbase.hstore.compaction.max</name>
    <value>30</value>
  </property>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>10</value>
  </property>
</configuration>

Copy the entire directory tree to the other nodes

[tnuser@sht-sgmhadoopcm-01 contentplatform]$ ll /usr/local/contentplatform/
total 103572
drwxr-xr-x  4 tnuser appuser       34 Apr  6 21:41 data
drwxr-xr-x 14 tnuser appuser     4096 May  9  2012 hadoop-1.0.3
-rw-r--r--  1 tnuser appuser 62428860 Apr  5 14:59 hadoop-1.0.3.tar.gz
drwxr-xr-x 10 tnuser appuser      255 Apr  6 21:36 hbase-0.92.1
-rw-r--r--  1 tnuser appuser 43621631 Apr  5 15:00 hbase-0.92.1.tar.gz
drwxr-xr-x  4 tnuser appuser       33 Apr  6 22:43 logs
drwxr-xr-x  3 tnuser appuser       17 Apr  6 20:44 temp

[root@sht-sgmhadoopcm-01 local]# rsync -avz --progress /usr/local/contentplatform sht-sgmhadoopdn-01:/usr/local/
[root@sht-sgmhadoopcm-01 local]# rsync -avz --progress /usr/local/contentplatform sht-sgmhadoopdn-02:/usr/local/
[root@sht-sgmhadoopcm-01 local]# rsync -avz --progress /usr/local/contentplatform sht-sgmhadoopdn-03:/usr/local/
[root@sht-sgmhadoopcm-01 local]# rsync -avz --progress /usr/local/contentplatform sht-sgmhadoopdn-04:/usr/local/

Start hadoop

[tnuser@sht-sgmhadoopcm-01 data]$ hadoop namenode -format

[tnuser@sht-sgmhadoopcm-01 bin]$ start-all.sh

[tnuser@sht-sgmhadoopcm-01 conf]$ jps

6008 NameNode

6392 Jps

6191 SecondaryNameNode

6279 JobTracker

Access the HDFS file system:

http://172.16.101.54:50070

http://172.16.101.59:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
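The same information is also available from the command line, for example (a quick sanity check, not a step in the original article):

[tnuser@sht-sgmhadoopcm-01 ~]$ hadoop dfsadmin -report    # live/dead DataNodes and capacity summary
[tnuser@sht-sgmhadoopcm-01 ~]$ hadoop fsck /              # block and replication health of HDFS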

Start hbase

[tnuser@sht-sgmhadoopcm-01 ~]$ start-hbase.sh

[tnuser@sht-sgmhadoopcm-01 ~]$ jps

3792 HQuorumPeer

4103 Jps

3876 HMaster

3142 NameNode

3323 SecondaryNameNode

3408 JobTracker

http://172.16.101.54:60010

2. HBase basic commands

Check the hbase running status:

hbase(main):001:0> status
4 servers, 0 dead, 0.7500 average load

Create table t1:

hbase(main):008:0> create 't1','info'

View all tables:

hbase(main):009:0> list
TABLE
t1

View the corresponding hdfs files:

[tnuser@sht-sgmhadoopcm-01 hbase-0.92.1]$ hadoop dfs -ls /hbase/
Found 7 items
drwxr-xr-x   - tnuser supergroup   0 2019-04-06 22:41 /hbase/-ROOT-
drwxr-xr-x   - tnuser supergroup   0 2019-04-06 22:41 /hbase/.META.
drwxr-xr-x   - tnuser supergroup   0 2019-04-06 23:14 /hbase/.logs
drwxr-xr-x   - tnuser supergroup   0 2019-04-06 22:41 /hbase/.oldlogs
-rw-r--r--   3 tnuser supergroup  38 2019-04-06 22:41 /hbase/hbase.id
-rw-r--r--   3 tnuser supergroup   3 2019-04-06 22:41 /hbase/hbase.version
drwxr-xr-x   - tnuser supergroup   0 2019-04-07 16:53 /hbase/t1

View table details:

hbase(main):017:0> describe 't1'
DESCRIPTION                                                              ENABLED
{NAME => 't1', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE',      true
REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE',
MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}

Determine whether a table exists:

hbase(main):018:0> exists 't1'
Table t1 does exist

Check whether a table is disabled or enabled (use disable/enable to toggle):

hbase(main):019:0> is_disabled 't1'
false
(or: disable 't1')
hbase(main):020:0> is_enabled 't1'
true
(or: enable 't1')

Insert a record with put:

hbase(main):010:0> put 't1','row1','info:name','xiaoming'

Query records with get:

hbase(main):014:0> get 't1','row1'
COLUMN                  CELL
info:name               timestamp=1554621994538, value=xiaoming
hbase(main):015:0> get 't1','row2'
COLUMN                  CELL
info:age                timestamp=1554623754957, value=18
hbase(main):017:0> get 't1','row2',{COLUMN => 'info:age'}
COLUMN                  CELL
info:age                timestamp=1554623754957, value=18

Range scans:

hbase(main):026:0> scan 't1'
ROW                     COLUMN+CELL
row1                    column=info:name, timestamp=1554621994538, value=xiaoming
row2                    column=info:age, timestamp=1554625223482, value=18
row3                    column=info:sex, timestamp=1554625229782, value=male
hbase(main):027:0> scan 't1',{LIMIT => 2}
ROW                     COLUMN+CELL
row1                    column=info:name, timestamp=1554621994538, value=xiaoming
row2                    column=info:age, timestamp=1554625223482, value=18
hbase(main):034:0> scan 't1',{STARTROW => 'row2'}
ROW                     COLUMN+CELL
row2                    column=info:age, timestamp=1554625223482, value=18
row3                    column=info:sex, timestamp=1554625229782, value=male
hbase(main):038:0> scan 't1',{STARTROW => 'row2', ENDROW => 'row3'}
ROW                     COLUMN+CELL
row2                    column=info:age, timestamp=1554625223482, value=18

Count rows:

hbase(main):042:0> count 't1'
3 row(s) in 0.0200 seconds

Delete a column of the specified row:

hbase(main):013:0> delete 't1','row1','info:name'

Delete all cells in a row:

hbase(main):047:0> deleteall 't1','row2'

Empty a table:

hbase(main):049:0> truncate 't1'
Truncating 't1' table (it may take a while):
- Disabling table...
- Dropping table...
- Creating table...
0 row(s) in 4.8050 seconds

Delete a table (disable it before dropping):

hbase(main):058:0> disable 't1'
hbase(main):059:0> drop 't1'

Create HBase test data

create 't1','info'
put 't1','row1','info:name','xiaoming'
put 't1','row2','info:age','18'
put 't1','row3','info:sex','male'
create 'emp','personal','professional'
put 'emp','1','personal:name','raju'
put 'emp','1','personal:city','hyderabad'
put 'emp','1','professional:designation','manager'
put 'emp','1','professional:salary','50000'
put 'emp','2','personal:name','ravi'
put 'emp','2','personal:city','chennai'
put 'emp','2','professional:designation','sr.engineer'
put 'emp','2','professional:salary','30000'
put 'emp','3','personal:name','rajesh'
put 'emp','3','personal:city','delhi'
put 'emp','3','professional:designation','jr.engineer'
put 'emp','3','professional:salary','25000'

hbase(main):040:0> scan 't1'
ROW                     COLUMN+CELL
row1                    column=info:name, timestamp=1554634306493, value=xiaoming
row2                    column=info:age, timestamp=1554634306540, value=18
row3                    column=info:sex, timestamp=1554634307409, value=male
3 row(s) in 0.0290 seconds
hbase(main):041:0> scan 'emp'
ROW                     COLUMN+CELL
1                       column=personal:city, timestamp=1554634236024, value=hyderabad
1                       column=personal:name, timestamp=1554634235959, value=raju
1                       column=professional:designation, timestamp=1554634236063, value=manager
1                       column=professional:salary, timestamp=1554634237419, value=50000
2                       column=personal:city, timestamp=1554634241879, value=chennai
2                       column=personal:name, timestamp=1554634241782, value=ravi
2                       column=professional:designation, timestamp=1554634241920, value=sr.engineer
2                       column=professional:salary, timestamp=1554634242923, value=30000
3                       column=personal:city, timestamp=1554634246842, value=delhi
3                       column=personal:name, timestamp=1554634246784, value=rajesh
3                       column=professional:designation, timestamp=1554634246879, value=jr.engineer
3                       column=professional:salary, timestamp=1554634247692, value=25000
3 row(s) in 0.0330 seconds

3. Remove a Hadoop + HBase node

First, remove the HBase node on sht-sgmhadoopdn-04.

The graceful_stop.sh script automatically turns off the balancer and moves the node's regions to other RegionServers. If the data volume is large, this step may take a long time.
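Conceptually, the script amounts to the following sequence (a hand-written sketch for illustration only; the real script drives the region moves through its bundled region_mover.rb and adds error handling):

echo "balance_switch false" | hbase shell                      # stop the balancer so regions are not moved back
# ...move every region off the target host; graceful_stop.sh uses region_mover.rb for this step...
ssh sht-sgmhadoopdn-04 "hbase-daemon.sh stop regionserver"     # finally stop the RegionServer itself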

[tnuser@sht-sgmhadoopcm-01 bin]$ graceful_stop.sh sht-sgmhadoopdn-04
Disabling balancer!
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 0.92.1, r1298924, Fri Mar 9 16:58:34 UTC 2012
balance_switch false
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
true
0 row(s) in 0.7580 seconds
Unloading sht-sgmhadoopdn-04 region(s)
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopcm-01
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_12
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.6.0_12/jre
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/usr/local/contentplatform/hbase-0.92.1/lib/native/Linux-amd64-64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.name=tnuser
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/tnuser
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/contentplatform/hbase-0.92.1/bin
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopcm-01:2181,sht-sgmhadoopdn-02:2181 sessionTimeout=900000 watcher=hconnection
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.58:2181
19/04/07 20:11:14 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24569@sht-sgmhadoopcm-01.telenav.cn
19/04/07 20:11:14 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/07 20:11:14 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01/172.16.101.58:2181, initiating session
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01/172.16.101.58:2181, sessionid = 0x169f7b052050003, negotiated timeout = 900000
19/04/07 20:11:15 INFO region_mover: Moving 2 region(s) from sht-sgmhadoopdn-04,60020,1554638724252 during this cycle
19/04/07 20:11:15 INFO region_mover: Moving region 1028785192 (0 of 2) to server=sht-sgmhadoopdn-01,60020,1554638723581
19/04/07 20:11:16 INFO region_mover: Moving region d3a10ae012afde8e1e401a2e400accc8 (1 of 2) to server=sht-sgmhadoopdn-01,60020,1554638723581
19/04/07 20:11:17 INFO region_mover: Wrote list of moved regions to /tmp/sht-sgmhadoopdn-04
Unloaded sht-sgmhadoopdn-04 region(s)
sht-sgmhadoopdn-04: stopping regionserver....

Re-enable the balancer:

[tnuser@sht-sgmhadoopcm-01 hbase]$ echo "balance_switch true" | hbase shell

View the hbase node status:

[tnuser@sht-sgmhadoopcm-01 hbase]$ echo "status" | hbase shell
3 servers, 1 dead, 1.3333 average load

[tnuser@sht-sgmhadoopdn-04 hbase]$ jps
23940 Jps
23375 DataNode
23487 TaskTracker

http://172.16.101.54:60010

Then remove the DataNode and TaskTracker on sht-sgmhadoopdn-04.

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/include
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/excludes
sht-sgmhadoopdn-04

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/hdfs-site.xml
  <property>
    <name>dfs.hosts</name>
    <value>/usr/local/contentplatform/hadoop-1.0.3/conf/include</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/local/contentplatform/hadoop-1.0.3/conf/excludes</value>
    <final>true</final>
  </property>

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/mapred-site.xml
  <property>
    <name>mapred.hosts</name>
    <value>/usr/local/contentplatform/hadoop-1.0.3/conf/include</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.hosts.exclude</name>
    <value>/usr/local/contentplatform/hadoop-1.0.3/conf/excludes</value>
    <final>true</final>
  </property>

Reload the configuration. The NameNode will re-replicate the decommissioning node's blocks to other nodes to restore the replication factor, but it will not delete the original data on sht-sgmhadoopdn-04. If the data volume is large, this process is time-consuming.

-refreshNodes: Re-read the hosts and exclude files to update the set of Datanodes that are allowed to connect to the Namenode and those that should be decommissioned or recommissioned.

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop dfsadmin -refreshNodes
[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop mradmin -refreshNodes

If the DataNode and TaskTracker processes on sht-sgmhadoopdn-04 are still alive, stop them (normally they were already stopped in the previous step):

[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop-daemon.sh stop datanode
[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop-daemon.sh stop tasktracker

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.
Configured Capacity: 246328578048 (229.41 GB)
Present Capacity: 93446351917 (87.03 GB)
DFS Remaining: 93445607424 (87.03 GB)
DFS Used: 744493 (727.04 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 3 (4 total, 1 dead)

Name: 172.16.101.58:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 259087 (253.01 KB)
Non DFS Used: 57951808497 (53.97 GB)
DFS Remaining: 24157458432 (22.5 GB)
DFS Used%: 0%
DFS Remaining%: 29.42%
Last contact: Sun Apr 07 20:45:42 CST 2019

Name: 172.16.101.60:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 246799 (241.01 KB)
Non DFS Used: 45172382705 (42.07 GB)
DFS Remaining: 36936896512 (34.4 GB)
DFS Used%: 0%
DFS Remaining%: 44.98%
Last contact: Sun Apr 07 20:45:43 CST 2019

Name: 172.16.101.59:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 238607 (233.01 KB)
Non DFS Used: 49758034929 (46.34 GB)
DFS Remaining: 32351252480 (30.13 GB)
DFS Used%: 0%
DFS Remaining%: 39.4%
Last contact: Sun Apr 07 20:45:42 CST 2019

Name: 172.16.101.66:50010
Decommission Status : Decommissioned
Configured Capacity: 0 (0 KB)
DFS Used: 0 (0 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Thu Jan 01 08:00:00 CST 1970

At this point the node sht-sgmhadoopdn-04 has no Hadoop processes:

[tnuser@sht-sgmhadoopdn-04 hbase]$ jps
23973 Jps

The data on sht-sgmhadoopdn-04 is still retained:

[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop dfs -ls /hbase
Warning: $HADOOP_HOME is deprecated.
Found 8 items
drwxr-xr-x   - tnuser supergroup   0 2019-04-07 17:46 /hbase/-ROOT-
drwxr-xr-x   - tnuser supergroup   0 2019-04-07 18:23 /hbase/.META.
drwxr-xr-x   - tnuser supergroup   0 2019-04-07 20:11 /hbase/.logs
drwxr-xr-x   - tnuser supergroup   0 2019-04-07 20:45 /hbase/.oldlogs
drwxr-xr-x   - tnuser supergroup   0 2019-04-07 18:50 /hbase/emp
-rw-r--r--   3 tnuser supergroup  38 2019-04-06 22:41 /hbase/hbase.id
-rw-r--r--   3 tnuser supergroup   3 2019-04-06 22:41 /hbase/hbase.version
drwxr-xr-x   - tnuser supergroup   0 2019-04-07 18:51 /hbase/t1

Rebalance the data files across the remaining nodes:

[tnuser@sht-sgmhadoopcm-01 conf]$ start-balancer.sh -threshold 10
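Both the decommission and the balancer run in the background. One convenient way to watch progress is to poll the report until 172.16.101.66 flips from "Decommission in progress" to "Decommissioned" (an illustrative command, not from the original):

[tnuser@sht-sgmhadoopcm-01 ~]$ watch -n 30 'hadoop dfsadmin -report | grep -A 2 "Name: 172.16.101.66"'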

Finally, modify some configuration files

Delete the sht-sgmhadoopdn-04 line in the regionservers file:

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/regionservers
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03

[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

Delete the sht-sgmhadoopdn-04 line from the slaves file:

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/slaves
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03

[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-01:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-02:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-03:/usr/local/contentplatform/hadoop-1.0.3/conf/

Delete the excludes file and remove (or comment out) the corresponding entries in hdfs-site.xml and mapred-site.xml:

[tnuser@sht-sgmhadoopcm-01 conf]$ rm -rf /usr/local/contentplatform/hadoop-1.0.3/conf/excludes
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/hdfs-site.xml
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/mapred-site.xml

Restart hadoop and hbase

[tnuser@sht-sgmhadoopcm-01 conf]$ stop-hbase.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ stop-all.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ start-all.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ start-hbase.sh

Check the data:

hbase(main):040:0> scan 't1'
ROW                     COLUMN+CELL
row1                    column=info:name, timestamp=1554634306493, value=xiaoming
row2                    column=info:age, timestamp=1554634306540, value=18
row3                    column=info:sex, timestamp=1554634307409, value=male
3 row(s) in 0.0290 seconds
hbase(main):041:0> scan 'emp'
ROW                     COLUMN+CELL
1                       column=personal:city, timestamp=1554634236024, value=hyderabad
1                       column=personal:name, timestamp=1554634235959, value=raju
1                       column=professional:designation, timestamp=1554634236063, value=manager
1                       column=professional:salary, timestamp=1554634237419, value=50000
2                       column=personal:city, timestamp=1554634241879, value=chennai
2                       column=personal:name, timestamp=1554634241782, value=ravi
2                       column=professional:designation, timestamp=1554634241920, value=sr.engineer
2                       column=professional:salary, timestamp=1554634242923, value=30000
3                       column=personal:city, timestamp=1554634246842, value=delhi
3                       column=personal:name, timestamp=1554634246784, value=rajesh
3                       column=professional:designation, timestamp=1554634246879, value=jr.engineer
3                       column=professional:salary, timestamp=1554634247692, value=25000
3 row(s) in 0.0330 seconds

4. Add a Hadoop + HBase node

First, add the Hadoop node on sht-sgmhadoopdn-04.

Preparatory work:

Java environment, SSH mutual trust, and the /etc/hosts file.
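For this cluster, the /etc/hosts entries on every node would look like the following (reconstructed from the host table in part 1; adjust if DNS already resolves these names):

172.16.101.54   sht-sgmhadoopcm-01
172.16.101.58   sht-sgmhadoopdn-01
172.16.101.59   sht-sgmhadoopdn-02
172.16.101.60   sht-sgmhadoopdn-03
172.16.101.66   sht-sgmhadoopdn-04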

Add a sht-sgmhadoopdn-04 line in the slaves file and synchronize it to the other nodes:

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/slaves
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
sht-sgmhadoopdn-04

[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-01:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-02:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-03:/usr/local/contentplatform/hadoop-1.0.3/conf/

Add a sht-sgmhadoopdn-04 line in the regionservers file and synchronize it to the other nodes:

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/regionservers
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
sht-sgmhadoopdn-04

[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

Delete any existing data and log files on sht-sgmhadoopdn-04:

rm -rf /usr/local/contentplatform/data/dfs/name/*
rm -rf /usr/local/contentplatform/data/dfs/data/*
rm -rf /usr/local/contentplatform/data/mapred/local/*
rm -rf /usr/local/contentplatform/data/zookeeper/*
rm -rf /usr/local/contentplatform/logs/hadoop/*
rm -rf /usr/local/contentplatform/logs/hbase/*

Start the DataNode and TaskTracker on sht-sgmhadoopdn-04:

[tnuser@sht-sgmhadoopdn-04 conf]$ hadoop-daemon.sh start datanode
[tnuser@sht-sgmhadoopdn-04 conf]$ hadoop-daemon.sh start tasktracker

Check the live nodes:

[tnuser@sht-sgmhadoopcm-01 contentplatform]$ hadoop dfsadmin -report
http://172.16.101.54:50070

Rebalance data from the NameNode:

[tnuser@sht-sgmhadoopcm-01 conf]$ start-balancer.sh -threshold 10

Then add the HBase node on sht-sgmhadoopdn-04.

Start the RegionServer on sht-sgmhadoopdn-04, then check the HBase node status:

[tnuser@sht-sgmhadoopdn-04 conf]$ hbase-daemon.sh start regionserver

http://172.16.101.54:60010

Add a backup master

Add the configuration file and synchronize it to all nodes:

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters
sht-sgmhadoopdn-01

rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

Start hbase; if the hbase cluster is already running, restart it:

[tnuser@sht-sgmhadoopcm-01 conf]$ stop-hbase.sh
[tnuser@sht-sgmhadoopcm-01 conf]$ start-hbase.sh

The backup master's log shows it waiting for the active master:

[tnuser@sht-sgmhadoopdn-01 conf]$ vim /usr/local/contentplatform/logs/hbase/hbase-tnuser-master-sht-sgmhadoopdn-01.log
2019-04-12 13:58:50,893 DEBUG org.apache.hadoop.hbase.master.HMaster: HMaster started in backup mode. Stalling until master znode is written.
2019-04-12 13:58:50 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Node /hbase/master already exists and this is not a retry
2019-04-12 13:58:50 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Adding ZNode for /hbase/backup-masters/sht-sgmhadoopdn-01,60000,1555048730644 in backup master directory
2019-04-12 13:58:50,941 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Another master is the active master, sht-sgmhadoopcm-01,60000,1555048728172; waiting to become the next active master

[tnuser@sht-sgmhadoopcm-01 hbase-0.92.1]$ jps
2913 JobTracker
2823 SecondaryNameNode
3667 Jps
3410 HMaster
3332 HQuorumPeer
2639 NameNode

[tnuser@sht-sgmhadoopdn-01 conf]$ jps
7539 HQuorumPeer
7140 DataNode
7893 HMaster
8054 Jps
7719 HRegionServer
7337 TaskTracker

Failover test: kill the active HMaster on sht-sgmhadoopcm-01 and the backup master takes over.

[tnuser@sht-sgmhadoopcm-01 hbase]$ cat /tmp/hbase-tnuser-master.pid | xargs kill -9

http://172.16.101.58:60010
