
What to do when inserting data into an HBase-backed Hive external table fails?

2025-04-10 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains what to do when inserting data through a Hive external table over HBase fails with an error. The method described is simple, fast, and practical, so let's walk through it.

After Hive successfully connects to the HBase external table, querying HBase data works normally, but inserting data into HBase fails with an error.

Error: java.lang.RuntimeException: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Put.setDurability(Lorg/apache/hadoop/hbase/client/Durability;)V
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:168)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Put.setDurability(Lorg/apache/hadoop/hbase/client/Durability;)V
    at org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat$MyRecordWriter.write(HiveHBaseTableOutputFormat.java:142)
    at org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat$MyRecordWriter.write(HiveHBaseTableOutputFormat.java:117)
    at org.apache.hadoop.hive.ql.io.HivePassThroughRecordWriter.write(HivePassThroughRecordWriter.java:40)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:743)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:97)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:115)
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:169)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:561)
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:159)
    ... 8 more
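A NoSuchMethodError at runtime almost always means the code was compiled against a different version of a class than the one actually loaded: here, Hive's hive-hbase-handler calls Put.setDurability(Durability), which the HBase client jars visible to the job do not provide. One quick way to confirm such a mismatch is a reflection probe. The sketch below is illustrative only and uses java.lang.Thread as a stand-in class, since the HBase jars are not assumed to be on the classpath here; on a real cluster you would load org.apache.hadoop.hbase.client.Put the same way.

```java
// Minimal diagnostic sketch: check at runtime whether the loaded version of a
// class exposes a given method. Thread stands in for the HBase Put class.
public class MethodCheck {
    static boolean hasMethod(Class<?> cls, String name, Class<?>... params) {
        try {
            cls.getMethod(name, params); // throws if absent in the loaded version
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasMethod(Thread.class, "start"));          // true
        System.out.println(hasMethod(Thread.class, "setDurability"));  // false
    }
}
```

It can also help to check which jar a class was actually loaded from, e.g. via `Put.class.getProtectionDomain().getCodeSource()`, since a stale jar earlier on the classpath is a common cause of this error.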

Searching online, I found that MapR has already fixed this problem; see http://doc.mapr.com/display/components/Hive+Release+Notes;jsessionid=73C03B3BB0D8547A19E6CCEF80010D30#HiveReleaseNotes-Hive1.2.1-1601ReleaseNotes

The Hive 1.2.1-1601 Release Notes, specifically the description of commit fe18d11, match my error exactly. But that is MapR's patched build of Hive, and I am running the Apache release; switching to MapR's Hive is not realistic. I tried updating my own Hive jars and configuration to follow their patch, but that only led to new problems. This approach doesn't work, so I had to take another route.

Checking Hive's official releases, there are currently two versions available: apache-hive-1.2.1-bin.tar.gz and apache-hive-2.0.0-bin.tar.gz. I am currently on 1.2.1, so I can try upgrading to 2.0.0.

Download and install version 2.0.0. The main steps are generating the hive-site.xml file (run cp hive-default.xml.template hive-site.xml) and placing the following HBase jars under the hive/lib directory:

guava-14.0.1.jar, protobuf-java-2.5.0.jar, hbase-client-1.1.1.jar, hbase-common-1.1.1.jar, zookeeper-3.4.6.jar, hbase-server-1.1.1.jar
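Concretely, those two steps might look like the following; the paths (/data/hive for HIVE_HOME, and the HBase installation's lib directory as the jar source) are assumptions for this cluster, so adjust them to your environment:

```shell
# Assumed locations -- adjust for your cluster
HIVE_HOME=/data/hive
HBASE_HOME=/data/hbase

# Generate hive-site.xml from the shipped template
cp "$HIVE_HOME/conf/hive-default.xml.template" "$HIVE_HOME/conf/hive-site.xml"

# Copy the HBase client jars Hive needs into hive/lib
for jar in guava-14.0.1.jar protobuf-java-2.5.0.jar \
           hbase-client-1.1.1.jar hbase-common-1.1.1.jar \
           zookeeper-3.4.6.jar hbase-server-1.1.1.jar; do
  cp "$HBASE_HOME/lib/$jar" "$HIVE_HOME/lib/"
done
```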

hive-site.xml (key properties):

<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratchdir for Hive jobs, created with write-all (733) permission. For each connecting user, an HDFS scratchdir ${hive.exec.scratchdir}/ is created with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/data/hive/logs</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/tmp/hive/temp0_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<!-- ... -->
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>password</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>username</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>hive.session.id</name>
  <value>temp0</value>
</property>
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///data/hive/lib/guava-14.0.1.jar,file:///data/hive/lib/protobuf-java-2.5.0.jar,file:///data/hive/lib/hbase-client-1.1.1.jar,file:///data/hive/lib/hbase-common-1.1.1.jar,file:///data/hive/lib/zookeeper-3.4.6.jar,file:///data/hive/lib/hbase-server-1.1.1.jar</value>
  <description>The location of the plugin jars that contain implementations of user defined functions and serdes.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/data/hive/logs</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>slave1,slave2,master,slave4,slave5,slave6,slave7</value>
  <description>List of ZooKeeper servers to talk to. This is needed for: 1. read/write locks, when hive.lock.manager is set to org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager; 2. HiveServer2 service discovery via ZooKeeper; 3. delegation token storage, if the ZooKeeper store is used and hive.cluster.delegation.token.store.zookeeper.connectString is not set.</description>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>slave1,slave2,master,slave4,slave5,slave6,slave7</value>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/data/hive/logs/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>

After modifying the configuration file, start the metastore background process, run hive to enter the Hive command line, and execute an insert into the HBase-backed table: it succeeds.
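For reference, an end-to-end check might look like the HiveQL below. The table and column names (hbase_users, info:name, some_hive_table) are hypothetical, not from the original setup:

```sql
-- Hypothetical HBase-backed external table, names are illustrative only
CREATE EXTERNAL TABLE hbase_users (rowkey string, name string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:name")
TBLPROPERTIES ("hbase.table.name" = "users");

-- SELECT worked even before the upgrade; the INSERT is what used to fail
INSERT INTO TABLE hbase_users SELECT id, name FROM some_hive_table;
```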

At this point, you should have a better understanding of what to do when inserting data into an HBase-backed Hive external table fails. Try it out in practice, and follow us to keep learning!




© 2024 shulou.com SLNews company. All rights reserved.
