Example Analysis of an Exception from Sqoop to Hive Table

2025-02-24 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 Report --

This article presents an example analysis of an exception that occurs when Sqoop extracts data into a Hive table. The content is straightforward and clearly organized; hopefully it helps resolve your doubts. Follow along below to study the problem.

1. Test environment description

The CDH cluster has HA enabled for HDFS

CM and CDH are version 5.14.2

2. Problem description

When Sqoop is used to extract MySQL data into a Hive table, the extraction itself executes normally, but the step that loads the data into the Hive table reports "Operation category READ is not supported in state standby". The Sqoop extraction script is as follows:

export HADOOP_USER_NAME=hive
sqoop import \
  --connect "jdbc:mysql://cdh02.fayson.com:3306/cm" \
  --username cm \
  --password password \
  --table HOSTS \
  --fields-terminated-by ',' \
  --target-dir /user/hive/warehouse/test.db/hive_hosts \
  --delete-target-dir \
  --hive-import \
  --hive-overwrite \
  --hive-database test \
  --hive-table hive_hosts \
  -m 1


Executing the above Sqoop script on the command line shows the job running normally.

The active NameNode is cdh02.fayson.com.

Switch the active NameNode to cdh01.fayson.com.
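As an aside, the active/standby state of each NameNode can be confirmed from the shell. This is a minimal sketch assuming the HA nameservice is named nameservice1 (as later in this article); the NameNode service IDs below are invented placeholders, since CM generates them per cluster, so list yours first:

```shell
# List the NameNode service IDs registered for the nameservice:
hdfs getconf -confKey dfs.ha.namenodes.nameservice1

# Query each ID's state; one should report "active", the other "standby"
# (replace namenode137/namenode171 with the IDs printed above):
hdfs haadmin -getServiceState namenode137
hdfs haadmin -getServiceState namenode171
```

Running this before and after the failover makes it easy to confirm which host is currently active.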

Execute the script again: it hangs at "Loading data to table test.hive_hosts" for a long time with no response.

According to the error, HDFS cannot move the data directory. Checking shows that both data directories can be accessed normally.

3. Exception analysis and handling

1. The exception message shows two different HDFS access paths. Since HA has been enabled for HDFS on this cluster, the HDFS path pointing at the pre-HA NameNode should no longer appear.

2. Check the Location addresses in Hive Metastore's database and table metadata by executing the following commands on the node where the Hive Metastore service is located:

[root@cdh01 ~]# export HIVE_CONF_DIR="/var/run/cloudera-scm-agent/process/`ls -1 /var/run/cloudera-scm-agent/process | grep HIVEMETASTORE | sort -n | tail -1`"

[root@cdh01 ~]# HADOOP_CREDSTORE_PASSWORD=$(strings /proc/$(ps -ef | grep HiveMetaStore | grep java | awk '{print $2}')/environ | grep CRED | awk -F= '{print $2}') hive --auxpath /usr/share/java/ --config $HIVE_CONF_DIR --service metatool -listFSRoot

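The first command relies on a small shell idiom worth noting: Cloudera agent process directories carry an increasing run number in their names, so `sort -n | tail -1` selects the most recently created HIVEMETASTORE directory. A minimal sketch of the idiom using mock directory names invented for illustration:

```shell
# Mock the CM agent process directory layout (run numbers are made up):
demo=$(mktemp -d)
mkdir -p "$demo/100-hive-HIVEMETASTORE" "$demo/250-hive-HIVEMETASTORE" "$demo/99-hive-HIVESERVER2"

# Same selection idiom as in the command above: numeric sort on the leading
# run number, then take the last (highest-numbered, i.e. newest) entry.
latest=$(ls -1 "$demo" | grep HIVEMETASTORE | sort -n | tail -1)
echo "$latest"   # 250-hive-HIVEMETASTORE
```

This matters because an old process directory would hand `hive` a stale hive-site.xml, so the `tail -1` is what guarantees the current Metastore configuration is used.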

3. The check shows that the Hive Metastore Location addresses still point to the old NameNode address, which is where the problem lies. Next, update the NameNode address that Hive Metastore points to.

4. There are two ways to update the Location address of Hive Metastore: one is to operate in the CM interface, and the other is to update manually on the command line.

Method 1:

Log in to the Cloudera Manager management interface, enter the Hive service, stop the Hive service first, and then execute "Update Hive Metastore NameNode" from the Hive service's menu.

After the update completes, start the Hive service, and run the above command on the node where the Hive Metastore service resides to verify that the update succeeded. If it did not succeed, use the second, manual method.

Method 2:

Execute the following commands on the node where the Hive Metastore service is located to update the Hive metadata Location addresses:

[root@cdh01 ~]# export HIVE_CONF_DIR="/var/run/cloudera-scm-agent/process/`ls -1 /var/run/cloudera-scm-agent/process | grep HIVEMETASTORE | sort -n | tail -1`"

[root@cdh01 ~]# HADOOP_CREDSTORE_PASSWORD=$(strings /proc/$(ps -ef | grep HiveMetaStore | grep java | awk '{print $2}')/environ | grep CRED | awk -F= '{print $2}') hive --auxpath /usr/share/java/ --config $HIVE_CONF_DIR --service metatool -updateLocation hdfs://nameservice1 hdfs://cdh01.fayson.com:8020

[root@cdh01 ~]# HADOOP_CREDSTORE_PASSWORD=$(strings /proc/$(ps -ef | grep HiveMetaStore | grep java | awk '{print $2}')/environ | grep CRED | awk -F= '{print $2}') hive --auxpath /usr/share/java/ --config $HIVE_CONF_DIR --service metatool -listFSRoot


Check the update result

Note: when manually updating the NameNode address of the Hive Metastore, in the last two parameters of the -updateLocation command, hdfs://nameservice1 is the new (post-update) value and hdfs://cdh01.fayson.com:8020 is the old (pre-update) value.
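To make that argument order concrete, here is a toy shell simulation of the rewrite that -updateLocation performs on each stored Location URI (the warehouse path below is a hypothetical example; metatool itself edits the metastore database rather than shell strings):

```shell
# metatool -updateLocation <new-fs-root> <old-fs-root>: new comes FIRST.
new="hdfs://nameservice1"
old="hdfs://cdh01.fayson.com:8020"

# A stored Location as it might look before the update (made-up path):
loc="hdfs://cdh01.fayson.com:8020/user/hive/warehouse/test.db/hive_hosts"

# The update replaces the old FS root prefix with the new one:
updated=$(printf '%s\n' "$loc" | sed "s|^$old|$new|")
echo "$updated"   # hdfs://nameservice1/user/hive/warehouse/test.db/hive_hosts
```

After the rewrite, every Location resolves through the HA nameservice, so Hive no longer cares which physical NameNode happens to be active.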

4. Sqoop command verification

Running the Sqoop script again, the command executes normally.

That is the full content of "Example Analysis of an Exception from Sqoop to Hive Table". Thank you for reading! Hopefully sharing this content has been helpful.
