2025-03-31 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report
This article analyzes a Zookeeper ConnectionException thrown during a Sqoop import: the environment, the symptom, the diagnosis, and the fix. I hope you get something out of it.
Environment:
CDH 6.3.0
Kerberos enabled
Java version: 1.8.0_181
Symptoms:
As the ordinary user edwuser, on a CDH client node machine, execute the following sqoop import command to load MySQL data into HBase:
sqoop import \
  --connect jdbc:mysql://db01:3306/test?characterEncoding=UTF-8 \
  --username root \
  --password mysql \
  --table cust_info \
  --hbase-table test:cust_info \
  --column-family cf \
  --hbase-create-table \
  --hbase-row-key id \
  -m 1
The command fails: it gets stuck at the Zookeeper connection step, the connection is refused, and a ConnectionException is thrown.
Analysis steps:
The log shows that the Zookeeper address being connected to is 127.0.0.1, not the cluster's Zookeeper quorum, which is why execution fails.
The first suspicion was that the CDH client configuration had not been deployed properly. In the CM interface, the Sqoop Gateway role was added to the client node machine and the client configuration was redeployed; the retry failed with the same error.
On the CDH client node server, inspect the directories where Zookeeper configuration files may be stored, such as /etc/hbase/conf.cloudera.hbase and /opt/cloudera/parcels/CDH-6.3.0-1.cdh7.3.0.p0.1279813/etc/zookeeper/conf.dist/.
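To confirm what the distributed configuration actually contains, the Zookeeper quorum can be pulled straight out of the client's hbase-site.xml. A minimal sketch, assuming the paths above and a single-line `<value>` element; the `zk_quorum` helper name is made up for illustration:

```shell
# zk_quorum: print the hbase.zookeeper.quorum value from an hbase-site.xml.
# Assumes <value> sits on its own line directly after the <name> line.
zk_quorum() {
  grep -A1 'hbase.zookeeper.quorum' "$1" \
    | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# On a CDH client node (path taken from this article):
#   zk_quorum /etc/hbase/conf.cloudera.hbase/hbase-site.xml
```

If this prints the real quorum hosts, the files CM distributed are intact and the fault must lie in how the client resolves them.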
After checking, all of the configuration files turned out to be distributed correctly, so the problem had nothing to do with cluster client configuration deployment.
That forced a change of direction: if the configuration is normal, could it be a problem with the ordinary user? Why not try a privileged user?
Switching to the root user and executing the same command, the import succeeded, and the Zookeeper connection address was read correctly!
Since client configuration distribution is normal, the cause is either insufficient permissions (though in that case ordinary users usually get a permission error) or the configuration being read from the wrong place (for example, the user resolving the Zookeeper configuration to a non-CDH directory). With that hypothesis, the next round of verification began.
Is it a problem with ordinary users in general? Easy to verify: just try another ordinary user. On another CDH client node machine, logging in as an ordinary user and executing the same Sqoop command, the import also succeeded!
Compare the edwuser environment variables on the two client node machines by running printenv on each (in the comparison, the problem machine is on the left and the normal machine on the right).
On the problem machine, edwuser had explicitly defined several Hadoop-related environment variables in its .bash_profile, including a HIVE_HOME variable.
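Comparing the two dumps by hand is error-prone; a small sketch to diff saved printenv output and surface only Hadoop-ecosystem differences (the `env_diff` helper and the file names are hypothetical):

```shell
# env_diff: show environment lines that differ between two `printenv | sort`
# dumps, keeping only Hadoop-ecosystem variables.
env_diff() {
  diff "$1" "$2" | grep -Ei 'hadoop|hive|hbase|zoo' || true
}

# Capture on each node first, e.g.:
#   printenv | sort > /tmp/env_bad.txt    # problem machine
#   printenv | sort > /tmp/env_good.txt   # normal machine
# Then: env_diff /tmp/env_bad.txt /tmp/env_good.txt
```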
Searching for the keyword HIVE_HOME turned up the following description in an article about Sqoop:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# included in all the hadoop scripts with source command
# should not be executable directly
# also should not be passed any arguments, since we need original $*

# Set Hadoop-specific environment variables here.

# Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/root/hadoop

# Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/root/hadoop/tmp/mapred

# Set the path to where bin/hbase is available
export HBASE_HOME=/root/hbase

# Set the path to where bin/hive is available
export HIVE_HOME=/root/hive

# Set the path for where zookeper config dir is
# export ZOOCFGDIR=
With the community edition of Sqoop you can configure these environment variables. In CDH, however, configuration files are managed and distributed centrally by CM and do not need to be set manually; when these settings appear as user-level environment variables, they override the client configuration that CM distributes.
On the problem machine, the user environment variable was set to HIVE_HOME=/opt/cloudera/parcels/CDH/. Anyone familiar with the parcels directory layout knows that there are no configuration files directly under /opt/cloudera/parcels/CDH/; configuration files only appear in subdirectories several levels further down.
So when sqoop runs, it follows the user environment variable HIVE_HOME to /opt/cloudera/parcels/CDH/, attempts to read configuration files there, and finds none. Sqoop therefore falls back to the default Zookeeper server address of 127.0.0.1. Since no Zookeeper server runs on the client machine, the connection is naturally refused and a ConnectionException is thrown.
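The lookup-and-fallback behaviour can be illustrated with a toy sketch. This is not Sqoop's actual code, only the shape of the failure; `pick_conf_dir` is an invented name:

```shell
# pick_conf_dir: mimic the effect described above. If $HIVE_HOME points at a
# directory that actually contains conf/hbase-site.xml, use that config dir;
# otherwise the client ends up on the built-in default quorum, 127.0.0.1.
pick_conf_dir() {
  if [ -n "${HIVE_HOME:-}" ] && [ -f "$HIVE_HOME/conf/hbase-site.xml" ]; then
    echo "$HIVE_HOME/conf"
  else
    echo "127.0.0.1"   # no config found: fall back to the default quorum
  fi
}

# With HIVE_HOME=/opt/cloudera/parcels/CDH/ there is no conf/hbase-site.xml
# at that level, so the lookup falls through to the default.
```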
Solution:
Since it was not known who had set the HIVE_HOME user environment variable on that machine, or for what purpose, the user's environment file was left untouched.
First run:
unset HIVE_HOME
to mask the HIVE_HOME environment variable for the current session, then execute the sqoop import command; the import succeeds.
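Alternatively, the variable can be masked for a single invocation rather than the whole session. A small demonstration of the scoping, using the path from this article:

```shell
# `env -u` removes the variable for one child command only; the parent
# shell's copy is untouched.
export HIVE_HOME=/opt/cloudera/parcels/CDH/
env -u HIVE_HOME sh -c 'echo "in child: ${HIVE_HOME:-unset}"'
echo "in parent: $HIVE_HOME"
```

So `env -u HIVE_HOME sqoop import ...` would run the import without touching edwuser's login environment at all.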
The above is the analysis of the Zookeeper ConnectionException thrown during a Sqoop import. I hope you learned something useful from it.