2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
After the Kerberos realm associated with a CDH cluster has been changed several times, Spark can fail to resolve short-name mappings. Many people are stuck when they first hit this problem, so this article walks through its cause and solution step by step.
Environment:
CDH 6.3.1
Oracle JDK 1.8.0_181
Kerberos is enabled.
Symptoms:
The Kerberos realm associated with this cluster was previously changed from XY.COM.CN to XY.COM, and has now been changed back from XY.COM to XY.COM.CN.
After the modification is completed, the cluster restarts and Cloudera Manager (hereinafter referred to as CM) shows that everything is normal.
However, when using Spark, executing any of its command-line tools (spark-shell, spark-submit, etc.) on a node other than the Kerberos server fails immediately, reporting that no rule exists for deriving the short name:
[main] util.KerberosName(KerberosName.java:getShortName(401)) - No auth_to_local rules applied to cloudera-scm/admin@XY.COM does not exist
Solution process:
The error message shows that spark-shell is still attempting the short-name mapping against the previous Kerberos realm.
In the error message, the getShortName method is still looking up a resolution rule for the Kerberos realm XY.COM; but that realm no longer exists, and the current rules correspond to XY.COM.CN, so no valid short-name rule can be applied to cloudera-scm/admin@XY.COM.
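The failure mode can be pictured with a toy sketch (illustrative only; the names here are not real Hadoop APIs): mapping rules are keyed to the realm they were generated for, so a principal carrying the stale realm XY.COM finds nothing to apply.

```python
# Toy model of auth_to_local lookup -- illustrative, not Hadoop's
# actual KerberosName implementation.
RULES = {
    # Rules were (re)generated for the current realm only.
    "XY.COM.CN": lambda name: name.split("/")[0],
}

def short_name(principal):
    name, realm = principal.split("@")
    rule = RULES.get(realm)
    if rule is None:
        # Mirrors the Spark error: no rule covers the stale realm.
        raise ValueError(f"No auth_to_local rules applied to {principal}")
    return rule(name)

print(short_name("cloudera-scm/admin@XY.COM.CN"))  # -> cloudera-scm
try:
    short_name("cloudera-scm/admin@XY.COM")
except ValueError as exc:
    print(exc)  # No auth_to_local rules applied to cloudera-scm/admin@XY.COM
```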
In CDH, Spark's command-line tools read the client configuration files deployed by CM. The first guess, therefore, was that CM failed to deploy the new client configuration after the cluster's associated Kerberos realm was changed.
Refreshing and redeploying the client configuration for each service from the CM interface did not help; the same error persisted.
At this point, the inference is that CM has a defect in regenerating client configuration when the Kerberos realm of an already-Kerberized cluster is changed, so the correct configuration files are never distributed.
The fix, then, is to find an option that triggers redelivery of the configuration files related to Kerberos principal mapping, modify it, and redeploy, so that the correct configuration overwrites the stale client configuration on each node.
Open CM, go to the HDFS service, and locate the configuration option Additional Rules for Mapping Kerberos Principals to Short Names.
This option exists so that principals that do not follow standard Kerberos naming conventions can be given custom short-name mappings. Normally, for a standard principal name (such as hive/manager1@XY.COM.CN or hive@XY.COM.CN), the option's default value "DEFAULT" resolves the user name correctly (hive/manager1@XY.COM.CN maps to the short name hive) without any manual rules.
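Roughly, the DEFAULT behavior can be sketched as follows (a simplification, not the real implementation; it assumes XY.COM.CN is the cluster's default realm):

```python
DEFAULT_REALM = "XY.COM.CN"  # assumed default realm for this cluster

def default_rule(principal, default_realm=DEFAULT_REALM):
    """Simplified model of the DEFAULT rule: a principal in the default
    realm maps to its first component; anything else is rejected."""
    name, realm = principal.split("@")
    if realm != default_realm:
        raise ValueError(f"No rule for realm {realm}")
    return name.split("/")[0]

print(default_rule("hive/manager1@XY.COM.CN"))  # -> hive
print(default_rule("hive@XY.COM.CN"))           # -> hive
```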
Because HDFS is the core component on which all other service components depend, our modification through this option will definitely trigger all client deployments of services involving authentication.
Fill in a custom mapping rule and save it. The rule below is functionally equivalent to "DEFAULT":
RULE:[1:$1@$0](.*@XY.COM.CN)s/@XY.COM.CN//
RULE:[2:$1@$0](.*@XY.COM.CN)s/@XY.COM.CN//
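To see why these rules behave like DEFAULT, here is a small sketch of the RULE:[n:$1@$0](regex)s/…// semantics (a simplification of Hadoop's rule engine, not the real parser): for an n-component principal, $1@$0 is built from the first component and the realm, matched against the regex, and the sed-style substitution strips the realm suffix.

```python
import re

def apply_custom_rules(principal):
    """Sketch: both rules build '$1@$0' (first component @ realm),
    match it against .*@XY.COM.CN, then strip the realm suffix."""
    name, realm = principal.split("@")
    first = name.split("/")[0]          # $1
    candidate = f"{first}@{realm}"      # $1@$0
    if re.fullmatch(r".*@XY\.COM\.CN", candidate):
        return re.sub(r"@XY\.COM\.CN$", "", candidate)
    raise ValueError(f"No rule matched {principal}")

print(apply_custom_rules("hive/manager1@XY.COM.CN"))       # -> hive
print(apply_custom_rules("cloudera-scm/admin@XY.COM.CN"))  # -> cloudera-scm
```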
Click Save. When CM prompts to restart the service, check the option to re-deploy the client configuration.
After the restart completes, re-run spark-shell; everything works normally. Problem solved.
That is the full process for troubleshooting Spark's failure to obtain short-name mappings after the Kerberos realm associated with a CDH cluster has been changed multiple times. Thank you for reading!