1. Overview
This document describes how to migrate Cloudera Manager to a new CM node in a Kerberos environment. Through this document, you will learn the following:
1. How to migrate Cloudera Manager nodes
2. How to migrate the MySQL metadata database
3. How to migrate Kerberos MIT KDC
The document is mainly divided into the following steps:
1. Prepare the new Cloudera Manager node
2. MariaDB database migration (optional)
3. Migrate Kerberos MIT KDC (optional)
4. Migrate data from the original CM node to the new node
5. Post-migration cluster service verification
This document focuses on Cloudera Manager node migration and is based on the following assumptions:
1. The CDH environment has been set up and is running normally
2. The old Cloudera Manager node hosts the Cloudera Manager Server service (i.e. cloudera-scm-server) and the Cloudera Management Service roles (Alert Publisher/Event Server/Host Monitor/Reports Manager/Service Monitor)
3. The cluster has completed the configuration of MIT Kerberos and is in normal use.
4. The cluster Hadoop service HBase/Hive/HDFS/Hue/Kafka/Oozie/Spark/Spark2/Yarn/Zookeeper is running normally
The following is the test environment; it is not a hard requirement for this guide:
1. Operating system: Red Hat 7.2
2. CM version: CM 5.11.1
3. CDH version: CDH 5.11.1
4. The cluster is deployed with the ec2-user account
2. Prepare the new Cloudera Manager node
2.1 New CM host preconditions
The new host must satisfy the following preconditions:
The operating system version is consistent with the cluster operating system version (Red Hat 7.2)
The firewall is turned off
Clock synchronization is configured to match the current cluster's clock synchronization service
vm.swappiness is set to 10
Transparent huge pages are disabled
SELinux is disabled
The /etc/hosts file is configured (or a DNS service is used)
The CM and OS yum repositories are configured
A soft link to the MySQL JDBC driver is created
2.2 New host information
New host IP address: 172.31.18.97
New Hostname:ip-172-31-18-97.ap-southeast-1.compute.internal
1. Host operating system version
2. Firewall
3. Clock synchronization
4. Swap settings
5. Transparent huge pages
6. SELinux status
7. Host (/etc/hosts) information
8. Yum repositories for Cloudera Manager and the OS
9. MySQL driver soft link created under the /usr/share/java directory
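The verification screenshots are not reproduced here. The following is a minimal sketch of commands that could be used to check the items above on the new host; the mysql-connector jar path in the last step is an assumption and should be adjusted to your environment:
cat /etc/redhat-release                              # 1. OS version
sudo systemctl status firewalld                      # 2. firewall should be disabled
ntpstat                                              # 3. clock synchronization (if ntpd is used)
cat /proc/sys/vm/swappiness                          # 4. should return 10
cat /sys/kernel/mm/transparent_hugepage/enabled      # 5. should show [never]
getenforce                                           # 6. should return Disabled or Permissive
cat /etc/hosts                                       # 7. host entries for all cluster nodes
ls /etc/yum.repos.d/                                 # 8. CM and OS repo files
sudo ln -s /opt/mysql-connector-java-5.1.34.jar /usr/share/java/mysql-connector-java.jar   # 9. jar path is an assumption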
2.3 install the Cloudera Manager service
[ec2-user@ip-172-31-18-97 log]$ sudo yum -y install cloudera-manager-server cloudera-manager-agent
Do not start the service for a while after installing Cloudera Manager.
Note: the Cloudera Manager version of the new node must be the same as the original Cloudera Manager version; do not install other components of CDH on the node
2.4 install the MariaDB database
Since MariaDB is installed on the original CM node, MariaDB is also installed on the new CM node for data migration (if you do not plan to migrate the database, this installation can be skipped)
[ec2-user@ip-172-31-18-97 log]$ sudo yum -y install mariadb-server mariadb-devel
Initialize the MariaDB database
[ec2-user@ip-172-31-18-97 log]$ sudo systemctl enable mariadb
[ec2-user@ip-172-31-18-97 log]$ sudo systemctl start mariadb
[ec2-user@ip-172-31-18-97 log]$ sudo /usr/bin/mysql_secure_installation
3.MariaDB database migration
If you do not plan to migrate the database, you can skip this chapter.
3.1 back up the original MariaDB data
Export all databases in the MySQL instance that needs to be migrated (you can also export only the required databases as needed)
[root@ip-172-31-25-3 ec2-user]# mysqldump -u root -p -A > oldmysql.dump
3.2 Import backup data to the new library
1. Copy the backup file to the new MySQL server and import the data
[root@ip-172-31-18-97 ec2-user]# mysql -u root -p < oldmysql.dump
Note: after the data is imported successfully, execute the following command in the mysql client: FLUSH PRIVILEGES;
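A minimal sketch of the assumed follow-up on the new node (database names depend on your environment):
[root@ip-172-31-18-97 ec2-user]# mysql -u root -p
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> SHOW DATABASES;      -- confirm that the migrated databases (e.g. cm, hive, hue, oozie) are present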
4. Migrate Kerberos MIT KDC
4.1 back up the original Kerberos database
Log in to the primary KDC server and use the kdb5_util command to back up the Kerberos database and configuration files
[ec2-user@ip-172-31-25-3]$ sudo kdb5_util dump -verbose kerberosdb.dumpfile
HTTP/ip-172-31-18-97.ap-southeast-1.compute.internal@CLOUDERA.COM
HTTP/ip-172-31-19-209.ap-southeast-1.compute.internal@CLOUDERA.COM
…
zookeeper/ip-172-31-28-67.ap-southeast-1.compute.internal@CLOUDERA.COM
The configuration files to back up are:
/etc/krb5.conf
/var/kerberos/krb5kdc/kdc.conf
/var/kerberos/krb5kdc/kadm5.acl
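As a sketch, anticipating section 4.2, the dump file and configuration files can be staged and copied to the new KDC node (the kerberos_bak directory name is taken from the prompts later in this document and is an assumption):
[ec2-user@ip-172-31-25-3]$ mkdir kerberos_bak
[ec2-user@ip-172-31-25-3]$ sudo cp kerberosdb.dumpfile /etc/krb5.conf /var/kerberos/krb5kdc/kdc.conf /var/kerberos/krb5kdc/kadm5.acl kerberos_bak/
[ec2-user@ip-172-31-25-3]$ sudo chown -R ec2-user:ec2-user kerberos_bak
[ec2-user@ip-172-31-25-3]$ scp -r kerberos_bak ec2-user@ip-172-31-18-97.ap-southeast-1.compute.internal:/home/ec2-user/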
4.2 restore backup data to a new library
1. Install the Kerberos service on the new node:
yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
Copy the backup data to the new node and restore the data to the Kerberos database by doing the following
2. Modify the krb5.conf file and use it to overwrite the krb5.conf in the /etc directory
Change the KDC server address entries (the parts highlighted in red in the original screenshot) to the IP address or hostname of the current host
3. Copy the kdc.conf and kadm5.acl files to the /var/kerberos/krb5kdc directory, overwriting the existing files
4. With the krb5kdc and kadmin services stopped, restore the Kerberos database by doing the following
Note: you need to create the Kerberos database first and then import the data; otherwise the krb5kdc and kadmin services will not start properly.
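The original command screenshot is not reproduced here; a minimal sketch, assuming the realm is CLOUDERA.COM (as shown in the dump output above) and the dump file sits in the kerberos_bak directory:
[ec2-user@ip-172-31-18-97 kerberos_bak]$ sudo kdb5_util create -r CLOUDERA.COM -s      # create an empty Kerberos database and stash file first
[ec2-user@ip-172-31-18-97 kerberos_bak]$ sudo kdb5_util load kerberosdb.dumpfile       # then load the backed-up principals into it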
Start the krb5kdc and kadmin services
[ec2-user@ip-172-31-18-97 kerberos_bak]$ sudo systemctl restart krb5kdc
[ec2-user@ip-172-31-18-97 kerberos_bak]$ sudo systemctl restart kadmin
Verify that Kerberos works normally by testing with the imported user_r principal
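A minimal verification sketch, assuming the user_r principal exists in the imported database and its password is known:
[ec2-user@ip-172-31-18-97 kerberos_bak]$ kinit user_r      # obtain a ticket with the imported principal
[ec2-user@ip-172-31-18-97 kerberos_bak]$ klist             # confirm that a valid TGT was issued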
4.3 update the krb5.conf configuration of the cluster
Copy the /etc/krb5.conf file from the KDC master server to all nodes in the cluster and verify that Kerberos works normally.
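As a sketch (the node list is illustrative and would normally cover every host in the cluster; sudo over ssh for ec2-user is an assumption):
for host in ip-172-31-19-209 ip-172-31-28-67; do
    scp /etc/krb5.conf ec2-user@${host}:/tmp/krb5.conf
    ssh -t ec2-user@${host} "sudo mv /tmp/krb5.conf /etc/krb5.conf"
done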
5. Migrate data from the original CM node to the new node
5.1 back up the original CM node data
Mainly back up the monitoring data and management information of CM. The data directories include:
/var/lib/cloudera-host-monitor
/var/lib/cloudera-service-monitor
/var/lib/cloudera-scm-server
/var/lib/cloudera-scm-eventserver
/var/lib/cloudera-scm-headlamp
Note: compress the backups before transferring them, to prevent the directory ownership and permissions from changing
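A minimal sketch of the backup and transfer on the original CM node (the archive name and the 253back destination directory are assumptions; creating the archive as root keeps file ownership recorded in the archive):
[root@ip-172-31-25-3 ec2-user]# cd /var/lib
[root@ip-172-31-25-3 lib]# tar -czf /tmp/cm-monitor-data.tar.gz cloudera-host-monitor cloudera-service-monitor cloudera-scm-server cloudera-scm-eventserver cloudera-scm-headlamp
[root@ip-172-31-25-3 lib]# scp /tmp/cm-monitor-data.tar.gz ec2-user@ip-172-31-18-97.ap-southeast-1.compute.internal:/home/ec2-user/253back/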
5.2 modify database configuration information for CM
Modify the database configuration file /etc/cloudera-scm-server/db.properties on the new CM node
Update the database host, database name, user and password entries (the parts highlighted in red in the original screenshot) according to your own configuration.
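A minimal sketch of what db.properties typically contains for an external MySQL/MariaDB database; all values below are assumptions and must match your own cm database, user and password:
com.cloudera.cmf.db.type=mysql
com.cloudera.cmf.db.host=ip-172-31-18-97.ap-southeast-1.compute.internal
com.cloudera.cmf.db.name=cm
com.cloudera.cmf.db.user=cm
com.cloudera.cmf.db.password=password
com.cloudera.cmf.db.setupType=EXTERNAL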
5.3 Import the CM backup data on the new node
Copy the data backed up on the original CM to the new CM node
Restore the backup data to the corresponding directory with the following command
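The original command screenshot is not reproduced here; a minimal sketch, assuming the archive from the backup step was copied to /home/ec2-user/253back (the -p flag preserves the stored permissions when extracting as root):
[ec2-user@ip-172-31-18-97 253back]$ sudo tar -xzpf cm-monitor-data.tar.gz -C /var/lib/
[ec2-user@ip-172-31-18-97 253back]$ ls -ld /var/lib/cloudera-*      # confirm ownership and permissions are intact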
5.4 Update the CM Server address on all nodes in the cluster
Modify the server_host value in the /etc/cloudera-scm-agent/config.ini file on all nodes in the cluster to the hostname of the new CM node
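A minimal sketch of the change, to be run on every cluster node (in-place editing with sed is an assumption; the file can also be edited by hand):
sudo sed -i 's/^server_host=.*/server_host=ip-172-31-18-97.ap-southeast-1.compute.internal/' /etc/cloudera-scm-agent/config.ini
sudo systemctl restart cloudera-scm-agent      # restart the agent so it reports to the new CM Server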
5.5 migrate the Cloudera Management Service role of the original CM node to the new node
Start the cloudera-scm-server and cloudera-scm-agent services on the new CM node
[ec2-user@ip-172-31-18-97 253back]$ sudo systemctl start cloudera-scm-server
[ec2-user@ip-172-31-18-97 253back]$ sudo systemctl start cloudera-scm-agent
Note: after the cloudera-scm-agent service is started on the new CM node, the node's information is added to the HOSTS table of the cm database; check the HOST_ID that corresponds to the new CM node.
Log in to the mysql database and view the host information of Cloudera Manager in the cm.HOSTS table
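The original query screenshot is not reproduced here; a minimal sketch of the kind of query used (HOST_ID is referenced by the UPDATE statement below; the other column names are assumptions about the CM schema):
MariaDB [(none)]> USE cm;
MariaDB [cm]> SELECT HOST_ID, NAME, IP_ADDRESS FROM HOSTS;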
Through the CM management interface you can see that, before migration, the new CM node does not have any roles
Migrate the roles of the old CM node to the new CM node with the following command (here 11 is the HOST_ID of the new CM node found in the previous step):
UPDATE ROLES SET HOST_ID=11 WHERE NAME LIKE 'mgmt%';
After executing this statement, the roles of the original CM node are migrated to the new CM node
Delete the original CM node from the cluster through the CM management interface
Delete the original CM node
Since Kerberos is configured in the cluster, the KDC server address configured in CM needs to be updated. If the Kerberos KDC was not migrated, this step is not needed.
If the cluster is Kerberos-enabled, you need to generate keytabs for the new CM node (skip this step if Kerberos is not enabled on the cluster)
Start Cloudera Management Service through the CM management interface
Because of the database migration, the database configuration for Hive/Hue/Oozie needs to be updated to point to the new database host (this step can be skipped if the database was not migrated)
Restart the cluster after making the above modifications
6. Post-migration cluster service verification
Before migration: the running interface of the original CM and its historical monitoring data
Log in to the CM management platform and check that the cluster status is normal
After migration, you can view the historical monitoring data of the cluster normally.
Hue access and operation are normal
HDFS access and operation are normal
HBase operates normally through hue and shell
7. Analysis of common problems
1. Question one
Problem phenomenon:
The cause of the problem:
Caused by abnormal communication between cloudera-scm-agent services and supervisord.
Solution:
Kill the supervisord process on the node that raises the alert, then restart the agent service
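A minimal sketch of the fix on the node that raises the alert (<supervisord_pid> is a placeholder for the process ID found with ps):
ps -ef | grep supervisord                      # locate the CM agent's supervisord process
sudo kill -9 <supervisord_pid>                 # kill the supervisord process
sudo systemctl restart cloudera-scm-agent      # restart the agent service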
2. Question two
Problem phenomenon:
The cause of the problem:
Because the /opt/cloudera/csd directory was not migrated during the CM migration.
Solution:
Copy the /opt/cloudera/csd directory from the original CM node to the corresponding directory on the new CM node
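A minimal sketch, assuming ssh access from the new CM node to the original CM node (ip-172-31-25-3, as seen in the prompts earlier in this document):
[ec2-user@ip-172-31-18-97 ~]$ scp -r ec2-user@ip-172-31-25-3.ap-southeast-1.compute.internal:/opt/cloudera/csd /tmp/csd
[ec2-user@ip-172-31-18-97 ~]$ sudo mkdir -p /opt/cloudera/csd
[ec2-user@ip-172-31-18-97 ~]$ sudo cp -r /tmp/csd/* /opt/cloudera/csd/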
Restart the cloudera-scm-server service
[ec2-user@ip-172-31-18-97 253back]$ sudo systemctl restart cloudera-scm-server
3. Question three
Problem phenomenon:
Service Monitor failed to start. The exception message is as follows
The cause of the problem:
Due to missing files in the /var/lib/cloudera-service-monitor directory during the CM migration
Solution:
Overwrite the /var/lib/cloudera-service-monitor directory with the backed-up data again
4. Question four
Problem phenomenon:
After the migration is completed, the NameNode and ResourceManager services that are configured for high availability cannot display their active and standby roles correctly, and the HDFS summary information is not displayed properly.
The cause of the problem:
Because the cluster is configured with Kerberos and keytabs have not been generated for the new CM node, the problem above occurs.
Solution:
Stop all services on the CM node, and then generate the keytabs for the host through the CM management interface
8. Expansion
To perform a Cloudera Manager migration without stopping cluster services, the following conditions must be met:
1. The hostname and IP address of the new CM node are the same as those of the old CM node.
2. If the database needs to be migrated, the hostname and IP address of the new database host are the same as those of the original database host, and the data from the original database must be imported into the new database.
3. If Kerberos MIT KDC needs to be migrated, the hostname and IP address of the new MIT KDC node are the same as those of the old MIT KDC node, and the old MIT KDC database must be imported into the new MIT KDC database.
Note: if you only perform step 1, there is no need to restart Hadoop cluster services and existing jobs in the Hadoop cluster are not affected. If you also perform step 2 or 3, there is a temporary impact on cluster jobs, but there is still no need to restart Hadoop cluster services.