
0008 - How to Uninstall CDH (with a One-Click Uninstall Script on GitHub)


1. Prerequisites

This document describes how to uninstall Cloudera Manager and CDH from a cluster that was installed with Parcels and is not configured with security (AD/LDAP, Kerberos, or data encryption). The following test environment was used; it is not a hard requirement for this procedure:

1. Operating system version: CentOS 6.5

2. MySQL database version: 5.1.73

3. CM version: CM 5.11

4. CDH version: CDH 5.11

5. The uninstall is performed as root or a user with sudo privileges

2. User data backup

2.1 Backup HDFS data

HDFS data backup

Use distcp to replicate data between clusters to back up HDFS data. The backup operation is as follows:

hadoop distcp hftp://namenodeA:port/xxx/ hdfs://namenodeB/xxx

Note: This command must be executed on the target cluster, and the target cluster must have enough space. Adjust the data directories above to your cluster's real environment; an illustrative example follows the parameter list below.

namenodeA: IP address of the source cluster's NameNode

port: port of the source cluster's NameNode, default 50070

namenodeB: IP address of the target cluster's NameNode

xxx: HDFS data directory to copy
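For example, with a hypothetical source NameNode at 192.168.1.10 (default port 50070), a target NameNode at 192.168.1.20, and a data directory /user/data (all values illustrative only):

# Run on the target cluster; hosts and paths are illustrative
hadoop distcp hftp://192.168.1.10:50070/user/data/ hdfs://192.168.1.20/user/data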

NameNode metadata backup

Log in to the namenode server and do the following:

# Enter safe mode
[root@ip-172-31-3-217 ~]# sudo -u hdfs hadoop dfsadmin -safemode enter
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Safe mode is ON
# Flush all edits to fsimage
[root@ip-172-31-3-217 ~]# sudo -u hdfs hadoop dfsadmin -saveNamespace
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Save namespace successful

Back up the NameNode metadata, performing the following operations according to your cluster's NameNode data directory:

[root@ip-172-31-3-217 ~]# mkdir namenode_back
[root@ip-172-31-3-217 ~]# cd namenode_back/
[root@ip-172-31-3-217 ~]# cd /dfs/nn/
# Compress all files in the nn directory into /root/namenode_back/nn_back.tar.gz
[root@ip-172-31-3-217 ~]# tar -czvf /root/namenode_back/nn_back.tar.gz .
./
./current/
./current/fsimage
./current/fstime
./current/VERSION
./current/edits
./image/
./image/fsimage

2.2 Backup MySQL metadata

On the server where MySQL is installed, back up the Hive metadata; a sketch of the command follows the note below.

Note: If Hue, Sentry, or Navigator databases exist, they can be backed up in the same way.
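The backup command itself is omitted in the original; a minimal sketch, assuming the Hive metastore database is named hive and the dump is written to /root/mysql_back (both assumptions):

# Assumes the metastore database is named "hive"; adjust the name and path for your environment
mkdir -p /root/mysql_back
mysqldump -u root -p hive > /root/mysql_back/hive_backup.sql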

2.3 Backup CDH cluster configuration data

Using the API provided by Cloudera Manager, export a JSON document containing the configuration data of the Cloudera Manager instance; this document can be used to back up or restore the Cloudera Manager deployment.

To back up the cluster configuration data, log in to the Cloudera Manager server and run the following command:

[root@ip-172-31-3-217 ~]# curl -u admin_username:admin_pass "http://cm_server_host:7180/api/v16/cm/deployment" > path_to_file/cm-deployment.json

admin_username: user name used to log in to Cloudera Manager

admin_pass: password of the admin_username user

cm_server_host: host name of the Cloudera Manager server

path_to_file: path where the configuration file is saved

Replace the four parameters above with the values for your cluster; an illustrative example follows.
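For illustration, with the hypothetical values admin / admin_password, host cm.example.com, and output path /root (none of these are from the original):

curl -u admin:admin_password "http://cm.example.com:7180/api/v16/cm/deployment" > /root/cm-deployment.json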


Restore cluster configuration data

Note: This feature is only available with a Cloudera license.

1. First, open the Cloudera Manager admin console and stop the cluster.

Note: If the cluster is not stopped before the API call, the API call will stop all cluster services before running the job, and any running jobs and data will be lost.

2. Log in to the server where Cloudera Manager resides

3. Execute the following command:

curl --upload-file path_to_file/cm-deployment.json -u admin_uname:admin_pass "http://cm_server_host:7180/api/v16/cm/deployment?deleteCurrentDeployment=true"

admin_uname: user name used to log in to Cloudera Manager

admin_pass: password of the admin_uname user

cm_server_host: host name of the Cloudera Manager server

path_to_file: path to the JSON configuration file

2.4 ZooKeeper data directory backup

Back up the data directories of all ZooKeeper servers. Take 172-31-3-217 as an example:

[root@ip-172-31-3-217 ~]# mkdir zookeeper_back
[root@ip-172-31-3-217 ~]# scp -r /var/lib/zookeeper/ /root/zookeeper_back/zookeeper_1
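When there are several ZooKeeper servers, the same copy can be scripted; a sketch, assuming a file zk_hosts that lists one ZooKeeper host per line (the file name is hypothetical):

# zk_hosts is a hypothetical file with one ZooKeeper host per line
i=1
for host in $(cat zk_hosts); do
    scp -r root@${host}:/var/lib/zookeeper/ /root/zookeeper_back/zookeeper_${i}
    i=$((i+1))
done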

2.5 Backup user data directory

The following paths contain user data for each component under Cloudera's default installation directories:

/var/lib/flume-ng
/var/lib/hadoop*
/var/lib/hue
/var/lib/navigator
/var/lib/oozie
/var/lib/solr
/var/lib/sqoop*
/var/lib/zookeeper

# Data directories
# data_drive_path is the directory set during cluster deployment; adjust for your environment
data_drive_path/dfs
data_drive_path/mapred
data_drive_path/yarn

If you need to back up the data of related components, refer to section 2.4; a sketch of archiving the default directories is given below.
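A minimal sketch of such a backup, archiving the default directories listed above into /root/userdata_back (the destination and scope are assumptions; adjust to your environment):

# Archive the default component data directories; add the data_drive_path directories as needed
mkdir -p /root/userdata_back
tar -czvf /root/userdata_back/userdata_back.tar.gz /var/lib/flume-ng /var/lib/hadoop* /var/lib/hue /var/lib/navigator /var/lib/oozie /var/lib/solr /var/lib/sqoop* /var/lib/zookeeper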

3. Stop all services

3.1 Open Cloudera Manager Console

3.2 Stop the cluster

Stop the CDH cluster, as shown in the following figure

Click to confirm the operation, as shown in the figure below

Wait for all servers to stop successfully, as shown in the following figure
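The stop can also be triggered through the same Cloudera Manager REST API used in section 2.3; a sketch, assuming the cluster is named "Cluster 1" and admin credentials (all assumptions):

# "Cluster 1" is a hypothetical cluster name; the space must be URL-encoded
curl -X POST -u admin:admin_pass "http://cm_server_host:7180/api/v16/clusters/Cluster%201/commands/stop"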

3.3 Stop the Cloudera Management Service

Stop the Cloudera Management Service, as shown below

Click to confirm in the dialog box, as shown in the figure below

Wait for the service to stop successfully, as shown in the following figure

4. Deactivate and remove Parcels

In the Cloudera Manager management interface, do the following

Click the icon marked in the picture above to enter the following interface

Click the icon in the figure above, select [Deactivate only], and click [OK].

After deactivation, the button changes to "Activate", as shown in the following figure

Click the menu next to "Activate" and select "Remove From Hosts", as shown in the following figure

Click OK, as shown below

After successful deletion, the following is displayed

5. Delete the cluster

Click Cloudera Manager to enter the home page, as shown below

Delete the cluster as follows

Click "Delete" operation, as shown in the following figure

After successful deletion, the following is displayed

6. Uninstalling Cloudera Manager Server

6.1 Stop Cloudera Manager Server and Database

Execute the following command on the cluster master server

[root@ip-172-31-3-217 ~]# service cloudera-scm-server stop
# If the embedded PostgreSQL database is used, also stop it; otherwise skip this step
[root@ip-172-31-3-217 ~]# service cloudera-scm-server-db stop

6.2 Uninstall Cloudera Manager Server and Database

Use yum to uninstall cloudera-manager-server and cloudera-manager-server-db-2 with the following commands

[root@ip-172-31-3-217 ~]# yum remove cloudera-manager-server
# If the embedded PostgreSQL database is used, also remove it; otherwise skip this step
[root@ip-172-31-3-217 ~]# yum remove cloudera-manager-server-db-2

7. Uninstalling Cloudera Manager Agent and Managed Software

On all machines in the cluster, uninstall Cloudera Manager Agent and Managed Software as follows.

7.1 Stop Cloudera Manager Agent

Stop the Cloudera Manager Agent service on all servers using the following command

[root@ip-172-31-3-217 ~]# sudo service cloudera-scm-agent hard_stop
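To confirm the agent and its supervised processes are really gone, a quick check (not part of the original procedure):

# Should print nothing if the agent and supervisord have stopped
ps -ef | grep -E 'cloudera-scm-agent|supervisord' | grep -v grep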

7.2 Uninstall software

Do the following on all nodes of the cluster:

[root@ip-172-31-3-217 ~]# yum remove 'cloudera-manager-*' avro-tools crunch flume-ng hadoop-hdfs-fuse hadoop-hdfs-nfs3 hadoop-httpfs hadoop-kms hbase-solr hive-hbase hive-webhcat hue-beeswax hue-hbase hue-impala hue-pig hue-plugins hue-rdbms hue-search hue-spark hue-sqoop hue-zookeeper impala impala-shell kite llama mahout oozie pig pig-udf-datafu search sentry solr-mapreduce spark-core spark-master spark-worker spark-history-server spark-python sqoop sqoop2 whirr hue-common oozie-client solr solr-doc sqoop2-client zookeeper

7.3 Run purge command

Do the following on all nodes of the cluster:

[root@ip-172-31-3-217 ~]# yum clean all

8. Remove Cloudera Manager and User Data

8.1 Kill Cloudera Manager and Managed Processes

Perform the following operation on all nodes of the cluster to kill the Cloudera Manager and managed processes on every server; a sketch of the command is given below.
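The command itself is omitted in the original; a sketch following the approach in Cloudera's documentation, killing any processes still running under the service accounts (trim the user list to the components actually installed):

# Kill leftover processes owned by the CM/CDH service accounts
for u in cloudera-scm flume hadoop hdfs hbase hive httpfs hue impala llama mapred oozie solr spark sqoop sqoop2 yarn zookeeper; do
    kill -9 $(ps -u $u -o pid=) 2>/dev/null
done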

8.2 Remove Cloudera Manager data

Delete all Cloudera Manager data by executing the following command on all nodes of the cluster

umount cm_processes
rm -rf /usr/share/cmf /var/lib/cloudera* /var/cache/yum/x86_64/6/cloudera* /var/log/cloudera* /var/run/cloudera* /etc/cloudera* /usr/lib64/cmf

8.3 Remove Cloudera Manager Lock File

Execute the following command on all nodes in the cluster to delete the Cloudera Manager lock file

rm -rf /tmp/.scm_prepare_node.lock

8.4 Remove user data

This step permanently deletes all user data. If you still need the data, copy it to another cluster with the distcp command before starting the uninstall. Execute the following on all nodes in the cluster to delete all user data; a sketch of the command is given below.
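The command itself is omitted in the original; a sketch that removes the default user-data directories from section 2.5 (replace data_drive_path with the data directory chosen at deployment time):

rm -rf /var/lib/flume-ng /var/lib/hadoop* /var/lib/hue /var/lib/navigator /var/lib/oozie /var/lib/solr /var/lib/sqoop* /var/lib/zookeeper
# data_drive_path is the data directory set during cluster deployment
rm -rf data_drive_path/dfs data_drive_path/mapred data_drive_path/yarn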

8.5 Stop and remove external database

Operate on the server where MySQL is installed.

Stop the MySQL database:
service mysqld stop

Uninstall the MySQL database:
yum remove mysql*

Delete the MySQL data directory:
rm -rf /var/lib/mysql

9. One-click uninstall script description (GitHub address)

Description of uninstall script:

autouninstall.sh: automatic uninstall script

components.list: List of all installed components of the cluster

delete.list: list of directories to be deleted. Most of CDH's default installation directories are already configured in the list; adjust the HDFS directories at the end of the list to your cluster environment, as shown in the following figure:

node.list: all nodes of the cluster; configure according to your cluster environment (an illustrative example follows this list)

user.list: user names used when installing the cluster components
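An illustrative example of node.list contents (the addresses are hypothetical; the repository defines the authoritative format):

# One cluster host per line (hypothetical addresses)
172.31.3.217
172.31.3.218
172.31.3.219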

Script usage

Note: Run the script on the NameNode node to perform the one-click uninstall after step 5 above has been completed.


Source code address:

https://github.com/javaxsky/cdh-shell

