
Building an HDP Environment for Big Data with Three Nodes (Part Two: Expanding Nodes, Deleting Nodes, and Deploying Other Services)


For the deployment of the master node and the installation of services, please refer to part one:

https://blog.51cto.com/6989066/2173573

(8) Expanding nodes (taking HDFS as an example)

① Prepare the slave node

- Install a new Linux system. The slave node only needs to install "Server with GUI" and "Development Tools"; there is no need to install MariaDB Server.

- Turn off the firewall:

systemctl stop firewalld.service

systemctl disable firewalld.service

- Configure the hostname: edit the /etc/hosts file (the master node also needs to add the slave node's information); for example, see the sketch below.
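A minimal /etc/hosts sketch (the IP addresses below are assumptions for illustration; hdp21, hdp22, and hdp23 are the node names used later in this article):

192.168.157.11 hdp21   # master node (assumed IP)
192.168.157.12 hdp22   # slave node (assumed IP)
192.168.157.13 hdp23   # slave node (assumed IP)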

- Configure password-free login (required between all nodes), for example as sketched below.
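A minimal sketch of the usual ssh-keygen / ssh-copy-id approach (hostnames taken from the /etc/hosts example above):

ssh-keygen -t rsa                              # press Enter to accept the defaults
ssh-copy-id -i ~/.ssh/id_rsa.pub root@hdp21    # repeat from every node ...
ssh-copy-id -i ~/.ssh/id_rsa.pub root@hdp22    # ... to every other node
ssh-copy-id -i ~/.ssh/id_rsa.pub root@hdp23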

- Install the JDK, for example as sketched below.
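A common way to install a JDK 8 tarball; the archive name and the /root/training install path are assumptions, so match them to your own package:

tar -zxvf jdk-8u144-linux-x64.tar.gz -C /root/training
# then append the following to /etc/profile and reload it:
export JAVA_HOME=/root/training/jdk1.8.0_144
export PATH=$JAVA_HOME/bin:$PATH
source /etc/profile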

- Mount the CD: mount /dev/cdrom /mnt

- Delete all the original repo files.

- Create the yum source file: vi /etc/yum.repos.d/my.repo

[centos-yum]

baseurl=file:///mnt

enabled=1

gpgcheck=0
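To confirm that the local source works, the standard yum commands can be used:

yum clean all
yum repolist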

- Enable the NTP service (all nodes need to do this):

yum install ntp

systemctl is-enabled ntpd

systemctl enable ntpd

systemctl start ntpd
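Once ntpd is running, time synchronization can be checked with the standard NTP query tool:

ntpq -p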

- Create a directory on the new node: mkdir -p /var/lib/ambari-agent/data

② Add the new node, as shown in the following figure.

PS: all Ambari services had been stopped when the screenshot was taken, so there are many alerts.

③ Configure the host information and the private key file of the new node.

The hostname must be consistent with the Linux virtual machine's hostname.

The private key is the file ~/.ssh/id_rsa and can be displayed with cat ~/.ssh/id_rsa (note that ~/.ssh/id_rsa.pub is the corresponding public key).

④ Confirm the host information.


⑤ Deploy a new DataNode to the new node.

⑥ Confirm the deployment information and deploy.

⑦ After the deployment succeeds, execute the jps command on the slave node to check for the new DataNode process; see the sketch below.
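The output should contain a DataNode line; a sketch of what to expect (the PID values are illustrative only):

# jps
2481 DataNode
3165 Jps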

(9) Enable HA (taking NameNode as an example)

① Follow the steps in the previous section to add a new node to the cluster and deploy a DataNode.

② Deploy the ZooKeeper service to all three nodes and start it.

Select: "Service Actions" -> "Add ZooKeeper Server"

③ Restart all ZooKeeper services.

Note: if a node fails to start, restart all services from the console. Under normal circumstances, all services should start normally.

④ Add the HA service to the HDFS NameNode.

⑤ Enter a NameService name.

⑥ Configure NameNode HA.

⑦ Check the configuration information.

⑧ For the steps that need to be configured manually, execute the following commands:

1. Log in to the NameNode host mydemo71.

2. Put the NameNode in Safe Mode (read-only mode):

sudo su hdfs -l -c 'hdfs dfsadmin -safemode enter'

3. Once in Safe Mode, create a Checkpoint:

sudo su hdfs -l -c 'hdfs dfsadmin -saveNamespace'

4. You will be able to proceed once Ambari detects that the NameNode is in Safe Mode and the Checkpoint has been created successfully.

⑨ Begin configuring HA.

⑩ For the step that needs to be configured manually, execute the following command:

sudo su hdfs -l -c 'hdfs namenode -initializeSharedEdits'

⑪ Start HA.

⑫ For the steps that need to be configured manually, execute the commands shown by the wizard; see the sketch below.
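In a typical Ambari NameNode HA wizard this step initializes the failover controller and the standby NameNode; the commands the wizard displays usually look like the following (a hedged sketch; use the exact commands shown on your own wizard screen):

sudo su hdfs -l -c 'hdfs zkfc -formatZK'              # on the active NameNode
sudo su hdfs -l -c 'hdfs namenode -bootstrapStandby'  # on the additional NameNode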

⑬ Perform the final installation and configuration.

⑭ Verify HA (optional step): if a NameNode goes down, verify that an automatic failover occurs; see the sketch below.
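A minimal way to test the failover (nn1 and nn2 are the NameNode IDs Ambari generates; yours may differ):

hdfs haadmin -getServiceState nn1    # should report "active"
hdfs haadmin -getServiceState nn2    # should report "standby"
# find the active NameNode's PID with jps, kill it, then re-check:
kill -9 <NameNode PID>
hdfs haadmin -getServiceState nn2    # should now report "active"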

(10) Delete nodes and services (optional steps; perform them only when there is an actual need)

(1) Stop all services on the node to be deleted (hdp23).

(2) Delete the node hdp23 from the cluster.

(3) Delete the HDFS, YARN, and MapReduce2 services.

(4) Delete the node hdp22, keeping only hdp21.


(5) Stop all services and change the memory of hdp21 to 8 GB (optional).

Note: because the virtual machine needs to be restarted, be sure to remount the CD.

(6) Redeploy the HDFS, YARN, and MapReduce2 services. Note that the following directories must be cleared (a sketch for clearing them follows the list):

NameNode directory: /root/training/bigdata/namenode

DataNode directory: /root/training/bigdata/datanode

yarn.nodemanager.local-dirs: /root/training/bigdata/nodemanager/local

yarn.nodemanager.log-dirs: /root/training/bigdata/nodemanager/log

Set all passwords to: password
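A quick sketch for clearing these directories before redeploying (run on the nodes that host them; double-check the paths first, since rm -rf is destructive):

rm -rf /root/training/bigdata/namenode
rm -rf /root/training/bigdata/datanode
rm -rf /root/training/bigdata/nodemanager/local
rm -rf /root/training/bigdata/nodemanager/log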

(11) Install and deploy other services (8 GB memory)

Note: make sure that the httpd service and the yum sources are available.

(1) Deploy Hive and Pig.

Note: Hive's execution engine needs to be set to MapReduce, as shown below.
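In Hive's configuration this selection corresponds to the property below; it is normally set through the Ambari UI as the article does, and is shown here only as a reference:

hive.execution.engine=mr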

(2) Deploy Flume and Sqoop: this is straightforward.

(3) Deploy Spark: this is straightforward.

(4) Deploy Kafka: this is straightforward.

(5) Deploy Storm: this is straightforward.

(6) Deploy Mahout: this is straightforward.

This completes the deployment of HDP. If you have any comments or suggestions, you are welcome to leave a message below.

If this blog helped you, you are welcome to like it.
