YARN Cluster Resource Management System
Roles and concepts of YARN
YARN is Hadoop's general-purpose resource management system.
YARN roles
-ResourceManager
-NodeManager
-ApplicationMaster
-Container
-Client
ResourceManager
-Handles client requests
-Starts and monitors ApplicationMasters
-Monitors NodeManagers
-Resource allocation and scheduling
NodeManager
-Resource management on a single node
-Processes commands from the ResourceManager
-Processes commands from ApplicationMasters
Container
-Abstraction of the task runtime environment, encapsulating multi-dimensional resources such as CPU and memory
-Also carries the information a task needs to run, such as environment variables and the startup command
ApplicationMaster
-Splits the input data
-Requests resources for the application and assigns them to its internal tasks
-Task monitoring and fault tolerance
Client
-The client program through which users interact with YARN
-Submits applications, monitors application status, kills applications, etc. (see the example below)
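As a rough illustration of the Client role, the yarn command-line tool can perform these operations once the cluster built below is running; the application ID shown is only a placeholder:
# ./bin/yarn application -list    # list applications known to the ResourceManager
# ./bin/yarn application -status application_1517000000000_0001    # status of one application (placeholder ID)
# ./bin/yarn application -kill application_1517000000000_0001    # kill that application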
YARN structure
The core idea of YARN
Separate the functions of the original JobTracker and TaskTracker into the following major components:
-ResourceManager: a global resource manager
-NodeManager: the per-node agent of the ResourceManager (RM)
-ApplicationMaster: one per application
-Each ApplicationMaster runs multiple Containers on the NodeManagers
System planning
IP address      Hostname   Role              Software
192.168.4.1     master     ResourceManager   YARN
192.168.4.2     node1      NodeManager       YARN
192.168.4.3     node2      NodeManager       YARN
192.168.4.4     node3      NodeManager       YARN
YARN installation and configuration
For the environment preparation (base Hadoop/HDFS setup), refer to https://blog.51cto.com/13558754/2066708.
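Before configuring YARN, it is worth confirming that the prerequisites from that setup are in place; a minimal check, assuming passwordless SSH between the hosts and an HDFS that is already running as described there:
# for i in node{1..3}; do ssh ${i} hostname; done    # passwordless SSH to every node works
# /usr/local/hadoop/bin/hdfs dfsadmin -report | head -n 5    # HDFS from the earlier setup is up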
# ssh 192.168.4.1
# cd /usr/local/hadoop/
# cd etc/hadoop/
# cp mapred-site.xml.template mapred-site.xml
# vim mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>    <!-- run MapReduce on the YARN resource management framework -->
</property>
# vim yarn-site.xml
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>    <!-- host that takes the ResourceManager role -->
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>    <!-- auxiliary shuffle service (a Java class) that MapReduce jobs rely on -->
</property>
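Not required for this walkthrough, but if the NodeManagers need explicit limits on the CPU and memory dimensions mentioned for Containers above, yarn-site.xml also accepts the following properties; the property names are standard, while the values here are purely illustrative:
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>    <!-- memory (MB) a NodeManager may hand out to Containers; example value -->
</property>
<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>    <!-- virtual cores a NodeManager may hand out; example value -->
</property>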
After the configuration is completed, synchronize the configuration files to all hosts:
# for i in node{1..3}
> do
> rsync -azSH --delete /usr/local/hadoop/etc/hadoop/ ${i}:/usr/local/hadoop/etc/hadoop/ -e 'ssh'
> done
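A quick sanity check that every node received identical files (any consistent checksum will do):
# for i in node{1..3}; do ssh ${i} md5sum /usr/local/hadoop/etc/hadoop/yarn-site.xml; done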
# cd /usr/local/hadoop/
Start the YARN service:
# ./sbin/start-yarn.sh
Execute jps on all hosts to check whether the daemons started successfully:
# for i in master node{1..3}
> do
> echo ${i}
> ssh ${i} "jps"
> done
master
3312 Jps
3005 ResourceManager
node1
3284 Jps
3162 NodeManager
node2
2882 NodeManager
3004 Jps
node3
2961 Jps
2831 NodeManager
Show all available compute nodes:
# ./bin/yarn node -list
18-01-31 06:41:56 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.4.1:8032
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
node3:46007 RUNNING node3:8042 0
node2:54895 RUNNING node2:8042 0
node1:51087 RUNNING node1:8042 0
(Screenshots of the ResourceManager and NodeManager web UIs omitted.)
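The same information can be checked from the web UIs, either in a browser or with curl; assuming the default ports (8088 for the ResourceManager, 8042 for each NodeManager, as seen in the node list above):
# curl http://master:8088/cluster    # ResourceManager web UI
# curl http://node1:8042/node    # NodeManager web UI on node1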
Verify YARN
# ./bin/hadoop fs -ls /input
Found 3 items
-rw-r--r-- 2 root supergroup 84854 2018-01-29 21:37 /input/LICENSE.txt
-rw-r--r-- 2 root supergroup 14978 2018-01-29 21:37 /input/NOTICE.txt
-rw-r--r-- 2 root supergroup 1366 2018-01-29 21:37 /input/README.txt
Use YARN to count the word frequencies in the sample files:
# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount hdfs://master:9000/input hdfs://master:9000/output
View the result:
# ./bin/hadoop fs -cat hdfs://master:9000/output/*
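One caveat if the job is rerun: MapReduce refuses to write into an output directory that already exists, so the old /output must be removed first (a sketch using the same paths as above):
# ./bin/hadoop fs -rm -r hdfs://master:9000/output    # clear the previous output before rerunning wordcount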
YARN node management
[root@master ~]# cat /etc/hosts
192.168.4.1 master
192.168.4.2 node1
192.168.4.3 node2
192.168.4.4 node3
192.168.4.5 newnode
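The updated hosts file must also be present on the other machines, otherwise the existing nodes cannot resolve newnode; one way to push it out, assuming root SSH access as elsewhere in this setup:
# for i in node{1..3} newnode; do rsync -a /etc/hosts ${i}:/etc/hosts; done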
[root@newnode]# rsync -azSH --delete master:/usr/local/hadoop /usr/local
[root@master hadoop]# ./sbin/start-yarn.sh
Add a node
[root@master hadoop]# ./bin/yarn node -list
18-01-28 21:06:57 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.4.1:8032
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
node1:33596 RUNNING node1:8042 0
node2:53475 RUNNING node2:8042 0
node3:34736 RUNNING node3:8042 0
[root@newnode hadoop]# ./sbin/yarn-daemon.sh start nodemanager
[root@master hadoop]# ./bin/yarn node -list
18-01-28 21:07:53 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.4.1:8032
Total Nodes:4
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
newnode:39690 RUNNING newnode:8042 0
node1:33596 RUNNING node1:8042 0
node2:53475 RUNNING node2:8042 0
node3:34736 RUNNING node3:8042 0
Remove a node
[root@newnode hadoop]# ./sbin/yarn-daemon.sh stop nodemanager
// the node does not disappear from the node list immediately
[root@master hadoop]# ./bin/yarn node -list
18-01-28 21:11:31 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.4.1:8032
Total Nodes:4
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
newnode:39690 RUNNING newnode:8042 0
node1:33596 RUNNING node1:8042 0
node2:53475 RUNNING node2:8042 0
node3:34736 RUNNING node3:8042 0
// the YARN service has to be restarted before the node list is refreshed
[root@master hadoop]# ./sbin/stop-yarn.sh
[root@master hadoop]# ./sbin/start-yarn.sh
[root@master hadoop]# ./bin/yarn node -list
18-01-28 21:12:46 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.4.1:8032
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
node1:42010 RUNNING node1:8042 0
node2:55043 RUNNING node2:8042 0
node3:38256 RUNNING node3:8042 0
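Restarting the whole service is heavy-handed; a commonly used alternative, not part of the original walkthrough and sketched here on the assumption of an exclude file at /usr/local/hadoop/etc/hadoop/exclude, is to decommission nodes through the ResourceManager's exclude list and then refresh it:
# echo newnode >> /usr/local/hadoop/etc/hadoop/exclude    # list the host to be removed
# vim /usr/local/hadoop/etc/hadoop/yarn-site.xml    # point yarn.resourcemanager.nodes.exclude-path at that file
# ./bin/yarn rmadmin -refreshNodes    # tell the ResourceManager to re-read its include/exclude lists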