In this article, the editor walks through the relationship between the nodes in Storm and Zookeeper. The content is rich and is analyzed and described from a professional point of view; I hope you will get something out of it after reading.
I. Nimbus
Nimbus needs both to create metadata in Zookeeper and to obtain metadata from Zookeeper.
As indicated by Arrow 1 above:
1. For path a, Nimbus will only create the path and will not set the data; the data will be set by Worker later.
2. For paths b and c, Nimbus will set the data when it creates them.
3. Paths a and b are created only when a new Topology is submitted, and the data in b will not change once it is set; path c is created when tasks are assigned to the Topology for the first time, and Nimbus will update its content whenever the task assignment plan changes.
As indicated by Arrow 2 above:
1. Nimbus needs to read the running status of the assigned Workers from path a. Based on this information, Nimbus knows which Workers are running normally and which need to be rescheduled; at the same time, it obtains all the Executor information on each Worker, which is presented to the user through the UI.
2. Nimbus can get the status of all Supervisors in the current cluster from path b. Through this information, it knows which Supervisors are still available and which are no longer active; tasks previously assigned to an inactive Supervisor need to be reassigned to other nodes (a small sketch of these reads follows this list).
3. All current error messages can be obtained from path c and displayed to the user through the UI.
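To make the read side concrete, the following is a minimal sketch, written against the plain Apache ZooKeeper Java client, of how a Nimbus-style process could list the active Supervisors and read a Worker heartbeat. The class name, the exact path layout, the topologyId parameter, and the assumption that an already-connected ZooKeeper handle is passed in are all illustrative; this is not Storm's actual implementation.

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;
import java.util.List;

public class ClusterStateReader {
    private final ZooKeeper zk;

    public ClusterStateReader(ZooKeeper zk) {
        this.zk = zk;
    }

    // Each live Supervisor keeps an ephemeral znode under the supervisor path,
    // so the child list is exactly the set of currently active Supervisors.
    // The path string follows the article; the real layout may differ.
    public List<String> activeSupervisors() throws Exception {
        return zk.getChildren("/storm/supervisor", false);
    }

    // Read the raw heartbeat bytes a Worker wrote for a given topology and slot.
    // topologyId and nodePort are assumed parameters for illustration.
    public byte[] workerHeartbeat(String topologyId, String nodePort) throws Exception {
        Stat stat = new Stat();
        return zk.getData("/storm/workerbeats/" + topologyId + "/" + nodePort, false, stat);
    }
}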
II. Supervisor
Supervisor also needs to create and retrieve metadata through Zookeeper. In addition, Supervisor monitors the running status of all the Workers it has launched by watching specified local files.
1. Arrow 3 indicates that the path created by Supervisor in Zookeeper is /storm/supervisor/. When a new node joins, a znode is created under this path. It is worth noting that this znode is an ephemeral (temporary) node: once the connection between Supervisor and Zookeeper times out or is disconnected, it is automatically deleted. The list of znodes under this path therefore represents the currently active Supervisors, which ensures that Nimbus knows the status of the machines in the cluster in time; this is both the basis on which Nimbus assigns tasks and the basis of Storm's fault tolerance and scalability (a small sketch of this registration follows this list).
2. Arrow 4 indicates that the path from which Supervisor obtains data is /storm/assignments/. This path holds the task assignment information that Nimbus writes for each Topology, from which Supervisor gets all the tasks Nimbus has assigned to it. Supervisor keeps the previous assignment information locally, and by comparing the two it can see whether the assignment has changed; if it has, the affected tasks need to be stopped and the new ones started.
3. Arrow 9 indicates that Supervisor obtains the heartbeat information of all the Workers it has launched from LocalState. Supervisor checks this heartbeat information at regular intervals; if a Worker has not updated its heartbeat within that period, the Worker's running state is considered abnormal. At this point Supervisor kills the Worker (a Worker is essentially a process), and the tasks originally assigned to that Worker are reassigned.
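As a companion to point 1 above, here is a minimal sketch of ephemeral-znode registration with the plain ZooKeeper Java client. The supervisorId and hostname parameters, the plain-text payload, and the class name are assumptions for illustration, not Storm's real serialization; the code also assumes the parent path already exists.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import java.nio.charset.StandardCharsets;

public class SupervisorRegistration {
    // Create an ephemeral znode under the supervisor path. ZooKeeper deletes it
    // automatically when the session times out or the connection is closed,
    // which is how Nimbus learns that this Supervisor has gone away.
    public static void register(ZooKeeper zk, String supervisorId, String hostname)
            throws Exception {
        byte[] payload = hostname.getBytes(StandardCharsets.UTF_8);
        zk.create("/storm/supervisor/" + supervisorId, payload,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }
}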
III. Worker
Worker also needs to use Zookeeper to create and obtain metadata, and it additionally uses a local file to record its own heartbeat information.
1. Arrow 5 indicates that the path created by Worker in Zookeeper is /storm/workerbeats//node-port. When a Worker starts, it creates the corresponding znode, which is equivalent to registering itself. It is important to note that when the Topology is submitted, Nimbus only creates the path /storm/workerbeats/ and does not set its data; the data is written by the Worker after it starts. One purpose of this arrangement is to avoid conflicts when multiple Workers create paths at the same time.
2. Arrow 6 indicates that Worker needs to get the data of the /storm/assignments/ path, which contains the task information assigned to it.
3. Arrow 8 indicates that Worker saves its heartbeat information in LocalState. LocalState actually stores this information in a local file; Worker uses it to keep a heartbeat with Supervisor and must update the heartbeat information every few seconds. Because Worker and Supervisor belong to different processes, Storm uses a local file to deliver the heartbeat.
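The local-file heartbeat in point 3 can be sketched roughly as follows. The file location, the plain-timestamp format, and the class name are assumptions for illustration and not Storm's actual LocalState on-disk layout.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LocalHeartbeat {
    // Worker side: overwrite the heartbeat file with the current timestamp
    // every few seconds.
    public static void beat(Path heartbeatFile) throws Exception {
        byte[] now = Long.toString(System.currentTimeMillis()).getBytes(StandardCharsets.UTF_8);
        Files.write(heartbeatFile, now,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
    }

    // Supervisor side: treat the Worker as dead if the file is missing or has
    // not been refreshed within the timeout window.
    public static boolean isAlive(Path heartbeatFile, long timeoutMs) throws Exception {
        if (!Files.exists(heartbeatFile)) {
            return false;
        }
        long last = Long.parseLong(
                new String(Files.readAllBytes(heartbeatFile), StandardCharsets.UTF_8).trim());
        return System.currentTimeMillis() - last < timeoutMs;
    }
}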
IV. Executor
Executor only uses Zookeeper to record its own runtime error messages. Arrow 7 represents the path created by Executor in Zookeeper, where each Executor records the errors that occur while it is running.
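For illustration only, this error recording could look roughly like the sketch below, using a persistent sequential znode so that each error gets its own ordered child node. The /storm/errors path layout, the parameters, and the plain-text payload are assumptions; Storm's real error records are serialized differently.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import java.nio.charset.StandardCharsets;

public class ExecutorErrorReporter {
    // Append one error entry per call; assumes the parent path already exists.
    public static void reportError(ZooKeeper zk, String topologyId,
                                   String componentId, Throwable error) throws Exception {
        String base = "/storm/errors/" + topologyId + "/" + componentId + "/e";
        byte[] payload = String.valueOf(error).getBytes(StandardCharsets.UTF_8);
        zk.create(base, payload, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.PERSISTENT_SEQUENTIAL);
    }
}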
V. Heartbeat maintenance
As can be seen from the above, heartbeat information must be maintained between Nimbus, Supervisor and Worker; the heartbeats work as follows:
1. The heartbeat between Nimbus and Supervisor is maintained through the data under the /storm/supervisor/ path. This node is an ephemeral (temporary) node: as soon as a Supervisor dies, the data at the corresponding path is deleted, and Nimbus reassigns the tasks that were originally assigned to that Supervisor.
2. The heartbeat between Worker and Nimbus is maintained through the data under the /storm/workerbeats//node-port path. Nimbus reads the data under this path at regular intervals and keeps the most recent information in memory. If it finds that a Worker's heartbeat has not been updated for a period of time, the Worker is considered dead, and Nimbus reassigns the tasks that were assigned to that Worker to other Workers (a small sketch of this check follows this list).
3. The heartbeat is maintained between Worker and Supervisor through the local file (LocalState).
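The "keep the last heartbeat in memory and compare" check from point 2 can be sketched as follows; the class name, the worker-slot key format, and the timeout handling are assumptions for illustration rather than Storm's actual code.

import java.util.HashMap;
import java.util.Map;

public class HeartbeatTracker {
    private static final class Entry {
        long heartbeat;     // last heartbeat value read from ZooKeeper
        long lastAdvanced;  // local time at which the heartbeat last changed
    }

    private final Map<String, Entry> state = new HashMap<>();
    private final long timeoutMs;

    public HeartbeatTracker(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    // Called each polling round with the latest heartbeat read for a worker slot
    // (e.g. "node:port"); returns true if the heartbeat has not advanced for
    // longer than the timeout and the worker's tasks should be reassigned.
    public boolean updateAndCheck(String workerSlot, long heartbeat, long now) {
        Entry e = state.get(workerSlot);
        if (e == null) {
            e = new Entry();
            e.heartbeat = heartbeat;
            e.lastAdvanced = now;
            state.put(workerSlot, e);
            return false;
        }
        if (heartbeat != e.heartbeat) {
            e.heartbeat = heartbeat;
            e.lastAdvanced = now;
            return false;   // heartbeat advanced: worker is alive
        }
        return now - e.lastAdvanced >= timeoutMs;   // unchanged for too long
    }
}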
The above is the relationship between the nodes in Storm and Zookeeper. If you have similar doubts, you may refer to the analysis above to understand it. If you want to know more, you are welcome to follow the industry information channel.