Our official Rancher technology community has been established for some time, and through our offline meetups and online talks many users have become quite proficient with Rancher. Some advanced users have started to split their business into microservices and migrate it onto Rancher. Because every business has its own complexity and particularities, the migration may require some of Rancher's advanced features, or even extensions to Rancher itself, which in turn requires an understanding of how certain Rancher components are implemented.
This article introduces Rancher's event mechanism. Because documentation on this topic is extremely scarce, the analysis below is based only on hands-on experimentation and reading the source code; corrections are welcome if anything is wrong.
In a large-scale system architecture, an event mechanism is usually message-driven. It greatly improves the fault tolerance and flexibility of a distributed architecture and is an effective tool for decoupling components. Rancher has to manage a large number of agents while splitting its functionality into many service components, so an event mechanism is essential. To implement one, we would normally reach for middleware such as RabbitMQ, ActiveMQ, or ZeroMQ. Rancher, however, adopts a very lightweight implementation based on the websocket protocol. The advantage is that it greatly simplifies Rancher's deployment: there is no extra MQ cluster to maintain, and websocket messaging is simple enough to be supported by libraries in virtually every language.
Here a question arises: websocket is not a real industrial-grade MQ, and messages are not persisted. If the handling of an event goes wrong, or a message is lost, how does Rancher guarantee the atomicity and consistency of its resources? Rancher has the concept of a processpool, which can be thought of as an execution pool for all events. When an operation arrives from the API/UI/CLI, Rancher breaks it into multiple events and puts them into the processpool. For example, when a container is deleted, a compute.instance.remove event is placed in the processpool and sent to the agent on the corresponding host. When the agent finishes processing, it sends a reply back to rancher-server. If rancher-server never receives that reply, whether because the message was lost on the network or the agent failed to execute the operation, cattle puts the event back into the processpool and the whole process repeats. The container's status is not updated in the DB until compute.instance.remove completes; until then the container stays locked and cannot be updated by other services. Of course, cattle does not retry these events forever: most of them have a TIMEOUT after which they are no longer executed (although some resources have no TIMEOUT mechanism).
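To make the event/reply round trip more concrete, the sketch below models the rough shape of a cattle event and the reply an agent would send back once it finishes processing. The field set and the reply layout are illustrative assumptions for this article, not Rancher's canonical schema.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event approximates the JSON payload cattle pushes over the websocket.
// The field set here is illustrative; the real schema carries more fields.
type Event struct {
	ID         string                 `json:"id"`
	Name       string                 `json:"name"`       // e.g. "compute.instance.remove"
	ResourceID string                 `json:"resourceId"` // resource the event operates on
	ReplyTo    string                 `json:"replyTo"`    // name the agent must use when replying
	Data       map[string]interface{} `json:"data"`
}

// newReply builds the reply an agent sends back to rancher-server once it
// has finished processing an event. If this reply never arrives, cattle
// puts the original event back into the processpool and retries.
func newReply(in Event) Event {
	return Event{
		Name:       in.ReplyTo,
		ResourceID: in.ResourceID,
		Data:       map[string]interface{}{"previousId": in.ID},
	}
}

func main() {
	in := Event{
		ID:         "evt-123",
		Name:       "compute.instance.remove",
		ResourceID: "1i42",
		ReplyTo:    "reply.evt-123",
	}
	out, _ := json.Marshal(newReply(in))
	fmt.Println(string(out))
}
```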
As mentioned above, you can actually watch this process on the UI: the Running tab of the Processes page on the Rancher UI shows this information in real time. Processes is very useful when troubleshooting Rancher-related problems, so it is worth forming the habit of "check Processes first".
So how do we set the URL for listening to events? It is very simple:
ws://<server_ip>:8080/v1/projects/<project_id>/subscribe?eventNames=xxxx
In addition, you need to add a basic-auth header:
Authorization: Basic + base64encode(<access_key>:<secret_key>)
If the subscriber is an agent component on a host, you also need to add the agentId parameter:
ws://<server_ip>:8080/v1/projects/<project_id>/subscribe?eventNames=xxxx&agentId=xxxx
The agentId is generated when the host is registered. If the agentId parameter is omitted, every event, relevant or not, is sent to all host agents, producing something resembling a "broadcast storm".
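As a concrete illustration, here is a minimal Go subscriber. The server address rancher.example.com, project ID 1a5 and the API key values are placeholders, and the third-party gorilla/websocket client is simply a convenient choice for this sketch, not something mandated by Rancher.

```go
package main

import (
	"encoding/base64"
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

func main() {
	// Placeholders: substitute your own server address, project ID and API keys.
	url := "ws://rancher.example.com:8080/v1/projects/1a5/subscribe?eventNames=resource.change"
	accessKey, secretKey := "ACCESS_KEY", "SECRET_KEY"

	// Basic-auth header: "Basic " + base64(accessKey:secretKey).
	header := http.Header{}
	auth := base64.StdEncoding.EncodeToString([]byte(accessKey + ":" + secretKey))
	header.Set("Authorization", "Basic "+auth)

	conn, _, err := websocket.DefaultDialer.Dial(url, header)
	if err != nil {
		log.Fatalf("subscribe failed: %v", err)
	}
	defer conn.Close()

	// Print every event pushed by rancher-server.
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Fatalf("read failed: %v", err)
		}
		log.Printf("event: %s", msg)
	}
}
```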
Many components run on the host agent; among them, python-agent is responsible for receiving and handling event information. Its log can be viewed in the /var/log/rancher/agent.log file on the host.
Careful readers may wonder: when we add a host, the command that starts the agent container does not specify cattle-access-key or cattle-secret-key, so how does python-agent obtain these two keys at runtime?
In fact, there are two kinds of apikey in Rancher: the familiar apikey created manually on the UI, and agentApikey, a system-level key set up specifically for agents. When a host is added, an agentApikey is first sent to that host. You can query the relevant records in the credential table of the cattle database:
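If you have direct access to the cattle MySQL database, a quick query sketch like the one below can list those records. The column names (kind, public_value, state) and the 'agentApiKey' kind value are assumptions based on the cattle schema and may need adjusting for your version.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN: adjust user, password, host and database name.
	db, err := sql.Open("mysql", "cattle:cattle@tcp(127.0.0.1:3306)/cattle")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// List agent-level API keys; column and kind names are assumed from the cattle schema.
	rows, err := db.Query(
		"SELECT id, kind, public_value, state FROM credential WHERE kind = 'agentApiKey'")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var kind, publicValue, state string
		if err := rows.Scan(&id, &kind, &publicValue, &state); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("id=%d kind=%s public_value=%s state=%s\n", id, kind, publicValue, state)
	}
}
```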
Where are the eventNames defined? The following two files can be referenced:
https://github.com/rancher/cattle/blob/master/code/iaas/events/src/main/java/io/cattle/platform/iaas/event/IaasEvents.java defines the system-level events.
https://github.com/rancher/cattle/blob/master/code/packaging/app-config/src/main/resources/META-INF/cattle/process/spring-process-context.xml details the event definitions for each resource (host, volume, instance, stack, service, and so on).
In addition, once an operation from the UI/CLI/API is decomposed into multiple events, each event is saved in MySQL. After successful execution an event is only marked as purged; its record is not actually deleted, so the corresponding tables (container_event, service_event and process_instance) grow without bound.
Rancher provides a periodic cleanup mechanism to solve this problem.
events.purge.after.seconds controls when container_event and service_event records are purged (two weeks by default); process_instance.purge.after.seconds controls when process_instance records are purged (one day by default). Both settings can be modified dynamically at http://<server_ip>:8080/v1/settings.
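As a sketch of such a dynamic modification, the snippet below sends a PUT to the setting resource with a new value. The assumption that a v1 setting accepts a JSON body with a "value" field should be verified against your Rancher version, and the server address, API keys and retention value are placeholders.

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Placeholders: server address, API keys and the new retention (in seconds).
	url := "http://rancher.example.com:8080/v1/settings/events.purge.after.seconds"
	body := bytes.NewBufferString(`{"value":"604800"}`) // one week, as an example

	req, err := http.NewRequest(http.MethodPut, url, body)
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth("ACCESS_KEY", "SECRET_KEY")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```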
Let's put this into practice and see how to listen to Rancher events from a program.
Rancher provides the resource.change event, which requires no reply and therefore does not affect the operation of the Rancher system; it is exposed specifically so that developers can implement their own customized functionality. We will take resource.change as the example for this exercise.
Most Rancher components are written in Golang, so we will use the same language.
To implement the program quickly, it helps to know a few small tools that assist rapid development:
Trash, a Golang package management tool that lets us pin the path and version of each dependency; it is very lightweight and convenient.
Dapper, a container-based tool for compiling Golang projects, which gives us a unified build environment.
go-skel, which quickly scaffolds a Rancher-style microservice and saves us a lot of boilerplate code; it also integrates the two tools above, Trash and Dapper.
For more information, please refer to an article I wrote earlier:
Pick up the gadgets in the Rancher community.
Returning to the topic of this article: first we create a project named scale-subscriber based on go-skel (the name is arbitrary). Generating the skeleton takes a while, so be patient.
Once it finishes, we move the project into the GOPATH and start adding the relevant logic.
Before that, we should consider adding a healthcheck port. In fact, nearly all of Rancher's microservice components expose a healthcheck port in addition to the main program. This is mainly to cooperate with Rancher's healthcheck feature: by probing this port, Rancher can ensure the reliability of the microservice.
We use Golang's goroutine mechanism to run the main service and the healthcheck service side by side, as sketched below:
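Here is a minimal sketch of that layout. The /healthcheck path and port 10240 are arbitrary choices for illustration, not necessarily what go-skel generates.

```go
package main

import (
	"log"
	"net/http"
)

// startHealthcheck exposes a trivial endpoint that Rancher's healthcheck
// can probe; the port and path here are arbitrary choices for the sketch.
func startHealthcheck() {
	http.HandleFunc("/healthcheck", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})
	log.Println("healthcheck listening on :10240")
	log.Fatal(http.ListenAndServe(":10240", nil))
}

// startEventListener stands in for the main service: subscribing to
// resource.change events and dispatching them to handlers.
func startEventListener() {
	// ... subscribe to resource.change here (see the handler sketch below) ...
	select {} // block forever in this sketch
}

func main() {
	go startHealthcheck() // healthcheck service in its own goroutine
	startEventListener()  // main service in the foreground
}
```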
The core of the main service is to listen for resource.change and register a handler that receives each event's payload, on top of which you can customize and extend your own logic. The https://github.com/rancher/event-subscriber library provided by Rancher makes this quick to implement.
You can implement your own logic in eventhandlers.NewResourceChangeHandler().Handler; a simplified sketch follows:
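The real project registers its handler through the event-subscriber library, whose exact types vary between versions, so the sketch below only approximates the idea: decode the payload fields we care about and print them, which is all this demo needs.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// resourceChangeEvent captures only the fields this demo cares about;
// the real payload contains many more.
type resourceChangeEvent struct {
	Name         string                 `json:"name"`
	ResourceID   string                 `json:"resourceId"`
	ResourceType string                 `json:"resourceType"`
	Data         map[string]interface{} `json:"data"`
}

// handleResourceChange is where custom logic would go. Here we simply
// print the event, mirroring what the demo handler does.
func handleResourceChange(payload []byte) error {
	var ev resourceChangeEvent
	if err := json.Unmarshal(payload, &ev); err != nil {
		return err
	}
	fmt.Printf("received %s: type=%s id=%s data=%v\n",
		ev.Name, ev.ResourceType, ev.ResourceID, ev.Data)
	return nil
}

func main() {
	// Example payload, illustrating the call; in the real program the
	// payload comes from the websocket subscription.
	sample := []byte(`{"name":"resource.change","resourceId":"1s5","resourceType":"stack"}`)
	if err := handleResourceChange(sample); err != nil {
		fmt.Println("handler error:", err)
	}
}
```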
Since we are only demonstrating the event-listening mechanism here, we do not add any real business logic; we simply print out the event information.
Then we run make in the project's root directory; make automatically calls dapper and generates scale-subscriber under the bin directory. Running scale-subscriber starts listening for resource.change events.
Here we can see that the healthcheck service and the event listener are each enabled. Now delete any stack on the UI and scale-subscriber will receive the corresponding event information.
If you want to deploy it in Rancher, you can package the scale-subscriber binary into an image and start it through compose.
But we know that CATTLE_URL, CATTLE_ACCESS_KEY and CATTLE_SECRET_KEY must be specified to start scale-subscriber. The usual approach would be to create an apikey first and then set the corresponding environment variables when starting the service.
The drawback is that private information such as the apikey has to be exposed, and maintaining these keys manually is very inconvenient.
Rancher provides a much more convenient way: add two labels to the service:
io.rancher.container.create_agent: true
io.rancher.container.agent.role: environment
After setting these two labels, the Rancher engine automatically creates an apikey and injects the corresponding values into the container's environment. As long as your program reads these values from the environment variables, it runs smoothly.
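For completeness, here is a tiny sketch of reading those injected variables at startup. The variable names come from this article; the fail-fast behavior is just an illustrative choice.

```go
package main

import (
	"log"
	"os"
)

func main() {
	// These variables are injected by Rancher when the two labels above are set.
	url := os.Getenv("CATTLE_URL")
	accessKey := os.Getenv("CATTLE_ACCESS_KEY")
	secretKey := os.Getenv("CATTLE_SECRET_KEY")

	if url == "" || accessKey == "" || secretKey == "" {
		log.Fatal("CATTLE_URL / CATTLE_ACCESS_KEY / CATTLE_SECRET_KEY must be set")
	}

	log.Printf("connecting to %s as %s", url, accessKey)
	// ... pass url/accessKey/secretKey to the event subscription code ...
}
```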
Original source: RancherLabs