What is the continuous release and deployment of XXOps?


This article shares how XXOps does continuous release and deployment. The editor finds it very practical and hopes you will get something out of reading it.

Why do continuous release and deployment first?

First of all, the root cause is the need to improve code delivery efficiency (which sounds like stating the obvious). Technically, the main driver is the split from a single monolithic project into service-oriented applications.

The history behind the single monolithic project is easy to understand. In the early startup stage, with limited technical staff, engineers must devote all their energy to implementing business requirements in order to quickly find the right direction for the business. Technology and architecture are secondary at that point, and in the early stage the number of users and the business model are not that complex, so there is no need for an elaborate architecture: LNMP is enough. Judging from our practice, this technology choice was absolutely correct.

However, as business volume and business complexity grow (for example e-commerce users, merchants, stores, goods, transactions and payment systems), service quality requirements become more and more stringent (low latency, high availability), delivery cycles get shorter and shorter (picture the product manager standing behind the programmer), and the number of engineers reaches several hundred. At this point the monolith exposes many problems: serious coupling in the code logic, underlying public methods nobody dares to touch, a huge amount of code that no one is familiar with in its entirety, and hundreds of engineers committing to the same project at the same time, so code maintenance consumes a lot of energy. The picture below shows the scene at that time:

Therefore, the solution evolved towards Java services (the specific reasons are not elaborated here), which is shown simply as follows:

After the single application is split into service-oriented applications, it becomes many applications, large and small, as the following figure shows:

The thorniest problem encountered at this point is releasing. Each of these many applications has its own code, and Java services involve a series of issues such as compilation and build, graceful service online/offline, and so on. There are also many links in the chain (as we will see later), so simply relying on manually executed scripts and human coordination is no longer feasible. This directly affects the efficiency of the whole R&D system across development, testing and operations, so it must be solved first.

OK, so we started to build the publishing system. To refine it: what a release does is compile and package the submitted application code, publish it to the specified directory on the application's corresponding hosts (by IP), and bring the application service online and offline gracefully (graceful start and stop).

It is not hard to understand. In fact, we can abstract the three most important steps of a release, and we keep improving around these three steps, as follows:

The three main links are described in detail below, stage by stage:

Code submission link (branch merge management)

The code management tool is GitLab, and the most important part of the code submission process is branch merge management (this is where development standards and conventions come in). Our strategy is roughly as follows:

The description is as follows:

1. The master branch stays in sync with the online application code, i.e. it can be released, deployed and run online at any time.

2. Development branches are usually expressed as feature/defect branches. For example, to develop a new requirement, a feature branch is pulled from the current master as the baseline. The same application can have multiple feature branches under parallel development.

3. The release branch, indicated by release, merges all the submitted, integrated code at release time, forming release_<environment>_<timestamp> as the branch name. For example, release_dev_01_29_20_52_10 means a temporary release branch created when releasing to the dev environment at 20:52:10 on January 29th.

4. When going from pre-release to online, a release_online branch is created based on release_pre_xxxx, the release branch of the current pre-release environment. As the online release branch, release_online is merged back into master after the online release, which ensures that the code released online eventually stays consistent with master.
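A minimal sketch of how such a release branch might be cut, assuming a small Python helper around the git CLI; the function name cut_release_branch and the repository path are hypothetical and not part of the actual publishing system:

```python
import subprocess
from datetime import datetime

def cut_release_branch(repo_dir: str, env: str) -> str:
    """Create a temporary release branch named release_<env>_<MM_DD_HH_MM_SS>."""
    stamp = datetime.now().strftime("%m_%d_%H_%M_%S")
    branch = f"release_{env}_{stamp}"  # e.g. release_dev_01_29_20_52_10
    subprocess.run(["git", "-C", repo_dir, "checkout", "-b", branch], check=True)
    return branch

# Example: cut a dev release branch and merge an integrated feature branch into it.
# branch = cut_release_branch("/data/repos/demo-app", "dev")
# subprocess.run(["git", "-C", "/data/repos/demo-app", "merge", "--no-ff", "feature/new-requirement"], check=True)
```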

Compilation and build link

Having covered the code submission process, the next step is the build. Taking Java as an example, we use two tools: Docker and Maven. Docker mainly provides a clean, isolated compilation environment, and Maven is our dependency management and packaging tool. The whole build process is as follows:

Taking Java as an example, the process is briefly described as follows:

1. First prepare a JDK compilation image consistent with the online environment. When a new build task comes in, a corresponding Docker instance is created to compile the code.

2. The build task clones the code into the specified compilation directory according to the Git address in application configuration management. After the Docker instance starts, the compilation directory is mounted into the instance.

3. For a Java application, the mvn package command is executed inside this Docker instance, eventually producing a releasable software package, xxx.war.

4. Similarly, build images are prepared for C++ (cmake & make), Go and NodeJS; only the packaging command differs (cmake & make, go install, and so on), and a distributable software package is eventually produced.

5. After the build completes, the Docker instance is destroyed.

Docker plays an important role here: it provides a clean, interference-free compilation environment, and under concurrent packaging it can quickly create multiple parallel instances to serve as build environments and destroy them after use, so the efficiency advantage is also significant.

6. Regarding configuration management, the design at the time was relatively simple. Our approach was to maintain three configuration files, dev_pom.xml, pre_pom.xml and online_pom.xml, corresponding to the development, pre-release and online environments, and to swap in the appropriate file according to the target environment. This is actually not scalable enough: more configuration files would be created for multiple server rooms and groups, and configuration items with sensitive information are not kept secret enough (although these configurations have since been encrypted and moved to the middleware configuration center). A better approach is the auto-config solution that Alibaba has open-sourced, which will not be discussed in detail here.
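As an illustration only, here is a minimal sketch of what one containerized build step might look like, assuming the environment-specific pom is swapped in before mvn package runs inside a throwaway JDK/Maven container; the image name and paths are placeholders, not the real build system:

```python
import shutil
import subprocess

def build_in_docker(workdir: str, env: str,
                    image: str = "registry.example.com/build/jdk8-maven") -> None:
    """Run mvn package for one application inside a disposable Docker container.

    workdir is the directory the build task cloned the code into; env is one of
    dev / pre / online, matching the dev_pom.xml / pre_pom.xml / online_pom.xml
    convention described above.
    """
    shutil.copyfile(f"{workdir}/{env}_pom.xml", f"{workdir}/pom.xml")  # pick the env config
    subprocess.run(
        [
            "docker", "run", "--rm",          # instance is destroyed after the build
            "-v", f"{workdir}:/build",        # mount the compilation directory
            "-w", "/build",
            image,
            "mvn", "clean", "package", "-DskipTests",
        ],
        check=True,
    )

# build_in_docker("/data/build/demo-app", "online")  # produces /data/build/demo-app/target/xxx.war
```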

Deployment link

After the code has been committed, compiled and built, it is time for the deployment process that publishes the code online: publishing the code to the specified directory on the application's corresponding hosts and bringing the application service online and offline gracefully. It looks simple, but see the following figure:

There are many links in this process, and each link hides a lot of detail, so the whole deployment chain is quite complicated. Here is the overall idea:

0. Get the application-to-host-IP mapping from the CMDB, then start from step 1. The subsequent steps can be run for a single machine, or for multiple machines in batches or groups at the same time. (We start from step 0 because the CMDB and application configuration management foundation mentioned in the previous article must be laid first; without it, the following steps cannot be carried out smoothly.)

1. Check whether the service on each machine is running normally. If it is, the machine can be released to; if the service itself is abnormal, record it or skip the machine.

2. Download the war package to the specified directory (the application's directory information; application configuration management plays a role again).

3. Turn off monitoring to avoid false alarms when the service goes offline and stops.

4. Graceful offline: unregister the RPC service from the LB, or remove the Web service from Nginx (not involved if no HTTP service is exposed). All offline actions are implemented by calling APIs.

5. After going offline, automatically confirm that no new traffic is arriving, stop the application, release the code, and then start the application (application configuration management provides the start and stop commands, etc.).

6. Graceful online: perform health checks to verify that the process and application status are normal. If all checks pass, bring the service online and re-enable monitoring.

7. Release in batches. Suppose an application has 100 hosts: we cannot stop and publish all of them at once, or the service would be interrupted, but doing them one by one is far too slow. So we can split the hosts into five batches of 20, or batches of 10, which guarantees an online upgrade while keeping efficiency up. It can of course be a little more complicated, with groups as well as batches, or batches that grow step by step: 5 hosts in the first batch, 10 in the second, 20 in the ones after that, and so on. Examples are as follows (see also the sketch after this list):
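Purely as an illustration, a minimal sketch of the per-host flow (steps 1-6) and the growing-batch rollout (step 7), assuming hypothetical stand-in functions for the CMDB, monitoring, LB/Nginx and remote-execution APIs; none of these names come from the actual system:

```python
import logging
import time
from typing import List

log = logging.getLogger("deploy")

# Hypothetical stand-ins for the CMDB, monitoring, LB/Nginx and remote
# execution APIs mentioned above; real implementations are not shown.
def hosts_for(app: str) -> List[str]: ...            # CMDB: application -> host IPs
def app_dir(app: str, host: str) -> str: ...         # CMDB: target directory on the host
def service_healthy(host: str, app: str) -> bool: ...
def download_war(host: str, war_url: str, dest: str) -> None: ...
def mute_alarms(host: str, app: str) -> None: ...
def unmute_alarms(host: str, app: str) -> None: ...
def lb_remove(host: str, app: str) -> None: ...      # unregister from LB / Nginx via API
def lb_add(host: str, app: str) -> None: ...
def wait_until_idle(host: str, app: str) -> None: ...
def stop_app(host: str, app: str) -> None: ...
def release_code(host: str, app: str) -> None: ...
def start_app(host: str, app: str) -> None: ...
def wait_until_healthy(host: str, app: str) -> None: ...

BATCH_SIZES = [5, 10, 20, 20, 20, 25]  # step-by-step batches for a 100-host application

def deploy_host(host: str, app: str, war_url: str) -> None:
    """Release a single host gracefully, following steps 1-6 above."""
    if not service_healthy(host, app):                # step 1: record/skip abnormal hosts
        log.warning("skipping unhealthy host %s", host)
        return
    download_war(host, war_url, app_dir(app, host))   # step 2: fetch the war package
    mute_alarms(host, app)                            # step 3: avoid false alarms
    lb_remove(host, app)                              # step 4: graceful offline
    wait_until_idle(host, app)                        # step 5: no new traffic, then restart
    stop_app(host, app)
    release_code(host, app)
    start_app(host, app)
    wait_until_healthy(host, app)                     # step 6: health check before going online
    lb_add(host, app)
    unmute_alarms(host, app)

def deploy_app(app: str, war_url: str) -> None:
    """Roll out across all hosts of the application in growing batches (step 7)."""
    hosts = hosts_for(app)
    done = 0
    for size in BATCH_SIZES:
        for host in hosts[done:done + size]:
            deploy_host(host, app, war_url)
        done += size
        if done >= len(hosts):
            break
        time.sleep(60)                                # observe the batch before continuing
```

The batch sizes and the pause between batches are placeholders; in practice each batch would be verified before the next one starts.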

Throughout the process, we can see:

1. Start from the scenario, comb through the business process in detail, break it into fine-grained links, automate each step, and string the steps together into a process.

2. The CMDB and application configuration management content play a fundamental role throughout this part.

The above is what the continuous release and deployment of XXOps looks like. The editor believes it contains knowledge points that you may encounter or use in your daily work, and hopes you can learn more from this article.
