This article focuses on how to deploy continuously through Docker containers. The approach described here is simple, fast, and practical, so let's walk through it step by step.
The continuous deployment pipeline
A continuous deployment pipeline is a series of steps executed on every code commit. Its purpose is to perform a set of tasks that deploy a fully tested, functional service or application to production. The only manual operation is the check-in to the code repository; every subsequent step happens automatically. This removes (to some extent) the factor of human error, which increases reliability, and it lets machines do what they do best (running repetitive processes rather than thinking creatively), which increases throughput. The reason every commit has to go through this pipeline lies in the word "continuous": if you choose to delay the execution of the process, for example running it only at the end of a sprint, then neither testing nor deployment is continuous any longer.
If you delay deploying to the test and production environments, you also delay discovering potential problems in the system, and fixing them takes more effort. Fixing a problem a month after it was introduced is far more expensive than fixing it within a week. Likewise, if a bug notification arrives within minutes of a code commit, the time needed to locate the bug is negligible. However, the point of continuous deployment is not only saving money on maintenance and bug fixes, but also releasing new features to production faster. The shorter the time between a feature being developed and users actually using it, the sooner you start benefiting from it.
Instead of listing all the build steps that could be included in the pipeline, let's start with a minimal subset and explore a possible solution. This minimal subset lets you test, build, and deploy services. These tasks are essential: without testing there is no guarantee the service works properly, without a build there is nothing to deploy, and without deployment users cannot benefit from the new release. A rough sketch of these three steps as shell commands follows below.
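To make the minimal subset concrete, here is a sketch of the three steps. The service name books-ms, the registry address registry.example.com, the version tags, and the assumption that the test image's default command runs the whole test suite are all placeholders for illustration, not part of any particular pipeline.

```bash
#!/usr/bin/env bash
set -e  # abort the pipeline as soon as any step fails

SERVICE=books-ms               # placeholder service name
REGISTRY=registry.example.com  # placeholder registry address

# 1. Test: run the test suite inside a disposable container
docker build -f Dockerfile.tests -t ${SERVICE}-tests .
docker run --rm ${SERVICE}-tests

# 2. Build: package the tested code as a deployable image and register it
docker build -t ${REGISTRY}/${SERVICE}:latest .
docker push ${REGISTRY}/${SERVICE}:latest

# 3. Deploy: pull the image and run it (in practice this step is executed
#    against the production Docker host, not the build server)
docker pull ${REGISTRY}/${SERVICE}:latest
docker run -d --name ${SERVICE} ${REGISTRY}/${SERVICE}:latest
```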
Testing
The traditional way of testing software is to unit test the source code. Although this approach can deliver high code coverage, it does not necessarily guarantee that features work as expected, nor does coverage of individual code units (methods, functions, classes, and so on) prove that the system behaves as designed. To verify features you need functional tests, which lean towards black-box testing and are not directly tied to the code. One problem with functional testing is system dependencies: a Java application may require a specific JDK, while a web application may need to be tested against a large number of browsers. The same test suite often has to be run repeatedly under different combinations of system conditions.
The unfortunate fact is that many organizations do not test thoroughly enough to be confident that a new release can be deployed to production without human intervention. Even when the tests themselves are reliable, they are often not run under the same conditions that will occur in production. The reason is related to the way we manage infrastructure: setting up infrastructure manually is very expensive. How many servers would you need in order to cover every browser users might use? Ten, or a hundred? What do you do when one project's runtime dependencies differ from another's?
Most enterprises need many different environments. For example, one environment has to run Ubuntu and another Red Hat, or one needs JDK 8 and another JDK 7. This is very time-consuming and laborious, especially when those environments are static (as opposed to the "create and destroy" approach offered by cloud computing). Even if you set up enough servers to cover every combination, you still run into problems of speed and flexibility. If a team decides to develop a new service, or to refactor an existing one with a different technology, a lot of time is wasted between requesting a new environment and that environment becoming fully operational, and during that time the continuous deployment process grinds to a halt. Add microservices to the picture and the wasted time grows exponentially. In the past, developers only had to care about a limited number of applications; now they have to care about dozens, hundreds, or even thousands of services. After all, the benefits of microservices include the freedom to choose the best technology for each use case and the ability to release quickly: you do not want to wait until the entire system is developed, you want to release as soon as a feature belonging to a single microservice is complete. A single bottleneck of this kind can drastically reduce overall speed, and in many cases that bottleneck is the infrastructure.
Many of these testing problems can be handled easily with Docker containers. Everything needed to test a service or application can, and should, be set up inside a container. Take the Dockerfile.tests file as an example: it defines a container for testing a microservice whose back end is developed in Scala, whose front end uses Polymer, and whose database is MongoDB. It tests a fully self-contained service, isolated from the other services in the system. I do not intend to go into the details of the Dockerfile definition, but simply to list what it contains: Git; NodeJS, Gulp, and Bower for the front end; Scala and SBT for the back end; and MongoDB for the database. Some of the tests also require Chrome and Firefox. The source code of the service and all of its dependencies are included as well. I am not saying your services should use the same technology stack; the point is that, in many cases, testing requires a large number of runtime and system dependencies.
If you had to build such a server, it would mean a long wait until all of those elements were set up, and when other services made similar requests you would likely run into all kinds of conflicts; after all, a server does not exist to host countless potentially conflicting dependencies. You could create a VM for testing each service instead, but that means a huge waste of resources and a slow initialization process. With Docker containers this work no longer belongs to the infrastructure team and is transferred to the developers. Developers choose the components the application needs during testing, define them in a Dockerfile, have the team's continuous deployment tool build and run the container, and let the container execute the required tests. Once the code has passed all the tests, you can move on to the next phase: building the service itself. The container used for testing should be registered in a Docker registry (private or public) so that it can be reused later. Besides the benefits already mentioned, you can destroy the container after the test run and return the host to its original state, so the same server (or cluster of servers) can be used to test all of the services you develop.
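A sketch of that developer-driven workflow, using the same placeholder names as above; the test image's default command is assumed to run the whole suite.

```bash
# Build the test image from the Dockerfile.tests the developers maintain
docker build -f Dockerfile.tests -t registry.example.com/books-ms-tests .

# Run the tests; --rm removes the container when it exits,
# returning the host to its original state
docker run --rm registry.example.com/books-ms-tests

# Register the test image so later pipeline runs and other servers can reuse it
docker push registry.example.com/books-ms-tests
```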
The process in the following figure is starting to look a little complicated.
Building
Once all the tests have passed, you can create the container that will eventually be deployed to production. Since you will most likely deploy it to a different server than the one it was built on, you should also register it in the Docker registry.
When testing is finished and the new release has been built, you are ready to deploy it to the production server. All you have to do is pull the corresponding image and run a container from it.
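A sketch of the build and deployment steps, assuming the same placeholder image name, an arbitrary version tag, and a service that listens on port 8080 inside the container.

```bash
# On the build server: package the tested code and register the image
docker build -t registry.example.com/books-ms:1.2 .
docker push registry.example.com/books-ms:1.2

# On the production server: pull the image and run the container
docker pull registry.example.com/books-ms:1.2
docker run -d --name books-ms -p 8080:8080 registry.example.com/books-ms:1.2
```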
Deployment
Once the container has been pushed to the registry, you can deploy your microservices after every check-in and deliver new features to users at unprecedented speed. The head of the business will be very happy and reward you, and you will feel that your work is valuable and meaningful.
But the process defined so far is still a long way from a complete continuous deployment pipeline. It is missing many steps, considerations, and necessary paths. Let's identify these problems and solve them one by one.
Safe deployment with the blue-green process
Perhaps the most dangerous step in the entire pipeline is deployment. If we pull a new release and start running it, Docker Compose replaces the old one with the new one. In other words, there is a certain amount of downtime: Docker has to stop the old release and start the new one, and your service needs to initialize. Even though this may take only minutes, seconds, or even microseconds, it is still downtime. If you adopt microservices and continuous deployment, releases will happen more often than before; eventually you may deploy several times a day. However often you decide to release, interruptions to users should be avoided.
The solution to this problem is blue-green deployment. If you are new to the topic, you may want to read my article on blue-green deployment. In a nutshell, the process deploys a new release so that it runs in parallel with the old one. You can call one version "blue" and the other "green". Because the two run in parallel, there is no downtime (at least none caused by the deployment process). Running two versions in parallel opens up new possibilities, but it also creates new challenges.
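As a sketch, and assuming the service listens on port 8080 inside the container, the two colors can simply be two containers published on different host ports; the names, tags, and ports are placeholders.

```bash
# The "blue" release is the one users currently reach through the proxy
docker run -d --name books-ms-blue  -p 8081:8080 registry.example.com/books-ms:1.1

# Deploy the "green" release next to it on a different host port;
# users keep hitting blue until the proxy is reconfigured
docker run -d --name books-ms-green -p 8082:8080 registry.example.com/books-ms:1.2
```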
The first thing to consider when implementing blue-green deployment is how to redirect user requests from the old release to the new one. With the earlier deployment approach you simply replaced the old release with the new one, so both ran on the same server and port. A blue-green deployment runs two versions in parallel, each on its own port. You are probably already using a proxy service (NGINX, HAProxy, and so on), so you face a new challenge: the proxy configuration can no longer be static; it has to change with every new release. If you deploy to a cluster, the process becomes even more complex: not only the port but also the IP address changes. To use a cluster effectively, services should be deployed to whichever server is most suitable at the time, based on criteria such as available memory, disk, and CPU type. That way services are distributed in the best possible way and the available resources are used far more efficiently. This creates new problems, the most pressing of which is how to find out the IP address and port of the service you have just deployed. The answer is service discovery.
Service discovery consists of three parts. First, you need a service registry to store information about the services. Second, you need a process that registers new services and deregisters stopped ones. Finally, you need a way to retrieve that information: for example, when you deploy a new release, the registration process stores its IP address and port in the service registry, and the proxy can then discover this information and use it to reconfigure itself. Common service registries are etcd, Consul, and ZooKeeper. Registrator can handle registering and deregistering services, while confd and Consul Template can handle discovery and configuration templating.
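A rough sketch of one possible combination (Consul as the registry, Registrator for registration, Consul Template for reconfiguring NGINX). The images, addresses, and the service.ctmpl template are assumptions for this example, and the exact flags differ between tool versions.

```bash
# Run Consul as the service registry (single dev-mode node, for illustration only)
docker run -d --name consul -p 8500:8500 consul agent -dev -client=0.0.0.0

# Registrator watches the Docker daemon and registers/deregisters
# every container it sees in Consul
docker run -d --name registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator consul://localhost:8500

# Consul Template regenerates the proxy configuration from a template
# and reloads NGINX whenever the registered services change
consul-template \
  -consul-addr "localhost:8500" \
  -template "service.ctmpl:/etc/nginx/conf.d/service.conf:nginx -s reload" \
  -once
```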
Now that you have a mechanism for storing and retrieving service information, you can use it to reconfigure the proxy. The only remaining question is which version (color) to deploy. When deploying manually you naturally know which color you deployed last time: if it was green, you deploy blue next. When everything is automated, this information has to be stored somewhere the deployment process can reach it. Since service discovery is already part of the process, you can store the deployment color alongside the service's IP address and port and retrieve it whenever you need it.
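One way to store the color, sketched here with Consul's key/value HTTP API; the key name books-ms/color is made up for this example.

```bash
# Look up the color that is currently live (default to "green" if nothing is stored yet)
CURRENT_COLOR=$(curl -s http://localhost:8500/v1/kv/books-ms/color?raw)
CURRENT_COLOR=${CURRENT_COLOR:-green}

# The next deployment gets the other color
if [ "$CURRENT_COLOR" = "blue" ]; then NEXT_COLOR=green; else NEXT_COLOR=blue; fi

# ... deploy the ${NEXT_COLOR} release, test it, reconfigure the proxy ...

# Record the new color once the proxy points at the new release
curl -s -X PUT -d "$NEXT_COLOR" http://localhost:8500/v1/kv/books-ms/color
```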
With the above in place, the pipeline reaches the state shown in the figure below. As the number of steps has grown, I have divided them into three groups: pre-deployment, deployment, and post-deployment.
Running pre-integration and post-integration tests
You may have noticed that the first step in the pipeline is running the tests. While running tests is critical, and gives you confidence that the code will (probably) work as expected, it cannot verify that the service you are about to deploy to production will actually behave as expected. Things can go wrong in many places: the database may not be installed correctly, or a firewall may block access to the service. There are plenty of reasons why a service might not work in production. Even if the code works as intended, that does not mean you have verified that the deployed service is configured correctly. Even if you set up a staging server, deploy the service there, and run another round of tests, you can never be completely sure you will get the same results in production. To distinguish between the different kinds of tests, I call this first group "pre-deployment" tests. I deliberately avoid a more precise name because the types of tests run at this stage differ from project to project: they may be unit tests, functional tests, or something else. Whatever their type, what they have in common is that they run before the service is built and deployed.
The blue-green process gives you a new opportunity. Because the old release runs in parallel with the new one, you can test the new release before reconfiguring the proxy to point at it. That way you can safely deploy the new release to production and test it there, while the proxy still redirects users to the old release. I tend to call this phase of testing "pre-integration" testing. The name may not be ideal, since many developers are more familiar with the term "integration testing", but in this particular scenario it means the tests you run before the new release is integrated with the proxy service (before the proxy is reconfigured). These tests let you skip a staging environment (which is never exactly the same as production) and test the new release with exactly the same configuration users will get after the proxy is reconfigured. Of course, "exactly the same" is not entirely accurate: the difference is that you test the service directly, bypassing the proxy, and users cannot reach it yet. As with the pre-deployment tests, the results of the pre-integration tests determine whether you continue with the workflow or abort the process.
Finally, after the proxy has been reconfigured, another round of testing is needed. These are the "post-integration" tests, and they should be quick, because the only thing left to verify is that the proxy is indeed configured correctly. A few requests against ports 80 (HTTP) and 443 (HTTPS) are usually enough.
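A minimal post-integration check might look like the following; the domain and path are placeholders, and -f makes curl fail on any HTTP error status so the script can abort on a bad response.

```bash
# A handful of requests through the proxy are usually enough to confirm
# that it was reconfigured correctly
curl -f -s http://proxy.example.com/api/v1/books > /dev/null && echo "HTTP OK"
curl -f -s https://proxy.example.com/api/v1/books > /dev/null && echo "HTTPS OK"
```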
Once you adopt Docker, you should run all of these tests as containers, the same way I recommend running the pre-deployment tests, and with the same benefits. In many cases the same test container can be used for every test type; I tend to use an environment variable to indicate which type of tests should be run.
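For example, the same test image could be parameterized roughly like this. TEST_TYPE and SERVICE_URL are hypothetical variables that the test scripts inside the image would have to interpret; the addresses are placeholders.

```bash
# Pre-deployment: run unit and functional tests against the source code
docker run --rm -e TEST_TYPE=pre-deployment registry.example.com/books-ms-tests

# Pre-integration: test the new release directly (address taken from the service registry)
docker run --rm -e TEST_TYPE=pre-integration \
  -e SERVICE_URL=http://10.0.0.12:8082 registry.example.com/books-ms-tests

# Post-integration: test through the reconfigured proxy
docker run --rm -e TEST_TYPE=post-integration \
  -e SERVICE_URL=http://proxy.example.com registry.example.com/books-ms-tests
```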
Rollback and cleanup
Before looking at what the success or failure of these test steps means, let's define the ideal state of the production environment. The logic is simple: if any part of the process goes wrong, the environment should remain exactly as it was before the process started. What we want to achieve is not complicated: trigger some form of notification about the problem, and establish a culture in the team where fixing whatever broke the pipeline is the top priority. The problem is that rollback is not as simple as it sounds. Fortunately, Docker containers make it easier than any other approach, because they have very few side effects on the environment itself.
Deciding what to do based on the result of the pre-deployment tests is easy: nothing has actually been deployed yet, so you are free to continue or abort. The steps performed before you get the pre-integration test results, on the other hand, leave the production environment in a less-than-ideal state. You have deployed a new release (blue or green), and if it turns out to be broken, you have to remove it, along with any data those steps generated. Because the proxy still points at the old release, users are still using the old features and never notice that you tried to deploy something new. The final round of tests (the post-integration tests) is a bit more troublesome, because the proxy has already been redirected to the new release and has to be restored to its original settings, and the recorded deployment color (version) has to be reset as well.
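A sketch of a rollback after a failed green deployment, using the placeholder names from earlier; blue keeps serving users throughout.

```bash
# Stop and remove the broken release; with Registrator in place,
# the stopped container is deregistered from Consul automatically
docker stop books-ms-green
docker rm books-ms-green

# Restore the stored color so the next run starts from a clean state.
# (If the proxy had already been reconfigured, it also has to be pointed
# back at the old release, e.g. by rerunning the Consul Template step.)
curl -s -X PUT -d "blue" http://localhost:8500/v1/kv/books-ms/color
```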
Although I have only talked about failures caused by tests, that does not mean the other steps in the process cannot fail. They can, and similar logic should be applied: whichever step fails, the environment has to be restored to its previous state.
Even if the whole process goes as smoothly as planned, there is still some clean-up work to be dealt with. You need to stop the old release and delete its registration information.
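After a successful run, the clean-up can be as small as the following, again with the placeholder names used earlier.

```bash
# Stop and remove the old (blue) release; with Registrator it is deregistered
# automatically, otherwise remove its entry from the service registry by hand
docker stop books-ms-blue
docker rm books-ms-blue
```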
So far I have not mentioned databases, which tend to be the biggest challenge for rollback. That topic is beyond the scope of this article, so I will only state what I consider the most important rule: always make sure that schema changes introduced by a new release are backward compatible, and confirm it with extensive tests run during the pre-deployment phase. People often tell me that guaranteeing backward compatibility is not feasible. In some scenarios that is true, but the objection usually comes from a waterfall process in which the team releases only once a month or even once a year. If the team keeps the pipeline cycle short and runs it on every check-in (assuming deployment at least once a day), the changes that affect the database are usually small, and in that case backward compatibility is much easier to achieve.
Determine the execution environment for each step
It is critical to decide where each step runs. As a general rule, avoid executing anything on the production servers: everything except the deployment-related tasks should run in a separate cluster dedicated to continuous deployment. In the figure below those tasks are marked yellow, and the tasks that have to act on the production environment are marked blue. Note that even the blue tasks should not be executed directly on the production servers, but through the tool's API. For example, if you use Docker Swarm to deploy containers, instead of logging in to the server hosting the master node, set a DOCKER_HOST variable that tells the local Docker client the final destination address.
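A sketch of that pattern with the classic Docker Swarm setup; the master address and port are placeholders.

```bash
# Point the local Docker client at the Swarm master instead of
# logging in to the production servers
export DOCKER_HOST=tcp://swarm-master.example.com:2375

# The same commands as before now run against the cluster
docker pull registry.example.com/books-ms:1.2
docker run -d --name books-ms-green -p 8082:8080 registry.example.com/books-ms:1.2
```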
Complete the entire continuous deployment flow
We can now reliably deploy every check-in to production, but we are only halfway done. The other half of the job is to monitor the deployments and act on real-time and historical data. Since the ultimate goal is to automate everything after the code is checked in, human interaction should be minimized. Building a self-healing system is a big challenge and requires continuous tuning: you want the system not only to recover from failures (reactive recovery) but also to prevent those failures from happening in the first place (preventive recovery).
If a service process stops for some reason, the system should start it again; if the cause of the failure is that a node has become unreliable, the restart should happen on another (healthy) server. The key to reactive recovery is collecting data with the right tools, monitoring the services continuously, and taking action when a failure occurs. Preventive recovery is much more complex: it requires recording historical data in a database and evaluating patterns to predict whether an anomaly is about to occur. Preventive recovery might notice that traffic is steadily rising and the system will need to scale out within a few hours, or that every Monday morning brings a traffic peak during which the system must scale out and then shrink back to its original size once traffic returns to normal.
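As a very small piece of reactive recovery, Docker itself can restart a failed process and report container health. This is only a sketch: the health-check URL is a placeholder, curl is assumed to exist inside the image, and a real setup would combine this with cluster-level monitoring and scheduling.

```bash
# Restart the service automatically if its process exits with an error,
# and mark the container unhealthy if it stops answering HTTP requests
docker run -d --name books-ms \
  --restart=on-failure \
  --health-cmd="curl -f http://localhost:8080/ || exit 1" \
  --health-interval=10s \
  registry.example.com/books-ms:1.2
```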
At this point, I hope you have a deeper understanding of how to deploy continuously through Docker containers. The best way to consolidate it is to try it in practice.