

Thinking About and Understanding the Use of Docker

2025-04-07 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

What thinking and understanding should go into the use of Docker? This article offers a detailed analysis of that question, in the hope of helping readers who want to work through it find a simpler, more practical approach.

First, some background:

After I shared our continuous integration and release practice in the group last time, many people asked me questions about Docker application, release, and deployment. We had some discussion, and together with a video I watched yesterday, it struck me that many of us do not yet have a clear understanding of how Docker should be applied.

In addition, I was puzzled by this problem for quite a while myself, and formed some thinking and understanding of my own. So at noon I sorted out some of my earlier thoughts and notes, and I share them here to throw out a brick and attract jade. Time was tight, so what I say in some places may not be accurate, and the content is not extensive; feel free to criticize and discuss.

First of all, let me make clear that this sharing is a personal point of view and does not represent any organization or company. Also, I am only talking about my understanding from the perspective of how Docker is used, so this is not a discussion of Docker's technology, principles, or implementation details. Purely technical issues such as the kernel panics, network problems, and isolation mentioned in the "Hitler uses Docker" parody video are outside the scope of this discussion.

Treat this sharing as an interpretation from another dimension. Now let's get down to business:

First of all, the concept behind Docker is to deliver the application itself directly to users, rather than physical or virtual machine resources. This can greatly improve the efficiency of system integration, release, and deployment, and applications can Run Anywhere. Once this idea came out, together with the official release of Docker, its popularity could hardly be described as anything less than red hot; we seemed to see a messiah, another subversive technology.

However, after some practice and reflection, I often ask myself, my team, and my peers a few questions:

A. For the problems we currently face, is there really nothing we can do with the existing infrastructure and systems, without Docker?

B. Can the problems of low release and deployment efficiency, with lots of manual intervention, really be solved by introducing Docker?

C. If Docker is introduced to solve these problems, what is the cost of transforming the existing systems, and can the benefits really be maximized?

You can also ask yourself similar questions about your own situation.

In building our own continuous integration and release system, our team has also been thinking, discussing, and debating internally, and after a period of reflection and summary we have formed some understanding of our own. So, following the thread of last time's sharing, let's look at what it would actually be like to introduce Docker:

Take the continuous integration and release system I shared in the group last time as an example. The integration and release process involves many links and influencing factors, such as application configuration management, multi-environment management, version branching and merging strategy, packaging strategy, release granularity, service registration, and so on. For a web service, traffic also has to be switched on the proxy or other soft load balancer, and elastic scale-out and scale-in also depend on a capacity evaluation system. (Some students said you can just look at the system load; in fact that is only the simplest indicator and cannot meet the needs of real capacity assessment.)

In fact, we can see that operations is a type of work that attaches great importance to systematic construction. If these things are done properly, operations work will no longer look crude.

Coming back to the point: building this set of processes and systems depends on formulating strict standardization norms and standardized processes. At the same time, these norms and processes cannot rely solely on the unilateral wishes of the operations staff; they must be worked out together with the development, testing, SCM, and other teams, and everyone must then consciously comply with them. Only then can we develop corresponding systems and platforms to automate all of the above.

I will not repeat the content shared last time; for details, please refer to that sharing.

All right, let's return to Docker. Walk through every step of the continuous integration and release process above with the questions just raised in mind. Let's pick a few links and make some comparisons, for example:

A. Application configuration management. We currently manage each configuration item through a system, and other systems obtain configuration through an API. With Docker introduced, configuration is instead expressed through the Dockerfile. By the way, doesn't the Dockerfile itself need to be managed? How should it be managed? How should multiple environments be handled?
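To make the multi-environment question in point A concrete, here is one common approach, sketched as an illustration only: keep a single version-controlled Dockerfile and select the environment at build time. The `APP_ENV` argument, the `config/` directory layout, and the base image are my own assumptions, not a description of any real system.

```shell
# Write a hypothetical Dockerfile that takes the target environment as a
# build argument, so one file serves dev/test/prod instead of one file each.
cat > Dockerfile.example <<'EOF'
FROM openjdk:8-jre
# APP_ENV is chosen at build time: dev, test, or prod.
ARG APP_ENV=dev
# Copy only the configuration for the requested environment into the image.
COPY config/${APP_ENV}/ /app/config/
COPY app.war /app/app.war
CMD ["java", "-jar", "/app/app.war"]
EOF

# One image per environment would then be built with, for example:
#   docker build --build-arg APP_ENV=prod -t myapp:prod .
echo "wrote Dockerfile.example"
```

Note that this does not make the management question go away: the Dockerfile still has to live in version control and be reviewed and changed like any other configuration artifact, which is exactly the point raised above.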

B. The packaging and build process. The existing way is to pull the code, fetch the configuration, and then generate the war package. With Docker introduced, this corresponds to generating an image. Don't you still need to pull the code and fetch the configuration in order to generate the image?
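The point in B can be sketched as a dry run of a hypothetical CI step: the commands are only printed, not executed, and the repository URL, configuration endpoint, and registry name are all invented for illustration.

```shell
# Hypothetical build step with Docker: the image build still has to pull
# the code and fetch the configuration first; only the final artifact
# changes from a war package to an image.
build_image() {
  local app="$1" commit="$2"
  echo "git clone --depth 1 https://git.example.com/${app}.git src"
  echo "curl -fsS https://conf.example.com/api/${app} -o src/app.properties"
  echo "docker build -t registry.example.com/${app}:${commit} src"
}
build_image demo-app a1b2c3d
```

The first two steps are identical to the war-package pipeline, which is the rhetorical point: Docker replaces the packaging format, not the pipeline.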

C. Standardization and process specification. This seems to have nothing to do with whether Docker is used; either way, only after the source of the process is straightened out will later development go smoothly.

D. Code management. We currently use Git; each submission is identified by a commit, and each company has its own management strategy for version branching and merging. With Docker introduced, an image repository is used. But doesn't the image repository also need to be managed? Doesn't that management platform also need to be customized? Version branching strategy varies with each company's actual situation; that is a management-strategy issue, which seems to be outside the scope of what Docker manages.
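One way to connect the two repositories in point D, purely as an illustration (the naming convention here is my assumption, not a statement about any real platform), is to derive image tags from the Git identifiers the team already manages, so the image repository inherits the code repository's branch strategy:

```shell
# Map a git branch and commit to an image tag, so images can be traced
# back to code with the same strategy used for version branches.
image_tag() {
  local app="$1" branch="$2" commit="$3"
  # Abbreviate to 7 characters, matching git's default short-commit length.
  echo "${app}:${branch}-${commit:0:7}"
}
image_tag myapp release a1b2c3d4e5f6   # -> myapp:release-a1b2c3d
```

Even with such a convention, retention, cleanup, and access control for the image repository remain management work that someone has to own.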

E. Others, which I will not compare one by one.

I think that through the above comparisons and questions, we can draw the following conclusions:

The basic work, such as sorting out release links and details, standardization, release process specifications, application configuration management, multi-environment management, release strategy, and bringing services up and down, has to be done regardless of whether we use Docker. This is the foundation.

A. Some links have nothing to do with whether Docker is used; in them Docker will not improve efficiency, nor can it solve any problems. Examples are standardization, process agreements, and version branch and merge strategy, which are matters of the technical management system.

B. In some links, using or not using Docker only changes how things are implemented; to be honest, it still has to be realized line by line in code or scripts. For release in particular, the vast majority of links cannot be omitted, and whether efficiency actually improves has yet to be evaluated.

Let's take another case for comparison:

In my last sharing, I showed a complete implementation of a continuous integration and release process by our release system. The following is a case of publishing with Docker, shared by a service vendor at QCon (note: for comparison only). You can see that the handling of multiple environments, the setting of ports, the startup method, the configuration paths, and so on are all still implemented line by line in scripts.

I mention this case only to show that the basic things need to be done regardless of whether we use Docker. It is by no means the case that these problems disappear once Docker is adopted, and the workload with Docker is not necessarily smaller.

Perhaps at this point you will say that even if release is no more efficient than before, scale-out deployment can be. Yes, I strongly agree with this view, but there are some prerequisites to meet, which will be mentioned later.

Well, the example above may be accused of taking a part for the whole, and there is another one later. But first, let me talk about my personal understanding of Docker:

I am writing this article and giving this sharing not to bash Docker or reject Docker, but in the hope that we will use Docker more reasonably and rationally. Never think that Docker is a silver bullet and the means to solve all efficiency problems; at the same time, do not think that Docker has so many holes that it cannot solve your problems and therefore firmly reject it. Both views are narrow and not open enough, and both are common superficial understandings of Docker at present.

OK, let me give my suggestions on the use of Docker (just my personal understanding, not necessarily correct or reasonable; criticism and discussion are welcome):

Most fundamentally, we should learn from the advanced concepts behind Docker: continuous integration, release and deployment, and Run Anywhere.

A. Continuous integration, release, and deployment. How to deliver applications to the business better and faster, and how to quickly bring business requirements online to produce real value: this is our goal, and we must not lose sight of it. Docker emerged precisely to move toward this goal, so understand the idea and its essence; how to achieve it is a matter of choosing a path. Never do Docker for the sake of Docker, or for the sake of being cool.

B. Run Anywhere. The key point of Docker is the standardization of applications: Docker's encapsulation shields the individual differences between applications. For example, the Dockerfile unifies and standardizes application configuration; Docker start, stop, destroy, and update provide a unified interface standard; image repositories provide a unified image standard; and so on. Standardization is the core value of Docker.
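The "unified interface standard" point can be made concrete with a small sketch: because every application exposes the same container lifecycle, one generic wrapper can start, stop, or update any of them. This is shown as a dry run that only prints the commands; the function name, container name, and registry path are all hypothetical.

```shell
# Generic lifecycle wrapper: the same verbs work for ANY application,
# because the interface is the image and the container, not
# application-specific init scripts.
lifecycle() {
  local action="$1" name="$2" image="$3"
  case "$action" in
    start)  echo "docker run -d --name ${name} ${image}" ;;
    stop)   echo "docker stop ${name}; docker rm ${name}" ;;
    update) lifecycle stop "$name" "$image"
            lifecycle start "$name" "$image" ;;
  esac
}
lifecycle update web registry.example.com/myapp:2.0
```

Before containers, each application tended to carry its own start/stop conventions; the standardized interface is what makes a single generic script possible.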

So you see, Docker attaches great importance to basic standardization and normalization. But do we all understand this essence? Do we meet standards and norms in our daily work?

So we have to realize that Docker only provides a set of unified application standards. How to use it still needs to be customized, developed, and implemented according to the current situation of each business, and we have to do that ourselves.

At the same time, adapting to this set of standards carries a transformation cost. Besides the points mentioned above, there will also be a series of technical transformation and adaptation problems in details such as isolation, networking, and IO. This cost is the most fundamental source of the clash between Docker's enthusiastic fans and its fiercest opponents. (Note that I want to emphasize here that everything we do has a cost, and Docker is no exception.)

Therefore, the adoption of Docker should, and must, be a gradual, evolutionary process, not something achieved overnight. The reasons:

A. Again, because the foundation needs to be laid first: standardization, the formulation of process specifications, application configuration management, the construction of a CMDB, and so on. These basics are unavoidable. Only when the foundation is done can we develop and transform on top of it. Without this foundation, simply bolting Docker on is futile. Conversely, once we can generate images conveniently and efficiently based on this groundwork, won't the efficiency of scale-out and deployment naturally improve, and won't the power of Docker be brought into full play?
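To illustrate that last point: once image generation is solved by the groundwork, scale-out reduces to starting more containers from the same image. A dry-run sketch with invented names and a toy port scheme:

```shell
# Print the commands that would start N replicas of the same image.
# In practice the port mapping and naming would come from the
# deployment system, not a simple counter.
scale_out() {
  local image="$1" count="$2"
  local i
  for i in $(seq 1 "$count"); do
    echo "docker run -d --name myapp-${i} -p $((8080 + i)):8080 ${image}"
  done
}
scale_out registry.example.com/myapp:1.0 3
```

The hard part is everything upstream of this loop: producing a correct, environment-aware image on demand. The loop itself is trivial once that exists.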

B. From the operations point of view, Docker implies an application-centric management model whose only label is the container ID, while the current operations model of most systems is based on resources and IPs, whose only label is the IP. The two models are basically incompatible. Moreover, a whole series of systems has grown up around the IP-based model, covering release, deployment, monitoring, stability, capacity evaluation, and so on. If you want to use Docker, all these systems have to be modified accordingly, and the cost will be very high, so it can only be done step by step.

C. At the same time, a complete container-based ecosystem also needs to be polished and built step by step, covering security, permissions, storage, and so on.

However, many vendors in the industry are also considering this issue and constantly improving the Docker ecosystem. I think this will be the biggest driving force for Docker to continue to flourish, and we will keep paying close attention to it. One example is Rancher (the accompanying picture was taken from the InfoQ website).

Here is another actual case. Before last year's Double 11, we spent about three weeks developing a Docker-based web deployment system. The whole idea was to use the container ID as the only label, instead of the IP.

The actual effect at the time was minute-level deployment and rapid launch, mainly to cope with rapid scale-out under peak promotional traffic. Part of the Double 11 production traffic did run on Docker, but because capacity was sufficient, no scale-out happened during the promotion. We later wanted to extend it to daily online use, but in the end it was not widely adopted because of incompatibility with the original systems and the high cost of transformation, for example in monitoring, inconsistent command output during daily maintenance, rate-limiting strategy, and so on. Of course, we now have a dedicated team building this side of things, and once this series of foundations has been improved, we will put the system back into use.

Some time ago, when we discussed this issue with peers, we reached a consensus: "The introduction of a new thing or new technology should start with solving the pain points of the existing system. Only when it can really help R&D and the business solve practical problems and pain points can such technology have vitality."

Finally, I would like to end by sharing a sentence from an article on cloud computing:

-- "to solve problems in a down-to-earth way is to innovate, and the reason why we fail is often due to haste for success.

Attached below are several typical questions from the earlier discussion:

Q1: There are now many successful cases in the industry; cloud service vendors and BAT in particular have launched products. Why are they so successful?

Again, we have to talk about the importance of infrastructure. We communicate a lot with colleagues at Ali, and Ali's standardization and the completeness and strictness of its process norms have always been our benchmark. I think every company and team that can do this must have done a great deal of infrastructure work in the early stage. So we should not just see the brilliant surface of the strong; there must be a very solid foundation at work underneath, like the foundations beneath high-rises and skyscrapers. You can fill in the rest yourself.

At the same time, each company's engineering strength and technical reserves differ. Excellent Internet companies have a very big advantage here, so they can quickly develop applicable products.

There are also strategic considerations: the major cloud service vendors are unwilling to lag behind in this field and are bound to invest heavily in R&D and positioning.

Q2: Some small or start-up companies also have very successful cases. How do you explain this?

As I understand it, it is entirely normal for small or start-up companies, which have little or no historical burden, to adopt Docker as their infrastructure from the very beginning, and to build some successful and excellent systems around that foundation. But for a slightly larger company, as mentioned in this article, the cost of transformation is inevitable. At that point ROI has to be considered from many angles; different starting points lead to different decisions, and there will certainly be trade-offs.

Q3: Is Docker mature enough for use in production environments? Does Mogujie use Docker in all its test environments?

To repeat the earlier point: if we do a good job on the basic work around Docker, the conditions for using it in production will ripen. As for stability, that depends on actual operation online and should be judged by facts. Running at scale in a production environment also requires strong technical reserves; at critical moments you must be able to hold the line. Part of our test environment runs Docker, and the next step is to consider deploying through Docker, which is where Docker's advantage in deployment efficiency lies.

Q4: What is the biggest pitfall you have encountered with Docker?

I think it should be understood from many angles. In terms of application mode, the biggest pitfall is an incorrect posture toward Docker and a wrong way of thinking. You have to start from the actual problem, summarize, and think; the key is to be clear about what problem you are solving.

Technically, I will not go into detail here. You can look at the content shared by one of my colleagues, Guo Jia; there are also related articles on InfoQ.

Q5: Is deployment based on Docker only about consistency of the deployment environment? Is the study time worth it? Is there anything else worth learning from Docker? Can you elaborate?

Ensuring environment consistency through Docker has real advantages: as long as you pull the corresponding image, you get the same environment. Its ideas and principles are well worth studying. But I still want to say that Docker is just a means, not an end.

Any new technology is worth learning and studying, but when it is really applied, it is best to start from the problem. Our goal must always be to solve the problem.

Q6: Can the specific content of Mogujie's standardization and process specifications be shared?

Last time we shared an outline, but the specific content depends on the characteristics of each product and business. I suggest you summarize and think it through yourselves. It must be grounded in reality: even if a full set of specifications and details were published, it would not necessarily apply to every production environment. We can discuss specific problems, but I will not go into them here.

That concludes the answers to the question of thinking about and understanding the use of Docker. I hope the above content is of some help to you.
