This article is reprinted from: Programming is Unbounded
[Editor's note]:
Docker has its advantages, but it also carries a number of questionable design decisions. This article lays out those drawbacks and the reasoning behind them.
Whenever I criticize Docker, I get a flood of resentful responses. Six months ago I wrote an article, "Why would someone choose Docker over large binaries?", and buried in the indignant criticism on Hacker News were a few genuinely wise comments. So today's article elaborates further on Docker's design flaws and responds to the criticism it is likely to keep attracting.
Compared with the HFT Guy, I consider myself lucky. The HFT Guy has written angrily about the heavy price paid for failures caused by running Docker in production at a financial company. In his words:
I received a very rude email from someone who was obviously a member of the amateur community. "Only idiots run Docker on Ubuntu," the man fumed. The email came with a list of packages and advanced system tweaks needed to run Docker on Ubuntu, and added that "anyone could find everything on the list within 5 seconds on Google."
People tend to vent their anger online, though I have never quite understood why. And when developers see someone attacking a technology they love, they fight back, and a war of words breaks out. That bad temper blinds them to the long-term, practical significance of the alternatives being discussed.
Docker promises portability, security, and resource management and orchestration capabilities. So a sensible person naturally asks: "Is Docker really the best way to get portability, security, and resource management and orchestration?"
Before getting to that, I want to respond to some of the comments I have received. Most of them are not worth discussing at all, but one argument did catch my attention, and I will return to it at the end of this article.
Some comments are as follows:
Using Docker means there are fewer resources to deal with, regardless of whether those resources actually need handling or how much capacity you already have to handle them.
Right, but fewer compared with what? Compared with developers writing their own scripts to standardize builds, deployment, orchestration, and resource usage? What the comment seems to be getting at is: "I'm not saying developers should choose Docker because the benefits already objectively exist; I'm saying that with Docker these things can be standardized."
The reason developers and managers prefer Docker is that what it offers is more constrained, less confusing, longer-lived, and more standardized, or at least has a chance of becoming so. In practice, however, what Docker has delivered so far is an incredible mess (see "Docker in production: a history of failure" for details). Yet many people are happy to believe that Docker's current problems will soon be fixed and that it will then provide a stable, consistent standard for containerization. That is a very big bet on Docker. Almost every company that has joined this gamble has paid a heavy price for it, yet some keep betting in the hope of one day seeing a good return.
Over the past two years, every company I have worked with has either been using Docker or drawing up plans to use it. And every one of them has paid a high price in pursuit of a standardized solution.
Worse, I have never heard a technical manager state plainly that this is the gamble being made. Most groups that back Docker keep emphasizing the portability, security, or orchestration and configuration capabilities it brings. Yes, Docker does provide portability, security, and orchestration and configuration capabilities, but at considerable cost. In most cases it would be easier to write purpose-built scripts.
In what I consider the best essay written about Docker, the author spells out the trade-offs of using it:
It is best to think of Docker as an advanced optimization. Yes, it is cool and powerful, but it adds complexity to the system, and only professional system administrators understand how to use Docker safely in production and deploy it in mission-critical systems.
Right now you need more systems expertise to use Docker. Nearly every article about Docker shows trivially simple use cases and ignores the complexity of using Docker on multi-host production systems. This gives the false impression that running Docker in production is simple and easy.
In the field of computer programming, there is a wise saying:
Premature optimization is the root of all evil.
Yet most of the clients I have dealt with this year have insisted:
"We have to Dockerize everything from the very start of the project."
So instead of following the traditional best practice of first building a working system, putting it into production, and only then asking whether Docker offers a comparative advantage, customers push blindly ahead with standardized Docker development and deployment from day one.
Many readers will have run into an exchange like this:
Me: "We don't need to use Docker early in the project."
The other side: "this app requires Nginx, PostGres and Redis and a lot of environment variables. If you don't use Docker, how do you set it up?"
Me: "I can write a Bash script, use the make command, or use some other type of installer. Anyway, there are a variety of options, and we have been able to do so for so many years."
The other side: "this is crazy. If the Bash script can break, you have to take the time to debug it; and it doesn't necessarily work on other machines compared to the container. With Docker, we can write build scripts and make sure it works on any machine."
This is classic sales-pitch reasoning: emphasize only Docker's most attractive features. As a development tool, Docker can indeed seem more consistent and less messy than the alternatives. But in actual use, and especially in production, it often brings nothing but pain.
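To make the Bash alternative from the exchange above concrete, here is a minimal sketch of such a setup script. It assumes a Debian/Ubuntu machine; the package names and environment values are illustrative assumptions, and a Makefile or any other installer would do the same job.

```bash
#!/usr/bin/env bash
# Minimal sketch: install the services the app needs and write its
# environment variables. Package names and values are assumptions.
set -euo pipefail

# Install the dependencies the application requires.
sudo apt-get update
sudo apt-get install -y nginx postgresql redis-server

# Make sure the services are enabled and running.
sudo systemctl enable --now nginx postgresql redis-server

# Write the environment the app expects; the app sources this file at startup.
cat > .env <<'EOF'
DATABASE_URL=postgres://localhost:5432/myapp
REDIS_URL=redis://localhost:6379
APP_ENV=development
EOF

echo "Development environment ready."
```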
Your development team may include one developer on a Windows machine, one on a Mac, one who runs Ubuntu, and another on Red Hat Linux. As a team lead, fully controlling the machines they use is close to impossible. At first glance, Docker seems to guarantee the same development environment on every platform. (Although I will say this: never use Docker in a Windows environment. Anyone who tells you Docker simplifies development on Windows machines is digging a hole for you.)
In production, however, you have complete control over the machines. If you want to standardize on CentOS, there is no problem at all: you can control thousands of CentOS servers and use boring old technology such as Puppet to make sure they all share the same operating environment. That makes Docker far less important than the legend suggests. Meanwhile, developers working with Docker tend to assume the production environment will also be built on Docker, and that is often not the case.
I could certainly list more real-world cases of Docker problems, but the examples get tedious. There are already plenty of blog posts about Docker's design flaws, and I am sure you have seen them, unless you are such a devoted fan that you wave away every objection. Readers in that camp skip such articles entirely, or skim them and reply, "The Docker ecosystem is maturing rapidly; by next year it will be stable and ready for production." People have said that every year for the past five years. Perhaps the claim will eventually come true, but that brings us back to where we started: it is a dangerous gamble.
Despite all these problems, Docker appears to have succeeded: almost every company is eager to adopt it. Why? As far as I can tell, standardization is the core reason.
Again from the Hacker News comments on my earlier article, netizen "friend-monoid" wrote about Docker:
We have a large number of HTTP services written in different languages. Managing them as [super] binaries is a chore: every author has to come up with a scheme for port assignment and address listening, and I have to keep track of every port. With network namespaces and smarter iptables routing, Docker takes that work off my hands.
Netizen notyourday gave a response that won my heart:
Of course, if you used other methods and followed the same conventions and rules, you could achieve exactly the same effect. In fact it might work even better, because you would get the smarter namespace and iptables handling while remaining completely undistracted by the state of your application.
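To illustrate notyourday's point, here is a hedged sketch of getting the same port isolation and routing without Docker, using a network namespace plus iptables. The names, addresses, ports, and service binary are assumptions for illustration, and a real setup would also need outbound NAT and firewall rules.

```bash
#!/usr/bin/env bash
# Sketch only: one service in its own network namespace, with a host port
# forwarded to it via iptables. Values below are illustrative assumptions.
set -euo pipefail

# Create an isolated network namespace for the service.
ip netns add svc1

# Connect the namespace to the host with a veth pair.
ip link add veth-host type veth peer name veth-svc1
ip link set veth-svc1 netns svc1
ip addr add 10.200.1.1/24 dev veth-host
ip link set veth-host up
ip netns exec svc1 ip addr add 10.200.1.2/24 dev veth-svc1
ip netns exec svc1 ip link set veth-svc1 up
ip netns exec svc1 ip link set lo up

# Allow forwarding and map host port 8001 to the service's port 9001.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 8001 -j DNAT --to-destination 10.200.1.2:9001
iptables -A FORWARD -p tcp -d 10.200.1.2 --dport 9001 -j ACCEPT

# Start the (hypothetical) service inside its namespace.
ip netns exec svc1 /usr/local/bin/http-service --listen 0.0.0.0:9001 &
```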
Netizen anilakar responded to notyourday:
In my opinion, that is the heart of it: the transferability of Docker skills. Put another way, a newly hired engineer can get productive quickly. Plenty of companies still run build and deployment systems that serve their internal needs perfectly well but provide no experience that carries over to anywhere outside the company.
As far as I can tell, that sums up Docker's real advantage. The advantage is not that it makes applications easier to deploy, secure, or orchestrate; that is nonsense. The only reason Docker has become a standard is that people can carry the experience gained at one company to the next. Docker is not as messy and idiosyncratic as custom Bash scripts. But is that worth betting everything on?
To put it bluntly, though, I think this argument conflates container technology with Docker, and a fair number of Docker supporters deliberately blur that distinction. Container technology itself is a genuinely good idea, but Docker, controlled by a single company, inevitably has problems of its own. Back to the HFT Guy's view:
Docker has no business model and no way to monetize. To be fair, Docker is pushing desperately onto every platform (Mac/Windows) and piling in every feature (Swarm) in order to:
Ensure that no competitor can meaningfully differentiate itself
Ensure that everyone uses Docker and Docker tools
Lock customers completely into its ecosystem
Publish a stream of news, articles, and reports along the way to feed the hype
Justify its valuation
It is close to impossible for Docker to scale horizontally and vertically across that many products and markets. (Whether such expansion is an appropriate or sustainable business decision is another question.)
Meanwhile, competitors such as Amazon, Microsoft, Google, Pivotal, and Red Hat compete in all sorts of ways and already earn far more from containers than Docker does. CoreOS, for its part, develops both an operating system and a competing container technology, Rocket.
So even if you believe in the power of container technology and praise its ability to deliver consistency, that does not change the fact that Docker itself stands on unstable ground.
For the moment, though, let's treat Docker and containers as the same thing. On that assumption, let's revisit my earlier article and see which criticisms still hold up.
In my previous article I used the term "large binary," which caused a lot of misunderstanding. A few hours after publishing, I added the following clarification:
In this article I use "large binary" to mean a binary that contains all of its dependencies. I was not talking about the move from 32-bit to 64-bit binaries. If I were writing only about Java and the JVM I would have said "uberjar," but I avoided that word because I also wanted to include the admirable Go language and its ecosystem.
If I were writing it again, I would use the term "super binary." It betrays the fact that my own work is mostly in the JVM world, but it is more accurate. It is the best term I can come up with, so in this article I will switch to "super binary."
I hope developers will realize that their favorite way of programming may not suit cloud-based distributed computing. I will repeat the point here: many developers seem to expect cloud systems to keep adapting to the PHP, Ruby, or Python code they write, while they never adapt their code to this new world.
If you are starting a new project that you expect to grow, whether you care about scalability or simply about high availability, consider a modern language designed with cloud environments in mind. Many newer languages offer remarkable capabilities; the ones I have personally used and been impressed by are Go and Clojure. I have little experience with Rust, Scala, and Kotlin, so I cannot say much about them; they are presumably excellent as well. (In response to earlier articles, many readers assumed I was deliberately snubbing certain languages. No insult is intended; I simply restrict myself to the tools I have actually used.) I have read material about Scala's Akka framework and found it full of wonderful ideas, so although I have not used it, I suspect its design is of a very high standard and thoroughly modern.
In my earlier article, netizen btown wrote:
Anyone who thinks all modern web applications are written in Golang or on the JVM is living in a strange, self-reinforcing bubble.
Again, many languages are excellent, but I cannot know them all, so I comment only on the ones I have used. I will also keep criticizing the older languages I have used, including PHP, Ruby, and the perennially popular Python. They grew out of an older paradigm and are poorly suited to cloud-based microservices and distributed architectures. I discussed this in "Docker is protecting a programming paradigm that we need to get rid of." It seems people still have not taken the point seriously. I recall that around the year 2000 the craze for object-oriented programming reached its peak, and few people dared to stand up against it. The technology industry likes to see itself as completely open-minded, yet it is still full of bandwagons. Over the following years the enthusiasts retreated, leaving behind a mess and the mockery of onlookers. For the purposes of this article: XML and object-oriented programming were wildly over-hyped in 2000, and Docker is in exactly the same state today.
I use Clojure a great deal, and it lends itself to writing applications and bundling their dependencies into an uberjar. I also know how to set up a system like Jenkins, so my Clojure builds are fully automated. I have heard of companies going to the seemingly extreme length of bundling the database itself into the uberjar so that the application has no external dependencies at all. See the article "600x growth in 7 months with Clojure and AWS":
That drove decision number two. Because the data set is collectively small, we can include the entire content database in each of our releases. Yes, you read that correctly: we build our software with an embedded instance of Solr, load it with the normalized and cleaned non-relational database of hotel inventory, and bundle the whole thing into the deployed application.
This unconventional approach has paid off handsomely. First, we eliminated an important point of failure: the mismatch between code and data. Any version of the software will work, even if we move it to another system years later while the content database changes repeatedly in the meantime. Deployment and configuration management across environments becomes extremely easy.
Second, we get horizontal, share-nothing scalability in the user-facing layer, and that scalability is considerable.
If you can scale like that without introducing new technologies such as Docker, there is no need to bother with them. Solving the problem in the simplest possible way should be the guiding principle. If moving from Ruby/Python/Perl to a new language and ecosystem lets you achieve scale with fewer technologies and fewer moving parts, then that is what you should do. Again, the principle is to achieve your goals in the simplest way possible, which in most cases means minimizing the number of technologies you use. Following Rich Hickey, I distinguish "simple" from "easy." Using a language you already know is easy; learning a new one is hard, but it may be the simpler choice if it reduces the total amount of technology in the system. "Simple" here means the system ends up operating in a simpler way: less code, simpler configuration, fewer technologies.
In the comments on my previous articles, many readers took my habit of mentioning Jenkins as proof that my thinking is out of date. But I have to insist that boring things are not necessarily bad:
The advantage of boring (within those constraints) is that the capabilities of these things are relatively well understood. More importantly, their failure modes are well understood. With shiny new technology, the unknowns are far greater.
In other words, software that has been around for a decade is better understood and has fewer unknowns. Fewer unknowns mean lower operating costs, which is no bad thing.
I realize that many developers carry strong biases. Reading an article like this, they latch onto a single point in order to dismiss the whole thing. So if I mention any particular technology, Jenkins, Ansible, Go, Clojure, Kotlin, or Akka, they go hunting for flaws in that technology and write the article off as worthless. Short of adding a disclaimer, I do not know how to reach these readers; in truth, even with a disclaimer, readers who do not want to hear the message still will not listen.
In the previous article, netizen tytso wrote:
The claim that the Go language pioneered large binaries is plainly wrong. People have been solving this problem with static binaries (sometimes with built-in data resources) for decades. MacOS is one representative example.
Here I should apologize for my ambiguity. What I really meant was simplifying the overall system by stuffing all dependencies, all necessary configuration, and even all necessary resources into a single binary. That idea is broader than simply linking static libraries. A modern continuous integration system can be configured so that each binary is built with a specific configuration, and that configuration may be unique to a particular instance of the binary. So even if you need to run thousands of instances of the same application at once, you can build thousands of variants with slightly different configurations. You can do this with a relatively old build system like Jenkins. Such systems are easy to understand, even if they are admittedly boring. But boredom should not be your reason for choosing Docker, which brings daunting complexity of its own. (And please do not fixate on my using Jenkins as the example; choose Travis, TeamCity, or Bamboo as you prefer. I am constantly surprised by how many developers dismiss an entire article over the choice of one tool in an example.)
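As a hedged illustration of the CI idea above, here is a sketch of a build step that produces one self-contained artifact per instance, each with its own configuration baked in. The directory layout and the use of lein uberjar are assumptions; any build tool that bundles dependencies would do.

```bash
#!/usr/bin/env bash
# Sketch: build one "super binary" per instance, each with its own config.
# Paths and the lein uberjar step are illustrative assumptions.
set -euo pipefail

mkdir -p artifacts

for conf in config/instances/*.edn; do
  instance="$(basename "$conf" .edn)"

  # Bake the instance-specific configuration into the bundled resources.
  cp "$conf" resources/config.edn

  # Build the self-contained artifact (an uberjar in the Clojure case).
  lein uberjar

  # Keep one artifact per instance; thousands of slightly different
  # variants can come out of the same pipeline.
  cp target/*-standalone.jar "artifacts/app-${instance}.jar"
done
```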
Some readers said:
Docker protects users from lock-in problems with cloud service providers.
Of course it does not. Any standardized DevOps deployment tool helps with vendor lock-in, and such tools are hardly rare.
Netizen friend-monoid wrote:
If my colleagues do not need to know how to deploy an application correctly, their work is significantly simplified. If I can take their work and put it in a container, regardless of language or style, my work is significantly simplified. I no longer have to worry about crashes, infinite loops, or other errors in their code.
Netizen notyourday's response matches my own view:
Of course, because you have simply moved that logic into the "orchestration" and "management" layers. You still have to write the code that handles it correctly. You can put lipstick on a pig, but it is still a pig.
What's more, if you are at a small company with only a few developers, you inevitably deal with one another's code anyway. If you are at a large enterprise, there will be a DevOps team separate from the development team, people who have focused on exactly these problems for years, including every variety of health-check scheme. Health checks involve code too, and that code lies outside anything Docker standardizes. As an application developer you still have to expose an endpoint that returns a 200 response for DevOps to poll regularly. Docker does not really help here. Most DevOps teams have scripts specifically for testing whether an application is alive; if it stops responding, the scripts kill it and restart it.
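For what that looks like in practice, here is a minimal sketch of the kind of liveness script such a DevOps team might run, entirely outside Docker. The URL, service name, and use of systemctl are assumptions for illustration.

```bash
#!/usr/bin/env bash
# Sketch: poll the app's health endpoint; restart the service if it
# does not answer with a 200. Values below are illustrative assumptions.
set -euo pipefail

HEALTH_URL="http://localhost:8080/health"   # hypothetical health endpoint
SERVICE="myapp"                             # hypothetical service name

# Ask the application for a 200 response; give up quickly if it hangs.
status="$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$HEALTH_URL" || true)"

if [ "$status" != "200" ]; then
  echo "$(date -Is) health check failed (status=${status:-none}); restarting ${SERVICE}" >&2
  systemctl restart "$SERVICE"
fi
```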
Netizen Lazare wrote:
Say I have an existing code base written in scripting languages such as Ruby, PHP, or Node.js, built in the classic legacy scripting style.
... I can work out how to package all the existing code into a Docker container, add some orchestration, and ship it.
From this article I gather the author thinks (super) binaries are better. But here's the thing: you can Dockerize a PHP application; that is not hard. How would we build a (super) binary instead? If the answer is "rewrite the entire code base in Golang," that is obviously pointless: we don't have the resources to do it, and even if we did, it would not be cost-effective. And besides, we don't like Golang anyway.
In this example, the company has run a PHP application for a long time, and now the goal is to Dockerize it. How did they arrive at that goal? What changed to create the need to Dockerize the application? It reads like a contrived example. How did the application run before Dockerization? What was wrong with the old approach? And do they really believe Docker will fix those problems?
Is the ultimate goal for this application to be orchestrated alongside other applications (or newer versions of itself)? Docker plus orchestration usually means Docker plus Kubernetes (though you could use Mesos or Nomad instead). That is a very complex combination, and a company should think hard before going down that path; see "Is K8s too complex?" If your application is small, rewriting it outright is actually a reasonable option; if it is large, you should ask whether there is a simpler route, such as writing some Chef or Ansible scripts.
I once worked on a large monolithic application written in PHP on the Symfony framework. It had serious performance problems. We began splitting it up step by step, leaving the Symfony code that rendered HTML templates intact and rewriting the performance-critical parts in other languages. I described this in "Architectural issues for small applications." At the time, the DevOps team handled deployment with Puppet and some custom scripts. That was enough for us. Boring, stable technology really does work.
Remember that your time is finite. Every hour spent Dockerizing your PHP code is an hour you cannot spend on other modernization work, so make sure the investment brings a real return.
Strangely, when people talk about Docker they rarely cite examples of applications that genuinely need orchestration. I have seen long-running data analysis jobs on Spark orchestrated with Nomad. I have seen a huge system that pushed terabytes of data a day into Kafka, then through Apache Storm, and finally into ElasticSearch; it had an elaborate set of health checks and monitoring, but for most of the work Storm itself handled the orchestration. A web application would have to process an enormous amount of business before it handled data volumes like that. Twitter deals with data on that scale and orchestrates with Aurora. Is your scale comparable to Twitter's? If you are using Docker and Kubernetes for an ordinary website, please stop right now: there are many easier ways to run a website. I have 25 years of experience building websites, and believe me, most websites do not need Docker.
Netizen mwcampbell wrote:
"I really don't approve of putting a lot of effort into keeping the old technology working properly."
I strongly disagree with that part. For technology and civilization to make real progress, we need to avoid wasting time re-creating solutions that already exist; in other words, we should keep old technology working. So it is great that Docker can get a Rails application written in 2007 up and running. In fact, we should still be using Rails, Django, and PHP to build new applications. Even if they are no longer fashionable, there are good reasons to use these very mature platforms and tools.
The word "fashionable" reminds me of a widespread misconception in the field of technology. Can we stop this blind pursuit of fashion? For example, confusing technology with popular culture is like thinking that all the music that emerged in the golden age of rock music is no longer necessary. On the contrary, good technology is not pop music; programs that were successfully implemented in 1993 may still be suitable for today's demand scenarios.
This is an obviously sarcastic comment. Apparently the sober realists think we should Dockerize everything, while lunatics like me think we should keep using old DevOps tools and pair them with new languages. On this reading, my choice is driven by "fashion," while their love of Docker is driven by a noble desire for "real progress in technology and civilization."
My view is that as long as old, boring technology still does the job and costs nothing extra to keep using, we should keep using it. But when the way systems operate changes fundamentally, we should ask whether some of our technologies no longer fit the new environment. Specifically, with the rise of cloud computing and cloud-native microservice architectures, we have to rethink our technology choices. The guiding question should be: "What is the easiest way for us to achieve our current goals?" If old technology gets there in the easiest way, it is the right choice. If new technology lets us simplify the system, we should embrace it.
Netizen chmike wrote:
Containers are not just a solution to dependency management; they also enforce useful boundaries.
Netizen neilwilson responded:
This is just a fancy, souped-up chroot. Don't swallow so much of the Docker hype. Sensible administrators have been doing much the same thing for years; we just never spent a fortune on public relations.
That sums up my own opinion exactly.
In the previous article I asked, "Why not use super binaries that have no external dependencies?" You might answer: "That's exactly what Docker does! It's one large binary containing everything the application needs to run! And it doesn't depend on any particular language! You can use it with Ruby, PHP, Python, Java, Haskell, anything!"
That is true, but my suggestion is that companies look for opportunities to reduce the number of technologies they use. Perhaps you can achieve this super-binary design natively, with a modern language and its ecosystem.
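As a hedged sketch of what doing this natively can look like, here are two build commands that produce a self-contained artifact without Docker. The project names and paths are assumptions for illustration.

```bash
#!/usr/bin/env bash
# Sketch: building a self-contained "super binary" natively.
set -euo pipefail
mkdir -p bin

# Go: a statically linked binary with no external runtime dependencies.
# Disabling cgo avoids linking against the system C library.
CGO_ENABLED=0 go build -o bin/myapp ./cmd/myapp

# Clojure: an uberjar bundling the app and all of its dependencies;
# only a JVM is needed on the target machine.
lein uberjar
cp target/*-standalone.jar bin/myapp.jar
```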
Many people assume that Docker's critics object because they lack the power to change the technology choices inside their companies. But my point is that companies have, in practice, drifted into heterogeneous mixes of technologies, including old ones poorly suited to distributed computing in the cloud. Docker is effectively an invisibility cloak that hides the fact that a company has not adapted its languages and ecosystems to cloud-based distributed architectures.
In the long run, this refusal to improve can destroy a business. I do not blindly chase the latest trends, but I do favor continuously re-evaluating what best serves a company's technical goals. The way computing is done is changing, and passively carrying legacy applications forward will eventually become a pain point that slows the company down. And once a legacy application can no longer run at all, rewriting it carries far greater risk. It is much better for companies to find ways to split up and modernize existing applications; indeed, the most important feature of a microservices architecture is that it allows legacy applications to be modernized incrementally. I discussed this in my article on the Coral Reef model of gradual improvement.
In a previous article I mentioned an analytics company that pushes terabytes of data into Kafka every day, then on to Apache Storm, and finally into ElasticSearch. The company had long used Python. When they hit performance problems, they wanted to use a Python concurrency framework such as Tornado to achieve large-scale concurrency. They handed the project to a very good engineer (formerly at Intel) and gave him three months to build the new system. The result was a complete failure. They could not get the performance improvement they needed, and Tornado could not deliver the fine-grained concurrency, or the degree of concurrency, they expected. In the end they accepted that they could not stay with Python at every level. They are now moving to Go and Elixir and acknowledge that this is the real solution to the problem. (I am sure it stung a little; they are committed Python supporters and a bunch of idealists.)
In their case the re-evaluation was forced by circumstances, but I think companies should build this kind of re-evaluation into the way they operate and carry it out regularly.
The strongest pro-Docker argument came from netizen pmoriarty:
You could level almost identical criticisms at all the configuration management tools (Ansible/Chef/Salt/CFEngine/Puppet). They are all messy: great fun when they work, and a terrible nightmare when they don't.
All of these tools will take at least a few decades to fully mature.
Yes, you're right.
Need a recipe for running WordPress? Ansible has one, and so does Docker. The same goes for running Drupal, MySQL, and countless other tasks. Docker has done a genuinely good job of providing sensible defaults for the common needs of DevOps.
If Chef or Ansible had matured further, we would not be having such heated arguments about Docker today. As far as I know, a startup set out in 2013 to build a framework on top of Chef (they wanted it to be the Ruby on Rails of DevOps). They ran into problems, found other more profitable directions along the way, lost focus, and ultimately failed. Even so, their unfinished work may yet succeed someday. Such a framework would have the advantage of relying directly on the operating system's own facilities instead of papering over the underlying OS as completely as Docker does.
Both Chef and Ansible promised thousands of scripts covering every kind of machine and every common DevOps task. Clearly, neither has fully delivered. In 2005 it seemed perfectly normal for every company to employ DevOps staff who wrote custom scripts for all of the company's DevOps tasks. By 2010, however, it was clear there should be a more centralized library of scripts for common tasks, something more concrete than Perl's CPAN, with close attention to consistent resource management and orchestration (for portability) and to security. That is when Chef took off, Ansible followed, and Docker caught up a few years later. Many developers believe Docker finally delivers exactly what Chef and Ansible long promised but never achieved. Yet for a long time Docker was only fit for internal development work. Until 2015, using Docker in production was tantamount to suicide, and even in 2016 companies trying to run production systems on Docker hit painful obstacles. None of that has changed dramatically over the past year.
In my opinion, Docker will one day be judged a huge mistake. The strongest argument is this: even if it eventually becomes the standard and matures, Docker is only a band-aid over the problems the industry currently faces. Because it never gets to the root of those problems, it is unlikely to come to a good end.
I suspect that five years from now there will be a less complex way to standardize cloud-based distributed computing; join me in watching whether that conjecture holds. For now, of course, Docker remains ubiquitous and popular.
Afterword
As the author says, much of the time developers embrace Docker simply because it is the new trend: using it for the sake of using it, and chasing fashion for fashion's sake. Even so, we cannot resist our enthusiasm for new things, and we still want to learn them and try them in our own projects and fields. After all, technology is not bound by any fixed rule; if you judge that it suits your project and can improve your efficiency or performance, then it is the right choice.