Author: Sun Jianbo (Tianyuan), Alibaba Technical Expert
DevOps was first proposed in 2007, around the same time the concept of cloud computing infrastructure emerged. As the Internet spread, demand for application software exploded and software development gradually shifted from the waterfall model to agile development.
The traditional delivery model, in which application developers focus on writing software and IT operations staff deploy it to servers, could no longer keep up with the rapid iteration of Internet software. DevOps therefore grew popular as a culture and a set of best practices that break down the wall between development and operations, speed up the software delivery process, and improve delivery quality.
The State of DevOps
The popularity of DevOps stems from the industry's demand for agile development and high-quality delivery of application software: it opens up a "common ground" where development and operations can work closely together. At the time, software development was still a young industry, and people were used to learning from mature manufacturing. Manufacturing solved mass production by building assembly lines that standardized every step, so that workers on the line only had to perform their own task quickly and skillfully.
DevOps borrowed this experience and began building continuous integration / continuous delivery (CI/CD) pipelines, which gave rise to a series of automated and semi-automated tools (such as Puppet, Chef, and Ansible). Combined with the flexibility of scripting, these tools standardized a large number of development and operations tasks so the two sides could actually collaborate. Someone eventually had to invest in building these tools, and so DevOps teams emerged. They built tools and platforms that bring developers closer to the production environment, letting them deploy with one click and iterate quickly through continuous integration and continuous delivery, so that many software problems are exposed and avoided before the software runs in production.
In essence, DevOps serves operations. It wraps the production operations process in automated tools, shields infrastructure details, and at the same time makes problems in the software itself easier to surface so they can be handed back to developers to fix in advance. All of this ultimately reduces the operations burden.
The model worked well at first, but problems emerged over time. DevOps does not bring direct revenue to the enterprise, nor does it add features to the product; it is a cost center, so many companies are reluctant to invest heavily in it. Over time, DevOps capabilities fail to keep up with developers' growing needs, stop evolving alongside the cloud and open source communities, and become a bottleneck in software development. How many engineers at large companies are genuinely satisfied with their in-house developer productivity tools?
The Popularity of Cloud Computing
Smart companies can always find industry-wide needs within their own, and AWS was born this way. As early as 2006, it began offering the network, computing, storage, and other infrastructure required for software deployment as services, allowing anyone to build Internet applications without buying physical hardware such as servers, while economies of scale kept the overall cost below what users could achieve on their own. The concepts of IaaS, PaaS, and SaaS in cloud computing began to take shape that year.
In the early days of cloud computing, users mainly consumed IaaS services such as virtual machines and storage. Enterprises using the cloud still needed an operations team to manage this infrastructure; the objects of management simply switched from physical machines to virtual machines, with no essential difference.
As cloud computing developed rapidly, its capabilities kept being supplemented and enhanced, gradually turning everything operations once provided into services on the cloud. These naturally include services covering the complete software lifecycle: code management, continuous integration, continuous delivery, monitoring, alerting, automatic scaling, and more. The variety and quantity of such services on the cloud is staggering.
But DevOps still had its place. Integrating with the cloud is hard: there are many cloud services, different vendors offer them in non-uniform ways, using cloud products takes a great deal of time to learn, and avoiding vendor lock-in means adapting to multiple vendors. DevOps still has to shield developers from the complexity of the real environment, just as before, except that the infrastructure it manages has now become cloud resources.
The Kubernetes that changed everything
Kubernetes is essentially modern application infrastructure: it focuses on how to integrate applications with the cloud naturally and maximize the value of the cloud. It emphasizes making infrastructure work better for applications and "delivering" infrastructure capabilities to applications in more efficient ways, rather than the other way around. In this process, Kubernetes, Docker, Operators, and other open source projects that play key roles in the cloud native ecosystem are pushing application management and delivery into a situation completely different from before: users describe, in a declarative way, what the end state of their application should be, and Kubernetes takes care of everything from there.
This is why Kubernetes places such a strong emphasis on declarative APIs: the more infrastructure Kubernetes can plug in, the richer the end states its users can declare, and the simpler their responsibilities become. We can now declare not only simple end states such as "this application needs 10 instances," but also much richer ones such as "this application upgrades with a canary release policy" or "when its CPU usage exceeds 50%, automatically scale out by 2 instances."
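As a concrete illustration, here is a minimal sketch of declaring two such end states with the official Kubernetes Python client: a Deployment that should run 10 replicas, and a HorizontalPodAutoscaler that scales it when average CPU usage exceeds 50%. The resource names, image, and namespace are placeholders chosen for the example, not anything prescribed by the article.

```python
# Minimal sketch: declaring end states through the Kubernetes declarative API
# using the official Python client. Names, image, and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig; use load_incluster_config() inside a cluster

apps = client.AppsV1Api()
autoscaling = client.AutoscalingV1Api()

# End state 1: "this application needs 10 instances"
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=10,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="demo-app",
                    image="nginx:1.25",  # placeholder image
                    resources=client.V1ResourceRequirements(requests={"cpu": "250m"}),
                )
            ]),
        ),
    ),
)

# End state 2: "when CPU usage exceeds 50%, scale out automatically"
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="demo-app"),
        min_replicas=10,
        max_replicas=20,
        target_cpu_utilization_percentage=50,
    ),
)

# Submitting the declarations is all the user does; the control plane
# continuously works to make reality match them.
apps.create_namespaced_deployment(namespace="default", body=deployment)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```

Note that nothing in this sketch says how Pods are scheduled, rolled out, or scaled; the objects only describe the desired end state, and Kubernetes converges toward it.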
This challenges traditional DevOps tools and teams: if a business developer only needs to declare all of an application's end states through declarative APIs, even including a complete SLA, and everything behind the scenes is handled automatically by Kubernetes, what reason is left for them to integrate with and learn an assortment of DevOps pipelines?
In other words, DevOps has long served as the glue between development and infrastructure, and Kubernetes, with its robust declarative API and practically unlimited ability to plug in application infrastructure capabilities, is well positioned to become that glue layer itself. This also reminds us that the previous "glue layer" being strongly challenged by the Kubernetes ecosystem is traditional middleware, which is taking a huge hit from Service Mesh.
Will DevOps disappear?
In recent years, the Kubernetes project has often been described as DevOps' "best partner." A similar argument holds that Kubernetes, like Docker, solves the problem of the application runtime: Kubernetes is just a more "fashionable" IaaS whose runtime happens to be containers instead of virtual machines. By this logic, as long as existing DevOps ideas and processes are hooked up to Kubernetes, one can enjoy the lightness and elasticity that container technology brings, which sounds like the perfect match for a DevOps movement that advocates agility.
However, the path Kubernetes is taking, at least today, is not an IaaS-like role. While it focuses on plugging in underlying infrastructure capabilities, it is not itself a provider of those capabilities. Moreover, Kubernetes seems to care far more about the software lifecycle and state transitions than about the software runtime. Beyond that, it provides a mechanism called the "controller model" that continuously drives the actual state of the software toward the desired state, which is clearly outside the scope of a "software runtime."
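The "controller model" mentioned above can be pictured as a small loop that keeps comparing desired and actual state and taking corrective steps. The toy sketch below, in plain Python, is only a conceptual illustration of that pattern; the two dictionaries stand in for the declared spec held by the API server and the observed status of the cluster.

```python
import time

# Conceptual sketch of the Kubernetes controller model: observe the actual
# state, diff it against the declared (desired) state, and act to close the gap.
# The dictionaries below are stand-ins for the real API objects.
desired_state = {"replicas": 10}   # what the user declared
actual_state = {"replicas": 7}     # what is currently running


def observe() -> dict:
    """Read the current state of the world (stand-in for a List/Watch call)."""
    return dict(actual_state)


def act(diff: int) -> None:
    """Take one corrective step (stand-in for creating or deleting a Pod)."""
    actual_state["replicas"] += 1 if diff > 0 else -1


def reconcile() -> None:
    """One iteration of the control loop: compare, then converge a little."""
    observed = observe()
    diff = desired_state["replicas"] - observed["replicas"]
    if diff != 0:
        act(diff)
        print(f"reconciled: actual={actual_state['replicas']} desired={desired_state['replicas']}")


if __name__ == "__main__":
    # Real controllers are event-driven with periodic resync; a bare loop
    # is enough to show the idea.
    while actual_state != desired_state:
        reconcile()
        time.sleep(0.1)
```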
This "extra focus" on the application itself distinguishes the Kubernetes project from an IaaS and makes its "glue" positioning even more apparent. And if Kubernetes becomes powerful enough, is DevOps still necessary as the existing glue layer between development and infrastructure? In the so-called cloud-native era, will application development and delivery really move toward "declare once, hands off," making DevOps disappear entirely?
For now, however, the Kubernetes project still has many difficulties to overcome before it reaches that vision.
Limitations of the "Platform for Platform" API
Kubernetes is a typical "platform for platforms" project, so its API is still far removed from a purely developer-facing perspective. A Deployment object, for example, contains not only the image that developers care about, but also infrastructure-side resource configuration and even container security settings. Moreover, the Kubernetes API offers no way to describe or define "operational capabilities," which keeps the ideal of "hands off after declaration" out of reach. This is why DevOps is still needed today: most fields of a Kubernetes object have to be filled in through collaboration between development and operations.
Inability to describe more cloud resources
The native Kubernetes API covers only a small subset of cloud resources, such as PV/PVC for storage and Ingress for load balancing, which is far from enough for a fully declarative application description. When developers look for a Kubernetes concept to express requirements such as a database, a VPC, or a message queue, they come up empty. Existing solutions all rely on cloud-vendor-specific implementations, which creates a new vendor lock-in puzzle.
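To make the gap concrete, the sketch below shows what declaring a database might look like through a vendor-specific CRD. The API group, kind, and spec fields here are invented for illustration, and that is exactly the problem: each vendor invents its own incompatible schema, so the declaration is not portable.

```python
# Sketch only: declaring a database through a hypothetical vendor-specific CRD.
# The group/version/kind and spec fields are invented for illustration; a
# different cloud vendor would expose a different, incompatible schema.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

database = {
    "apiVersion": "rds.somecloud.example.com/v1alpha1",  # hypothetical API group
    "kind": "DatabaseInstance",
    "metadata": {"name": "orders-db"},
    "spec": {"engine": "mysql", "version": "8.0", "storageGB": 100},
}

custom.create_namespaced_custom_object(
    group="rds.somecloud.example.com",
    version="v1alpha1",
    namespace="default",
    plural="databaseinstances",
    body=database,
)
```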
Operator systems lack interoperability
Kubernetes' Operator mechanism is the open secret behind the project's seemingly unlimited extensibility. Unfortunately, today's Operators relate to one another like isolated silos, with no real way to interact or collaborate. For example, a cloud RDS service can be brought into the Kubernetes declarative API through a CRD and an Operator, but when a third party wants to write another CRD and Operator that periodically backs up the RDS data and cooperates with the first one, there is often no way to even start. This, again, forces DevOps systems to step in and fill the gap.
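The sketch below hints at why such cooperation is awkward: a hypothetical third-party "backup operator" has to hard-code the vendor CRD's group, version, and plural, and guess at undocumented status fields, because there is no standard contract between Operators. All identifiers here are invented for illustration.

```python
# Sketch only: a third-party backup operator watching a vendor's database CRD.
# Group/version/plural and the status field name are assumptions; without a
# shared contract, every such integration is hand-crafted like this.
from kubernetes import client, config, watch

config.load_kube_config()
custom = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "rds.somecloud.example.com", "v1alpha1", "databaseinstances"

w = watch.Watch()
for event in w.stream(custom.list_namespaced_custom_object,
                      group=GROUP, version=VERSION,
                      namespace="default", plural=PLURAL):
    db = event["object"]
    endpoint = db.get("status", {}).get("endpoint")  # assumed, undocumented field
    if event["type"] in ("ADDED", "MODIFIED") and endpoint:
        # A real operator would create a CronJob or schedule a backup here.
        print(f"would schedule periodic backups for {db['metadata']['name']} at {endpoint}")
```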
The future?
Clearly, the Kubernetes project still needs to rely on DevOps systems today to achieve truly efficient software iteration and delivery. This is inevitable: although Kubernetes claims to be "application-centric" infrastructure, as a system-level project descended from Google Borg its design and layer of operation still lean toward the infrastructure domain. On the other hand, it cannot be denied that Kubernetes has always kept its pursuit of developer-side "NoOps" on its critical path. That ambition has been evident since day one, when the project put forward the idea of "declarative application management," and the CRD and Operator system finally gives this application-level concern a chance to land. We are already seeing many DevOps processes sink into declarative objects and control loops inside Kubernetes, the Tekton CD project being one example.
If the future of Kubernetes is 100% declarative application management, there is reason to believe that DevOps will eventually disappear from the technology landscape and live on only as a culture. By then, operations engineers may have become the authors and designers of Kubernetes Controllers and Operators. And developers? They may never even notice that something called Kubernetes was ever there.
OAM
In October 2019, Alibaba Cloud and Microsoft jointly released the Open Application Model (OAM), breaking away from the traditional software delivery model. OAM is a standard specification focused on describing the application lifecycle, designed to help application developers, application operators, and infrastructure operations teams collaborate better.
OAM is still at an early stage, and the Alibaba team is contributing to and maintaining the technology upstream. If you have any questions or feedback, you are welcome to reach out upstream or via DingTalk.
Ways to participate:
Scan the QR code with DingTalk to join the OAM project's Chinese discussion group
Join the discussion of the OAM open source implementation directly through Gitter (and give the repo a star!)
Looking forward to everyone's participation!
Cloud Native Technology Open Course
This is a series of open technical courses jointly launched by CNCF and Alibaba, centered on the cloud native technology stack and giving equal weight to technical interpretation and hands-on practice.
Alibaba Cloud Native focuses on technical fields such as microservices, Serverless, containers, and Service Mesh, follows popular cloud native technology trends and large-scale cloud native production practices, and aims to be the technical community that best understands cloud native developers.