2025-03-29 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report--
What is a microservice?
There is no official definition of a microservice, so it is hard to describe one directly. Instead, we can understand what microservices are by comparing them with traditional web applications.
The core of a traditional web application consists of business logic, adapters, and an API or UI-accessed web interface. The business logic defines business processes, business rules, and domain entities. The adapters include database access components, messaging components, access interfaces, and so on. The architecture of a ride-hailing application looks like this:
Although such applications follow modular development, they are ultimately packaged and deployed as a single monolith. For example, a Java application is packaged as a WAR and deployed on Tomcat or Jetty.
A monolithic application like this is better suited to small projects. Its advantages are:
Development is simple, direct, and centrally managed.
Basically no duplicated development.
All functions are local; there is no distributed management or invocation overhead.
Of course, its shortcomings are also obvious, especially for Internet companies:
Inefficient development: all developers change code in the same project; commits wait on one another and code conflicts are constant.
Difficult maintenance: code for different functions is coupled together, and newcomers don't know where to start.
Inflexible deployment: builds take a long time, and any minor change forces a rebuild of the entire project, which is often a lengthy process.
Low stability: a trivial bug can bring down the entire application.
Insufficient scalability: the monolith cannot meet business requirements under high concurrency.
Therefore, mainstream designs now generally adopt a microservice architecture. The idea is not to build one huge monolithic application but to decompose the application into small, interconnected microservices. Each microservice performs a specific function, such as passenger management or order management, and has its own business logic and adapters. Some microservices also expose APIs for other microservices and application clients to consume.
For example, the system described earlier can be broken down into:
Each piece of business logic is decomposed into a microservice, and microservices communicate via REST APIs. Some microservices also expose APIs to end users or clients. In general, however, these clients do not access the backend microservices directly; instead, requests pass through an API gateway, which is typically responsible for service routing, load balancing, caching, access control, authentication, and similar tasks.
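As a concrete illustration, the prefix-based routing an API gateway performs can be sketched in a few lines of Java. The paths and service names below are invented for the example; real gateways (Zuul, Kong, and the like) layer load balancing, caching, and authentication on top of this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal sketch of API-gateway path routing; service names are illustrative. */
public class GatewayRouter {
    // Insertion-ordered so the first registered prefix wins on overlap.
    private final Map<String, String> routes = new LinkedHashMap<>();

    public GatewayRouter register(String pathPrefix, String serviceName) {
        routes.put(pathPrefix, serviceName);
        return this;
    }

    /** Return the backend service for a request path, or null if none matches. */
    public String route(String path) {
        for (Map.Entry<String, String> e : routes.entrySet()) {
            if (path.startsWith(e.getKey())) return e.getValue();
        }
        return null;
    }

    public static void main(String[] args) {
        GatewayRouter gw = new GatewayRouter()
                .register("/passengers", "passenger-service")
                .register("/orders", "order-service");
        System.out.println(gw.route("/orders/42")); // order-service
    }
}
```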
Advantages of microservice architecture
The microservice architecture has many important advantages.
First, it tackles complexity. It decomposes a single application into a set of services. Although the total amount of functionality is unchanged, the application is broken into manageable modules or services, each defining an explicit RPC or message-driven API boundary. The microservice architecture enforces a level of modularity that is difficult to achieve with a single code base. As a result, individual services are much faster to develop and easier to understand and maintain.
Second, this architecture allows each service to be developed independently by a team focused on that service. Developers are free to choose any technology as long as the service honors its API contract. This means services can be written or refactored with new technologies without much impact on the overall application, because each service is relatively small.
Third, the microservice architecture lets each microservice be deployed independently. Developers never need to coordinate the rollout of changes local to their service; changes can be deployed as soon as tests pass. The microservice architecture thus also makes CI/CD feasible.
Finally, the microservice architecture allows each service to scale independently. We only need to define the configuration, capacity, instance count, and other constraints that each service's deployment requires. For example, we can deploy CPU-intensive services on EC2 compute-optimized instances and in-memory database services on EC2 memory-optimized instances.
Shortcomings and challenges of microservice architecture
In fact, there are no silver bullets, and the microservice architecture brings new problems and challenges of its own. One of them echoes its name: microservices emphasize service size, yet there is no uniform standard for it, and how business logic should be divided into microservices is largely a matter of experience. Some developers argue that a microservice should be 10-100 lines of code. Although the microservice architecture advocates small services, keep in mind that microservices are a means to an end, not the end itself. The goal is to decompose the application sufficiently to enable agile development and continuous integration and deployment.
Another major disadvantage of microservices is the complexity their distributed nature brings. Developers must implement inter-service invocation and communication over RPC or messaging, which makes service discovery, tracing of call chains, and diagnosing quality problems much harder.
Another challenge is the partitioned database architecture and distributed transactions. Business transactions that update multiple business entities are quite common. They are trivial to implement in a monolith, which usually has a single database, but under a microservice architecture different services may use different databases. Constrained by the CAP theorem, we have to abandon traditional strong consistency and pursue eventual consistency instead, which is a challenge for developers.
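The eventual-consistency idea can be sketched with a toy example: one service commits to its own database and publishes an event, and a second service's database catches up only when the event is consumed. The service names are invented, and the in-memory queue is a stand-in for a real message broker:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

/** Toy sketch of eventual consistency across two services that own
 *  separate databases; names and the in-memory "bus" are illustrative. */
public class EventualConsistency {
    static class Event {
        final String orderId; final int amount;
        Event(String orderId, int amount) { this.orderId = orderId; this.amount = amount; }
    }

    final Map<String, Integer> orderDb = new HashMap<>();   // order service's own DB
    final Map<String, Integer> billingDb = new HashMap<>(); // billing service's own DB
    final Queue<Event> bus = new ArrayDeque<>();            // stand-in for a message broker

    /** The order service commits locally first, then publishes an event. */
    void placeOrder(String id, int amount) {
        orderDb.put(id, amount);
        bus.add(new Event(id, amount));
    }

    /** Billing applies events asynchronously; until it does, the two
     *  databases disagree -- exactly the window eventual consistency allows. */
    void drainBus() {
        for (Event e; (e = bus.poll()) != null; ) billingDb.put(e.orderId, e.amount);
    }
}
```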
The microservice architecture also makes testing much harder. Testing a traditional monolithic web application means testing a single REST API, while testing a microservice requires starting all the other services it depends on. This complexity should not be underestimated.
Another major challenge is changes that span multiple services. For example, suppose services A, B, and C need to change, where A depends on B and B depends on C. In a monolith we just change the relevant modules and deploy everything at once. In a microservice architecture we must carefully plan and coordinate the rollout of each service: update C first, then B, and finally A.
Deploying microservice-based applications is also much more complex. A monolith can simply be deployed on an identical set of servers behind a load balancer, with every instance sharing the same underlying service addresses (database, message queue, and so on). A microservice application, by contrast, consists of many different services, each with its own configuration, instance count, and underlying service addresses. This requires distinct configuration, deployment, scaling, and monitoring components. In addition, we need a service discovery mechanism so that a service can find the addresses of the services it talks to. Successfully deploying a microservice application therefore requires a solid deployment strategy and a high degree of automation.
The above problems and challenges can be summarized as follows:
API Gateway
Inter-service call
Service discovery
Service fault tolerance
Service deployment
Data access
Fortunately, there are many micro-service frameworks that can solve the above problems.
First-generation microservice frameworks
Spring Cloud
Spring Cloud provides developers with tools to quickly build common patterns of distributed systems (including configuration management, service discovery, circuit breakers, intelligent routing, micro-proxies, a control bus, one-time tokens, global locks, leader election, distributed sessions, cluster state, and more). Major projects include:
Spring Cloud Config: centralized external configuration management backed by a Git repository. Configuration resources map directly to the Spring Environment but can also be used by non-Spring applications if needed.
Spring Cloud Netflix: integration with various Netflix OSS components (Eureka, Hystrix, Zuul, Archaius, etc.).
Spring Cloud Bus: an event bus that links services and service instances through distributed messaging, used to propagate state changes (such as configuration change events) across the cluster.
Spring Cloud for Cloud Foundry: integrates your application with Pivotal Cloud Foundry, provides a service discovery implementation, makes it easy to protect resources with SSO and OAuth2, and can create Cloud Foundry service brokers.
Spring Cloud Cloud Foundry Service Broker: provides a starting point for building service brokers that manage services in Cloud Foundry.
Spring Cloud Cluster: leader election and common stateful patterns (with abstractions and implementations for ZooKeeper, Redis, Hazelcast, and Consul).
Spring Cloud Consul: service discovery and configuration management with HashiCorp Consul.
Spring Cloud Security: support for load-balanced OAuth2 REST clients and authentication header relaying in a Zuul proxy.
Spring Cloud Sleuth: distributed tracing for Spring Cloud applications, compatible with Zipkin, HTrace, and log-based tracing such as ELK.
Spring Cloud Data Flow: a cloud-native orchestration service for composable microservice applications on modern runtimes. An easy-to-use DSL, a drag-and-drop GUI, and a REST API together simplify the overall orchestration of microservice-based data pipelines.
Spring Cloud Stream: a lightweight event-driven microservice framework for quickly building applications that connect to external systems, with a simple declarative model for sending and receiving messages between Spring Boot applications using Apache Kafka or RabbitMQ.
Spring Cloud Task App Starters: Spring Boot applications that may be any process, including Spring Batch jobs that do not run forever; they end after a finite period of data processing.
Spring Cloud ZooKeeper: service discovery and configuration management with ZooKeeper.
Spring Cloud for Amazon Web Services: easy integration with hosted Amazon Web Services. It integrates AWS services such as caching and messaging APIs using familiar Spring idioms and APIs, so developers can build applications around hosted services without worrying about infrastructure.
Spring Cloud Connectors: makes it easy for PaaS applications on various platforms to connect to backend services such as databases and message brokers (formerly the "Spring Cloud" project).
Spring Cloud Starters: Spring Boot-style starter projects that ease dependency management for Spring Cloud consumers (no longer a standalone project after Angel.SR2).
Spring Cloud CLI: a plugin for quickly creating Spring Cloud component applications in Groovy.
Dubbo
Dubbo is a distributed service framework open-sourced by Alibaba. It is dedicated to providing a high-performance, transparent RPC remote invocation solution as well as an SOA service governance solution. Its core parts include:
Remote communication: provides an abstract encapsulation over a variety of long-connection-based NIO frameworks, including multiple thread models, serialization, and request-response information exchange.
Cluster fault tolerance: provides transparent remote procedure calls based on interface methods, including multi-protocol support, soft load balancing, failover, address routing, dynamic configuration, and other cluster capabilities.
Automatic discovery: based on a registry directory service, consumers can dynamically find providers, addresses become transparent, and providers can smoothly add or remove machines.
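The registry-based discovery described above can be sketched as a minimal in-memory directory. A real registry such as ZooKeeper is a remote, replicated service; the service name and addresses here are invented for the example:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

/** Minimal in-memory sketch of registry-based service discovery, in the
 *  spirit of Dubbo's registry directory; names/addresses are illustrative. */
public class ServiceRegistry {
    private final Map<String, List<String>> providers = new ConcurrentHashMap<>();

    /** A provider registers its address under a service name. */
    public void register(String service, String address) {
        providers.computeIfAbsent(service, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    /** A provider leaving the cluster deregisters; consumers see the change. */
    public void deregister(String service, String address) {
        providers.getOrDefault(service, new CopyOnWriteArrayList<>()).remove(address);
    }

    /** A consumer discovers current providers without hard-coding addresses. */
    public List<String> discover(String service) {
        return providers.getOrDefault(service, List.of());
    }
}
```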
But clearly, both Dubbo and Spring Cloud suit only particular application scenarios and development environments; they were not designed for general-purpose, polyglot use. They are also frameworks only at the Dev level, lacking the overall DevOps solution that a microservice architecture really needs. Hence the rise of Service Mesh.
The next generation of microservices: Service Mesh?
Service Mesh
Service Mesh, sometimes translated as "service grid," is an infrastructure layer for inter-service communication. Explained in one sentence, a Service Mesh can be compared to TCP/IP between applications or microservices: it handles network calls, rate limiting, circuit breaking, and monitoring between services. Just as applications generally need not care about the TCP/IP layer (for example, RESTful applications over HTTP), with a Service Mesh they no longer need to implement inter-service concerns themselves or through frameworks such as Spring Cloud and Netflix OSS; these can be handed over to the Service Mesh.
Service Mesh has the following characteristics:
A middle layer for communication between applications
A lightweight network proxy
Transparent to applications
Decouples application retries/timeouts, monitoring, tracing, and service discovery
The architecture of Service Mesh is shown in the following figure:
A Service Mesh runs as a sidecar that is transparent to applications; all traffic between applications passes through it, so application traffic can be controlled from within the Service Mesh.
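Conceptually, the sidecar wraps every call with cross-cutting behavior the application never sees. The sketch below models that interception as a function wrapper that adds retries and a request counter; a real mesh intercepts network traffic (for example via iptables rules), not method calls:

```java
import java.util.function.Supplier;

/** Conceptual sketch of sidecar interception: retries and metrics are
 *  added around a call without the application changing at all. */
public class Sidecar {
    int requestCount = 0; // a metric collected transparently by the proxy

    /** Wrap any call with simple retry logic (maxAttempts must be >= 1). */
    public <T> T proxy(Supplier<T> call, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            requestCount++;                    // every attempt is observed
            try { return call.get(); }
            catch (RuntimeException e) { last = e; }
        }
        throw last;                            // all attempts failed
    }
}
```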
The popular open-source Service Mesh projects at present are Linkerd, Envoy, and Istio; recently Buoyant (the company behind Linkerd) released Conduit, a Kubernetes-native open-source Service Mesh project.
Linkerd
Linkerd is an open-source network proxy designed to be deployed as a service mesh: a dedicated layer for managing, controlling, and monitoring service-to-service communication within an application.
Linkerd is designed to solve the problems that companies such as Twitter, Yahoo, Google, and Microsoft found when operating large production systems. In their experience, the source of the most complex, surprising, and urgent behavior is usually not the services themselves but the communication between services. Linkerd addresses these problems not only by controlling the communication mechanism but by providing an abstraction layer on top of it.
Its main features are:
Load balancing: Linkerd provides a variety of load balancing algorithms that use real-time performance metrics to distribute load and reduce tail latency across the application.
Circuit breakers: Linkerd includes automatic circuit breakers that will stop sending traffic to instances that are considered unhealthy, giving them a chance to recover and avoid a chain reaction failure.
Service discovery: Linkerd integrates with various service discovery backends to help you reduce code complexity by removing specific (ad-hoc) service discovery implementations.
Dynamic request routing: Linkerd enables dynamic request routing and rerouting, allowing you to set up staging services, canaries, blue-green deployments, cross-DC failover, and dark traffic with minimal configuration.
Retries and deadlines: Linkerd can automatically retry requests after certain failures and can time out requests after a specified period.
TLS: Linkerd can be configured to send and receive requests over TLS, which you can use to encrypt traffic across host boundaries without modifying existing application code.
HTTP proxy integration: Linkerd can act as an HTTP proxy, which nearly all modern HTTP clients support, making it easy to integrate into existing applications.
Transparent proxy: you can use iptables rules on the host to set a transparent proxy through Linkerd.
gRPC: Linkerd supports HTTP/2 and TLS, allowing it to route gRPC requests and support advanced RPC mechanisms such as bidirectional streaming, flow control, and structured data payloads.
Distributed tracing and instrumentation: Linkerd supports distributed tracing and metrics instrumentation, providing uniform observability across all services.
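The circuit-breaking behavior described above can be sketched with a simple count-based breaker. The consecutive-failure threshold here is an illustrative simplification; Linkerd's actual breaker works from health signals such as success rate rather than a fixed count:

```java
/** Minimal count-based circuit breaker sketch; the threshold policy is
 *  an illustrative simplification, not Linkerd's actual algorithm. */
public class CircuitBreaker {
    enum State { CLOSED, OPEN }
    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private final int threshold;

    public CircuitBreaker(int threshold) { this.threshold = threshold; }

    /** Returns false when open: traffic stops flowing to the unhealthy
     *  instance, giving it a chance to recover. */
    public boolean allowRequest() { return state == State.CLOSED; }

    public void recordSuccess() { consecutiveFailures = 0; }

    public void recordFailure() {
        if (++consecutiveFailures >= threshold) state = State.OPEN;
    }
}
```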
Envoy
Envoy is an L7 proxy and communication bus designed for service-oriented architectures. The project was born with this goal:
The network should be transparent to applications; when network or application failures occur, it should be easy to locate the source of the problem.
Envoy provides the following features:
Out-of-process architecture: works with applications written in any language and can be upgraded quickly.
Built on modern C++11 code: delivers efficient performance.
L3/L4 filters: at its core Envoy is an L3/L4 network proxy with a programmable filter chain that plugs into the main server to implement different TCP proxy tasks. A variety of tasks are supported by writing filters, such as raw TCP proxying, HTTP proxying, TLS client certificate authentication, and so on.
HTTP L7 filters: supports an additional HTTP L7 filter layer. HTTP filters plug into the HTTP connection management subsystem to perform different tasks such as buffering, rate limiting, routing/forwarding, and sniffing Amazon's DynamoDB.
HTTP/2 support: in HTTP mode, Envoy supports both HTTP/1.1 and HTTP/2 and can proxy in both directions between them, meaning HTTP/1.1 and HTTP/2 can be bridged in any combination of client and target server.
HTTP L7 routing: when running in HTTP mode, supports path-based routing and redirection based on content type, runtime values, and so on; it can serve as a front/edge proxy for services.
gRPC support: gRPC is an RPC framework from Google that uses HTTP/2 as its underlying multiplexer. gRPC requests and responses carried over HTTP/2 can use Envoy's routing and load-balancing capabilities.
MongoDB L7 support: can collect statistics, connection records, and similar information.
DynamoDB L7 support: can collect statistics, connection information, and so on.
Service discovery: supports multiple service discovery methods, including asynchronous DNS resolution and service discovery via REST requests.
Health checks: contains a health-check subsystem that can run active health checks against upstream service clusters. Passive health checks are also supported.
Advanced load balancing: includes automatic retries, circuit breaking, global rate limiting, request shadowing, and outlier detection; request-rate control is planned for the future.
Front/edge proxy support: can act as a front-end proxy, including TLS termination, HTTP/1.1 and HTTP/2 support, and HTTP L7 routing.
First-class observability: provides reliable statistics for all subsystems. statsd and compatible statistics sinks are currently supported; statistics can also be viewed through the admin port, and third-party distributed tracing mechanisms are supported.
Dynamic configuration: provides layered dynamic configuration APIs that users can use to build complex, centrally managed deployments.
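Envoy-style L7 routing on runtime values can be sketched as a canary split keyed on a request header. The cluster names, header name, and split logic below are invented for the example; Envoy itself expresses this in its route configuration, not application code:

```java
import java.util.Map;

/** Sketch of L7 canary routing on a runtime value, as an Envoy-style
 *  proxy might do; cluster names and the header are illustrative. */
public class CanaryRouter {
    private final int canaryPercent;

    public CanaryRouter(int canaryPercent) { this.canaryPercent = canaryPercent; }

    /** Route deterministically by hashing a request attribute, so the
     *  same user consistently lands on the same cluster. */
    public String route(Map<String, String> headers) {
        String user = headers.getOrDefault("x-user-id", "");
        int bucket = Math.abs(user.hashCode()) % 100; // 0..99
        return bucket < canaryPercent ? "service-v2-canary" : "service-v1";
    }
}
```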
Istio
Istio is an open platform for connecting, managing, and securing microservices. Istio provides a simple way to set up a network of deployed services, with load balancing, inter-service authentication, monitoring, and other capabilities, without changing any service code. To add Istio support to a service, you only need to deploy a special sidecar proxy alongside it and configure and manage the proxies with Istio's control plane functions, intercepting all network traffic between the microservices.
Istio currently supports only service deployment on Kubernetes, but other environments will be supported in future releases.
Istio provides a complete solution that meets the diverse needs of microservice applications by giving behavioral insight into, and operational control over, the service mesh as a whole. It uniformly provides many key capabilities across the network of services:
Traffic management: controls the flow of traffic and API calls between services, making calls more reliable and the network more robust under adverse conditions.
Observability: insight into the dependencies between services, and into the nature and flow of traffic between them, provides the ability to identify problems quickly.
Policy enforcement: applies organizational policy to inter-service interactions, ensuring that access policies are enforced and resources are fairly shared among consumers. Policy changes are made by configuring the mesh, not by modifying application code.
Service identity and security: provides verifiable identities for services in the mesh and the ability to protect service traffic so that it can flow across networks of varying trust levels.
The Istio service mesh is logically split into a data plane and a control plane:
The data plane consists of a set of intelligent proxies (Envoy) deployed as sidecars, which mediate and control all network communication between microservices.
The control plane manages and configures the proxies to route traffic and enforces policies at runtime.
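The plane split can be sketched as a control plane that holds the desired routing rules and pushes them to every data-plane proxy; all names in this sketch are invented, and real control planes distribute configuration over APIs such as Envoy's xDS rather than in-process calls:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the data-plane/control-plane split: policy lives in one
 *  place and is pushed to every proxy; names are illustrative. */
public class ControlPlane {
    public static class Proxy {
        /** Route table this data-plane proxy applies to live traffic. */
        public Map<String, String> routeTable = new HashMap<>();
    }

    private final List<Proxy> proxies = new ArrayList<>();
    private final Map<String, String> desiredRoutes = new HashMap<>();

    /** A new sidecar joins the mesh and immediately receives current config. */
    public Proxy addProxy() {
        Proxy p = new Proxy();
        proxies.add(p);
        push();
        return p;
    }

    /** Operators change policy here, never in application code. */
    public void setRoute(String service, String version) {
        desiredRoutes.put(service, version);
        push();
    }

    private void push() { // every sidecar gets the same configuration
        for (Proxy p : proxies) p.routeTable = new HashMap<>(desiredRoutes);
    }
}
```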
The following figure shows the different components that make up each plane:
Conduit
Conduit is an ultra-lightweight service mesh designed for Kubernetes. It transparently manages the runtime communication of services running on Kubernetes, making them more secure and reliable, and provides visibility, reliability, and security without requiring code changes.
The Conduit service mesh likewise consists of a data plane and a control plane. The data plane carries the application's actual network traffic; the control plane drives the data plane and exposes a northbound interface.
Comparison
Linkerd and Envoy are similar: both are network proxies oriented toward inter-service communication, and both implement service discovery, request routing, load balancing, and so on. Their design goal is to solve inter-service communication so that applications need not be aware of it, which is the core idea of Service Mesh. Linkerd and Envoy act as distributed sidecars; many such proxies connected to one another form the service mesh.
Istio, on the other hand, takes a higher-level view, dividing the Service Mesh into a Data Plane and a Control Plane. The Data Plane handles all network communication between microservices, while the Control Plane manages the Data Plane's proxies:
Istio also natively supports Kubernetes, which bridges the gap between the application scheduling framework and the Service Mesh.
There is still little information about Conduit; according to the official introduction, its positioning and features are similar to Istio's.
Kubernetes + Service Mesh = a complete microservice framework
Kubernetes has become the de facto standard for container scheduling, and a container can serve as a microservice's smallest unit of work, bringing out the greatest advantages of the microservice architecture. I therefore believe the future microservice architecture will revolve around Kubernetes. Service Meshes such as Istio and Conduit are designed for Kubernetes, and their emergence fills Kubernetes's gap in inter-service communication between microservices. Although Dubbo and Spring Cloud are mature microservice frameworks, they are more or less tied to specific languages or application scenarios and solve only the Dev-level problems of microservices. To address Ops as well, they must be combined with resource scheduling frameworks such as Cloud Foundry, Mesos, Docker Swarm, or Kubernetes:
However, because of their original designs and ecosystems, many applicability problems remain to be solved in such combinations.
Kubernetes, by contrast, is a general-purpose container management platform independent of any development language, supporting both cloud-native and traditional containerized applications. It covers both the Dev and Ops stages of microservices, and combined with a Service Mesh it can offer users a complete end-to-end microservice experience.
So I think the future micro-service architecture and technology stack may be in the following form:
A multi-cloud platform provides the resource capacity (compute, storage, networking, etc.) for the microservices; containers, scheduled and orchestrated by Kubernetes, serve as the smallest unit of work; the Service Mesh manages inter-service communication; and finally the microservices' service interfaces are exposed through an API gateway.
I believe that as microservice frameworks based on Kubernetes and Service Mesh become popular, the cost of adopting microservices will drop dramatically, finally providing a solid foundation and guarantee for putting microservices into practice at scale.
© 2024 shulou.com SLNews company. All rights reserved.