2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article walks through the common questions that come up in microservice architecture design. The explanations are kept simple and concrete, so let's work through them one by one.
Should the database be split, and must it map 1-to-1 to microservices?
We noted when first introducing microservice architecture that, to keep each microservice autonomous and loosely coupled, services are split vertically, from the database through the logic layer up to the front end.
In other words, splitting the database is a key part of the overall microservice architecture design.
A key observation from practice, however, is that the upper-layer microservice components tend to be split quite finely; it is normal for even a moderately complex business system to end up with 20 to 30 microservice components.
Splitting the database into 20 separate databases to match is clearly unreasonable.
On the one hand it increases the management complexity of the databases themselves; at the same time, an overly fine database split introduces more distributed-transaction problems and makes cross-database association queries inconvenient. The best suggestion here is therefore to introduce the concept of a business domain, namely:
Split the database by business domain. Each business domain is relatively independent and corresponds to one independent database, but a domain can contain multiple upper-layer microservice components. Microservices within the same business domain still access and invoke each other through the registry.
In other words, microservices in the same business domain are decoupled at the logic layer and may only call each other through API interfaces, which keeps distributed deployment convenient; at the database layer, however, they are not split further and share the same database.
For example, in our projects the 4A and process-engine microservices share one database, while independent microservices such as expense reporting, travel reporting, and loan reporting share one accounting database.
Although the databases are not fully decoupled this way, splitting and decoupling the microservices at the logic layer still gives us fine-grained management of the deployment packages, and when business logic changes we only need to modify the corresponding microservice module, minimizing the impact of the change.
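As a hedged sketch of this layout (service names, database name, and credentials are hypothetical, not from the article), two Spring Boot services in the same accounting domain would register independently but point at the same domain database:

```yaml
# expense-service/src/main/resources/application.yml
spring:
  application:
    name: expense-service        # registered independently in the registry
  datasource:
    url: jdbc:mysql://accounting-db:3306/accounting   # shared domain database
    username: accounting
---
# travel-service/src/main/resources/application.yml
spring:
  application:
    name: travel-service         # separate deployment unit, same database
  datasource:
    url: jdbc:mysql://accounting-db:3306/accounting
    username: accounting
```

Each service remains a separate deployment and change unit, while the domain keeps one database and avoids cross-database queries and distributed transactions inside the domain.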
Whether to adopt the full Spring Cloud suite
If you build on the Spring Cloud microservice framework, the corresponding capabilities are all provided out of the box: service registry, rate limiting and circuit breaking, microservice gateway, load balancing, configuration center, security, and declarative invocation.
All you then need to do is develop the microservice components themselves with the Spring Boot framework.
Of course, there is another approach, which we use in our own projects: use Spring Boot only to develop the individual microservice components, and then combine and integrate the current mainstream open-source microservice components around them:
Service registry: Alibaba's Nacos
Configuration center: Ctrip's open-source Apollo
Rate limiting and circuit breaking: Alibaba's Sentinel
Service-chain monitoring: SkyWalking distributed tracing, integrated directly
API gateway: Kong, for API integration and management
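As a hedged sketch of this mix-and-match approach, wiring one Spring Boot service to Nacos for discovery and Apollo for configuration might look like the following (service name, addresses, and app id are placeholders):

```yaml
# bootstrap.yml of a single Spring Boot microservice
spring:
  application:
    name: expense-service
  cloud:
    nacos:
      discovery:
        server-addr: nacos-cluster:8848   # Nacos registry (clustered for HA)
app:
  id: expense-service                     # Apollo app id
apollo:
  meta: http://apollo-meta:8080           # Apollo config service address
  bootstrap:
    enabled: true                         # load Apollo config at startup
```

The service code itself stays plain Spring Boot; discovery and configuration are externalized to the chosen open-source components.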
Of course, you might not use Spring Boot at all, and instead develop and integrate the microservice components with the Dubbo open-source framework, which supports more efficient RPC calls.
For building a microservice technology platform quickly, choosing the complete Spring Cloud framework and component set is certainly the simplest path, and it basically meets all needs. For everyday traditional enterprise applications its performance is entirely sufficient; not every project faces the massive data access and concurrency demands of Internet-scale systems.
If instead you assemble various open-source components and integrate the technology stack yourself, there is inevitably up-front work on platform construction and integration verification, and the overall infrastructure becomes more complex. For example, if you use a Nacos registry, you also need to run it as a cluster to meet high-availability requirements.
Summing up the above, the simple rule is:
If you want convenience and have no high performance requirements, use the Spring Cloud framework as a whole.
If you have high performance requirements and sufficient in-house expertise, integrate the open-source components yourself.
In actual microservice implementation projects, however, we often see a third scenario.
For example, a group enterprise splits its planning and management system into 10 microservice modules during up-front architecture design and invites three software vendors for customized development, requiring each vendor to develop with a microservice architecture.
Here we hit a key problem: it is fine for each vendor to adopt a microservice architecture internally, but from the perspective of the overall application there is a unified governance and control requirement across all 10 microservice modules, for capabilities such as an API gateway and a service configuration center.
These components should not come from any one vendor's Spring Cloud stack; they need to be pulled out of the individual microservice architectures and built as shared service capabilities. In such cases our suggestion is to use third-party open-source components for governance wherever possible.
In the example above, each vendor can retain the most basic internal setup. That is, when a vendor develops its three microservice modules, it can still enable Eureka + Feign + Ribbon to handle registration and invocation of API interfaces among the components it develops itself.
However, when modules from different vendors need to cooperate, they go through the shared technical service platform built outside the individual stacks.
For example, API interfaces are registered to a unified Kong gateway, and the Kong gateway is managed by the platform integrator.
Likewise, common configuration shared by the three vendors is moved from Spring Cloud Config to the Apollo configuration center.
In a nutshell: when evaluating whether to adopt the full Spring Cloud solution, you also need to assess whether there is team collaboration across clear boundaries, or large-scale integration of microservices from multiple business systems as in group enterprises. If so, some common technical service capabilities must be pulled out and built independently.
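As a hedged sketch of the shared-gateway idea, the externally visible interfaces of each vendor could be registered on the platform integrator's Kong gateway declaratively (decK format; service names, routes, and limits are hypothetical):

```yaml
_format_version: "3.0"
services:
  - name: vendor-a-expense
    url: http://vendor-a-expense:8080      # vendor A's internal service
    routes:
      - name: expense-route
        paths:
          - /api/expense
    plugins:
      - name: rate-limiting                # governance applied at the gateway,
        config:                            # not inside any vendor's code
          minute: 600
  - name: vendor-b-travel
    url: http://vendor-b-travel:8080
    routes:
      - name: travel-route
        paths:
          - /api/travel
```

Cross-vendor policies (security, rate limits, logging) then live in one place that the integrator owns, while each vendor keeps its own internal Eureka + Feign + Ribbon setup.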
How should the development team be split?
When we implement microservices and cloud-native transformation, it looks as if only the IT system is being divided into microservices, but more importantly the business organization and teams themselves need the same decomposition, into highly independent and autonomous business teams.
Each team is staffed with its own front-end and back-end developers, requirements analysts, and testers, and given a high degree of autonomy.
After splitting into many business teams, how do we preserve the conceptual consistency and architectural integrity of the original large application and product? Our proposal is that overall product planning and overall architecture design remain centralized and unified, and the results are then split up and assigned to the individual microservice development teams.
So what does this architecture design include? Specifically:
The function list of each microservice module
The interface list of each microservice module
The database split and the Owner of each data table
These three points are the most important architecture decisions to make in advance. Once they are clear, work can be assigned to each microservice team. The teams are then highly autonomous and flat, coordinating and communicating with each other directly, without routing every discussion through the architects and lengthening the communication path.
In this sense, the product planners and architects play a role much like the registry in a microservice architecture. This is also why we often say that the technical microservice split is really, first of all, a structural adjustment and responsibility split of the business organization.
So how, concretely, is the development team split?
First of all, you cannot split 20 microservices across 20 development teams. The concept of domain division still applies: the 20 microservices are grouped and split along some dimension.
Method 1: group by vertical business domain, following the database split described earlier
Method 2: group by horizontal layer, for example a platform-tier team, a middle-tier team, and a front-end/app team
After the team split, each development team must include front-end developers, back-end developers, and testers. Requirements analysts can be staffed centrally rather than assigned to individual development groups; alternatively, each development group gets one requirements analyst, while a single product manager owns the product requirements for the whole team and detailed requirement refinement is still completed inside each development team.
Why do we put so much emphasis on splitting the development team?
Simply put, the work inside each development team should be a black box, invisible to the other teams. Teams stay highly independent of one another and deliver to each other only through coarse-grained interfaces.
If the development team itself is not split, you will find that when one team manages multiple microservice modules, the microservice development standards established earlier are easily violated, and auditing and reworking those violations afterward costs a great deal of time.
A simple example: when the data is split into two databases but both are managed by the same developer, it is tempting to solve problems with cross-database association queries between the two libraries, which the microservice development standard does not allow.
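To illustrate the compliant alternative, here is a minimal, self-contained sketch (two in-memory SQLite databases stand in for two services' stores; all table and column names are hypothetical) that composes data in the application layer instead of issuing a cross-database JOIN:

```python
import sqlite3

# Two business domains, each owning its own database (in-memory for illustration).
orders_db = sqlite3.connect(":memory:")
users_db = sqlite3.connect(":memory:")

orders_db.execute("CREATE TABLE orders (id INTEGER, user_id INTEGER, amount REAL)")
orders_db.execute("INSERT INTO orders VALUES (1, 100, 25.0), (2, 101, 40.0)")

users_db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
users_db.execute("INSERT INTO users VALUES (100, 'alice'), (101, 'bob')")

def get_orders_with_names():
    """API composition: query each owner's store separately, join in code."""
    orders = orders_db.execute(
        "SELECT id, user_id, amount FROM orders ORDER BY id"
    ).fetchall()
    user_ids = {o[1] for o in orders}
    placeholders = ",".join("?" * len(user_ids))
    rows = users_db.execute(
        f"SELECT id, name FROM users WHERE id IN ({placeholders})",
        tuple(user_ids),
    ).fetchall()
    names = dict(rows)
    # Each order is enriched with the user name fetched from the other store.
    return [(o[0], names[o[1]], o[2]) for o in orders]
```

In a real system the second query would be an API call to the users service rather than a direct database read; the point is that each store is touched only by its owner.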
From the perspective of the software enterprise's own IT governance, this is also the best arrangement: in a large project or application system, not every developer should see the source code of every module. Components a developer does not own can only be consumed through their interfaces; everything else stays invisible.
Service registry and API gateway selection
I have a dedicated article analyzing service registries and API gateways in detail.
When do I need to use the API gateway?
Under a microservice architecture, even if no other external applications interact or integrate with you, the application itself usually has an app client, developed with a front-end/back-end split and accessed over the Internet. You then need a unified API access entry point, and you also need to consider further security isolation from the internal microservice modules.
Seen this way, the service-proxy or pass-through capability of an API gateway means essentially the same thing as the Nginx reverse proxying or routing we often talk about.
If all you want is a unified exit for your API interfaces, plus DMZ-style security isolation, you do not need to introduce a full API gateway at the start of your architecture; Nginx as a routing proxy is enough. Under this architecture the consumer side and provider side of every API interface are developed by one team, so analyzing and troubleshooting problems is quite convenient, and secure API access can be implemented with JWT or OAuth 2.0 without much complexity.
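A minimal sketch of that Nginx-as-routing-proxy setup (upstream addresses and the host name are placeholders):

```nginx
# Nginx as a plain routing proxy in front of the microservices (illustrative)
upstream api_backend {
    server 10.0.0.11:8080;   # microservice instance 1
    server 10.0.0.12:8080;   # microservice instance 2
}

server {
    listen 80;
    server_name api.example.com;

    location /api/ {
        proxy_pass http://api_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

This already gives a unified entry point and basic load balancing; token validation (JWT or OAuth 2.0) stays in the application until a full gateway is justified.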
API management and governance for open capabilities or multi-application integration
However, when we face integration with multiple external applications, or open our API interface services to external partners, the requirements on API control and governance naturally rise.
On top of conventional proxy routing we then need load balancing, security, logging, rate limiting and circuit breaking, and similar capabilities; and we do not want these concerns baked into each API's implementation, but configured flexibly when the API is attached to the gateway, so they can be managed and controlled centrally.
This is where the API gateway earns its place.
Multiple development teams collaborating, with standardized service governance
This is the second scenario that, in my understanding, requires an API gateway; it is somewhat similar to the role of an ESB service bus in a traditional IT architecture. With multiple development teams, the API interface services each team registers and exposes must be managed, and an API gateway is the natural place to implement that.
That is, the unified management of API interface integration across development teams is taken over by the API gateway, including security, log auditing, and flow control. Once multiple teams collaborate, you can no longer rely on the technologies and development standards of a single team; there has to be one unified standard.
At the same time, when multiple development teams cooperate and integrate, there must be a unified integrator to resolve collaboration problems. Even under a Service Mesh architecture there is a control plane for unified coordination.
Technology component choices after adopting an API gateway
Note that the API gateway itself provides load balancing, rate limiting and circuit breaking, and service proxying.
In other words, everything Eureka + Feign + Ribbon + Hystrix does could be moved into the API gateway. However, in a complete microservice architecture an API interface may need to serve both the API consumption calls of internal components and external applications exposed through the API gateway.
The Http Rest API interface exposed through the API gateway is consumed in the traditional way, no longer with the Feign declarative style used for internal API interface calls.
That is, microservice A must satisfy both the internal microservice B, which consumes it through the service registry, and the external app, which consumes it through the API gateway.
So there is in fact no single unified entry point for the traffic into the microservice A cluster.
In this scenario, Hystrix rate limiting and circuit breaking only govern the consumption calls among internal microservice components; to limit traffic from the external app, the rate-limiting and circuit-breaking function on the gateway must still be enabled.
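For intuition on what the gateway-side limiter does, here is a toy token-bucket limiter in Python (the class, rates, and names are illustrative only, not part of Hystrix or any gateway's actual API):

```python
import time

class TokenBucket:
    """Toy token bucket, the kind of policy a gateway applies per consumer."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # refill rate, tokens per second
        self.capacity = capacity          # burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=10, capacity=5)
# Rapid-fire calls: the first 5 drain the burst capacity, the rest are
# rejected until the bucket refills.
results = [bucket.allow() for _ in range(7)]
```

Internal Hystrix-style limits and a gateway-side limiter like this cover the two different traffic entry points described above.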
Clusters and load balancing after integrating the microservice architecture with a container cloud
Finally, let's talk about service discovery and load balancing after integrating the microservice architecture with a Kubernetes + Docker container cloud.
As mentioned earlier, with a Eureka service registry we can start multiple instances of the same microservice module A on different port numbers. After startup the instances register automatically and are discovered through Eureka, and Ribbon then load-balances service access.
In other words, nodes of microservice module A are added and deployed manually.
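A hedged sketch of one such manually started instance's configuration (names, ports, and the registry address are placeholders):

```yaml
# application.yml for one manually started instance of microservice A
spring:
  application:
    name: microservice-a
server:
  port: 8081            # a second instance would be started with e.g. 8082
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-server:8761/eureka/
```

Each additional instance means editing the port and starting another process by hand, which is exactly the manual scaling step that the container cloud removes.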
Under DevOps continuous integration, however, once a Kubernetes + Docker container cloud is in place, we can scale microservice node resources dynamically through Kubernetes. The scaled-out Pod resources are load-balanced by Kubernetes itself, so external callers only need to access the node address plus port number.
At this point there are actually two ways to proceed, plus a further evolution.
Approach 1: stop using Eureka for service registration and discovery
Here, instead of Eureka, services are discovered and accessed through the virtual IP that Kubernetes assigns to the dynamically deployed instances, and Kubernetes load-balances across the backend nodes.
In this case interfaces can only be consumed as plain Http Rest API calls, rather than with the original Feign declarative style. That is, you use Spring Boot only to develop independent microservices that expose Http Rest API interfaces; Eureka + Feign + Ribbon from the Spring Cloud framework are no longer used.
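A hedged sketch of the Kubernetes Service that replaces the registry in this approach (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: microservice-a
spec:
  selector:
    app: microservice-a        # matches the Pods of the scaled Deployment
  ports:
    - port: 80                 # stable virtual-IP port that callers use
      targetPort: 8080         # container port of each instance
```

Callers address the stable Service name, and Kubernetes spreads the traffic across whatever Pods currently back it; no client-side Ribbon logic is involved.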
Approach 2: replace the Kubernetes Service mechanism with Eureka
In this scenario the clustering function of Kubernetes itself is not used; instead, the dynamically deployed microservice modules automatically register with the Eureka service registry for unified management. In other words, the system is still built on the traditional Spring Cloud framework.
This way, Spring Cloud's key capabilities of rate limiting, fault tolerance, and heartbeat monitoring are retained.
Approach 3: the further step is Service Mesh
Going further, we arrive at fully decentralized microservice governance solutions like Istio. In this mode, service registration and discovery, rate limiting and circuit breaking, security, and other key governance capabilities are provided through the Sidecar.
If the microservice modules are all deployed into Docker containers through Kubernetes, the Sidecar can be attached alongside the specific deployment during image building and container deployment in Kubernetes.
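With Istio, for example, attaching the sidecar can be as simple as labeling the target namespace so the proxy is injected automatically at deployment time (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: microservices-demo
  labels:
    istio-injection: enabled   # Istio injects the Envoy sidecar into new Pods here
```

The application images themselves stay unchanged; governance capabilities arrive with the injected sidecar rather than being compiled into each service.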
To put it simply:
We do not need to think much about distributed API interface integration when developing a microservice module; once integrated with Kubernetes and Service Mesh, the module gains distributed interface invocation and integration capabilities, along with security, logging, and rate-limiting and circuit-breaking management for its API interfaces.
This is why it is often said that Service Mesh is the last piece of the puzzle in Kubernetes's support for microservices.
Thank you for reading. That concludes this overview of the problems in microservice architecture design; specific choices still need to be validated in practice.