How to greatly improve the high availability of microservices

This article focuses on how to greatly improve the high availability of microservices. The methods introduced here are simple, fast, and practical, so let's walk through them together.

1. Which technical measures in a microservice architecture must be planned at the design stage?

The three staples of Internet architecture are circuit breaking, message queues, and caching; all three must be designed in from the start. In addition, to improve response time, parallelized operations should also be planned in advance.

Circuit breaking: an important means of keeping services highly available. User requests no longer hit the service directly; instead they are executed by idle threads in a thread pool. If the pool is full, the request is not left blocked: the user at least gets some result (such as a friendly message) instead of waiting endlessly or watching the system crash. (A rough sketch of this thread-pool isolation appears after these three points.)

Message queue: a message queue (MQ) is a way for different applications to communicate across processes. The purpose of introducing MQ into microservices is to shorten call chains and reduce dependencies.

Caching: to improve QPS and relieve database pressure, distinguish between local and remote caches.
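As a rough illustration of the thread-pool isolation described in the circuit-breaker point above, the sketch below runs the downstream call on a bounded thread pool and returns a friendly message when the pool is saturated or the call is too slow. The pool sizes, the timeout, and the callRecommendService placeholder are illustrative assumptions, not details from the original text.

```java
import java.util.concurrent.*;

public class BulkheadExample {
    // Bounded pool and queue: requests beyond this capacity are rejected immediately.
    private static final ExecutorService POOL = new ThreadPoolExecutor(
            10, 10, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(50),
            new ThreadPoolExecutor.AbortPolicy());

    // Placeholder for the real downstream call.
    static String callRecommendService() {
        return "recommendations";
    }

    public static String handleRequest() {
        try {
            Future<String> future = POOL.submit(BulkheadExample::callRecommendService);
            // Bound the wait so a slow dependency cannot hold the caller forever.
            return future.get(500, TimeUnit.MILLISECONDS);
        } catch (RejectedExecutionException | TimeoutException e) {
            // Pool full or call too slow: return a friendly message instead of hanging.
            return "Service is busy, please try again later";
        } catch (InterruptedException | ExecutionException e) {
            return "Service temporarily unavailable";
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest());
    }
}
```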

2. Caching is a necessary component of every Internet application. How do you use it well to improve system QPS under a microservice framework?

A common rule of thumb for cache usage is that 20% of requests (or fewer) hit the database while 80% hit the cache. In a microservice scenario, interface calls become remote RPC calls, which does add to the response time of a single interface. From a global point of view, however, microservices raise the system's overall QPS, so the extra RT that RPC adds to a single interface can largely be ignored. Of course, if you want to reduce the response time added by RPC as much as possible, then where multiple services are involved you can call them in parallel and cache the aggregated results locally, while each atomic service in the back end connects to its own cache. This effectively improves the system's response time.
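To make the parallel-call idea above concrete, here is a minimal sketch using CompletableFuture to fan out two independent downstream calls and wait for both, so the aggregation latency is bounded by the slower call rather than the sum of the two. The fetchUserProfile and fetchUserPoints methods and the thread pool are hypothetical placeholders for real RPC clients.

```java
import java.util.concurrent.*;

public class ParallelCallExample {
    private static final ExecutorService RPC_POOL = Executors.newFixedThreadPool(8);

    // Hypothetical downstream calls; in a real system these would be RPC clients.
    static String fetchUserProfile(long userId) { return "profile-" + userId; }
    static String fetchUserPoints(long userId)  { return "points-" + userId; }

    public static String aggregate(long userId) throws Exception {
        // Fire both calls concurrently instead of sequentially.
        CompletableFuture<String> profile =
                CompletableFuture.supplyAsync(() -> fetchUserProfile(userId), RPC_POOL);
        CompletableFuture<String> points =
                CompletableFuture.supplyAsync(() -> fetchUserPoints(userId), RPC_POOL);

        // Total wait is bounded by one timeout, roughly the latency of the slower call.
        CompletableFuture.allOf(profile, points).get(300, TimeUnit.MILLISECONDS);
        return profile.join() + " / " + points.join();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(aggregate(42L));
    }
}
```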

Generally the API gateway initiates requests to back-end services. If multiple APIs are called, there will be an aggregation layer, and caching is usually done at that layer. Caches are divided into local caches (such as in-JVM caches) and remote caches (such as Redis or Memcache). Because the architecture is distributed, cache entries are usually cleared by TTL-based automatic expiration. If business volume is very large and tolerance for data inconsistency is low, the TTL can be set to around 1 second; if the requirement is looser, 30 seconds or even minutes will do. In software architecture, read-write separation is also commonly used to raise QPS. Since caching can lead to data inconsistency, scenarios that require strong consistency can be handled with a version number. For example, when Li Si reads record A its version is 1; if Zhang San then updates record A, the version becomes 2, and Li Si can no longer modify record A based on the stale version.
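The version-number check described above is essentially optimistic locking. Below is a minimal JDBC sketch, assuming a hypothetical record_a table with content and version columns: the UPDATE only succeeds if the version the caller originally read is still current.

```java
import java.sql.*;

public class OptimisticLockExample {
    /**
     * Updates record A only if nobody has changed it since the caller read it.
     * Returns false when the version has moved on (e.g. Zhang San already bumped
     * it from 1 to 2), so Li Si's stale write is rejected.
     * Table and column names ("record_a", "content", "version") are illustrative.
     */
    public static boolean updateWithVersion(Connection conn, long id,
                                            String newContent, int expectedVersion)
            throws SQLException {
        String sql = "UPDATE record_a SET content = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, newContent);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            // 0 rows updated means the version no longer matches: reject the stale write.
            return ps.executeUpdate() == 1;
        }
    }
}
```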

3. How should message queues (MQ) be used in microservices? What are the good practices? Must idempotency be considered when using MQ?

A message queue (MQ) is a way for different applications to communicate across processes. Message queues are mainly used for: asynchronous processing (increasing throughput); peak shaving and valley filling (improving system stability); system decoupling (isolating business boundaries); and data synchronization (guaranteeing eventual consistency). The purpose of introducing MQ into microservices is to shorten call chains and reduce dependencies. For example, after a user registers, the system grants the user points, coupons, and other initialization items. Without MQ, the user service would have to depend on the points service, the coupon service, and so on. With MQ, the user service only handles registration and then publishes a message, regardless of the derivative services; the other parties consume the message afterwards. As long as the microservice framework is configured with a retry mechanism, write interfaces must be idempotent: network jitter can cause requests to be submitted repeatedly, and without idempotency dirty data will appear. Idempotency is also required when using MQ, because after a failure a consumer may not have committed its offset, causing the message to be delivered again.
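As a sketch of consumer-side idempotency, the example below deduplicates deliveries by message ID before granting registration points. The in-memory map is for illustration only; a real consumer would use something durable such as a Redis key written with SETNX or a database unique constraint, and the grantPoints call is a hypothetical downstream operation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentConsumerExample {
    // In-memory dedup table for illustration; production code would persist this.
    private static final Map<String, Boolean> PROCESSED = new ConcurrentHashMap<>();

    /** Handles "user registered" messages; grantPoints stands in for a real side effect. */
    public static void onUserRegistered(String messageId, long userId) {
        // putIfAbsent returns null only for the first delivery of this messageId.
        if (PROCESSED.putIfAbsent(messageId, Boolean.TRUE) != null) {
            // Redelivered message (retry, network jitter, consumer crashed before commit): skip.
            return;
        }
        grantPoints(userId);
    }

    static void grantPoints(long userId) {
        System.out.println("granted welcome points to user " + userId);
    }

    public static void main(String[] args) {
        onUserRegistered("msg-1", 1001L);
        onUserRegistered("msg-1", 1001L); // duplicate delivery is ignored
    }
}
```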

4. What needs to be considered when using circuit breaking and degradation? Which parameters need to be tuned?

Degradation and circuit breaking target non-core business systems. When a non-core system responds slowly because of excessive traffic, some requests to that interface are degraded, and once a configured threshold is reached the breaker opens; after that, the work can be handled asynchronously later. Another case is manually opening the breaker on a specific interface: for example, an e-commerce site may shield the "guess you like" or recommendation interface during a big promotion. Without a circuit breaker, you would have to change the code by hand and push it through the development-testing-pre-release-production pipeline, which wastes time. All of these strategies lay the groundwork for high availability. Hystrix is generally used for degradation and circuit breaking. Many parameters can be configured, but the key ones to note are:

circuitBreaker.requestVolumeThreshold

circuitBreaker.errorThresholdPercentage

execution.isolation.thread.timeoutInMilliseconds

hystrix.command.default.circuitBreaker.sleepWindowInMilliseconds

In addition, manually opening the breaker needs to be supported, so that it can be triggered proactively in special cases.
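The sketch below shows how those parameters might be wired up with a HystrixCommand and a fallback; the thresholds are illustrative values, not recommendations, and the recommendation call is a placeholder.

```java
import com.netflix.hystrix.*;

public class RecommendCommand extends HystrixCommand<String> {

    public RecommendCommand() {
        super(Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RecommendService"))
            .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                // circuitBreaker.requestVolumeThreshold: minimum requests in the window
                .withCircuitBreakerRequestVolumeThreshold(20)
                // circuitBreaker.errorThresholdPercentage: error rate that opens the breaker
                .withCircuitBreakerErrorThresholdPercentage(50)
                // execution.isolation.thread.timeoutInMilliseconds
                .withExecutionTimeoutInMilliseconds(1000)
                // circuitBreaker.sleepWindowInMilliseconds: how long to stay open before probing
                .withCircuitBreakerSleepWindowInMilliseconds(5000)
                // circuitBreaker.forceOpen: flip to true to open the breaker manually
                .withCircuitBreakerForceOpen(false)));
    }

    @Override
    protected String run() {
        // Placeholder for the real "guess you like" / recommendation call.
        return callRecommendService();
    }

    @Override
    protected String getFallback() {
        // Degraded result returned when the breaker is open or the call times out.
        return "default recommendations";
    }

    private String callRecommendService() {
        return "personalized recommendations";
    }
}
```

A caller would run it with new RecommendCommand().execute(), which returns the fallback result whenever the breaker is open or the call times out.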

5. Under excessive pressure, how can microservices adjust automatically or elastically add instances temporarily?

Basically, there are two types of traffic:

1. Normal business traffic: large traffic from operations and campaigns (known in advance).

2. Attack traffic. Peaks from normal user requests can basically be predicted in advance: the number of users is estimated for operations and campaigns, and capacity is expanded ahead of time based on that estimate. If traffic is suddenly and unusually large, an attack should be suspected, and protection is needed at the network level. Systems generally take basic protective measures such as rate limiting and IP blacklists; these intercept most of the abnormal traffic, after which rate limiting, degradation, and circuit breaking are applied at the interface level to keep the platform stable. A rough sketch of such interface-level protection follows below.
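This is a minimal sketch of interface-level protection, assuming Guava's RateLimiter and an externally maintained IP blacklist; the limit of 200 requests per second is an arbitrary illustrative figure.

```java
import com.google.common.util.concurrent.RateLimiter;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class InterfaceGuardExample {
    // Allow roughly 200 requests per second on this interface; tune to real capacity.
    private static final RateLimiter LIMITER = RateLimiter.create(200.0);

    // IP blacklist; in practice this would be fed by the network-level protection layer.
    private static final Set<String> IP_BLACKLIST = ConcurrentHashMap.newKeySet();

    public static String handle(String clientIp) {
        if (IP_BLACKLIST.contains(clientIp)) {
            return "403: blocked";
        }
        // Non-blocking check: shed excess load instead of queueing it.
        if (!LIMITER.tryAcquire()) {
            return "429: too many requests, please retry later";
        }
        return doBusiness();
    }

    static String doBusiness() { return "ok"; }
}
```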

Elastic scaling determines the order of magnitude of QPS the system can support. Several core components need to remain stateless and scalable:

1. Cache: whether the cache supports dynamic scaling.

2. DB: the DB here refers to read-only replicas, since microservices themselves naturally support multi-instance deployment. Once 1 and 2 are in place, you can roughly estimate the system's peak QPS from the number of users in the marketing campaign, and from that calculate how many service instances must be added and how many read-only nodes the cache and the DB each need (a rough estimate is sketched below).
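A back-of-the-envelope version of that estimate might look like the following; every input number (expected users, per-user request rate, burst factor, per-instance QPS) is a made-up placeholder to be replaced with real measurements.

```java
public class CapacityEstimateExample {
    public static void main(String[] args) {
        // Illustrative inputs for a marketing campaign; replace with real measurements.
        long expectedUsers = 500_000;           // users expected during the peak hour
        double requestsPerUserPerSecond = 0.02; // average request rate per active user
        double peakFactor = 3.0;                // burst multiplier on top of the average
        double qpsPerInstance = 800;            // measured capacity of one service instance

        double peakQps = expectedUsers * requestsPerUserPerSecond * peakFactor;
        long instancesNeeded = (long) Math.ceil(peakQps / qpsPerInstance);

        // 500,000 * 0.02 * 3 = 30,000 QPS -> 30,000 / 800 ≈ 38 instances
        System.out.printf("estimated peak QPS: %.0f, instances needed: %d%n",
                peakQps, instancesNeeded);
    }
}
```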

At the same time, we need to do the following three points:

1. Control rate-limiting and circuit-breaking strategies so that traffic does not crush the service.

2. For hot-activity scenarios, use local and remote caches together (see the sketch after this list).

3. Circuit-break some non-core processes during the activity.
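For point 2, a two-level cache might look roughly like this: a short-TTL local cache in front of a shared remote cache, falling back to the database. Caffeine is used for the local layer here as an assumption; the remote cache and DAO calls are placeholders.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

public class TwoLevelCacheExample {
    // L1: in-process cache with a very short TTL to absorb hot-key traffic.
    private static final Cache<String, String> LOCAL = Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(1, TimeUnit.SECONDS)
            .build();

    public static String get(String key) {
        String value = LOCAL.getIfPresent(key);
        if (value != null) {
            return value;                    // served from the local JVM cache
        }
        value = remoteGet(key);              // L2: shared remote cache (e.g. Redis)
        if (value == null) {
            value = loadFromDb(key);         // last resort: the database
            remotePut(key, value, 30);       // remote TTL, e.g. 30 seconds
        }
        LOCAL.put(key, value);
        return value;
    }

    // Placeholders standing in for a real Redis client and DAO.
    static String remoteGet(String key)                     { return null; }
    static void remotePut(String key, String v, int ttlSec) { /* no-op */ }
    static String loadFromDb(String key)                    { return "value-of-" + key; }
}
```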

6. What are the main ways to ensure the high availability of microservices: hardware load-balancing equipment or software load balancing?

The microservice framework itself supports publishing multiple instances of the same service, and deployed applications exchange heartbeats with the registry to confirm that they are online. The framework also supports load balancing and retry, so the application is unaffected when a single instance goes down. In other words, the microservice framework ensures high availability of services through software load balancing.
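As a rough sketch of that soft-load idea, the client below keeps the instance list pushed by the registry, picks an instance round-robin, and retries the next instance when a call fails. Instance discovery and the rpcCall method are placeholders for what a real framework provides.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class SoftLoadBalanceExample {
    private final List<String> instances;        // addresses pushed by the registry
    private final AtomicInteger counter = new AtomicInteger();

    public SoftLoadBalanceExample(List<String> instances) {
        this.instances = instances;
    }

    /** Round-robin pick plus retry on the next instance if one node is down. */
    public String invokeWithRetry(String request, int maxAttempts) {
        RuntimeException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            String target = instances.get(
                    Math.floorMod(counter.getAndIncrement(), instances.size()));
            try {
                return rpcCall(target, request); // placeholder for the framework's RPC call
            } catch (RuntimeException e) {
                last = e;                        // this instance failed; try the next one
            }
        }
        throw last != null ? last : new IllegalStateException("maxAttempts must be at least 1");
    }

    static String rpcCall(String target, String request) {
        return "response from " + target;
    }

    public static void main(String[] args) {
        SoftLoadBalanceExample lb =
                new SoftLoadBalanceExample(List.of("10.0.0.1:8080", "10.0.0.2:8080"));
        System.out.println(lb.invokeWithRetry("ping", 2));
    }
}
```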

People associate high availability with microservices and ask why microservices are hard. In fact, the development stage is not the difficult part; the real challenge is the overall demands placed on the whole team, including the operations process and the monitoring system.

Operations process: whether continuous integration is in place, whether pipeline deployment is supported, and whether version rollback is supported.

Monitoring system: slow responses, timeouts, and errors can trigger timely alerts.

Improving the high availability of microservices is not only a development issue; it requires development and operations to work as one.

7. How should business continuity be considered when deploying a microservice framework?

In recent years, regulation of the financial industry, especially banking, has become increasingly strict, with higher requirements for business continuity, so banking systems urgently need to migrate from traditional architectures to microservices. In actual deployments it is generally necessary to consider active-active within the same city, or across cities, to ensure business continuity. So in migrating to a microservice architecture, how should the need for dual-active and multi-active deployment be considered? Are there suggestions for problems such as fast, non-disruptive failover in abnormal cases and data consistency between different centers?

Answer 1:

There is a great deal of material on this topic; same-city dual-active, remote multi-active, and even cross-cloud deployments get a lot of attention at the moment. Microservices are defined by service independence, dynamic scaling, and similar properties, so from the microservice point of view multi-active or dual-active is not the concern, as long as the services can run normally. From an architectural point of view, however, supporting multi-active or dual-active depends on whether the infrastructure has these capabilities: whether Redis and MySQL support dual-master and automatic master-master failover, whether the MQ supports it, and so on. The workload is very large. Personally, I suggest not attempting dual-active when technical capacity and manpower are short. If the factor must be considered, the active/standby model is still recommended: every data center deploys exactly the same system, and when the primary has a problem, traffic is switched to the standby.

Answer 2:

At present there is no mature technology for running microservices across data centers; the problems of microservice scheduling and service publishing would have to be solved, and latency is also a challenge. A better approach is therefore to deploy the microservices of one application within a single data center as far as possible. For disaster recovery, an identical set of applications can be deployed in another data center, with traffic steered at the front end through load balancing or service governance.

Answer 3:

This problem is actually quite complicated. The bottom layer does not need to distinguish whether the upper layer follows a traditional service model or a microservice model; it only needs to take care of data synchronization. At the application level, once the system is split into microservices the number of applications grows greatly, and distributed components such as configuration centers and registries come into play, which place high demands on the network. A better approach is therefore to complete each business request within one data center, with each data center deploying its own microservice framework to reduce cross-center traffic, along with the corresponding microservice monitoring and fault self-healing modules.

8. Do microservices have to be containerized with Docker? If so, why? What are the advantages and disadvantages?

Cost: both Docker and virtual machines split a single physical server into multiple machines with resources isolated between them, so cost is greatly reduced.

Deployment efficiency: simply put, Docker and virtual machines serve the same concept, but in a service-oriented scenario the reason Docker beats virtual machines is simple. Suppose a very large promotional campaign must go live tomorrow and each core service is estimated to need 10 more instances; with 12 core services, that is 120 machines to add. With virtual machines: spin up 120 VMs, configure the environment, set IPs and ports, install the application, start it, and debug. If a skilled operator needs 10 minutes per machine, that is 1,200 minutes in total, roughly 20 hours. With Docker: since each application was built into an image at deployment time, you only need to roll out these 120 applications through scripts or commands, and the whole process can be completed in about an hour. That said, when the number of services is not large, virtual machines can also meet the requirements.

9. How is underlying data storage implemented in a microservice architecture?

The underlying data stores of microservices are basically heterogeneous: MySQL, HBase, Redis, ES, Hive, and so on. Business processing writes directly to MySQL first, and data is then synchronized by subscribing to the binlog. Binlog data is generally written to MQ, and consumers must handle out-of-order delivery when processing it; data that requires strong consistency must carry a version number.
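A sketch of that version check for out-of-order consumption might look like this: each change event carries a version, and the consumer applies it to the heterogeneous store only if it is newer than what has already been written. The event shape and the in-memory version table are illustrative stand-ins for a real MQ message and downstream store.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BinlogSyncConsumerExample {
    /** Simplified change event carried on MQ after subscribing to the MySQL binlog. */
    record ChangeEvent(String rowId, long version, String payload) {}

    // Stand-in for the downstream store (ES, HBase, Redis, ...): rowId -> last applied version.
    private static final Map<String, Long> APPLIED_VERSION = new ConcurrentHashMap<>();

    /** Apply the event only if it is newer than what was already written (handles disorder). */
    public static void onMessage(ChangeEvent event) {
        APPLIED_VERSION.compute(event.rowId(), (id, lastVersion) -> {
            if (lastVersion == null || event.version() > lastVersion) {
                writeToStore(event);          // first or newer change: apply it
                return event.version();
            }
            return lastVersion;               // stale or duplicate change: drop it
        });
    }

    static void writeToStore(ChangeEvent event) {
        System.out.println("synced " + event.rowId() + " v" + event.version());
    }

    public static void main(String[] args) {
        onMessage(new ChangeEvent("order-1", 2, "paid"));
        onMessage(new ChangeEvent("order-1", 1, "created")); // arrives late, ignored
    }
}
```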

10. We are in the exploratory phase of a microservice-plus-container transformation. How should we choose the microservice framework and the distributed tracing system?

Framework selection: the popular microservice frameworks today are Dubbo and Spring Cloud, each with its own pros and cons. The Spring Cloud family bucket has everything, but few teams use all of it well; much of it is simply a stack of components. Dubbo has moved from being operated by Alibaba alone to international development under Apache. Most Internet companies currently use the Dubbo framework, problems can be resolved quickly through the community, response times are relatively fast, and there are plenty of technology meetups to learn from.

Distributed tracing: there are many APM choices, some open source and some commercial, and the systems on the market are basically modeled on Google's Dapper paper. If the budget allows, a paid enterprise edition has the advantage of being relatively stable, with dedicated people to answer questions. If you have enough R&D resources and want to build your own, you can choose an open-source option such as SkyWalking, Pinpoint, Zipkin, or CAT. SkyWalking and Pinpoint basically require no changes to source code or configuration files; you only need to specify the javaagent parameter in the startup command, which is the most convenient for operators. Zipkin requires modifying configuration such as Spring and web.xml, which is more troublesome. CAT requires hard-coding in the program, which is the most intrusive. Which one to choose depends on how familiar the team is with each product.

At this point, I believe you have a deeper understanding of how to greatly improve the high availability of microservices; you might as well put it into practice.
