How to optimize underlying services in .NET


Today I want to talk about how to optimize underlying services in .NET, a topic many people may not know well. To make it easier to follow, I have summarized the key points below; I hope you take something away from this article.

Analysis of problems

There are two main problems with the underlying services:

Code redundancy

Poor timeliness

Code redundancy

For example:

The award-claiming logic is not unified: every activity gets its own copy, even though one shared routine driven by a loop would cover them all.

Each task type has to implement its own task-completion Job and award-issuing Job.

These problems directly drive up the cost of follow-up development and day-to-day maintenance.

Because of a lack of communication early in development, nothing was encapsulated into a common method, and each Job developer implemented their own version. Of course, they may not have been that careless; more likely someone finished first and everyone else copied that code and tweaked it.

Just imagine: every time a new task type is added, you have to write another Job that completes the task and another Job that automatically issues the award, doubling the work for each type. And if a rule changes later, does every Job have to be changed one by one?

Poor timeliness

Task completion is currently handled by a scheduled service that processes the business data source in batches:

If the schedule runs at a low frequency, each run has far too much data to process.

If it runs at a high frequency, the Jobs put sharply more pressure on the database, and 99% of the queries return nothing useful. An empty result does not mean zero cost, because every query still goes through the usual steps: establishing a connection, lexical analysis, parsing, choosing an execution plan, reading data from the storage engine, and returning results to the client.

As the data source grows, query time gradually increases as well.

These problems mean that after finishing a task, users cannot see it marked as completed or claim the award in time. To show up-to-date status, the display page has to run extra queries and perform the update itself.

Optimized implementation flow chart

Option 1 (extract the common parts)

Objective: reduce code redundancy, improve maintainability, and speed up development of new tasks later on.

Specific implementation: the business flow chart shows that the overall underlying process is basically the same for every task; only the data sources differ. So we can optimize along these lines:

Extract a single automatic award Job that issues awards based on task-completion results, and gradually retire the individual per-task award Jobs.

If the automatic award flow is the same as the manual claim flow, encapsulate it in one public method, called by both the H5 claim button and the award Job.

Have the task-completion Job use the template method pattern: the base class fixes the overall execution flow, and each task type only inherits the parent class and overrides the data-source query. Alternatively, skip the design pattern and simply wrap the "query data source, then complete the task" steps in a common method that Jobs for different task types call. (A minimal sketch of the template-method version follows this list.)
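Below is a minimal C# sketch of the template-method idea. The class and member names (TaskCompletionJobBase, UserTaskRecord, QueryDataSource) are illustrative assumptions rather than the project's real types; the point is only that the base class fixes the flow and each subclass supplies just its data-source query.

```csharp
using System.Collections.Generic;
using System.Linq;

// Template-method sketch: the base class owns the task-completion flow,
// subclasses only override how their data source is queried.
public abstract class TaskCompletionJobBase
{
    // The template method: the overall execution flow lives here, once.
    public void Execute()
    {
        foreach (var record in QueryDataSource())
        {
            MarkTaskCompleted(record);   // shared completion logic
        }
    }

    // The only step that differs per task type.
    protected abstract IEnumerable<UserTaskRecord> QueryDataSource();

    private void MarkTaskCompleted(UserTaskRecord record)
    {
        // Write the task-completion state; the single award Job picks it up later.
    }
}

// Example task type: it only has to say where its data comes from.
public sealed class InvestmentTaskJob : TaskCompletionJobBase
{
    protected override IEnumerable<UserTaskRecord> QueryDataSource()
        => Enumerable.Empty<UserTaskRecord>();   // query the investment records here
}

public sealed class UserTaskRecord
{
    public long UserId { get; set; }
    public string TaskType { get; set; } = "";
}
```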

In my view there is little downside to this scheme. If there is anything to worry about, it is that every Job involved has to be touched; but from the analysis above, the real refactoring work is just writing the parent-class template and encapsulating the public methods, while the data-source query code can be reused. In return you get good extensibility and maintainability.

Option 2 (business event points)

This scheme changes how task participation is triggered: each task type publishes a queue message at the completion point of the corresponding business flow, and the task service (the consumer) subscribes to the relevant messages and executes the task-completion process.

In plain terms, this is instrumenting the business flow with event points; it can also simply be called event-driven.

Architecture diagram

Event-driven architecture

Services should be highly cohesive and loosely coupled. When services need to cooperate, suppose service A needs to trigger some logic in service B. The usual way is to have A call a method on B directly, but that requires A to know that B exists, and if B misbehaves it affects A's normal execution.

So there is strong coupling between them: A must depend on B. That makes the system harder to maintain and extend, which is why event-driven messaging is introduced to reduce coupling between services.

When service A needs to trigger logic in service B, instead of calling B directly, A sends a message to the message queue, B subscribes to the corresponding queue, and the operation runs asynchronously when the event occurs. Both A and B now depend on the message-queue middleware, but they no longer need to know about each other, so they are decoupled.
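As a hedged illustration of that decoupling, here is roughly what service A could look like if it publishes an event instead of calling B. The IEventBus interface and the type names are hypothetical placeholders, not a real library API; in this article the concrete transport behind such an abstraction is a RabbitMQ exchange, shown in the next section.

```csharp
// Hypothetical abstraction for illustration: service A announces that something
// happened instead of calling service B directly.
public interface IEventBus
{
    void Publish(string routingKey, object message);
}

// "Service A": the investment service.
public sealed class InvestmentService
{
    private readonly IEventBus _bus;

    public InvestmentService(IEventBus bus) => _bus = bus;

    public void Invest(long userId, decimal amount)
    {
        // ... local investment business logic ...

        // A does not know whether B (notifications, credits, tasks) exists;
        // it only publishes the fact that an investment succeeded.
        _bus.Publish("TZ.SUCCESS", new { userId, amount });
    }
}
```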

If we introduce this scheme into our activity business, the benefits fall into short-term and long-term gains.

Short-term benefits

Fewer useless, repetitive queries: there is no need to poll the data source over and over; the business side just pushes reliable messages, which removes unnecessary pressure on the database.

Better user experience: timeliness is high, because work that used to pile up at one scheduled point in time is now spread across the moments when events actually happen.

Good scalability: RabbitMQ's built-in load balancing across consumers allows lossless dynamic scale-out when consumption capacity falls short. A Job-based approach can do this too, but only with special handling such as adding intermediate states to the physical table.

Long-term benefits

An event-driven architecture pays off even more in the long run than in the short run. Take RabbitMQ and an investment business as an example. In the initial phase we build the core investment and wealth-management flow, and after an investment the APP must notify the user. Whether the investment succeeds or fails, we send a message to RabbitMQ: success uses RouteKey=TZ.SUCCESS and failure uses RouteKey=TZ.FAILE. The APP notification service subscribes to the queue NoticeQueue bound with RouteKey=TZ.#, which matches both success and failure messages, and sends APP notifications based on the message status. Later, when the business needs to award points for successful investments, you only need to have the credit service subscribe to a new queue IntegrationQueue bound with RouteKey=TZ.SUCCESS. After that come more task activities, credit consumption, and so on.
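A sketch of that wiring using the RabbitMQ.Client package (the 6.x synchronous API is assumed here). The routing keys and queue names come from the example above; the exchange name tz.events and the message payload are assumptions for illustration.

```csharp
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// One topic exchange for investment events (name assumed).
channel.ExchangeDeclare("tz.events", ExchangeType.Topic, durable: true);

// APP notification service: TZ.# matches both TZ.SUCCESS and TZ.FAILE.
channel.QueueDeclare("NoticeQueue", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("NoticeQueue", "tz.events", routingKey: "TZ.#");

// Credit service added later: binds only the success messages; the publisher is untouched.
channel.QueueDeclare("IntegrationQueue", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("IntegrationQueue", "tz.events", routingKey: "TZ.SUCCESS");

// The investment service publishes once; every matching queue gets a copy.
var props = channel.CreateBasicProperties();
props.Persistent = true;
var body = Encoding.UTF8.GetBytes("{\"userId\":1001,\"amount\":5000}");
channel.BasicPublish("tz.events", "TZ.SUCCESS", props, body);
```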

As you can see, it works like a radio broadcast: the publisher does not care who is listening; services that need the message tune in, and those that do not simply ignore it.

Distributed transactions

Since we now use RabbitMQ as middleware, we choose a reliable-message-based solution for distributed transactions:

Message reliability: make sure the local transaction on the business side commits successfully and the queue message is published normally.

Message compensation: make sure the consumer processes the message normally. If consumption fails, the message is re-delivered; if re-delivery keeps failing, a compensation service steps in.

Idempotent processing: because an automatic retry mechanism exists, the business must guard against the side effects of executing the same message more than once. (A consumer-side sketch follows this list.)
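The consumer side might look roughly like this, again assuming RabbitMQ.Client 6.x: manual acknowledgements support the compensation path, BasicQos gives fair dispatch when more consumers are added, and a dedup check by message id keeps the handler idempotent. The queue name TaskQueue and the helper methods are hypothetical.

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Fair dispatch: each consumer holds at most 10 unacked messages at a time.
channel.BasicQos(prefetchSize: 0, prefetchCount: 10, global: false);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
{
    var messageId = ea.BasicProperties.MessageId;
    var payload = Encoding.UTF8.GetString(ea.Body.ToArray());
    try
    {
        if (!AlreadyProcessed(messageId))   // idempotency guard (dedup table lookup)
        {
            CompleteTask(payload);          // mark the user's task as completed
            MarkProcessed(messageId);
        }
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    }
    catch
    {
        // Reject without requeue; the compensation service re-delivers later.
        channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: false);
    }
};
channel.BasicConsume(queue: "TaskQueue", autoAck: false, consumer: consumer);
Console.ReadLine();   // keep the consumer process alive

// Hypothetical helpers backed by a dedup table keyed on message id.
static bool AlreadyProcessed(string id) => false;
static void MarkProcessed(string id) { }
static void CompleteTask(string json) { }
```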

Model diagram

This reliable-message-based solution is also known as the local message (transaction) table. You can build it yourself to fit your own situation, or use an open-source distributed transaction framework with a similar design, such as CAP.
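To make the local message table idea concrete, here is a schematic sketch (not the CAP framework's actual API): the business write and the outgoing message row share one local database transaction, and a relay Job publishes pending rows afterwards. All class, table, and column names here are assumptions.

```csharp
using System;

// One row per outgoing message, stored in the business database.
public sealed class LocalMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string RoutingKey { get; set; } = "";
    public string Payload { get; set; } = "";
    public string Status { get; set; } = "Pending";   // Pending -> Sent
    public int RetryCount { get; set; }
}

public sealed class InvestmentAppService
{
    public void Invest(long userId, decimal amount)
    {
        // In one local transaction (EF Core / ADO.NET in a real app):
        //   1. insert the investment record        (business write)
        //   2. insert a LocalMessage row, Pending  (message write)
        // Both rows commit or neither does, so the message can never be lost.
    }
}

public sealed class MessageRelayJob
{
    public void Run()
    {
        // Periodically: load LocalMessage rows with Status = "Pending",
        // publish each one to RabbitMQ, then update Status to "Sent".
        // Rows that keep failing are handed to the compensation service.
    }
}
```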

After reading the above, do you have a better understanding of how to optimize underlying services in .NET? Thank you for your support.
