An Example Analysis of Distributed Database Middleware (DDM)

2025-01-19 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report

Many newcomers are not very clear about distributed database middleware (DDM). To help, this article walks through it in detail; I hope readers who need it will gain something from it.

In the cloud-computing era, traditional databases can no longer meet enterprises' performance and capacity requirements. As data volumes keep growing, database solutions that are easy to scale and shard become especially important for enterprises moving to the cloud. To make that move easier, Distributed Database Middleware (DDM) focuses on the database bottlenecks enterprises face during cloud migration. It not only handles business requirements such as horizontal sharding, scaling out, and read-write separation, but is also more cost-effective than traditional solutions. Let's take a close look at DDM.

What is DDM?

DDM focuses on scaling out distributed databases. It breaks through the capacity and performance bottlenecks of traditional databases and enables highly concurrent access to massive data sets. DDM provides application-transparent read-write separation, automatic data sharding, flexible scaling, and other distributed database capabilities.

How does DDM implement read-write separation?

From the database's point of view, for most applications the first bottleneck on the road from centralized to distributed is not data storage but computation, that is, SQL queries. On a system without read-write separation, a few complex SQL queries at peak hours can easily bring the database down. To protect the database, single-node deployments without a master-slave replication mechanism should be avoided as far as possible. Traditional read-write separation solutions are coupled to application code: adding read nodes or changing the read-write separation policy requires modifying and redeploying the application, which is very cumbersome. DDM implements transparent read-write separation that requires no code changes. To guarantee read consistency, all reads inside a transaction are routed to the master node by default, reads outside a transaction are routed to slave nodes, and writes are routed to the master. When the application's needs are more complex, DDM provides hints that let the program control the read-write routing of individual SQL statements. In addition, if some back-end database nodes fail, DDM automatically removes the failed nodes and performs master-to-slave failover, with no impact on the application.
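The routing rules described above can be sketched in a few lines. This is an illustrative model only, not DDM's actual API; the function name and parameters are hypothetical.

```python
# Hypothetical sketch of read-write-separation routing (illustrative names,
# not DDM's real interface). Writes and transactional reads go to the master;
# plain reads go to a replica; a hint can force a statement to the master.

def route_statement(sql: str, in_transaction: bool, hint_master: bool = False) -> str:
    """Return 'master' or 'slave' for a single SQL statement."""
    is_read = sql.lstrip().upper().startswith(("SELECT", "SHOW"))
    if not is_read:
        return "master"      # all writes go to the master
    if in_transaction or hint_master:
        return "master"      # reads in a transaction stay on the master
    return "slave"           # non-transactional reads are offloaded to a replica
```

The key property is that the application never chooses a node itself: the same SQL works unchanged whether or not replicas exist.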

(A comparison diagram of the architecture before and after the transformation was attached here.)

Under a microservice architecture, services are split more finely than before, and the number of database connections grows accordingly. Is this also an important problem that distributed database middleware needs to solve?

That's right. For example, suppose the database's maximum number of connections is 2,000. Before the service split, a single application could use all 2,000 connections. If it is split into 100 microservices, then to keep the total below MySQL's maximum, each microservice can be configured with at most 20 connections, which is almost unacceptable for applications. Many sharding middlewares on the market, such as Cobar and Atlas, manage the back-end MySQL connection pool per shard; connections are not shared across the whole MySQL instance, so concurrency capacity is severely weakened. DDM truly pools per MySQL instance: all databases under one MySQL instance share a single connection pool. This avoids the situation where some shards' connections sit idle while others run short, and maximizes parallelism. Session-level attributes are maintained automatically by DDM, transparently to the application.
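The difference between per-shard and per-instance pooling can be sketched as a single pool shared by all shards on one MySQL instance. This is a minimal illustrative model, not DDM's implementation; `connect_fn` stands in for a real MySQL connector.

```python
import queue

class InstancePool:
    """Illustrative sketch: one connection pool per MySQL *instance*, shared
    by all shard databases on that instance (contrast with per-shard pools,
    where idle connections on one shard cannot serve another shard)."""

    def __init__(self, connect_fn, max_size: int):
        self._pool = queue.LifoQueue(maxsize=max_size)
        self._connect = connect_fn    # stand-in for a real MySQL connector
        self._created = 0
        self._max = max_size

    def acquire(self):
        try:
            return self._pool.get_nowait()   # reuse an idle connection
        except queue.Empty:
            if self._created < self._max:
                self._created += 1
                return self._connect()       # grow up to the instance limit
            return self._pool.get()          # block until one is returned

    def release(self, conn):
        # In a real middleware, session-level state (current schema, charset,
        # etc.) would be reset here before reuse by a different shard.
        self._pool.put(conn)
```

Because every shard draws from the same pool, the instance's full connection budget is available wherever the load happens to be.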

Is there an upper limit to the number of connections in this sharing mode?

DDM's front-end connections are lightweight compared to MySQL connections, so it can support tens of thousands of them fairly easily. Of course, to prevent a single user from monopolizing resources, a maximum number of front-end connections can be configured.

(A migration flow chart was attached here.)

What are DDM's considerations regarding route-switching speed and data accuracy?

Regarding route-switching speed: although many products in the industry claim millisecond-level switching, data verification is usually omitted, or only row counts are compared, on the grounds that the algorithm is sophisticated and has been thoroughly tested. DDM's view is that even with adequate testing it is hard to guarantee 100% that nothing goes wrong. DDM therefore designed a fast verification algorithm that checks the actual data content: even a small difference in the data will be detected. At the same time, it makes full use of the computing power of RDS to speed up verification.
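DDM's exact verification algorithm is not described here, but "verify content, not just counts, using RDS's compute" is a well-known pattern: reduce each chunk of rows to an order-independent checksum inside the database and compare the (count, checksum) pairs from source and target. The sketch below builds such a statement; the function and its parameters are hypothetical, and the BIT_XOR/CRC32 combination is the technique used by tools like pt-table-checksum, not necessarily DDM's.

```python
def chunk_checksum_sql(table: str, pk: str, columns: list, lo: int, hi: int) -> str:
    """Build a MySQL statement that reduces one chunk of rows to a row count
    plus an order-independent content checksum, so verification runs inside
    the database (RDS) instead of streaming rows out. Illustrative only."""
    cols = ", ".join(columns)
    return (
        f"SELECT COUNT(*) AS cnt, "
        f"BIT_XOR(CRC32(CONCAT_WS('#', {cols}))) AS crc "
        f"FROM {table} WHERE {pk} >= {lo} AND {pk} < {hi}"
    )
```

Running the same statement on both sides and comparing `(cnt, crc)` per chunk detects content differences, not just missing rows; mismatched chunks can then be re-checked row by row.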

In typical large-scale applications, some tables hold large amounts of data while others are small and rarely updated. How does DDM support these different scenarios?

DDM defines three table types for the scenarios businesses actually encounter:

- Sharded tables: tables with large data volumes that need to be split across multiple shard databases, so that each shard holds part of the data and all shards together form the complete data set.
- Single tables: tables with relatively small data volumes that do not need to be joined with sharded tables. A single table's data is stored on one shard by default, which keeps complex queries on the table itself compatible as far as possible.
- Global tables: tables whose data volume and update rate are both small, but which need to be joined with sharded tables. Every shard stores an identical full copy of a global table, so joins between a global table and a sharded table can be pushed down directly to RDS for execution.
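The routing consequences of the three table types can be sketched as follows. The metadata format and function are invented for illustration; DDM's real sharding algorithms are configurable and more elaborate than a plain hash.

```python
def shards_for_write(table_meta: dict, table: str, shard_count: int,
                     shard_key=None) -> list:
    """Illustrative routing for a write, per table type (hypothetical
    metadata format, not DDM's real configuration):
    - 'sharded': hash the shard key to pick exactly one shard;
    - 'single':  the table lives entirely on one shard (shard 0 here);
    - 'global':  every shard holds a full copy, so writes fan out to all
                 shards (which is why joins with sharded tables can be
                 pushed down and executed locally on each shard)."""
    kind = table_meta[table]
    if kind == "sharded":
        return [hash(shard_key) % shard_count]
    if kind == "single":
        return [0]
    if kind == "global":
        return list(range(shard_count))
    raise ValueError(f"unknown table type: {kind}")
```

Reads invert the picture: a sharded table still needs the shard key (or a scatter-gather query), while a global table can be read from any single shard.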

Under a distributed deployment, the original database's primary key constraint can no longer guarantee uniqueness across shards. Does an external mechanism have to be introduced to ensure unique data identifiers? How does DDM guarantee a globally unique sequence?

DDM provides a globally unique sequence, similar to MySQL's AUTO_INCREMENT. Currently DDM guarantees that this field is globally unique and monotonically increasing, but not that it is continuous. DDM offers two sequence mechanisms: DB and TIME. A DB-mode sequence is backed by a database, and the step size needs attention: it determines how many sequence values are fetched per batch and is directly related to sequence performance. A TIME-mode sequence is generated from a timestamp plus a machine number; its advantage is that uniqueness is guaranteed without any communication.
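A TIME-mode sequence of the "timestamp plus machine number" kind is commonly built snowflake-style: a millisecond timestamp, a machine ID, and a per-millisecond counter packed into one integer. The bit widths below are illustrative assumptions, not DDM's actual layout.

```python
import threading
import time

class TimeSequence:
    """Sketch of a TIME-mode sequence: millisecond timestamp + machine number
    + per-millisecond counter (snowflake-style). Bit widths are illustrative.
    No coordination is needed between machines: distinct machine IDs make the
    generated values globally unique."""

    def __init__(self, machine_id: int):
        assert 0 <= machine_id < 1024          # 10 bits for the machine number
        self._machine = machine_id
        self._last_ms = -1
        self._seq = 0
        self._lock = threading.Lock()

    def next_id(self) -> int:
        with self._lock:
            now = int(time.time() * 1000)
            if now == self._last_ms:
                self._seq = (self._seq + 1) & 0xFFF   # 12-bit counter
                if self._seq == 0:                    # counter exhausted:
                    while now <= self._last_ms:       # spin to the next ms
                        now = int(time.time() * 1000)
            else:
                self._seq = 0
            self._last_ms = now
            # 22 = 10 machine bits + 12 counter bits
            return (now << 22) | (self._machine << 12) | self._seq
```

Note why the values are increasing but not continuous: the counter resets every millisecond, so unused counter values in a quiet millisecond are simply skipped, just as a DB-mode sequence skips the unused remainder of a fetched batch.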

What are the advantages of DDM in operation and maintenance monitoring?

Traditional middleware has to be operated and maintained by the user, and middleware vendors generally focus on core features, giving little consideration to operations or graphical interfaces. DDM makes full use of the cloud's advantages, providing comprehensive graphical operations for instances, logical databases, logical tables, sharding algorithms, and so on. Slow SQL and other monitoring data can also be viewed online, making targeted performance tuning of the system convenient.

Where will DDM go in the future?

DDM's future direction is to strengthen distributed transactions, distributed query capabilities, performance optimization, and so on. Since some features are limited if implemented at the middleware layer alone, DDM will work together with modifications to the underlying database to provide better features that meet users' business needs.
