2025-04-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
The editor shares here how to run paginated queries after MySQL has been split into multiple databases and tables. I hope you learn a lot from this article; let's discuss it together!
The sharding strategy depends on the requirements of the project. The conventional approach is adopted here: rows are routed by taking the id modulo the number of tables. Suppose we split the data horizontally into 2 databases, each database holding 2 tables, for 4 tables in total, and suppose the query is not sorted by any condition other than id. We want to fetch page 41, with 10 rows per page.
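The routing rule above can be sketched as follows; `route` is a hypothetical helper, assuming 2 databases with 2 tables each and plain modulo routing on an integer id:

```python
# Hypothetical sketch of modulo routing: 2 databases x 2 tables = 4 physical
# tables, and a row's id alone decides where it lives.
NUM_DBS = 2
TABLES_PER_DB = 2

def route(record_id: int) -> tuple[int, int]:
    """Return (db_index, table_index) for a record id under modulo sharding."""
    slot = record_id % (NUM_DBS * TABLES_PER_DB)  # which of the 4 tables
    return slot // TABLES_PER_DB, slot % TABLES_PER_DB

print(route(41))  # id 41 -> slot 1 -> database 0, table 1
```

Because the table can be recomputed from the id this way, later approaches are free to store only ids.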
The first approach:

It is also the simplest: add an extra mapping table. It must have an id column; whether it also carries a database-id column and a table-id column (that is, which database and which table the row lives in) is optional, because both can be derived from the id by the modulo rule. Note that this table stores an entry for every row, but the advantage is that it only has a few indexed columns, so it stays slim. All we need then is `select id from brand_temp limit 400, 10` (page 41, 10 rows per page), and once we have the ids we can fetch the full rows from the corresponding tables.
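A minimal sketch of this mapping-table idea, using SQLite in place of MySQL so it stays self-contained (the table name `brand_temp` comes from the article; the 1000 seeded ids are made up):

```python
import sqlite3

# Simulate the slim mapping table: it holds only ids (plus optional
# db-id/table-id columns), one entry per row across all shards.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brand_temp (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO brand_temp (id) VALUES (?)",
                 [(i,) for i in range(1, 1001)])

# Step 1: paginate against the mapping table only (page 41 -> offset 400).
page_ids = [row[0] for row in conn.execute(
    "SELECT id FROM brand_temp ORDER BY id LIMIT 10 OFFSET 400")]
print(page_ids)  # [401, 402, ..., 410]

# Step 2: id % 4 tells us which physical table holds each row, so each id
# can now be fetched from its own shard with a cheap primary-key lookup.
```

The offset scan touches only the narrow index table; the wide rows are fetched afterwards by primary key.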
The second approach:

This is the one that costs the most performance. To query the first page, the SQL on a single database and table is `select * from db limit 0, 10`. After sharding, the statement stays the same, except that we now have to merge the rows returned by the four tables in memory, sort them by id ascending, and take the first 10. With little data and a small page number this is perfectly fine. But to query page 2, the single-node SQL is `select * from db limit 10, 10`, and this no longer works on a sharded database: rows would obviously be lost. The way to compensate is to query everything up to the page: `select * from db_x limit 0, 20`, meaning each table must return the rows of the requested page plus all rows before it; we then merge the rows returned by all tables in memory, sort them, and finally take the 10 rows starting at offset 10. It follows that once the page number reaches n and each page shows m rows, every table must return (n-1)*m + m = n*m rows, and with t tables the application must parse t*n*m rows in memory. If the CPU doesn't explode, we still lose.
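A sketch of this over-query-and-merge approach, simulating the four tables as in-memory lists (the data and the `page_by_merge` helper are made up for illustration):

```python
import heapq

def page_by_merge(shards, n, m):
    """Fetch page n (1-based, m rows) by over-querying every shard."""
    # Each shard runs the equivalent of: SELECT ... ORDER BY id LIMIT 0, n*m
    per_shard = [sorted(s)[: n * m] for s in shards]
    merged = list(heapq.merge(*per_shard))  # merge-sort the t*n*m rows in memory
    return merged[(n - 1) * m : n * m]      # keep only the requested page

# ids 0..99 spread across 4 shards by id % 4
shards = [list(range(i, 100, 4)) for i in range(4)]
print(page_by_merge(shards, 2, 10))  # page 2 -> [10, 11, ..., 19]
```

Note that `per_shard` grows linearly with the page number, which is exactly the t*n*m blow-up described above.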
The third approach:

Adopt a product-level compromise: forbid page jumps. That is, users can only click "next page" or "previous page" to browse. Concretely, each query also records the maximum unique id of the current page, and the next query adds it as a where condition. From the beginning: the first request is pageNum=1, pageSize=10 with maxId=0, so the SQL is `select * from db_x where id > 0 limit 10`. It is dispatched to the matching table in each database, the 4 * 10 returned rows are merged and sorted in memory, the first 10 are taken, and the id of the 10th row is saved as maxId and rendered to the front-end page. When the user clicks "next page", maxId=10 is submitted along with the request, the SQL becomes `select * from db_x where id > 10 limit 10`, and we continue to merge and save in the same way. The data returned this way is stable and contiguous (sorted).
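The same flow can be sketched with the shards again simulated as in-memory lists (`next_page` is a made-up helper; in MySQL each shard would run the `where id > maxId ... limit m` query above):

```python
def next_page(shards, max_id, m):
    """Return (page, new_max_id): the next m rows after max_id across all shards."""
    candidates = []
    for shard in shards:
        # Equivalent per-shard SQL: SELECT ... WHERE id > max_id ORDER BY id LIMIT m
        candidates.extend(sorted(x for x in shard if x > max_id)[:m])
    page = sorted(candidates)[:m]  # merge the up-to-4*m candidates, keep m
    return page, (page[-1] if page else max_id)

shards = [list(range(i, 100, 4)) for i in range(4)]  # ids 0..99, id % 4 routing
page1, max_id = next_page(shards, 0, 10)   # first request: maxId = 0
page2, max_id = next_page(shards, max_id, 10)
print(page1, page2)
```

Each request now costs a constant 4*m rows no matter how deep the user has paged, which is why this compromise scales where the previous approach does not.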
The fourth approach:

The legendary best approach, and the one that still supports page jumps. Its core is two rounds of SQL per page. Here is how it works.

Premise: query page 1001, showing 10 rows per page.

1) First work out the global range of rows to fetch: (1001 - 1) * 10 = 10000 is the start and 10010 the end, so the single-node SQL would be `select * from db limit 10000, 10`. We have 4 tables in total, so share the offset among them: 10000 / 4 = 2500, and the per-shard SQL becomes `select * from db_x limit 2500, 10`. This assumes the data is evenly distributed, so each table takes an equal share; if the distribution is uneven it does not matter, because the following steps compensate. Suppose the four tables return (hand-written, since there is no demo yet): T1: (1, "a"), …; T2: (2, "b"), …; T3: (3, "c"), …; T4: (4, "d"), …. Of course the real ids on page 1001 would not start at 1; these are just placeholders. (In a few days we will cover RabbitMQ and distributed consistency together with this demo.) That ends the first round of SQL.

2) Compare the ids of the rows returned by the four tables (if id is not an integer, compare hashCodes instead). Because the query is ascending, we only need to compare the id of the first row from each table to get the global minimum, minId=1, and to note each table's maximum returned id, maxId. Now switch to a conditional query (the first step of the compensation): `select * from db_x where id between minId and maxId`, each table using its own maxId. This fetches the previously missing rows (along with some extras), so the four tables may now return different numbers of rows. That ends the second round.

3) Then compute the position of minId in each table: suppose it sits at offset 2500 in T1, 2500 - 2 = 2498 in T2, 2500 - 3 = 2497 in T3, and 2500 - 3 = 2497 in T4 (each table's original offset minus the number of extra rows the second query returned before that table's original first row). The global offset of minId is therefore 2500 + 2498 + 2497 + 2497 = 10000 - 2 - 3 - 3 = 9992. Since the page starts at offset 10000, we skip the first 10000 - 9992 = 8 rows of the merged result and take the next 10. This method returns exact pages; its only drawback is that every page requires two rounds of SQL.
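The three steps above can be sketched end-to-end. This is a hypothetical simulation with the shards as in-memory lists, not the article's (unwritten) demo, and it assumes integer ids:

```python
def two_query_page(shards, offset, m):
    """Two-query pagination: fetch m rows at a global offset across t shards."""
    t = len(shards)
    per = offset // t                        # e.g. 10000 / 4 = 2500
    tables = [sorted(s) for s in shards]
    # Round 1: SELECT ... ORDER BY id LIMIT per, m on every shard.
    first = [tab[per : per + m] for tab in tables]
    min_id = min(rows[0] for rows in first)  # global minimum leading id
    # Round 2: SELECT ... WHERE id BETWEEN min_id AND <shard's own max id>.
    second = [[x for x in tab if min_id <= x <= rows[-1]]
              for tab, rows in zip(tables, first)]
    # min_id's offset in each shard = per minus the extra rows round 2
    # returned before that shard's original first row; sum for the global offset.
    min_pos = sum(per - sum(1 for x in sec if x < rows[0])
                  for sec, rows in zip(second, first))
    merged = sorted(x for sec in second for x in sec)
    return merged[offset - min_pos : offset - min_pos + m]  # skip, then take m

# Small worked example: global rows 1..12 split unevenly over 2 shards;
# the page at offset 4 should be rows 5 and 6.
print(two_query_page([[1, 2, 3, 10, 11, 12], [4, 5, 6, 7, 8, 9]], 4, 2))  # [5, 6]
```

The uneven two-shard example shows the compensation at work: round 1 alone would have skipped rows, but anchoring on minId recovers the exact global offset.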
I believe you now have some understanding of how to query after MySQL has been split into multiple databases. If you want to learn more, you are welcome to follow the industry information channel. Thank you for reading!