2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article explains how to understand DDL in MyCAT, using a real-world change request as the running example.
Today a developer raised a requirement: he wanted to run a DDL change against a table over a certain date range. At first glance the complexity did not look high.
However, I hesitated a little when I saw the information the developer provided, because the port was 8066, which means middleware is involved: this is a MyCAT environment. There are 4 physical nodes in total, each divided into 4 logical nodes, so there are 16 shards; it is essentially the proverbial "100 databases, 10 tables" layout in miniature. Although the node count does not look large at present, once you examine the sharding logic of this table, hisrecord, you find it is far richer than it first appears.
This table stores data by date; the storage unit is one day, and the table names look like rec20180301, rec20180302, and so on. The table set can therefore keep expanding along the time dimension, and on top of that each daily table is split at a finer grain: every daily table has 16 shards reached by hash routing.
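As a rough sketch of what that routing means, the following assumes a plain numeric sharding key hashed with mod 16 onto the 16 shard databases; the exact rule is an assumption, not taken from the original configuration:

```shell
# Sketch: route a numeric sharding key to one of the 16 shard databases
# histrecord01..histrecord16. The mod-16 rule is an assumption here.
route_shard() {
    local key=$1
    local idx=$(( key % 16 + 1 ))          # shard index 1..16
    printf "histrecord%02d\n" "${idx}"
}

route_shard 5     # -> histrecord06
route_shard 16    # -> histrecord01
```

Any given day's data thus fans out across all 16 databases, one physical table per shard.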
The developer's requirement is to add a field to every daily table from a certain day onward, and the field needs a default value. From the business point of view the attribute is required because the application layer is being upgraded; businesses that have not yet migrated get a day to buffer, adjust, and repair. The saving grace of this request is that the tables to be modified are not currently being written to, so the change does not have to account for the impact on online write traffic.
A quick calculation shows that the requested range covers 177 daily tables:
mysql> select datediff('2018-11-01','2018-05-08');
+-------------------------------------+
| datediff('2018-11-01','2018-05-08') |
+-------------------------------------+
| 177 |
+-------------------------------------+
1 row in set (0.00 sec)
Multiplied by 16 shards, the number is considerable: more than 2,800 physical tables.
mysql> select 177*16;
+--------+
| 177*16 |
+--------+
| 2832 |
+--------+
1 row in set (0.00 sec)
Two tables are involved, which means two DDL statements each, bringing the total to more than 5,600 physical tables. So one logical table decomposes into thousands of physical tables, and a full year of daily tables across 16 shards comes to roughly 5,800 tables.
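The arithmetic can be checked directly in the shell:

```shell
# 177 daily tables x 16 shards x 2 DDL statements per table
echo $(( 177 * 16 * 2 ))    # 5664
# roughly one year of daily tables across 16 shards
echo $(( 365 * 16 ))        # 5840
```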
If structure changes to the current day's table also had to be included, it would get more complicated still.
Let's take a quick look at the schema.xml configuration in MyCAT.
Sixteen shards are configured, named dn50 through dn65, with databases histrecord01 through histrecord16.
...
The sharding rule for the tables is computed as a modulo hash.
...
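The configuration itself is elided above; purely as a hedged illustration (the host name, rule name, and dataNode list here are hypothetical, not taken from the original), a MyCAT schema.xml of this shape typically contains entries like:

```xml
<!-- Hypothetical sketch only: dataNode names follow the dn50-dn65 range -->
<table name="rec20180508" dataNode="dn50,dn51,dn52,dn53" rule="mod-long" />
<dataNode name="dn50" dataHost="host1" database="histrecord01" />
<dataNode name="dn51" dataHost="host1" database="histrecord02" />
```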
Doing this work by hand is not realistic, so I prepared the following script, borrowing some ideas from former colleagues.
We supply two dates, a start date and an end date. app_sql/create_sql.sql is the table-structure definition file. The core of the script is to repeatedly take the table-structure template, stamp it with a date, and append it to another script file, looping day by day through the range and appending one copy per day.
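For illustration, create_sql.sql presumably holds the DDL template with the anchor date 20180508 baked into the table name; the column name and default value below are purely hypothetical:

```shell
# Hypothetical content of app_sql/create_sql.sql; only the 20180508
# placeholder date matters here, the column definition is an assumption.
cat > /tmp/create_sql.sql <<'EOF'
ALTER TABLE rec20180508 ADD COLUMN app_flag varchar(32) NOT NULL DEFAULT 'legacy';
EOF
grep -c '20180508' /tmp/create_sql.sql    # one matching line
```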
startdate=`date -d "20180508" +%Y%m%d`
enddate=`date -d "20181101" +%Y%m%d`
# Define loop main function: append one dated copy of the DDL template per day
function main(){
    while [[ ${startdate} < ${enddate} ]]
    do
        echo ${startdate}
        # Substitute the placeholder date in a copy of the template and append it,
        # so dates already written to the output file are not rewritten again
        sed "s/20180508/${startdate}/g" /home/mysql/app_sql/create_sql.sql >> /home/mysql/app_sql/alter_his_record.sql
        echo "" >> /home/mysql/app_sql/alter_his_record.sql
        startdate=`date -d "+1 day ${startdate}" +%Y%m%d`
    done
}
# Execute main function
main
So the basic generation work was quickly completed. Of course, MyCAT does not support these DDL statements directly, so we need to execute the corresponding DDL on each shard node separately.
With minor changes, the script output can be distributed to the different shard nodes. The whole process took less than half an hour, and much of that time was spent on repeated confirmation, because the blast radius of this change really is a bit large.
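A minimal sketch of that distribution step, assuming the generated file must be applied on each backend MySQL node directly (the hostnames and port are hypothetical):

```shell
# Build the per-node commands rather than executing them here; node1..node4
# and port 3306 are assumptions for illustration.
build_cmds() {
    local host
    for host in node1 node2 node3 node4; do
        echo "mysql -h ${host} -P 3306 < /home/mysql/app_sql/alter_his_record.sql"
    done
}
build_cmds
```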
Of course, the premise of all this is that the daily tables already exist. If they did not, we would also need to reconfigure and reload parts of the configuration on the MyCAT side.
Stepping back from this task, we can see that data processing at the middleware level is better suited to TP workloads, especially insert-intensive ones. If the workload requires interaction across nodes, this scheme is not a good fit. At the same time, as sharding keeps growing, the business still needs archival retention of historical data and aggregation over it; middleware support there may be limited, and to some extent we may need other solutions.
Thank you for reading. That is the whole of "how to understand DDL in MyCAT"; the specific approach still needs to be verified in your own practice.