Summary
Looking back at projects I wrote before, I realized that even while sticking to the standards, there are still some clever tricks you can pull off in MySQL. If future projects use MySQL, these are definitely worth putting into practice.
To prepare, create a test table (the CREATE TABLE statement uses utf8mb4 and utf8mb4_unicode_ci by default; interested readers can look up these two settings themselves):
CREATE TABLE `student` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `no` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'number',
  `name` varchar(30) NOT NULL COMMENT 'name',
  PRIMARY KEY (`id`),
  UNIQUE KEY `unq_no` (`no`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
Updating data on insert conflicts
An INSERT can fail for various reasons, such as a UNIQUE index conflict. For example, some careless DBA inserts an incorrect student record ("3", "Xiaoming") when, sadly, Xiaoming's number is actually 1. The usual approach is to check whether a record for Xiaoming already exists in the database: if it does, update its number; otherwise, insert a new record. There is a better way:
INSERT INTO student(`no`, `name`) VALUES (3, "xiaoming");
INSERT INTO student(`no`, `name`) VALUES (1, "xiaoming"), (2, "xiaohong")
ON DUPLICATE KEY UPDATE `no` = VALUES(`no`);
That is, use ON DUPLICATE KEY UPDATE, a MySQL-specific syntax (multiple update assignments can follow the clause, separated by commas). Note that VALUES(`no`) here sets the conflicting row's `no` to the value from the row being inserted, so each conflicting row can dynamically take its new value.
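As a minimal sketch of the comma-separated form (the values here are illustrative, not from the original example; note that newer MySQL 8.0 releases deprecate VALUES() in this position in favour of a row alias, though it still works):

INSERT INTO student(`no`, `name`) VALUES (1, "xiaoming")
ON DUPLICATE KEY UPDATE `name` = VALUES(`name`), `no` = VALUES(`no`);  -- several assignments, separated by commas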
Ignoring errors in bulk inserts
Bulk inserts generally improve performance and reduce overall network overhead compared with inserting rows one by one. However, if some rows in the batch are bad, by default the whole batch insert fails (no rows are inserted). Of course, we can choose to ignore those failures; what MongoDB can do, MySQL can naturally do as well.
INSERT INTO student(`no`, `name`) VALUES (1, "xiaoming");
INSERT IGNORE INTO student(`no`, `name`) VALUES (1, "xiaoming"), (2, "xiaohong"), (3, "xiaowang");
Simply add IGNORE to the bulk insert statement and the rows that fail to insert are skipped, while the valid rows still make it into the database. I suggest using this feature with caution, though: the point of using MySQL in the first place is data correctness, and there is no need to give up correctness just for bulk-insert performance. If the data really isn't that important, why not store it in NoSQL instead? MongoDB is a good choice.
IGNORE also has some side effects; interested readers can look them up on their own.
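One such side effect, as a minimal sketch: under INSERT IGNORE, errors that would normally abort the statement are downgraded to warnings (the exact messages depend on the MySQL version and sql_mode):

INSERT IGNORE INTO student(`no`, `name`) VALUES (1, "xiaoming");  -- duplicate `no`, silently skipped
SHOW WARNINGS;  -- reports something like "Duplicate entry '1' for key 'unq_no'" as a warning, not an error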
Replacing subqueries with JOIN
MySQL's subquery optimization isn't great, and it sometimes behaves counterintuitively (code that runs counter to intuition is probably one of the root causes of optimization work). The worst case is a subquery with IN in the WHERE clause (see section 6.5 of High Performance MySQL, aptly titled "Limitations of the MySQL Query Optimizer"). In short, in some cases MySQL may execute the subquery once for every row of the outer query; if the outer table has many rows, performance becomes a worry.
SELECT * FROM student WHERE no > (SELECT no FROM student WHERE `name` = 'xiaoming');
SELECT s.* FROM student s JOIN (SELECT no FROM student WHERE `name` = 'xiaoming') t ON s.no > t.no;
As the code above shows, replacing the subquery with a JOIN is not hard. The statement is slightly more obscure, but it may save you from being dragged out of bed on a high-concurrency night to answer for your mistakes. MySQL keeps improving its subquery optimization, and under some conditions a subquery may even be more efficient than a JOIN, so when you have time, verify and choose the better SQL statement.
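One way to verify, as a sketch, is to compare the execution plans of the two forms with EXPLAIN:

EXPLAIN SELECT * FROM student WHERE no > (SELECT no FROM student WHERE `name` = 'xiaoming');
EXPLAIN SELECT s.* FROM student s JOIN (SELECT no FROM student WHERE `name` = 'xiaoming') t ON s.no > t.no;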
The WHERE vs AND pitfall in JOIN
To better illustrate the pitfall, I need to create a new table and add a field to the original student table:
CREATE TABLE `class` (
  `id` smallint(6) unsigned NOT NULL AUTO_INCREMENT,
  `no` int(10) unsigned NOT NULL COMMENT 'number',
  `name` varchar(30) COLLATE utf8mb4_unicode_ci NOT NULL COMMENT 'name',
  PRIMARY KEY (`id`),
  UNIQUE KEY `unq_no` (`no`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
ALTER TABLE `student` ADD COLUMN `cls_no` smallint(6) unsigned NOT NULL DEFAULT 0 AFTER `no`;
Now populate some test data: suppose there are 4 classes and class 4 has no students (a sketch of such data follows below). The difference can then be seen with the query after it:
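A minimal, illustrative data sketch (these exact rows are my assumption, not from the original article):

INSERT INTO class(`no`, `name`) VALUES (1, 'Class 1'), (2, 'Class 2'), (3, 'Class 3'), (4, 'Class 4');
-- assign the students inserted earlier to classes 1-3; class 4 deliberately gets no students
UPDATE student SET cls_no = 1 WHERE no IN (1, 2);
UPDATE student SET cls_no = 3 WHERE no = 3;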
select c.*, s.`name` from class c left join student s on c.no = s.cls_no and c.no < 4 order by c.no asc;
Note that the condition c.no < 4 is placed in the JOIN condition here, yet it clearly has no effect: the result still contains the row for class no = 4. That is because the query uses a LEFT JOIN with class as the left table; when the join condition cannot be fully satisfied, every row of the left table is still returned, with NULL values filled in for the right side. What if we use WHERE instead? Compare the following statement:
select c.*, s.`name` from class c left join student s on c.no = s.cls_no where c.no < 4 order by c.no asc;
Why do two queries that both use LEFT JOIN return different results? Because the SQL can be thought of as executing in two steps (pseudo-SQL, just to convey the idea):
(1) select c.*, s.`name` from class c left join student s on c.no = s.cls_no as tmp;
(2) select c.*, s.`name` from tmp where c.no < 4 order by c.no asc;
Note that the JOIN part of the query is executed first, and the WHERE is then applied to its result. With an INNER JOIN the problem above can be ignored, but with a LEFT JOIN or RIGHT JOIN you need to be extra careful about where you place your conditions.
Pagination query optimization
I first learned this optimization while studying pagination queries in MongoDB; most databases behave much the same here (of course, time-series databases exist now, and their pagination tricks are fancier still). Most pagination queries are written roughly like this:
SELECT * FROM student WHERE cls_no > 1 ORDER BY id LIMIT 1000, 10;
This form has a performance cost: the database finds all rows matching the condition, walks through them one by one up to the 1000th record, and only then returns the next 10. Scanning and discarding the first 1000 rows is plainly wasteful, and with a large LIMIT offset the cost becomes unbearable (Baidu, for instance, by default refuses to show results beyond page 76).
Since pagination usually moves forward page by page (for a jump-to-page query you can only fall back to the statement above and query slowly), each page can record the largest ID it has seen. We can use that ID as the starting point and query the next 10 matching rows after it (having the front end pass along the largest ID of the current page is a small requirement that is easy to meet). The query becomes:
SELECT * FROM student WHERE id > 1000 AND cls_no > 1 ORDER BY id LIMIT 10;
The above assumes that IDs are continuous (no records in between have been physically deleted). If the IDs are not continuous, then using an ID to determine the starting point is inappropriate, and we can turn to JOIN instead:
SELECT s.* FROM student s JOIN (SELECT id FROM student ORDER BY id LIMIT 1000, 10) t ON s.id = t.id;
In effect, the inner query only touches the index on id, so the matching IDs can be located quickly, and the outer query then just fetches the data rows by those IDs. Note, however, that the inner index-only query here cannot carry the WHERE clause, which makes this form almost useless in real environments.
Modifying records in multiple tables with UPDATE/DELETE
At work we often need to modify associated records in multiple tables. The usual approach is to query the records from the relevant tables and modify them one by one. That is fine when the modification logic is complex, but a simple change (such as flipping a boolean flag) can be done with a single SQL statement.
Whenever multiple tables are involved in SQL, a JOIN usually appears. Suppose we need to transfer the students of class 3 to class 5 (the original class 3 becomes class 5). Using a JOIN in the UPDATE statement, the task can be completed as follows:
UPDATE student s JOIN class c ON c.no = 3 AND c.no = s.cls_no SET c.no = 5, s.cls_no = 5;
JOIN lets you accomplish the task in one statement; the same approach extends to modifying data in several tables at once, and to DELETE statements as well (see the sketch below).
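As a minimal sketch of the DELETE variant (the scenario of removing class 5 together with its students is my own illustration, not from the original article), MySQL's multi-table DELETE lists the target tables before FROM:

DELETE s, c FROM student s JOIN class c ON c.no = s.cls_no WHERE c.no = 5;
-- removes the matching rows from both student and class in a single statement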
SELECT COUNT(*)/COUNT(1)/COUNT(column name)
Here is a brief summary:
SELECT COUNT(*): the row-counting syntax defined in SQL-92 (and therefore heavily optimized);
SELECT COUNT(1): counts the rows that match the condition;
SELECT COUNT(column): counts the rows that match the condition and whose value in the named column is not NULL.
For SELECT COUNT(*) and COUNT(1), MySQL's official documentation says they are implemented the same way and there is no performance difference, so it is naturally recommended to write the more standard COUNT(*).
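A small illustration of the COUNT(column) difference (the nullable nickname column is a hypothetical addition for this sketch, not part of the original schema):

ALTER TABLE `student` ADD COLUMN `nickname` varchar(30) DEFAULT NULL;
INSERT INTO student(`no`, `name`) VALUES (9, "xiaoli");       -- nickname stays NULL for this row
SELECT COUNT(*), COUNT(1), COUNT(`nickname`) FROM student;    -- COUNT(`nickname`) skips the NULL row

In other words, COUNT(*) and COUNT(1) count all matching rows, while COUNT(`nickname`) counts only the rows where that column is not NULL.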