A brief introduction to the SequoiaDB architecture
As a distributed database, SequoiaDB separates computing from storage: it consists of a database instance layer and a storage engine layer. The storage engine layer is responsible for the core functions of the database, such as data read/write, storage, and distributed transaction management. The database instance layer, i.e. the SQL layer, handles SQL requests from the application, forwards them to the storage engine layer, and returns the storage engine layer's results to the application. It supports structured instances such as MySQL / PostgreSQL / SparkSQL instances, as well as unstructured instances such as JSON / S3 object storage / PosixFS instances. Because this architecture supports so many instance types, applications can migrate almost seamlessly from a traditional database to SequoiaDB, reducing development and learning costs. Peers in the database community with whom we have discussed this design also endorse the architecture.
In the setup described here, the SQL layer uses a MySQL instance, and the storage engine layer consists of three data nodes plus a coordinator node and a catalog node. The data nodes store the data; the coordinator node stores no data and instead routes and dispatches MySQL requests to the data nodes; the catalog node stores cluster metadata such as user information and partition information. Containers are used to simulate physical machines or cloud virtual machines: the MySQL instance runs in one container, the catalog node and coordinator node in another, and the three data nodes in a third. The three data nodes form three data groups, each with three replicas. The massive data of a web application is distributed across the data nodes by sharding; for example, records A, B, and C are scattered across the three machines.
Data sharding is implemented through a distributed hash mechanism, where DHT is short for Distributed Hash Table. When data is written, each record is first sent through the MySQL instance to the coordinator node, which hashes the record's partition key with the distributed hash algorithm. Based on the hash result, the coordinator node decides which partition the record is sent to, so the data in each partition is completely isolated and independent of the others. In this way a large table is split into smaller tables in different sub-partitions, splitting the data.
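As a rough sketch of the routing principle only (SequoiaDB's actual DHT implementation is internal and more elaborate than a simple modulo), hashing three partition keys onto three partitions can be mimicked with plain MySQL functions:
mysql> -- illustrative only: map each partition key to one of 3 partitions
mysql> select crc32('A') mod 3 as rec_a, crc32('B') mod 3 as rec_b, crc32('C') mod 3 as rec_c;
Records whose partition keys hash to the same value land in the same partition, which is why the partitions remain completely independent of one another.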
Importing and exporting data with mysqldump and mydumper/myloader
SequoiaDB is fully compatible with MySQL, so some users will ask:
"since it is fully compatible, can MySQL-related tools be used?"
"how do I migrate data from MySQL to SequoiaDB?"
Let's describe how SequoiaDB uses mysqldump and mydumper/myloader to import and export data.
mysqldump
1) Create test data through a stored procedure
# mysql -h 127.0.0.1 -P 3306 -u root
mysql> create database news;
mysql> use news;
mysql> create table user_info (id int(11), unickname varchar(100));
mysql> delimiter //
mysql> create procedure `news`.`user_info_PROC`()
    -> begin
    -> declare iLoop smallint default 0;
    -> declare iNum mediumint default 0;
    -> declare uid int default 0;
    -> declare unickname varchar(100) default 'test';
    -> while iNum ...
2) Check the test data
mysql> show tables;
+----------------+
| Tables_in_news |
+----------------+
| user_info      |
+----------------+
1 row in set (0.00 sec)

mysql> select count(*) from user_info;
+----------+
| count(*) |
+----------+
|      121 |
+----------+
1 row in set (0.01 sec)
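Since the stored procedure above is only partially shown, here is a minimal hypothetical sketch that generates the same 121 test rows; the procedure name and loop body are illustrative, not the original:
mysql> delimiter //
mysql> create procedure `news`.`user_info_PROC2`() -- hypothetical sketch, not the original procedure
    -> begin
    -> declare iNum mediumint default 0;
    -> while iNum < 121 do
    -> insert into user_info values (iNum, concat('test', iNum));
    -> set iNum = iNum + 1;
    -> end while;
    -> end //
mysql> delimiter ;
mysql> call `news`.`user_info_PROC2`();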
3) Execute the following mysqldump backup command
# /opt/sequoiasql/mysql/bin/mysqldump -h 127.0.0.1 -P 3306 -u root -B news > news.sql
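For larger databases, standard mysqldump options can be combined with this command; for example, --single-transaction asks for a consistent snapshot while dumping (a sketch; whether the option behaves identically against SequoiaDB's storage engine should be verified in your environment):
# /opt/sequoiasql/mysql/bin/mysqldump -h 127.0.0.1 -P 3306 -u root --single-transaction -B news > news.sql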
Check that the file news.sql has been generated, then log in to the database and delete the original database:
mysql> drop database news;
Query OK, 1 row affected (0.10 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)
4) Use source to import the data
Log in to the database and replay the complete SQL file exported by mysqldump:
# /opt/sequoiasql/mysql/bin/mysql -h 127.0.0.1 -P 3306 -u root
mysql> source news.sql
mysql> use news;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+----------------+
| Tables_in_news |
+----------------+
| user_info      |
+----------------+
1 row in set (0.00 sec)
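Equivalently, the dump can be replayed non-interactively by redirecting the file into the mysql client:
# /opt/sequoiasql/mysql/bin/mysql -h 127.0.0.1 -P 3306 -u root < news.sql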
The returned results show that SequoiaDB does support the mysqldump export tool and the source import command.
Using mydumper and myloader
This section introduces the use of mydumper and myloader tools.
Some readers are a little confused about mysqldump and mydumper: mysqldump ships with MySQL itself, while mydumper/myloader is a set of logical backup and restore tools developed and maintained by engineers from companies such as MySQL and Facebook. It is widely used by DBAs and has to be installed separately; installation instructions are easy to find online.
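As one hypothetical example (package availability and versions vary by distribution), on a CentOS/RHEL-style system mydumper can often be installed from the EPEL repository:
# yum install -y epel-release   # enable EPEL first (assumes a CentOS/RHEL-style system)
# yum install -y mydumper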
Use of mydumper/myloader for SequoiaDB
Let's check the mydumper version number first.
# mydumper --version
mydumper 0.9.1, built against MySQL 5.7.17
1) Export data with mydumper
# mydumper -h 127.0.0.1 -P 3306 -u root -B news -o /home/sequoiadb
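mydumper dumps tables in parallel; the number of dump threads can be tuned with -t (4 is used here as an illustrative value):
# mydumper -h 127.0.0.1 -P 3306 -u root -B news -o /home/sequoiadb -t 4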
Delete the original database
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| news               |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> drop database news;
Query OK, 1 row affected (0.13 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)
2) Import data with myloader
With the original data deleted, import it again using myloader:
# myloader -h 127.0.0.1 -P 3306 -u root -B news -d /home/sequoiadb
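If the target tables still exist, myloader's -o (overwrite-tables) option drops and recreates them instead of requiring a manual drop first (a sketch; check the flag against your myloader version):
# myloader -h 127.0.0.1 -P 3306 -u root -B news -d /home/sequoiadb -o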
Log in to the database to verify:
# /opt/sequoiasql/mysql/bin/mysql -h 127.0.0.1 -P 3306 -u root
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| news               |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> use news;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+----------------+
| Tables_in_news |
+----------------+
| user_info      |
+----------------+
1 row in set (0.00 sec)

mysql> select count(*) from user_info;
+----------+
| count(*) |
+----------+
|      121 |
+----------+
1 row in set (0.00 sec)
The mydumper export and myloader import both completed without problems, so SequoiaDB does support the MySQL-compatible tools mydumper and myloader.
To migrate data from MySQL, export it with mydumper and then import it into SequoiaDB with myloader.
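Put together, a migration boils down to two commands; mysql-src and sequoiadb-host below are hypothetical host names, and the dump directory must be reachable from both steps:
# mydumper -h mysql-src -P 3306 -u root -B news -o /tmp/news_dump    # export from the source MySQL
# myloader -h sequoiadb-host -P 3306 -u root -B news -d /tmp/news_dump    # import into SequoiaDB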
Summary
SequoiaDB adopts a compute-storage separated architecture and provides full MySQL compatibility. As this article shows, SequoiaDB works with standard MySQL peripheral tools such as mysqldump and mydumper/myloader, while its distributed scalability greatly improves the extensibility of existing applications and overall data management capability. SequoiaDB can therefore serve as a strong replacement for a traditional single-node MySQL deployment.