SLTechnology News&Howtos > Database — Shulou (Shulou.com), 05/31 report, updated 2025-02-14
This article focuses on how to write a simple demo that implements read-write separation; interested readers may wish to take a look. The approach introduced here is simple, fast, and practical.
Preface
Experienced readers will know that when a database's read and write load grows too high, we replicate one or more slave databases to serve reads, while the master is mainly responsible for writes (it also serves some reads, but under lighter pressure). Once the database is split into master and slave, the project needs to connect to the right one automatically to get the effect of read-write separation. That is not hard in itself: you could manually point each database connection in the connection pool at the appropriate service address, but doing so invades the business code, and a project may touch the database in many places, so controlling every call site by hand is a lot of work. It is therefore worth building a convenient tool for this.
In Java, most projects are built on the Spring Boot framework. Combined with Spring's own AOP support, we can easily build an annotation that achieves read-write separation. With an annotation, the business code stays untouched, and usage is convenient.
Let's write a simple demo.
Environment setup
Database: MySQL
Number of databases: 2, one master and one slave
There are many articles online about deploying a MySQL master-slave environment, so that is not covered here.
Start the project
First, naturally, create a Spring Boot project, then add the following dependencies to the pom file:
<dependencies>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid-spring-boot-starter</artifactId>
        <version>1.1.10</version>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
        <version>1.3.2</version>
    </dependency>
    <dependency>
        <groupId>tk.mybatis</groupId>
        <artifactId>mapper-spring-boot-starter</artifactId>
        <version>2.1.5</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.16</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jdbc</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.4</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
</dependencies>
Directory structure
After introducing the basic dependencies, organize the directory structure; the completed project skeleton follows the usual Spring Boot layout.
Build a table
Create a table named user: execute the SQL statements below on the master database, and the corresponding table and data will be replicated to the slave.
DROP TABLE IF EXISTS `user`;
CREATE TABLE `user` (
  `user_id` bigint(20) NOT NULL COMMENT 'user id',
  `user_name` varchar(50) DEFAULT '' COMMENT 'user name',
  `user_phone` varchar(50) DEFAULT '' COMMENT 'user mobile',
  `address` varchar(255) DEFAULT '' COMMENT 'address',
  `weight` int(3) NOT NULL DEFAULT '1' COMMENT 'weight',
  `created_at` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `updated_at` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `user` VALUES ('1196978513958141952', 'Test 1', '18826334748', 'Haizhu District, Guangzhou', '1', '2019-11-20 10:28:51', '2019-11-22 14:28:28');
INSERT INTO `user` VALUES ('1196978513958141953', 'Test 2', '18826274230', 'Tianhe District, Guangzhou', '2', '2019-11-20 10:29:14', '2019-11-22 14:28:28');
INSERT INTO `user` VALUES ('1196978513958141954', 'Test 3', '18826273900', 'Tianhe District, Guangzhou', '1', '2019-11-20 10:30:19', '2019-11-22 14:28:30');
Master-slave data source configuration
application.yml — the key part is the data source configuration for the master and slave databases.
server:
  port: 8001
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    driver-class-name: com.mysql.cj.jdbc.Driver
    master:
      url: jdbc:mysql://127.0.0.1:3307/user?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&failOverReadOnly=false&useSSL=false&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true
      username: root
      password:
    slave:
      url: jdbc:mysql://127.0.0.1:3308/user?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&failOverReadOnly=false&useSSL=false&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true
      username: root
      password:
Because there are two data sources, one master and one slave, we represent them with an enum so they are easy to reference later.
@Getter
public enum DynamicDataSourceEnum {
    MASTER("master"),
    SLAVE("slave");

    private final String dataSourceName;

    DynamicDataSourceEnum(String dataSourceName) {
        this.dataSourceName = dataSourceName;
    }
}
The data source configuration class DataSourceConfig wires up the two data sources, masterDb and slaveDb:
@Configuration
@MapperScan(basePackages = "com.xjt.proxy.mapper", sqlSessionTemplateRef = "sqlTemplate")
public class DataSourceConfig {

    // master database
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource masterDb() {
        return DruidDataSourceBuilder.create().build();
    }

    // slave database
    @Bean
    @ConditionalOnProperty(prefix = "spring.datasource", name = "slave", matchIfMissing = true)
    @ConfigurationProperties(prefix = "spring.datasource.slave")
    public DataSource slaveDb() {
        return DruidDataSourceBuilder.create().build();
    }

    // dynamic master-slave routing data source
    @Bean
    public DynamicDataSource dynamicDb(@Qualifier("masterDb") DataSource masterDataSource,
                                       @Autowired(required = false) @Qualifier("slaveDb") DataSource slaveDataSource) {
        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put(DynamicDataSourceEnum.MASTER.getDataSourceName(), masterDataSource);
        if (slaveDataSource != null) {
            targetDataSources.put(DynamicDataSourceEnum.SLAVE.getDataSourceName(), slaveDataSource);
        }
        dynamicDataSource.setTargetDataSources(targetDataSources);
        dynamicDataSource.setDefaultTargetDataSource(masterDataSource);
        return dynamicDataSource;
    }

    @Bean
    public SqlSessionFactory sessionFactory(@Qualifier("dynamicDb") DataSource dynamicDataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources("classpath*:mapper/*Mapper.xml"));
        bean.setDataSource(dynamicDataSource);
        return bean.getObject();
    }

    @Bean
    public SqlSessionTemplate sqlTemplate(@Qualifier("sessionFactory") SqlSessionFactory sqlSessionFactory) {
        return new SqlSessionTemplate(sqlSessionFactory);
    }

    @Bean(name = "dataSourceTx")
    public DataSourceTransactionManager dataSourceTx(@Qualifier("dynamicDb") DataSource dynamicDataSource) {
        DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager();
        dataSourceTransactionManager.setDataSource(dynamicDataSource);
        return dataSourceTransactionManager;
    }
}
Set the route
The route exists so that the corresponding data source can be found. We use a ThreadLocal to bind the data source key to each thread, making it easy to retrieve whenever it is needed.
public class DataSourceContextHolder {

    private static final ThreadLocal<String> DYNAMIC_DATASOURCE_CONTEXT = new ThreadLocal<>();

    public static void set(String datasourceType) {
        DYNAMIC_DATASOURCE_CONTEXT.set(datasourceType);
    }

    public static String get() {
        return DYNAMIC_DATASOURCE_CONTEXT.get();
    }

    public static void clear() {
        DYNAMIC_DATASOURCE_CONTEXT.remove();
    }
}

Route lookup:

public class DynamicDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.get();
    }
}
AbstractRoutingDataSource routes to the corresponding data source based on a lookup key: it maintains a set of target data sources internally, maps each routing key to a target data source, and provides a method to resolve the data source by key.
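As a sanity check of the routing idea, here is a self-contained plain-Java sketch with no Spring involved. The class names RoutingKeyHolder and SimpleRoutingDataSource are made up for illustration, and the "data sources" are just JDBC URL strings; the sketch mimics what AbstractRoutingDataSource does — look up the thread-bound key and fall back to the default (the master) when no key is set.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the article's DataSourceContextHolder.
class RoutingKeyHolder {
    private static final ThreadLocal<String> KEY = new ThreadLocal<>();
    static void set(String key) { KEY.set(key); }
    static String get() { return KEY.get(); }
    static void clear() { KEY.remove(); }
}

// Illustrative stand-in for AbstractRoutingDataSource's key-to-target map.
class SimpleRoutingDataSource {
    private final Map<String, String> targets = new HashMap<>();
    private final String defaultTarget;

    SimpleRoutingDataSource(String defaultTarget) {
        this.defaultTarget = defaultTarget;
        targets.put("master", "jdbc:mysql://127.0.0.1:3307/user");
        targets.put("slave", "jdbc:mysql://127.0.0.1:3308/user");
    }

    // Mirrors determineTargetDataSource(): route by the thread-bound key,
    // falling back to the default target when no key is set.
    String resolve() {
        String key = RoutingKeyHolder.get();
        return targets.getOrDefault(key != null ? key : defaultTarget,
                targets.get(defaultTarget));
    }
}
```

Spring's real implementation resolves an actual DataSource the same way before handing out connections.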
Data source annotation
To switch data sources conveniently, we write an annotation whose value is the enum constant for the target data source; the default is the master.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface DataSourceSelector {

    DynamicDataSourceEnum value() default DynamicDataSourceEnum.MASTER;

    boolean clear() default true;
}

Switching data sources with AOP
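Because the annotation is retained at runtime, an aspect can read its attributes reflectively from the intercepted method. Below is a minimal standalone sketch of that lookup; the Demo class, its readUsers method, and the simplified String-valued copy of DataSourceSelector are all invented here so the snippet compiles on its own.

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Simplified local copy of the article's annotation: String instead of the enum.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
@interface DataSourceSelector {
    String value() default "master";
    boolean clear() default true;
}

class Demo {
    @DataSourceSelector(value = "slave")
    void readUsers() { }

    // Resolve the routing key the way an aspect would: fetch the Method,
    // then read the annotation's attributes, defaulting to the master.
    static String routingKeyOf(String methodName) throws NoSuchMethodException {
        Method m = Demo.class.getDeclaredMethod(methodName);
        DataSourceSelector selector = m.getAnnotation(DataSourceSelector.class);
        return selector == null ? "master" : selector.value();
    }
}
```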
Now AOP finally makes its appearance. We define an aspect class that switches the data source for annotated methods; the code is as follows:
@Slf4j
@Aspect
@Order(value = 1)
@Component
public class DataSourceContextAop {

    @Around("@annotation(com.xjt.proxy.dynamicdatasource.DataSourceSelector)")
    public Object setDynamicDataSource(ProceedingJoinPoint pjp) throws Throwable {
        boolean clear = true;
        try {
            Method method = this.getMethod(pjp);
            DataSourceSelector dataSourceImport = method.getAnnotation(DataSourceSelector.class);
            clear = dataSourceImport.clear();
            DataSourceContextHolder.set(dataSourceImport.value().getDataSourceName());
            log.info("========== data source switched to: {}", dataSourceImport.value().getDataSourceName());
            return pjp.proceed();
        } finally {
            if (clear) {
                DataSourceContextHolder.clear();
            }
        }
    }

    private Method getMethod(JoinPoint pjp) {
        MethodSignature signature = (MethodSignature) pjp.getSignature();
        return signature.getMethod();
    }
}
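The essential contract of the advice — bind the key, run the join point, clear the key in a finally block — can be exercised without Spring. In this sketch the names ContextHolder and runWith are hypothetical; runWith wraps a Callable the way the @Around method wraps pjp.proceed(), guaranteeing the ThreadLocal never leaks into the next request on a pooled thread, even when the body throws.

```java
import java.util.concurrent.Callable;

// Illustrative stand-in for the article's DataSourceContextHolder.
class ContextHolder {
    private static final ThreadLocal<String> CTX = new ThreadLocal<>();
    static void set(String v) { CTX.set(v); }
    static String get() { return CTX.get(); }
    static void clear() { CTX.remove(); }
}

class AroundAdviceSketch {
    // Mimics setDynamicDataSource(): bind the key, run the "join point",
    // and clear the key in finally so it cannot leak to later work.
    static <T> T runWith(String dataSourceKey, Callable<T> body) throws Exception {
        try {
            ContextHolder.set(dataSourceKey);
            return body.call();
        } finally {
            ContextHolder.clear();
        }
    }
}
```

Inside the wrapped body the key is visible; once runWith returns, the thread's context is empty again.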
The preparatory configuration is now complete; let's test the result.
First write the service, which has two methods, one read and one update:
@Service
public class UserService {

    @Autowired
    private UserMapper userMapper;

    @DataSourceSelector(value = DynamicDataSourceEnum.MASTER)
    public int update(Long userId) {
        User user = new User();
        user.setUserId(userId);
        user.setUserName("Lao Xue");
        return userMapper.updateByPrimaryKeySelective(user);
    }

    @DataSourceSelector(value = DynamicDataSourceEnum.SLAVE)
    public User find(Long userId) {
        User user = new User();
        user.setUserId(userId);
        return userMapper.selectByPrimaryKey(user);
    }
}
From the annotations on the methods, the read method goes to the slave database and the update method goes to the master; the row being updated is the one whose userId is 1196978513958141952.
Then write a test class to verify the effect.
@RunWith(SpringRunner.class)
@SpringBootTest
class UserServiceTest {

    @Autowired
    UserService userService;

    @Test
    void find() {
        User user = userService.find(1196978513958141952L);
        System.out.println("id: " + user.getUserId());
        System.out.println("name: " + user.getUserName());
        System.out.println("phone: " + user.getUserPhone());
    }

    @Test
    void update() {
        Long userId = 1196978513958141952L;
        userService.update(userId);
        User user = userService.find(userId);
        System.out.println(user.getUserName());
    }
}
Test results:
1. Read method
2. Update method
After execution, comparing the databases shows that both the master and the slave contain the modified data, which means our read-write separation works. Of course, the update method could instead be pointed at the slave, in which case only the slave's data would change, not the master's.
At this point, you should have a deeper understanding of how to write a simple demo to achieve read-write separation. Try it out in practice yourself.