How to Use dynamic-datasource-spring-boot-starter for Multiple Data Sources, with Source Code Analysis


This article covers how to use dynamic-datasource-spring-boot-starter to work with multiple data sources, together with an analysis of its source code. It is quite practical, so I am sharing it here; I hope you get something out of it after reading. Let's take a look.

dynamic-datasource-spring-boot-starter is a Spring Boot starter for quickly integrating multiple data sources.

Github: https://github.com/baomidou/dynamic-datasource-spring-boot-starter

Documentation: https://github.com/baomidou/dynamic-datasource-spring-boot-starter/wiki

It belongs to the same ecosystem as MyBatis-Plus and integrates with MyBatis-Plus easily.

Features:

Data source grouping, suitable for a variety of scenarios: pure multi-database, read-write separation, one master with multiple slaves, and mixed modes.

Built-in encryption for sensitive parameters and initialization of database schema and data.

Quick integration with Druid, MyBatis-Plus, P6Spy, and JNDI.

Simplified Druid and HikariCP configuration, with global parameter configuration.

A custom data source provider interface (by default the configuration is read from yml or properties).

A way to add or remove data sources dynamically after the application has started.

A pure read-write separation scheme for MyBatis environments.

Data source resolution from dynamic SpEL parameters, e.g. from the session, a header, or method parameters (a great fit for multi-tenant architectures); see the SpEL sketch at the end of this list.

Multi-layer nested data source switching (ServiceA >> ServiceB >> ServiceC, each service using a different data source).

An experimental feature for switching data sources via regular expressions or SpEL instead of annotations.

Distributed transaction support based on Seata.
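
To make the SpEL feature concrete, here is a minimal sketch. The TenantUserService class, the tenantName header/session attribute, and the user table are illustrative assumptions; check the project wiki for the exact expression syntax supported by your version.

import com.baomidou.dynamic.datasource.annotation.DS;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Map;

@Service
public class TenantUserService {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // data source name taken from the "tenantName" header of the current request
    @DS("#header.tenantName")
    public List<Map<String, Object>> listByHeaderTenant() {
        return jdbcTemplate.queryForList("select * from user");
    }

    // data source name taken from the "tenantName" session attribute
    @DS("#session.tenantName")
    public List<Map<String, Object>> listBySessionTenant() {
        return jdbcTemplate.queryForList("select * from user");
    }

    // data source name resolved from the method argument via SpEL
    @DS("#tenantName")
    public List<Map<String, Object>> listFor(String tenantName) {
        return jdbcTemplate.queryForList("select * from user");
    }
}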

Hands-on

First, the Maven coordinates:

<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>dynamic-datasource-spring-boot-starter</artifactId>
    <version>3.1.0</version>
</dependency>

Here are a few more useful application scenarios.

Basic use

Usage is very simple and comes down to two steps.

Step one: configure the data sources in yml.

Step two: in the service layer, add the @DS annotation to the methods that should switch data sources, or annotate the whole service class. Method-level annotations take precedence over class-level annotations.

spring:
  datasource:
    dynamic:
      primary: master # sets the default data source or data source group; the default is master
      strict: false # strict mode, off by default; when enabled, an exception is thrown if the specified data source is not matched, otherwise the default data source is used
      datasource:
        master:
          url: jdbc:mysql://127.0.0.1:3306/dynamic
          username: root
          password: 123456
          driver-class-name: com.mysql.jdbc.Driver
        db1:
          url: jdbc:gbase://127.0.0.1:5258/dynamic
          username: root
          password: 123456
          driver-class-name: com.gbase.jdbc.Driver

That configures two different data sources. Two more configuration patterns follow, and then the service code.

# Multi-master, multi-slave
spring:
  datasource:
    dynamic:
      datasource:
        master_1:
        master_2:
        slave_1:
        slave_2:
        slave_3:

For multi-master/multi-slave setups, name the data sources groupName_xxx: the group name is the part before the underscore, and data sources sharing the same group name are placed in the same group. When switching, you can specify either a concrete data source name or a group name; with a group name, the load-balancing strategy picks the actual data source, as in the sketch below.
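
A minimal sketch of switching by group name against the master_1/master_2/slave_1... configuration above; the UserQueryService class and the user table are assumptions, not part of the official example.

import com.baomidou.dynamic.datasource.annotation.DS;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Map;

@Service
public class UserQueryService {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // "slave" is a group name: each call is load-balanced across slave_1, slave_2 and slave_3
    @DS("slave")
    public List<Map<String, Object>> selectAll() {
        return jdbcTemplate.queryForList("select * from user");
    }

    // "master" is also a group name here: writes are balanced across master_1 and master_2
    @DS("master")
    public int insert(String name, int age) {
        return jdbcTemplate.update("insert into user(name, age) values (?, ?)", name, age);
    }
}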

# Pure multi-database (remember to set primary)
spring:
  datasource:
    dynamic:
      datasource:
        db1:
        db2:
        db3:
        db4:
        db5:

A pure multi-database setup simply lists the databases one by one.

@Service
@DS("master")
public class UserServiceImpl implements UserService {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    public List selectAll() {
        return jdbcTemplate.queryForList("select * from user");
    }

    @Override
    @DS("db1")
    public List selectByCondition() {
        return jdbcTemplate.queryForList("select * from user where age > 10");
    }
}

Annotation behavior: without @DS, the default data source is used; with @DS("dsName"), dsName can be either a group name or a specific data source name.

From the logs we can see that all the configured data sources have been initialized; when a data source switch happens, it is logged as well.

Quite convenient, isn't it? This is the official example.

Integrating the Druid connection pool

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.1.22</version>
</dependency>

First, introduce the dependency.

spring:
  autoconfigure:
    exclude: com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure

Then exclude Druid's native auto-configuration.

spring:
  datasource: # database connection configuration
    dynamic:
      druid: # global default values below; monitoring/statistics filters can be changed globally
        filters: stat
        # initial / minimum / maximum pool size
        initial-size: 1
        min-idle: 1
        max-active: 20
        # maximum wait time when acquiring a connection
        max-wait: 60000
        # how often to check for idle connections that need to be closed
        time-between-eviction-runs-millis: 60000
        # minimum time a connection lives in the pool
        min-evictable-idle-time-millis: 300000
        validation-query: SELECT 'x'
        test-while-idle: true
        test-on-borrow: false
        test-on-return: false
        # PSCache: enable and set the size per connection; true for Oracle, false recommended for MySQL
        pool-prepared-statements: false
        max-pool-prepared-statement-per-connection-size: 20
        stat:
          merge-sql: true
          log-slow-sql: true
          slow-sql-millis: 2000
      primary: master
      datasource:
        master:
          url: jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&serverTimezone=GMT%2B8
          username: root
          password: root
          driver-class-name: com.mysql.cj.jdbc.Driver
        gbase1:
          url: jdbc:gbase://127.0.0.1:5258/test?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&failOverReadOnly=false&useSSL=false&zeroDateTimeBehavior=convertToNull
          username: gbase
          password: gbase
          driver-class-name: com.gbase.jdbc.Driver
          druid: # per-data-source values override the global druid parameters
            initial-size:
            validation-query: select 1 FROM DUAL # e.g. Oracle needs a different validation query
            public-key: # (non-global parameter) enables encryption; the related connection parameters and filter are configured automatically

Once this is configured, switching data sources works the same as before: just annotate the service class or method with @DS("db1").

For detailed configuration, please see the configuration class com.baomidou.dynamic.datasource.spring.boot.autoconfigure.DynamicDataSourceProperties.

Service nesting

This is the ninth feature in the list above: multi-layer nested data source switching (ServiceA >> ServiceB >> ServiceC, each service using a different data source).

Borrowing the demo from the source code: SchoolService calls teacherService and studentService.

@Service
public class SchoolServiceImpl {

    public void addTeacherAndStudent() {
        teacherService.addTeacherWithTx("ss", 1);
        teacherMapper.addTeacher("test", 111);
        studentService.addStudentWithTx("tt", 2);
    }
}

@Service
@DS("teacher")
public class TeacherServiceImpl {

    public boolean addTeacherWithTx(String name, Integer age) {
        return teacherMapper.addTeacher(name, age);
    }
}

@Service
@DS("student")
public class StudentServiceImpl {

    public boolean addStudentWithTx(String name, Integer age) {
        return studentMapper.addStudent(name, age);
    }
}

During the addTeacherAndStudent call, the data source switches primary -> teacher -> primary -> student -> primary.

For other demos, see the official wiki, which covers many more usages; I will not repeat them here. The key point is to understand the underlying principle.

Why does switching the data source not take effect, or why does the transaction not work?

This kind of problem typically shows up with the service nesting from the previous section, e.g. serviceA -> serviceB, serviceC, with @Transactional added on serviceA.

To put it simply: when nested services operate on multiple data sources, you cannot open the transaction by putting @Transactional on the outermost layer, otherwise the data source switch will not take effect; that case is really a distributed transaction and needs a solution such as Seata. If only a single data source is involved (no switching needed), @Transactional can be used as usual to guarantee that data source's integrity.

Here is a rough analysis of why the switch does not take effect inside a transaction:

Data source switching works because the starter implements the DataSource interface and its getConnection method. Once a transaction has been opened in a service, all further operations in that service use the data source on which the transaction was started, because its connection is cached. In the doBegin method of DataSourceTransactionManager you can see the txObject: inside an existing transaction the Connection is reused from the holder, so the data source cannot be switched.

/**
 * This implementation sets the isolation level but ignores the timeout.
 */
@Override
protected void doBegin(Object transaction, TransactionDefinition definition) {
    DataSourceTransactionObject txObject = (DataSourceTransactionObject) transaction;
    Connection con = null;

    try {
        if (!txObject.hasConnectionHolder() ||
                txObject.getConnectionHolder().isSynchronizedWithTransaction()) {
            // Opening a new transaction obtains a new Connection, so the DataSource's
            // getConnection method is called, which is where the data source switch happens
            Connection newCon = obtainDataSource().getConnection();
            if (logger.isDebugEnabled()) {
                logger.debug("Acquired Connection [" + newCon + "] for JDBC transaction");
            }
            txObject.setConnectionHolder(new ConnectionHolder(newCon), true);
        }

        txObject.getConnectionHolder().setSynchronizedWithTransaction(true);
        // if the transaction is already open, the Connection is taken from the holder
        con = txObject.getConnectionHolder().getConnection();
        // ...
    }

Multiple data source transaction nesting

Looking at the source code above, a new transaction re-obtains a Connection and therefore switches the data source successfully. So what if I add @Transactional to the service method of each data source? (This involves Spring's transaction propagation behavior.)

Let's run a small experiment with the example above, serviceA -> (nested) serviceB, serviceC: serviceA keeps its @Transactional, and now @Transactional is also added to the methods of serviceB and serviceC, i.e. every method called in the chain is annotated with @Transactional.

@Transactional
public void addTeacherAndStudentWithTx() {
    teacherService.addTeacherWithTx("ss", 1);
    studentService.addStudentWithTx("tt", 2);
    throw new RuntimeException("test");
}

Like this, with @Transactional on both child services as well.

In fact the data source is still not switched: the default propagation level is REQUIRED, so the parent and child services belong to the same transaction and reuse the same Connection, even though multiple data sources are configured. If you change the propagation to REQUIRES_NEW so that each child service opens a new transaction, the data sources can be switched; each transaction is then independent, and a rollback in the parent service will not roll back the child services (see Spring transaction propagation for details). This guarantees the integrity of each individual data source; if you need integrity across all data sources, use the Seata distributed transaction framework.

@Transactional
public void addTeacherAndStudentWithTx() {
    // database operations on the outer (primary) data source
    aaaDao.doSomethings("test");
    teacherService.addTeacherWithTx("ss", 1);
    studentService.addStudentWithTx("tt", 2);
    throw new RuntimeException("test");
}

Regarding transaction nesting, another situation is doing some DB1 operations in the outer service and then calling the services for DB2 and DB3. To keep the DB1 operations transactional, add @Transactional to the outer service; for the inner services to still switch data sources correctly, set their propagation to Propagation.REQUIRES_NEW. The switches then work because each inner call runs in its own independent transaction, as in the sketch below.
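
A minimal sketch of that combination, reusing the TeacherServiceImpl and teacherMapper names from the demo above; whether the official demo declares REQUIRES_NEW itself is not shown here, so treat this as an assumption.

import com.baomidou.dynamic.datasource.annotation.DS;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
@DS("teacher")
public class TeacherServiceImpl {

    @Autowired
    private TeacherMapper teacherMapper;

    // REQUIRES_NEW suspends the caller's transaction and opens a new one,
    // so getConnection() is called again and the "teacher" data source is actually used
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public boolean addTeacherWithTx(String name, Integer age) {
        return teacherMapper.addTeacher(name, age);
    }
}

Keep in mind that a rollback in the outer service does not undo what these inner REQUIRES_NEW transactions have already committed.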

Addendum: notes on @Transactional with transactions spanning multiple data sources

@Transactional
public void insertDB1andDB2() {
    db1Service.insertOne();
    db2Service.insertOne();
    throw new RuntimeException("test");
}

Similar to the operation above, the multiple data sources here are implemented by declaring separate DataSource, DataSourceTransactionManager, SqlSessionFactory, and SqlSessionTemplate beans for each database (as mentioned in an earlier blog post), and @Transactional is added on the outer method to open the transaction.
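
A minimal sketch of that kind of manual configuration; the bean names (db1DataSource, db1TransactionManager) and the spring.datasource.db1 prefix are illustrative, and the per-database SqlSessionFactory and SqlSessionTemplate beans are omitted for brevity.

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

@Configuration
public class MultiDataSourceConfig {

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.db1")
    public DataSource db1DataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.db2")
    public DataSource db2DataSource() {
        return DataSourceBuilder.create().build();
    }

    // the @Primary transaction manager is the one @Transactional uses by default
    @Bean
    @Primary
    public DataSourceTransactionManager db1TransactionManager(@Qualifier("db1DataSource") DataSource db1DataSource) {
        return new DataSourceTransactionManager(db1DataSource);
    }

    @Bean
    public DataSourceTransactionManager db2TransactionManager(@Qualifier("db2DataSource") DataSource db2DataSource) {
        return new DataSourceTransactionManager(db2DataSource);
    }
}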

I tried throwing an exception in the middle to see whether both databases would roll back, and it turns out that only one data source's transaction takes effect. Opening the @Transactional annotation, you find a transactionManager attribute, which refers to one of the previously declared transaction manager beans; by default the @Primary one (DB1's) is used, so DB2's transaction does not take effect because DB1's TransactionManager is in charge. Since @Transactional can only name one transaction manager and cannot be repeated on a method, only one data source's transaction manager can be used. If an update in DB2 fails and I want both DB1 and DB2 to roll back, ChainedTransactionManager can be used; it chains the managers together and makes a best-effort attempt to roll back all of them.
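
A minimal sketch, assuming spring-data-commons is on the classpath and the two DataSourceTransactionManager beans from the sketch above; note that ChainedTransactionManager only offers best-effort rollback and is deprecated in recent Spring Data releases.

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class ChainedTxConfig {

    // commits happen in reverse declaration order; rollbacks are attempted across both managers
    @Bean
    public PlatformTransactionManager chainedTransactionManager(
            @Qualifier("db1TransactionManager") PlatformTransactionManager db1Tx,
            @Qualifier("db2TransactionManager") PlatformTransactionManager db2Tx) {
        return new ChainedTransactionManager(db1Tx, db2Tx);
    }
}

The outer method would then reference it, e.g. @Transactional(transactionManager = "chainedTransactionManager").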

Source code analysis

The source code analysis is based on version 3.1.1 (2020-05-22).

Due to space constraints, only the key code is excerpted. For the complete code, see GitHub or download dynamic-datasource-spring-boot-starter.zip.

Overall structure

To read the code, we need an entry point. Let's read it with a few questions in mind.

How is automatic configuration implemented?

Generally speaking, the best entry point for a starter is its auto-configuration class, which is registered in the META-INF/spring.factories file.

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.baomidou.dynamic.datasource.spring.boot.autoconfigure.DynamicDataSourceAutoConfiguration

You can see this automatic configuration in spring.factories

So start with the core auto-configuration class DynamicDataSourceAutoConfiguration

You can think of it as the main entry point of the program.

@Slf4j
@Configuration
@AllArgsConstructor
// read the configuration prefixed with spring.datasource.dynamic
@EnableConfigurationProperties(DynamicDataSourceProperties.class)
// our DataSource bean must be registered before Spring Boot's own DataSource auto-configuration
@AutoConfigureBefore(DataSourceAutoConfiguration.class)
// import the Druid autoConfig and the creators for the various connection pools
@Import(value = {DruidDynamicDataSourceConfiguration.class, DynamicDataSourceCreatorAutoConfiguration.class})
// enable this autoConfig when the spring.datasource.dynamic configuration is present
@ConditionalOnProperty(prefix = DynamicDataSourceProperties.PREFIX, name = "enabled", havingValue = "true", matchIfMissing = true)
public class DynamicDataSourceAutoConfiguration {

    private final DynamicDataSourceProperties properties;

    /**
     * Multi-data-source loading API. By default reads the multi-data-source configuration from yml.
     *
     * @return DynamicDataSourceProvider
     */
    @Bean
    @ConditionalOnMissingBean
    public DynamicDataSourceProvider dynamicDataSourceProvider() {
        Map<String, DataSourceProperty> datasourceMap = properties.getDatasource();
        return new YmlDynamicDataSourceProvider(datasourceMap);
    }

    /**
     * Registers our own dynamic multi-data-source DataSource.
     *
     * @param dynamicDataSourceProvider provider built on the various connection pool creators
     * @return DataSource
     */
    @Bean
    @ConditionalOnMissingBean
    public DataSource dataSource(DynamicDataSourceProvider dynamicDataSourceProvider) {
        DynamicRoutingDataSource dataSource = new DynamicRoutingDataSource();
        dataSource.setPrimary(properties.getPrimary());
        dataSource.setStrict(properties.getStrict());
        dataSource.setStrategy(properties.getStrategy());
        dataSource.setProvider(dynamicDataSourceProvider);
        dataSource.setP6spy(properties.getP6spy());
        dataSource.setSeata(properties.getSeata());
        return dataSource;
    }

    /**
     * AOP advisor that enhances methods annotated with @DS in order to switch the data source.
     *
     * @param dsProcessor dynamic parameter resolver; data source names starting with # enter the resolver chain
     * @return advisor
     */
    @Bean
    @ConditionalOnMissingBean
    public DynamicDataSourceAnnotationAdvisor dynamicDatasourceAnnotationAdvisor(DsProcessor dsProcessor) {
        // AOP method interceptor that does the work before and after the method call
        DynamicDataSourceAnnotationInterceptor interceptor = new DynamicDataSourceAnnotationInterceptor();
        // dynamic parameter resolver
        interceptor.setDsProcessor(dsProcessor);
        // AbstractPointcutAdvisor ties the pointcut and the advice together into an aspect
        DynamicDataSourceAnnotationAdvisor advisor = new DynamicDataSourceAnnotationAdvisor(interceptor);
        advisor.setOrder(properties.getOrder());
        return advisor;
    }

    /**
     * Dynamic parameter resolver chain.
     *
     * @return DsProcessor
     */
    @Bean
    @ConditionalOnMissingBean
    public DsProcessor dsProcessor() {
        DsHeaderProcessor headerProcessor = new DsHeaderProcessor();
        DsSessionProcessor sessionProcessor = new DsSessionProcessor();
        DsSpelExpressionProcessor spelExpressionProcessor = new DsSpelExpressionProcessor();
        // order: header -> session -> spel; every data source name starting with # is resolved from the parameters
        headerProcessor.setNextProcessor(sessionProcessor);
        sessionProcessor.setNextProcessor(spelExpressionProcessor);
        return headerProcessor;
    }

    /**
     * Switches data sources using regex or SpEL without annotations (experimental feature).
     * To enable it, a DynamicDataSourceConfigure bean must be configured.
     *
     * @param dynamicDataSourceConfigure dynamicDataSourceConfigure
     * @param dsProcessor                dsProcessor
     * @return advisor
     */
    @Bean
    @ConditionalOnBean(DynamicDataSourceConfigure.class)
    public DynamicDataSourceAdvisor dynamicAdvisor(DynamicDataSourceConfigure dynamicDataSourceConfigure, DsProcessor dsProcessor) {
        DynamicDataSourceAdvisor advisor = new DynamicDataSourceAdvisor(dynamicDataSourceConfigure.getMatchers());
        advisor.setDsProcessor(dsProcessor);
        advisor.setOrder(Ordered.HIGHEST_PRECEDENCE);
        return advisor;
    }
}

The five beans auto-configured here are all important and will be covered one by one later.

A word about the auto-configuration first: the class above carries several annotations, each with a comment, and the important one is this:

// read the configuration prefixed with spring.datasource.dynamic
@EnableConfigurationProperties(DynamicDataSourceProperties.class)

@EnableConfigurationProperties enables classes annotated with @ConfigurationProperties; it is mainly used to bind properties or yml configuration files to a bean, which is very handy in practice.

@ConfigurationProperties(prefix = DynamicDataSourceProperties.PREFIX)
public class DynamicDataSourceProperties {

    public static final String PREFIX = "spring.datasource.dynamic";
    public static final String HEALTH = PREFIX + ".health";

    /**
     * The default library must be set; defaults to master.
     */
    private String primary = "master";

    /**
     * Whether strict mode is enabled. In strict mode an error is reported when the data source
     * is not matched; in non-strict mode the default data source set by primary is used instead.
     */
    private Boolean strict = false;

    /**
     * Druid global parameter configuration.
     */
    @NestedConfigurationProperty
    private DruidConfig druid = new DruidConfig();

    /**
     * HikariCP global parameter configuration.
     */
    @NestedConfigurationProperty
    private HikariCpConfig hikari = new HikariCpConfig();
}

Everything configured under spring.datasource.dynamic is bound into this configuration bean. Note that @NestedConfigurationProperty is used to nest other configuration classes. If you are unsure what a configuration item does, just look at the DynamicDataSourceProperties class.

Take DruidConfig as an example: it is a custom configuration class of this starter, not part of Druid itself, and it has a toProperties method. So that the druid block under each dataSource in yml can be configured independently (falling back to the global configuration when absent), toProperties merges the global and per-data-source settings into a Properties object, which the DruidDataSourceCreator class then uses to create the Druid connection pool, roughly as sketched below.
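
A simplified, hypothetical sketch of that merge idea; the field names, property keys and the DruidConfigSketch class are illustrative, not the actual DruidConfig code.

import java.util.Properties;

// illustrative only: per-data-source values win, global values fill the gaps
public class DruidConfigSketch {

    private Integer initialSize;     // e.g. initial-size
    private String validationQuery;  // e.g. validation-query

    public Properties toProperties(DruidConfigSketch global) {
        Properties props = new Properties();
        Integer size = initialSize != null ? initialSize : global.initialSize;
        if (size != null) {
            props.setProperty("druid.initialSize", String.valueOf(size));
        }
        String query = validationQuery != null ? validationQuery : global.validationQuery;
        if (query != null) {
            props.setProperty("druid.validationQuery", query);
        }
        // the real class handles every Druid parameter in this fashion
        return props;
    }
}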

How are the different connection pools integrated?

The connection pool configuration was covered above, under the DynamicDataSourceProperties configuration class, but how is a real connection pool created from that configuration? Let's look at the creator package.

You can tell which data sources are supported by the name.

In the auto-configuration, the DataSource bean news up a DynamicRoutingDataSource, which implements the InitializingBean interface and therefore does some work during bean initialization.

@Slf4j
public class DynamicRoutingDataSource extends AbstractRoutingDataSource implements InitializingBean, DisposableBean {

    /**
     * all data sources
     */
    private final Map<String, DataSource> dataSourceMap = new LinkedHashMap<>();

    /**
     * grouped data sources
     */
    private final Map<String, DynamicGroupDataSource> groupDataSources = new ConcurrentHashMap<>();

    // ... part of the code omitted

    /**
     * add a data source
     *
     * @param ds         data source name
     * @param dataSource data source
     */
    public synchronized void addDataSource(String ds, DataSource dataSource) {
        // only save the data source if it does not exist yet
        if (!dataSourceMap.containsKey(ds)) {
            // wrap with the seata / p6spy plugins
            dataSource = wrapDataSource(ds, dataSource);
            // save into the map of all data sources
            dataSourceMap.put(ds, dataSource);
            // group it and save into the group map
            this.addGroupDataSource(ds, dataSource);
            log.info("dynamic-datasource-load a datasource named [{}] success", ds);
        } else {
            log.warn("dynamic-datasource-load a datasource named [{}] failed, because it already exist", ds);
        }
    }

    // wraps the data source with the seata and p6spy plugins
    private DataSource wrapDataSource(String ds, DataSource dataSource) {
        if (p6spy) {
            dataSource = new P6DataSource(dataSource);
            log.debug("dynamic-datasource [{}] wrap p6spy plugin", ds);
        }
        if (seata) {
            dataSource = new DataSourceProxy(dataSource);
            log.debug("dynamic-datasource [{}] wrap seata plugin", ds);
        }
        return dataSource;
    }

    // adds a data source into its group
    private void addGroupDataSource(String ds, DataSource dataSource) {
        // groups are split by the _ underscore
        if (ds.contains(UNDERLINE)) {
            // take the group name
            String group = ds.split(UNDERLINE)[0];
            // if the group already exists, add the data source to it
            if (groupDataSources.containsKey(group)) {
                groupDataSources.get(group).addDatasource(dataSource);
            } else {
                try {
                    // otherwise create a new group
                    DynamicGroupDataSource groupDatasource = new DynamicGroupDataSource(group, strategy.newInstance());
                    groupDatasource.addDatasource(dataSource);
                    groupDataSources.put(group, groupDatasource);
                } catch (Exception e) {
                    log.error("dynamic-datasource - add the datasource named [{}] error", ds, e);
                    dataSourceMap.remove(ds);
                }
            }
        }
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        // load the data sources from the provider (built from the configuration)
        Map<String, DataSource> dataSources = provider.loadDataSources();
        // add and group every data source
        for (Map.Entry<String, DataSource> dsItem : dataSources.entrySet()) {
            addDataSource(dsItem.getKey(), dsItem.getValue());
        }
        // check the primary data source setting
        if (groupDataSources.containsKey(primary)) {
            log.info("dynamic-datasource initial loaded [{}] datasource,primary group datasource named [{}]", dataSources.size(), primary);
        } else if (dataSourceMap.containsKey(primary)) {
            log.info("dynamic-datasource initial loaded [{}] datasource,primary datasource named [{}]", dataSources.size(), primary);
        } else {
            throw new RuntimeException("dynamic-datasource Please check the setting of primary");
        }
    }
}

This class is the core dynamic data source component; it maintains the DataSource instances in maps. Here we focus on how the data source connection pools are created.

During initialization it fetches the already-created data source map from the provider and then walks the map to group the entries. So let's see how that map is built in the provider.

@Bean
@ConditionalOnMissingBean
public DynamicDataSourceProvider dynamicDataSourceProvider() {
    Map<String, DataSourceProperty> datasourceMap = properties.getDatasource();
    return new YmlDynamicDataSourceProvider(datasourceMap);
}

This is the bean registered in the auto-configuration; it reads the data sources from the yml configuration (there is also a provider that loads them via JDBC). That is not the point here and will come up later.

Tracing provider.loadDataSources(), we find that dataSourceCreator.createDataSource(dataSourceProperty) is called inside the createDataSourceMap method.

@Slf4j
@Setter
public class DataSourceCreator {

    /**
     * whether druid exists
     */
    private static Boolean druidExists = false;

    /**
     * whether hikari exists
     */
    private static Boolean hikariExists = false;

    static {
        try {
            Class.forName(DRUID_DATASOURCE);
            druidExists = true;
            log.debug("dynamic-datasource detect druid,Please Notice \n" +
                    "https://github.com/baomidou/dynamic-datasource-spring-boot-starter/wiki/Integration-With-Druid");
        } catch (ClassNotFoundException ignored) {
        }
        try {
            Class.forName(HIKARI_DATASOURCE);
            hikariExists = true;
        } catch (ClassNotFoundException ignored) {
        }
    }

    /**
     * create a data source
     *
     * @param dataSourceProperty data source information
     * @return data source
     */
    public DataSource createDataSource(DataSourceProperty dataSourceProperty) {
        DataSource dataSource;
        // if it is a JNDI data source
        String jndiName = dataSourceProperty.getJndiName();
        if (jndiName != null && !jndiName.isEmpty()) {
            dataSource = createJNDIDataSource(jndiName);
        } else {
            Class
