2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
Many readers are unfamiliar with how to configure dynamic data sources using MyBatis and Druid in Spring Boot, so this article summarizes the topic with detailed content and clear steps. I hope you get something useful out of it. Let's walk through "how to configure dynamic data sources using mybatis+druid in springboot".
1. Build the databases and tables
1. Database demo1 contains a user table:
```sql
SET FOREIGN_KEY_CHECKS = 0;

-- Table structure for user
DROP TABLE IF EXISTS `user`;
CREATE TABLE `user` (
  `id` int(11) NOT NULL,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Records of user
INSERT INTO `user` VALUES ('1', 'aa');
INSERT INTO `user` VALUES ('2', 'bb');
```
2. Database demo2 contains a role table:
```sql
SET FOREIGN_KEY_CHECKS = 0;

-- Table structure for role
DROP TABLE IF EXISTS `role`;
CREATE TABLE `role` (
  `id` int(11) NOT NULL,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Records of role
INSERT INTO `role` VALUES ('1', 'cc');
INSERT INTO `role` VALUES ('2', 'dd');
```
2. Add the dependencies in pom.xml
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>2.0.1</version>
</dependency>
<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.1.10</version>
</dependency>
```
3. Use the generator plugin to generate the entity classes, mapper interfaces, and mapper.xml files for the user and role tables
User.java, Role.java, UserMapper.java, RoleMapper.java, UserMapper.xml, RoleMapper.xml
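The generated files themselves are not shown in the article. As a rough illustration only (the actual generator output depends on your generator configuration, and the names here mirror the table columns above), the user entity and mapper interface would look roughly like this:

```java
// Hypothetical sketch of the generator output for the user table.
// The real UserMapper implementation is supplied by UserMapper.xml at runtime.
public class GeneratedSketch {

    // generated entity mirroring the user table (id, name)
    public static class User {
        private Integer id;
        private String name;

        public Integer getId() { return id; }
        public void setId(Integer id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        @Override
        public String toString() { return "User{id=" + id + ", name=" + name + "}"; }
    }

    // generated mapper interface; selectByPrimaryKey is used later by the test controller
    public interface UserMapper {
        User selectByPrimaryKey(Integer id);
    }

    public static void main(String[] args) {
        User u = new User();
        u.setId(1);
        u.setName("aa");
        System.out.println(u); // User{id=1, name=aa}
    }
}
```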
4. Configure application.yml
```yaml
server:
  port: 8088
mybatis:
  mapper-locations: classpath:mapper/*.xml
spring:
  datasource:
    db1:
      url: jdbc:mysql://localhost:3306/demo1?useUnicode=true&characterEncoding=utf8&serverTimezone=GMT
      username: root
      password: root
      type: com.alibaba.druid.pool.DruidDataSource
      # driver class
      driver-class-name: com.mysql.cj.jdbc.Driver
      # initial number of connections
      initial-size: 1
      # minimum number of idle connections
      min-idle: 1
      # maximum number of active connections
      max-active: 2
      # maximum wait time for a connection, in milliseconds
      max-wait: 6000
      # how often the eviction check runs to detect idle connections that should be closed, in milliseconds
      time-between-eviction-runs-millis: 6000
      # minimum time a connection may stay idle in the pool, in milliseconds
      min-evictable-idle-time-millis: 30000
      # query used to validate connections; for MySQL, SELECT 1
      validation-query: SELECT 1 FROM DUAL
      # validate while idle; testOnBorrow and testOnReturn are usually disabled in production
      # for performance reasons, so broken connections are detected mainly via testWhileIdle
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      # enable PSCache and set its size per connection
      pool-prepared-statements: true
      max-pool-prepared-statement-per-connection-size: 2
      # monitoring filters; without 'stat' the monitoring page cannot count SQL, 'wall' is the firewall filter
      filters: stat,wall
      # enable SQL merging and slow-SQL logging through connection properties
      connection-properties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
      # merge the monitoring data of multiple DruidDataSources
      useGlobalDataSourceStat: true
    db2:
      url: jdbc:mysql://localhost:3306/demo2?useUnicode=true&characterEncoding=utf8&serverTimezone=GMT
      username: root
      password: root
      type: com.alibaba.druid.pool.DruidDataSource
      driver-class-name: com.mysql.cj.jdbc.Driver
      # db2 uses the same pool settings as db1
      initial-size: 1
      min-idle: 1
      max-active: 2
      max-wait: 6000
      time-between-eviction-runs-millis: 6000
      min-evictable-idle-time-millis: 30000
      validation-query: SELECT 1 FROM DUAL
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      pool-prepared-statements: true
      max-pool-prepared-statement-per-connection-size: 2
      filters: stat,wall
      connection-properties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
      useGlobalDataSourceStat: true
```
5. Have the startup class scan the mapper interfaces
```java
@SpringBootApplication
@MapperScan("com.example.demo.dao")
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```
6. Define DataSourceConfig, bind the application.yml configuration to DataSource beans, and register them
```java
@Configuration
public class DataSourceConfig {

    // data source backed by spring.datasource.db1 (primary)
    @Primary
    @Bean(name = "datasource1")
    @ConfigurationProperties("spring.datasource.db1")
    public DataSource dataSource1() {
        return new DruidDataSource();
    }

    // data source backed by spring.datasource.db2
    @Bean(name = "datasource2")
    @ConfigurationProperties("spring.datasource.db2")
    public DataSource dataSource2() {
        return new DruidDataSource();
    }

    // dynamic data source that switches between the two
    @Bean(name = "dynamicDataSource")
    public DataSource dynamicDataSource() {
        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        // set the default data source
        dynamicDataSource.setDefaultTargetDataSource(dataSource1());
        // configure multiple data sources
        Map<Object, Object> dsMap = new HashMap<>();
        dsMap.put("datasource1", dataSource1());
        dsMap.put("datasource2", dataSource2());
        // put the data sources into the routing pool
        dynamicDataSource.setTargetDataSources(dsMap);
        return dynamicDataSource;
    }
}
```
7. Define the dynamic data source switching class DynamicDataSourceContextHolder
```java
public class DynamicDataSourceContextHolder {

    private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();

    // set the data source name for the current thread
    public static void setDB(String dbType) {
        contextHolder.set(dbType);
    }

    // get the data source name
    public static String getDB() {
        return contextHolder.get();
    }

    // clear the data source name
    public static void clearDB() {
        contextHolder.remove();
    }
}
```
8. Define the routing class DynamicDataSource, which resolves the current data source
```java
public class DynamicDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DynamicDataSourceContextHolder.getDB();
    }
}
```
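The mechanism behind AbstractRoutingDataSource can be illustrated without Spring. A minimal sketch, assuming nothing beyond the JDK: a ThreadLocal holds the current lookup key, and a map lookup with a default fallback plays the role the routing data source performs with real DataSource instances (the class and method names here are illustrative, not from the article):

```java
import java.util.HashMap;
import java.util.Map;

// Standalone illustration of key-based routing: the ThreadLocal stands in for
// DynamicDataSourceContextHolder, and resolveTarget() mirrors what
// AbstractRoutingDataSource does when picking a target DataSource.
public class RoutingSketch {

    private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();
    private static final Map<String, String> targets = new HashMap<>();
    private static final String DEFAULT_TARGET = "demo1";

    static {
        // analogous to setTargetDataSources(...)
        targets.put("datasource1", "demo1");
        targets.put("datasource2", "demo2");
    }

    public static void setDB(String key) { contextHolder.set(key); }
    public static void clearDB() { contextHolder.remove(); }

    // falls back to the default when no key is set, like setDefaultTargetDataSource(...)
    public static String resolveTarget() {
        String key = contextHolder.get();
        return key == null ? DEFAULT_TARGET : targets.get(key);
    }

    public static void main(String[] args) {
        System.out.println(resolveTarget()); // demo1 (default routing)
        setDB("datasource2");
        System.out.println(resolveTarget()); // demo2 (switched key)
        clearDB();
    }
}
```

Because the key lives in a ThreadLocal, each request thread can route independently without affecting the others.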
9. Define the MyBatis configuration class and wire DynamicDataSource into SqlSessionFactoryBean
```java
@EnableTransactionManagement
@Configuration
public class MyBatisConfig {

    @Resource(name = "dynamicDataSource")
    private DataSource dynamicDataSource;

    @Bean
    public SqlSessionFactory sqlSessionFactory() throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        // wire the dynamic data source bean into the SqlSessionFactory
        sqlSessionFactoryBean.setDataSource(dynamicDataSource);
        sqlSessionFactoryBean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources("classpath:mapper/*.xml"));
        return sqlSessionFactoryBean.getObject();
    }

    @Bean
    public PlatformTransactionManager platformTransactionManager() {
        return new DataSourceTransactionManager(dynamicDataSource);
    }
}
```
10. Define the @TargetDataSource annotation for switching data sources
```java
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TargetDataSource {
    String value() default "datasource1";
}
```
11. Define the aspect DynamicDataSourceAspect, which intercepts the annotation and performs the data source switch
```java
@Aspect
@Component
public class DynamicDataSourceAspect {

    @Before("@annotation(targetDataSource)")
    public void beforeSwitchDS(JoinPoint point, TargetDataSource targetDataSource) {
        DynamicDataSourceContextHolder.setDB(targetDataSource.value());
    }

    @After("@annotation(targetDataSource)")
    public void afterSwitchDS(JoinPoint point, TargetDataSource targetDataSource) {
        DynamicDataSourceContextHolder.clearDB();
    }
}
```
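The advice lifecycle above (set the key before the method, clear it after) can be sketched in plain Java, with the class and method names below being illustrative rather than part of the article. The set/clear pairing matters because request threads are pooled and reused, so a stale key left behind would leak into the next request:

```java
import java.util.function.Supplier;

// Plain-Java sketch of the @Before / @After lifecycle around an annotated method.
public class AspectLifecycleSketch {

    private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();

    // emulates: beforeSwitchDS -> annotated method body -> afterSwitchDS
    public static String invokeWithDataSource(String key, Supplier<String> body) {
        contextHolder.set(key);          // @Before: record the target data source name
        try {
            return body.get();           // the annotated controller method runs here
        } finally {
            contextHolder.remove();      // @After: clear it so the pooled thread stays clean
        }
    }

    public static String current() {
        return contextHolder.get();
    }

    public static void main(String[] args) {
        String seenDuringCall = invokeWithDataSource("datasource2", AspectLifecycleSketch::current);
        System.out.println(seenDuringCall); // datasource2 while the method runs
        System.out.println(current());      // null once the advice has cleaned up
    }
}
```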
12. Test controller
```java
@RestController
public class Test {

    @Autowired
    private RoleMapper roleMapper;
    @Autowired
    private UserMapper userMapper;

    // without @TargetDataSource, the default data source datasource1 is used
    @RequestMapping("/ds1")
    public String selectDataSource1() {
        return userMapper.selectByPrimaryKey(1).toString();
    }

    // with the annotation, datasource2 is used
    @RequestMapping("/ds2")
    @TargetDataSource("datasource2")
    public String selectDataSource2() {
        return roleMapper.selectByPrimaryKey(1).toString();
    }
}
```
Test
1. Visit http://localhost:8088/ds1. The request runs against the default data source (datasource1) and returns the user record from demo1.
2. Visit http://localhost:8088/ds2. The annotation switches to datasource2 and the request returns the role record from demo2.
That covers "how to configure dynamic data sources using mybatis+druid in springboot". I hope the content shared here is helpful to you. If you want to learn more, please follow the industry information channel.