This article walks through implementing database and table sharding in a Spring Boot + JPA project. The process is explained in detail with example code, which should make it a useful reference for study or work.
Sharding scenarios
A relational database easily becomes a system bottleneck: a single machine has limited storage capacity, connection count, and processing power. Once a single table grows past roughly 10 million rows or 100 GB, queries span many dimensions, and even adding read replicas and tuning indexes still leaves many operations with badly degraded performance. At that point it is time to consider splitting the data; the goal of splitting is to lighten the database's load and shorten query times.
Sharding addresses the two scenarios common on today's internet: large data volumes and high concurrency. It usually takes one of two forms, vertical splitting or horizontal splitting.
Vertical splitting divides one database (or table) into several along business lines, for example moving frequently accessed and rarely accessed columns into different databases or tables. Because it is tightly coupled to the business, current sharding products all take the horizontal approach instead.
Horizontal splitting divides one database (or table) into several according to a sharding algorithm, for example taking the last digit of the ID modulo 3: rows with remainder 1 go to database (table) 1, rows with remainder 2 go to database (table) 2, and so on.
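As a rough illustration of this kind of modulo routing (the class name and shard count here are made up for the example, not part of the project below):

// Illustrative only: pick one of three databases/tables by taking the ID modulo 3.
public class ModuloRouter {

    private static final int SHARD_COUNT = 3;

    // Returns the index of the target database/table for the given ID.
    public static int route(long id) {
        return (int) (id % SHARD_COUNT);
    }

    public static void main(String[] args) {
        System.out.println(route(11L)); // remainder 2 -> shard 2
        System.out.println(route(12L)); // remainder 0 -> shard 0
    }
}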
Splitting tables alone solves slow lookups caused by an oversized table, but it does not solve the problem of too many concurrent requests hitting the same database and slowing its responses. So in practice horizontal splitting almost always involves splitting across databases as well, tackling large data volumes and high concurrency together. This is also why some open-source sharding middleware only supports database-level splitting.
Table splitting still has irreplaceable uses, the most common being transactions. Data kept within one database needs no distributed transactions, so making good use of separate tables in the same database avoids the trouble they bring. Strongly consistent distributed transactions perform so poorly that using them is not necessarily faster than not sharding at all; most systems today rely on eventually consistent "flexible" transactions instead. Another argument for table splitting is that running too many database instances complicates operations. In short, the best practice is a sensible combination of database splitting plus table splitting.
Introduction to Sharding-JDBC
Sharding-JDBC is a horizontal database sharding framework split out of dd-rdb, the relational database module of Dangdang's application framework ddframe, and it makes sharded database and table access transparent. It is the third open-source project in the ddframe series, following dubbox and elastic-job.
It is positioned as a lightweight Java framework providing extra services at the JDBC layer. The client connects directly to the database and the framework ships as a jar, requiring no extra deployment or dependencies. It can be thought of as an enhanced JDBC driver, fully compatible with JDBC and the various ORM frameworks.
Its SQL parsing is fairly complete, supporting aggregation, GROUP BY, ORDER BY, LIMIT and OR queries, as well as binding tables and Cartesian-product table queries.
Project walkthrough
Data preparation
Prepare two databases and create the tables in both of them. The table-creation SQL is as follows:
DROP TABLE IF EXISTS `user_auth_0`;
CREATE TABLE `user_auth_0` (
  `user_id` bigint(20) NOT NULL,
  `add_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `email` varchar(16) DEFAULT NULL,
  `password` varchar(255) DEFAULT NULL,
  `phone` varchar(16) DEFAULT NULL,
  `remark` varchar(16) DEFAULT NULL,
  PRIMARY KEY (`user_id`),
  UNIQUE KEY `USER_AUTH_PHONE` (`phone`),
  UNIQUE KEY `USER_AUTH_EMAIL` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

DROP TABLE IF EXISTS `user_auth_1`;
CREATE TABLE `user_auth_1` (
  `user_id` bigint(20) NOT NULL,
  `add_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `email` varchar(16) DEFAULT NULL,
  `password` varchar(255) DEFAULT NULL,
  `phone` varchar(16) DEFAULT NULL,
  `remark` varchar(16) DEFAULT NULL,
  PRIMARY KEY (`user_id`),
  UNIQUE KEY `USER_AUTH_PHONE` (`phone`),
  UNIQUE KEY `USER_AUTH_EMAIL` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
POM configuration
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- JPA -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<!-- MySQL driver -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
<!-- Druid connection pool -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.9</version>
</dependency>
<!-- Sharding-JDBC -->
<dependency>
    <groupId>com.dangdang</groupId>
    <artifactId>sharding-jdbc-core</artifactId>
    <version>1.5.4</version>
</dependency>
<!-- fastjson -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.51</version>
</dependency>
application.yml configuration
spring:
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5InnoDBDialect
    show-sql: true

database0:
  driverClassName: com.mysql.jdbc.Driver
  url: jdbc:mysql://localhost:3306/mazhq?serverTimezone=UTC&useUnicode=true&characterEncoding=utf-8
  username: root
  password: 123456
  databaseName: mazhq

database1:
  driverClassName: com.mysql.jdbc.Driver
  url: jdbc:mysql://localhost:3306/liugh?serverTimezone=UTC&useUnicode=true&characterEncoding=utf-8
  username: root
  password: 123456
  databaseName: liugh
The sharding setup boils down to a few key pieces of configuration:
1. How many data sources there are (two here: database0 and database1)
@Data
@ConfigurationProperties(prefix = "database0")
@Component
public class Database0Config {

    private String url;
    private String username;
    private String password;
    private String driverClassName;
    private String databaseName;

    public DataSource createDataSource() {
        DruidDataSource result = new DruidDataSource();
        result.setDriverClassName(getDriverClassName());
        result.setUrl(getUrl());
        result.setUsername(getUsername());
        result.setPassword(getPassword());
        return result;
    }
}
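Only Database0Config is listed here, but Database1Config is injected by the sharding classes below. A sketch of it, assuming it simply mirrors Database0Config with the database1 prefix:

// Assumed to mirror Database0Config, bound to the "database1" section of application.yml.
@Data
@ConfigurationProperties(prefix = "database1")
@Component
public class Database1Config {

    private String url;
    private String username;
    private String password;
    private String driverClassName;
    private String databaseName;

    public DataSource createDataSource() {
        DruidDataSource result = new DruidDataSource();
        result.setDriverClassName(getDriverClassName());
        result.setUrl(getUrl());
        result.setUsername(getUsername());
        result.setPassword(getPassword());
        return result;
    }
}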
2. Which column to shard databases on, and the database sharding algorithm (commonly the value modulo 2 decides which database; here I instead check whether the value is greater than 20)
// Routes user_id <= 20 to database0 and everything else to database1.
@Component
public class DatabaseShardingAlgorithm implements SingleKeyDatabaseShardingAlgorithm<Long> {

    @Autowired
    private Database0Config database0Config;
    @Autowired
    private Database1Config database1Config;

    @Override
    public String doEqualSharding(Collection<String> collection, ShardingValue<Long> shardingValue) {
        Long value = shardingValue.getValue();
        if (value <= 20L) {
            return database0Config.getDatabaseName();
        } else {
            return database1Config.getDatabaseName();
        }
    }

    @Override
    public Collection<String> doInSharding(Collection<String> availableTargetNames, ShardingValue<Long> shardingValue) {
        Collection<String> result = new LinkedHashSet<>(availableTargetNames.size());
        for (Long value : shardingValue.getValues()) {
            if (value <= 20L) {
                result.add(database0Config.getDatabaseName());
            } else {
                result.add(database1Config.getDatabaseName());
            }
        }
        return result;
    }

    @Override
    public Collection<String> doBetweenSharding(Collection<String> availableTargetNames, ShardingValue<Long> shardingValue) {
        Collection<String> result = new LinkedHashSet<>(availableTargetNames.size());
        Range<Long> range = shardingValue.getValueRange();
        for (Long value = range.lowerEndpoint(); value <= range.upperEndpoint(); value++) {
            if (value <= 20L) {
                result.add(database0Config.getDatabaseName());
            } else {
                result.add(database1Config.getDatabaseName());
            }
        }
        return result;
    }
}
3. Which column to shard tables on, and the table sharding algorithm
// Routes by user_id % 2: even values to user_auth_0, odd values to user_auth_1.
@Component
public class TableShardingAlgorithm implements SingleKeyTableShardingAlgorithm<Long> {

    @Override
    public String doEqualSharding(Collection<String> tableNames, ShardingValue<Long> shardingValue) {
        for (String each : tableNames) {
            if (each.endsWith(shardingValue.getValue() % 2 + "")) {
                return each;
            }
        }
        throw new IllegalArgumentException();
    }

    @Override
    public Collection<String> doInSharding(Collection<String> tableNames, ShardingValue<Long> shardingValue) {
        Collection<String> result = new LinkedHashSet<>(tableNames.size());
        for (Long value : shardingValue.getValues()) {
            for (String tableName : tableNames) {
                if (tableName.endsWith(value % 2 + "")) {
                    result.add(tableName);
                }
            }
        }
        return result;
    }

    @Override
    public Collection<String> doBetweenSharding(Collection<String> tableNames, ShardingValue<Long> shardingValue) {
        Collection<String> result = new LinkedHashSet<>(tableNames.size());
        Range<Long> range = shardingValue.getValueRange();
        for (Long i = range.lowerEndpoint(); i <= range.upperEndpoint(); i++) {
            for (String each : tableNames) {
                if (each.endsWith(i % 2 + "")) {
                    result.add(each);
                }
            }
        }
        return result;
    }
}
4. Each logical table name, its physical table names, and wiring everything together
@Configuration
public class DataSourceConfig {

    @Autowired
    private Database0Config database0Config;
    @Autowired
    private Database1Config database1Config;
    @Autowired
    private DatabaseShardingAlgorithm databaseShardingAlgorithm;
    @Autowired
    private TableShardingAlgorithm tableShardingAlgorithm;

    @Bean
    public DataSource getDataSource() throws SQLException {
        return buildDataSource();
    }

    private DataSource buildDataSource() throws SQLException {
        // Database sharding setup: register the two data sources, database0 and database1
        Map<String, DataSource> dataSourceMap = new HashMap<>(2);
        dataSourceMap.put(database0Config.getDatabaseName(), database0Config.createDataSource());
        dataSourceMap.put(database1Config.getDatabaseName(), database1Config.createDataSource());
        // Set the default data source
        DataSourceRule dataSourceRule = new DataSourceRule(dataSourceMap, database0Config.getDatabaseName());

        // Table sharding setup: map queries against the logical table user_auth
        // onto the physical tables user_auth_0 and user_auth_1
        TableRule orderTableRule = TableRule.builder("user_auth")
                .actualTables(Arrays.asList("user_auth_0", "user_auth_1"))
                .dataSourceRule(dataSourceRule)
                .build();

        // Database and table sharding strategies, both keyed on the user_id column
        ShardingRule shardingRule = ShardingRule.builder()
                .dataSourceRule(dataSourceRule)
                .tableRules(Arrays.asList(orderTableRule))
                .databaseShardingStrategy(new DatabaseShardingStrategy("user_id", databaseShardingAlgorithm))
                .tableShardingStrategy(new TableShardingStrategy("user_id", tableShardingAlgorithm))
                .build();

        DataSource dataSource = ShardingDataSourceFactory.createDataSource(shardingRule);
        return dataSource;
    }

    @Bean
    public KeyGenerator keyGenerator() {
        return new DefaultKeyGenerator();
    }
}
API test code
1. Entity class
/**
 * @author mazhq
 * @date 2019/7/30 16:41
 */
@Entity
@Data
@Table(name = "USER_AUTH", uniqueConstraints = {
        @UniqueConstraint(name = "USER_AUTH_PHONE", columnNames = {"PHONE"}),
        @UniqueConstraint(name = "USER_AUTH_EMAIL", columnNames = {"EMAIL"})})
public class UserAuthEntity implements Serializable {

    private static final long serialVersionUID = 7230052310725727465L;

    @Id
    private Long userId;

    @Column(name = "PHONE", length = 16)
    private String phone;

    @Column(name = "EMAIL", length = 16)
    private String email;

    private String password;

    @Column(name = "REMARK", length = 16)
    private String remark;

    @Column(name = "ADD_DATE", nullable = false, columnDefinition = "datetime default now()")
    private Date addDate;
}
2. DAO layer
@Repository
public interface UserAuthDao extends JpaRepository<UserAuthEntity, Long> {
}
3. Controller layer
/**
 * @author mazhq
 * @Title: UserAuthController
 * @date 2019/8/1 17:18
 */
@RestController
@RequestMapping("/user")
public class UserAuthController {

    @Autowired
    private UserAuthDao userAuthDao;

    @PostMapping("/save")
    public String save() {
        // Insert 40 users with user_id 0..39 so rows spread across both databases and both tables
        for (int i = 0; i < 40; i++) {
            UserAuthEntity userAuthEntity = new UserAuthEntity();
            userAuthEntity.setUserId((long) i);
            userAuthEntity.setAddDate(new Date());
            userAuthEntity.setEmail("test" + i + "@163.com");
            userAuthEntity.setPassword("123456");
            userAuthEntity.setPhone("1388888888" + i);
            Random r = new Random();
            userAuthEntity.setRemark("" + r.nextInt(100));
            userAuthDao.save(userAuthEntity);
        }
        return "success";
    }

    @PostMapping("/select")
    public String select() {
        return JSONObject.toJSONString(userAuthDao.findAll(Sort.by(Sort.Order.desc("remark"))));
    }
}
How to test:
First call: http://localhost:8080/user/save
Then query: http://localhost:8080/user/select
With the routing rules above, user_id 0-20 should land in the mazhq database (database0) and 21-39 in liugh (database1), with even IDs in user_auth_0 and odd IDs in user_auth_1.
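To confirm that distribution, you can also count the rows in each physical table directly. A throwaway sketch (a hypothetical standalone class, not part of the project, reusing the root/123456 credentials from application.yml and the mysql-connector-java dependency already in the POM):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Quick check of how the inserted rows were distributed across the four physical tables.
public class ShardingDistributionCheck {

    public static void main(String[] args) throws SQLException {
        // Expected after calling /user/save once on empty tables:
        // mazhq.user_auth_0 -> 11 rows (even user_id 0..20)
        // mazhq.user_auth_1 -> 10 rows (odd user_id 1..19)
        // liugh.user_auth_0 -> 9 rows (even user_id 22..38)
        // liugh.user_auth_1 -> 10 rows (odd user_id 21..39)
        String[] tables = {"mazhq.user_auth_0", "mazhq.user_auth_1", "liugh.user_auth_0", "liugh.user_auth_1"};
        String url = "jdbc:mysql://localhost:3306/?serverTimezone=UTC";
        try (Connection conn = DriverManager.getConnection(url, "root", "123456");
             Statement stmt = conn.createStatement()) {
            for (String table : tables) {
                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
                    rs.next();
                    System.out.println(table + ": " + rs.getInt(1) + " rows");
                }
            }
        }
    }
}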
Git repository: sharding
That's all for this article; hopefully it serves as a useful reference.