2022-03-12
Integrating Seata Distributed Transactions with Spring Boot (Not Suitable for High-Concurrency Scenarios)
Integrating Seata distributed transactions with Spring Boot.

1. Create the Seata undo log table

```sql
-- Note: 0.3.0+ adds the unique index ux_undo_log
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
```

2. Install the transaction coordinator (seata-server)

(1) Download address.

(2) Add the dependency:

```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
</dependency>
```

(3) Start the Seata server.

(4) Seata configuration files:

registry.conf holds the registry configuration; here the registry is Nacos (`type = "nacos"`). Its `config` block specifies where Seata's own configuration data lives; by default a file is used (`type = "file"`).

file.conf holds Seata's default settings, for example where transaction logs are stored:

```
## transaction log store, only used in server side
store {
  ## store mode: file, db
  mode = "file"
  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size, if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size, if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size, if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }
  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.jdbc.Driver"
    ## if using mysql to store the data, recommend add rewriteBatchedStatements=true in jdbc connection param
    url = "jdbc:mysql://127.0.0.1:3306/seata?rewriteBatchedStatements=true"
    user = "mysql"
    password = "mysql"
    minConn = 5
    maxConn = 30
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
  }
}
```

3. Custom proxied data source

Note: every microservice that participates in a distributed transaction must wrap its own data source in Seata's DataSourceProxy. Spring Boot uses the Hikari data source by default. Source of DataSourceAutoConfiguration.java:

```java
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by FernFlower decompiler)

package org.springframework.boot.autoconfigure.jdbc;

import javax.sql.DataSource;
import javax.sql.XADataSource;
import org.springframework.boot.autoconfigure.condition.AnyNestedCondition;
import org.springframework.boot.autoconfigure.condition.ConditionMessage;
import org.springframework.boot.autoconfigure.condition.ConditionOutcome;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.autoconfigure.condition.SpringBootCondition;
import org.springframework.boot.autoconfigure.condition.ConditionMessage.Builder;
import org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.Dbcp2;
import org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.Generic;
import org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.Hikari;
import org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.OracleUcp;
import org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.Tomcat;
import org.springframework.boot.autoconfigure.jdbc.DataSourceInitializationConfiguration.InitializationSpecificCredentialsDataSourceInitializationConfiguration;
import org.springframework.boot.autoconfigure.jdbc.DataSourceInitializationConfiguration.SharedCredentialsDataSourceInitializationConfiguration;
import org.springframework.boot.autoconfigure.jdbc.metadata.DataSourcePoolMetadataProvidersConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.boot.jdbc.EmbeddedDatabaseConnection;
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.context.annotation.ConfigurationCondition.ConfigurationPhase;
import org.springframework.core.env.Environment;
import org.springframework.core.type.AnnotatedTypeMetadata;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;
import org.springframework.util.StringUtils;

@Configuration(proxyBeanMethods = false)
@ConditionalOnClass({DataSource.class, EmbeddedDatabaseType.class})
@ConditionalOnMissingBean(type = {"io.r2dbc.spi.ConnectionFactory"})
@EnableConfigurationProperties({DataSourceProperties.class})
@Import({DataSourcePoolMetadataProvidersConfiguration.class, InitializationSpecificCredentialsDataSourceInitializationConfiguration.class, SharedCredentialsDataSourceInitializationConfiguration.class})
public class DataSourceAutoConfiguration {
    public DataSourceAutoConfiguration() {
    }

    static class EmbeddedDatabaseCondition extends SpringBootCondition {
        private static final String DATASOURCE_URL_PROPERTY = "spring.datasource.url";
        private final SpringBootCondition pooledCondition = new DataSourceAutoConfiguration.PooledDataSourceCondition();

        EmbeddedDatabaseCondition() {
        }

        public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
            Builder message = ConditionMessage.forCondition("EmbeddedDataSource", new Object[0]);
            if (this.hasDataSourceUrlProperty(context)) {
                return ConditionOutcome.noMatch(message.because("spring.datasource.url is set"));
            } else if (this.anyMatches(context, metadata, new Condition[]{this.pooledCondition})) {
                return ConditionOutcome.noMatch(message.foundExactly("supported pooled data source"));
            } else {
                EmbeddedDatabaseType type = EmbeddedDatabaseConnection.get(context.getClassLoader()).getType();
                return type == null ? ConditionOutcome.noMatch(message.didNotFind("embedded database").atAll()) : ConditionOutcome.match(message.found("embedded database").items(new Object[]{type}));
            }
        }

        private boolean hasDataSourceUrlProperty(ConditionContext context) {
            Environment environment = context.getEnvironment();
            if (environment.containsProperty("spring.datasource.url")) {
                try {
                    return StringUtils.hasText(environment.getProperty("spring.datasource.url"));
                } catch (IllegalArgumentException var4) {
                }
            }
            return false;
        }
    }

    static class PooledDataSourceAvailableCondition extends SpringBootCondition {
        PooledDataSourceAvailableCondition() {
        }

        public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
            Builder message = ConditionMessage.forCondition("PooledDataSource", new Object[0]);
            return DataSourceBuilder.findType(context.getClassLoader()) != null ? ConditionOutcome.match(message.foundExactly("supported DataSource")) : ConditionOutcome.noMatch(message.didNotFind("supported DataSource").atAll());
        }
    }

    static class PooledDataSourceCondition extends AnyNestedCondition {
        PooledDataSourceCondition() {
            super(ConfigurationPhase.PARSE_CONFIGURATION);
        }

        @Conditional({DataSourceAutoConfiguration.PooledDataSourceAvailableCondition.class})
        static class PooledDataSourceAvailable {
            PooledDataSourceAvailable() {
            }
        }

        @ConditionalOnProperty(prefix = "spring.datasource", name = {"type"})
        static class ExplicitType {
            ExplicitType() {
            }
        }
    }

    @Configuration(proxyBeanMethods = false)
    @Conditional({DataSourceAutoConfiguration.PooledDataSourceCondition.class})
    @ConditionalOnMissingBean({DataSource.class, XADataSource.class})
    @Import({Hikari.class, Tomcat.class, Dbcp2.class, OracleUcp.class, Generic.class, DataSourceJmxConfiguration.class})
    protected static class PooledDataSourceConfiguration {
        protected PooledDataSourceConfiguration() {
        }
    }

    @Configuration(proxyBeanMethods = false)
    @Conditional({DataSourceAutoConfiguration.EmbeddedDatabaseCondition.class})
    @ConditionalOnMissingBean({DataSource.class, XADataSource.class})
    @Import({EmbeddedDataSourceConfiguration.class})
    protected static class EmbeddedDatabaseConfiguration {
        protected EmbeddedDatabaseConfiguration() {
        }
    }
}
```

It imports many data source types: `@Import({Hikari.class, Tomcat.class, Dbcp2.class, OracleUcp.class, Generic.class, DataSourceJmxConfiguration.class})`.

Source of DataSourceConfiguration.java (the Hikari branch):

```java
@Configuration(proxyBeanMethods = false)
@ConditionalOnClass({HikariDataSource.class})
@ConditionalOnMissingBean({DataSource.class})
@ConditionalOnProperty(
    name = {"spring.datasource.type"},
    havingValue = "com.zaxxer.hikari.HikariDataSource",
    matchIfMissing = true
)
static class Hikari {
    Hikari() {
    }

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.hikari")
    HikariDataSource dataSource(DataSourceProperties properties) {
        HikariDataSource dataSource = (HikariDataSource)DataSourceConfiguration.createDataSource(properties, HikariDataSource.class);
        if (StringUtils.hasText(properties.getName())) {
            dataSource.setPoolName(properties.getName());
        }
        return dataSource;
    }
}
```

Custom proxied data source configuration:

```java
import com.zaxxer.hikari.HikariDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.StringUtils;
import javax.sql.DataSource;

/**
 * @description: Seata custom proxied data source
 * @author: <a href="mailto:batis@foxmail.com">清风</a>
 * @date: 2022/3/12 14:41
 * @version: 1.0
 */
@Configuration
public class MySeataConfig {

    @Autowired
    DataSourceProperties dataSourceProperties;

    @Bean
    public DataSource dataSource(DataSourceProperties dataSourceProperties) {
        HikariDataSource dataSource = dataSourceProperties.initializeDataSourceBuilder().type(HikariDataSource.class).build();
        if (StringUtils.hasText(dataSourceProperties.getName())) {
            dataSource.setPoolName(dataSourceProperties.getName());
        }
        // wrap the real data source in Seata's proxy
        return new DataSourceProxy(dataSource);
    }
}
```

4. Copy the Seata configuration files into the project's resources

Copy the template files file.conf and registry.conf from the Seata distribution. Note: the `service.vgroup_mapping` entry in file.conf must match `spring.application.name`:

```
service {
  vgroup_mapping.family-booking-fescar-service-group = "default"
}
```

The pattern is `vgroup_mapping.{application name}-fescar-service-group = "default"`; here `family-booking` is the microservice name. The suffix can also be changed via `spring.cloud.alibaba.seata.tx-service-group`, but it must stay consistent with file.conf.

5. Usage

Add the global transaction annotation @GlobalTransactional on the entry method of the distributed transaction, and the local transaction annotation @Transactional on the remotely called methods:

```java
@GlobalTransactional
@Transactional
public PageUtils queryPage(Map<String, Object> params) {
    // call remote methods; the other microservice's method carries the local @Transactional
}
```

6. Use cases

Not suitable for high-concurrency scenarios; it fits ordinary cross-service business calls. Seata uses AT mode by default. For high concurrency, use flexible (BASE) transactions instead, built on message queues, delayed queues, and MQ middleware.
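The `undo_log` table in step 1 exists because AT mode records a "before image" of each row a branch modifies, and replays it if the global transaction rolls back. A toy sketch of that idea, assuming nothing beyond plain Java (the class, method names, and data model here are mine for illustration, not Seata's API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of Seata AT-mode rollback: before an update, the branch
// saves a before-image into an undo log; rolling back restores that image.
public class UndoLogDemo {
    private final Map<Long, String> table = new HashMap<>();   // pretend DB table
    private final Map<Long, String> undoLog = new HashMap<>(); // pretend undo_log rows

    public void insert(long id, String value) {
        table.put(id, value);
    }

    public void update(long id, String newValue) {
        undoLog.put(id, table.get(id)); // save the before image first
        table.put(id, newValue);        // then apply the local change
    }

    public void rollback(long id) {
        table.put(id, undoLog.remove(id)); // restore the before image
    }

    public String read(long id) {
        return table.get(id);
    }

    public static void main(String[] args) {
        UndoLogDemo demo = new UndoLogDemo();
        demo.insert(1L, "old");
        demo.update(1L, "new");
        System.out.println(demo.read(1L)); // new
        demo.rollback(1L);                 // global transaction failed elsewhere
        System.out.println(demo.read(1L)); // old
    }
}
```

Real Seata additionally takes an "after image" and a row lock keyed by `(xid, branch_id)`, which is what the unique index `ux_undo_log` in the DDL above supports.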
375 reads · 0 comments · 8 likes
2022-03-10
Integrating Spring Session with Spring Boot to Solve Distributed Session Sharing
Integrating Spring Session with Spring Boot.

1. Add the dependency

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.session</groupId>
        <artifactId>spring-session-data-redis</artifactId>
    </dependency>
</dependencies>
```

2. Spring Boot configuration

application.properties settings for the store type and session timeout:

```properties
spring.session.store-type=redis # Session store type.
server.servlet.session.timeout= # Session timeout. If a duration suffix is not specified, seconds is used.
```

Other options are documented on the official site:

```properties
spring.session.redis.flush-mode=on_save # Sessions flush mode.
spring.session.redis.namespace=spring:session # Namespace for keys used to store sessions.
```

Connection settings:

```properties
spring.redis.host=localhost # Redis server host.
spring.redis.password= # Login password of the redis server.
spring.redis.port=6379 # Redis server port.
```

3. Enable Spring Session

Add @EnableRedisHttpSession to the main application class.

4. Usage notes

(1) Any data saved to Redis must be serializable.
(2) Every microservice that needs session sharing must integrate Spring Session.

5. Sharing the session across subdomains and JSON-serializing the session

MySessionConfig configuration:

```java
package com.yanxizhu.family.booking.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.session.web.http.CookieSerializer;
import org.springframework.session.web.http.DefaultCookieSerializer;

/**
 * @description: Spring Session configuration
 * @author: <a href="mailto:batis@foxmail.com">清风</a>
 * @date: 2022/3/10 20:11
 * @version: 1.0
 */
@Configuration
public class MySessionConfig {

    @Bean
    public CookieSerializer cookieSerializer() {
        DefaultCookieSerializer defaultCookieSerializer = new DefaultCookieSerializer();
        // set the shared (parent) domain
        defaultCookieSerializer.setDomainName("yanxizhu.com");
        // set the session cookie name
        defaultCookieSerializer.setCookieName("YANXIZHUSESSION");
        return defaultCookieSerializer;
    }

    // session serialization mechanism
    @Bean
    public RedisSerializer<Object> springSessionDefaultRedisSerializer() {
        return new GenericJackson2JsonRedisSerializer();
    }
}
```

Note: every microservice that needs session sharing must apply this configuration, or it should be extracted into a shared module.

6. Cross-system session sharing (single sign-on)

Extract a dedicated authentication-center microservice to handle single sign-on. For reference, see the open-source SSO project xxl-sso (许雪里/xxl-sso).
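Setting the cookie domain to the parent domain (`yanxizhu.com`) works because browsers send a cookie whose Domain attribute is a parent domain to every subdomain of it, so all the microservices behind `*.yanxizhu.com` see the same session cookie. A toy sketch of that domain-match rule (heavily simplified from RFC 6265; the class and method names are mine, not part of the article):

```java
// DomainMatchDemo.java -- illustrative only; simplified RFC 6265 domain matching
public class DomainMatchDemo {

    /** True if a cookie with Domain=cookieDomain is sent on requests to host. */
    public static boolean domainMatches(String host, String cookieDomain) {
        // exact match, or host is a subdomain of the cookie domain
        return host.equals(cookieDomain) || host.endsWith("." + cookieDomain);
    }

    public static void main(String[] args) {
        // both subdomains receive the YANXIZHUSESSION cookie, so they share one session
        System.out.println(domainMatches("auth.yanxizhu.com", "yanxizhu.com")); // true
        System.out.println(domainMatches("shop.yanxizhu.com", "yanxizhu.com")); // true
        // an unrelated host does not
        System.out.println(domainMatches("evil-yanxizhu.com", "yanxizhu.com")); // false
    }
}
```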
370 reads · 0 comments · 7 likes
2022-03-03
Integrating Redis with Spring Boot
Data suited to caching: data with loose immediacy and consistency requirements, and data that is read heavily but updated rarely (read-many, write-few); the database still handles persistence.

Read-path caching flow: everything put into the cache should be given an expiration time, so that even if the system never actively refreshes the data, the entry eventually expires and is reloaded into the cache. This avoids permanent inconsistency after a business failure. To avoid the inconsistency caused by per-instance local caches in a distributed deployment, use a shared middleware cache such as Redis.

1. Add the Spring Boot starter

```xml
<!-- redis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```

With the dependency in place you get RedisAutoConfiguration, which is driven by the configuration class RedisProperties:

```java
@Configuration(proxyBeanMethods = false)
@ConditionalOnClass({RedisOperations.class})
@EnableConfigurationProperties({RedisProperties.class})
@Import({LettuceConnectionConfiguration.class, JedisConnectionConfiguration.class})
public class RedisAutoConfiguration {
    public RedisAutoConfiguration() {
    }

    @Bean
    @ConditionalOnMissingBean(name = {"redisTemplate"})
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<Object, Object> template = new RedisTemplate();
        template.setConnectionFactory(redisConnectionFactory);
        return template;
    }

    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) {
        return new StringRedisTemplate(redisConnectionFactory);
    }
}
```

RedisProperties contains the Redis configuration options:

```java
@ConfigurationProperties(prefix = "spring.redis")
public class RedisProperties {
    private int database = 0;
    private String url;
    private String host = "localhost";
    private String username;
    private String password;
    private int port = 6379;
    private boolean ssl;
    private Duration timeout;
    private Duration connectTimeout;
    private String clientName;
    private RedisProperties.ClientType clientType;
    private RedisProperties.Sentinel sentinel;
    private RedisProperties.Cluster cluster;
    private final RedisProperties.Jedis jedis = new RedisProperties.Jedis();
    private final RedisProperties.Lettuce lettuce = new RedisProperties.Lettuce();
```

2. Configure Redis

Following the RedisProperties fields above, configure the relevant properties in application.yml:

```yaml
spring:
  redis:
    host: 127.0.0.1
    port: 6379
```

That is the simplest possible Redis configuration.

3. Use Redis

As seen in RedisAutoConfiguration above, RedisTemplate<Object, Object> and StringRedisTemplate are auto-configured. RedisTemplate<Object, Object> generally pairs a string key with a serialized string value; since String keys and values are the most common case, StringRedisTemplate is provided as well:

```java
public class StringRedisTemplate extends RedisTemplate<String, String> {
    public StringRedisTemplate() {
        this.setKeySerializer(RedisSerializer.string());
        this.setValueSerializer(RedisSerializer.string());
        this.setHashKeySerializer(RedisSerializer.string());
        this.setHashValueSerializer(RedisSerializer.string());
    }

    public StringRedisTemplate(RedisConnectionFactory connectionFactory) {
        this();
        this.setConnectionFactory(connectionFactory);
        this.afterPropertiesSet();
    }

    protected RedisConnection preProcessConnection(RedisConnection connection, boolean existingConnection) {
        return new DefaultStringRedisConnection(connection);
    }
}
```

StringRedisTemplate extends RedisTemplate<String, String>; its keys, values, hash keys, and hash values are all serialized with RedisSerializer.string().

Saving and reading a value with the configured StringRedisTemplate:

```java
@Test
public void testStringRedisTemplate() {
    ValueOperations<String, String> opsForValue = stringRedisTemplate.opsForValue();
    opsForValue.set("hello", "world" + UUID.randomUUID().toString());
    String hello = opsForValue.get("hello");
    System.out.println("Previously saved value: " + hello);
}
```

Output: `Previously saved value: world1e9f395e-67b8-4077-9912-c690c7da0f06`

Note that the key used for reading must match the key used for writing. Values are generally serialized JSON strings, since JSON is cross-language and cross-platform. For complex objects, Alibaba's fastjson can handle the serialization. For example, storing complex data of type Map<String, List<UserEntity>>:

```java
String s = JSON.toJSONString(data);
ValueOperations<String, String> ops = redisTemplate.opsForValue();
ops.set("mydata", s);
```

Likewise, the JSON retrieved must be deserialized back into the complex type before use:

```java
String jsonStr = ops.get("mydata");
Map<String, List<UserEntity>> result = JSON.parseObject(jsonStr, new TypeReference<Map<String, List<UserEntity>>>() {});
```

Note: since Spring Boot 2.0 the default Redis client is Lettuce, which uses Netty for networking. A Lettuce bug can cause Netty to exhaust direct (off-heap) memory. If no direct memory limit is specified, Netty falls back to the -Xmx value (-Xmx300m here). -Dio.netty.maxDirectMemory can raise the limit, but the overflow eventually recurs:

```java
private static void incrementMemoryCounter(int capacity) {
    if (DIRECT_MEMORY_COUNTER != null) {
        long newUsedMemory = DIRECT_MEMORY_COUNTER.addAndGet((long)capacity);
        if (newUsedMemory > DIRECT_MEMORY_LIMIT) {
            DIRECT_MEMORY_COUNTER.addAndGet((long)(-capacity));
            throw new OutOfDirectMemoryError("failed to allocate " + capacity + " byte(s) of direct memory (used: " + (newUsedMemory - (long)capacity) + ", max: " + DIRECT_MEMORY_LIMIT + ')');
        }
    }
}
```

Solutions (simply raising -Dio.netty.maxDirectMemory is not enough):
1. Upgrade the Netty client.
2. Switch to Jedis.

To use Jedis, exclude Lettuce (the default) and add the Jedis dependency:

```xml
<!-- redis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <exclusions>
        <exclusion><!-- exclude lettuce -->
            <groupId>io.lettuce</groupId>
            <artifactId>lettuce-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- jedis -->
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
</dependency>
```

Lettuce and Jedis are both low-level Redis clients; RedisTemplate is a further wrapper over either of them, so application code can keep operating Redis through RedisTemplate. The auto-configuration imports both:

```java
@Import({LettuceConnectionConfiguration.class, JedisConnectionConfiguration.class})
public class RedisAutoConfiguration {
```

As seen in JedisConnectionConfiguration:

```java
class JedisConnectionConfiguration extends RedisConnectionConfiguration {
    JedisConnectionConfiguration(RedisProperties properties, ObjectProvider<RedisSentinelConfiguration> sentinelConfiguration, ObjectProvider<RedisClusterConfiguration> clusterConfiguration) {
        super(properties, sentinelConfiguration, clusterConfiguration);
    }

    @Bean
    JedisConnectionFactory redisConnectionFactory(ObjectProvider<JedisClientConfigurationBuilderCustomizer> builderCustomizers) {
        return this.createJedisConnectionFactory(builderCustomizers);
    }
```

Whichever client is chosen, a connection factory ends up registered for RedisTemplate to use.

Cache failure under high concurrency: cache penetration

Cache penetration: querying data that is guaranteed not to exist. The cache misses, so the query goes to the database, which has no such record either; if the null result is not written back to the cache, every request for this nonexistent data goes to the storage layer, defeating the purpose of the cache.

Risk: attacks using nonexistent data cause an instantaneous spike in database pressure and can eventually crash it.

Solution: cache null results too, with a short expiration time.

Cache failure under high concurrency: cache avalanche

Cache avalanche: keys are set with the same expiration time, so a large batch of cache entries expires at the same moment; all requests are forwarded to the DB, whose instantaneous load spikes and it collapses.

Solution: add a random offset (for example, 1-5 minutes) to the base expiration time, so expiration times rarely coincide and a mass expiry becomes unlikely.

Cache failure under high concurrency: cache breakdown

Cache breakdown: for keys with an expiration time that are extremely "hot" and may be hit by very high concurrency at certain moments, if such a key expires right before a burst of concurrent requests arrives, every lookup for it falls through to the DB.

Solution: lock, so that under heavy concurrency only one request queries the DB while the others wait; once the lock is released, the others acquire it, check the cache first, and find the data without hitting the DB.

Summary:
1. Penetration: cache empty results too.
2. Avalanche: randomize expiration times.
3. Breakdown: lock.

For example:

```java
ops.set("RecordData", null, 1, TimeUnit.DAYS); // cache the null result too
ops.set("RecordData", JSON.toJSONString(recordEntity), 1, TimeUnit.DAYS);
```

Locking options:
1. A local lock via synchronized/Lock.
2. A distributed lock, e.g. Redisson.

A local synchronized/Lock cannot solve breakdown across multiple instances, so a distributed deployment needs Redisson-style locking (see the separate post on Redisson distributed locks).

```java
// local lock
public RecordEntity getEntityByIdByRedis(Long id) {
    synchronized (this) {
        ValueOperations<String, String> ops = redisTemplate.opsForValue();
        String recordData = ops.get("RecordData");
        if (recordData != null) {
            System.out.println("Cache hit before querying the database....");
            return JSON.parseObject(recordData, new TypeReference<RecordEntity>() {});
        }
        System.out.println("Querying the database....");
        RecordEntity recordEntity = baseMapper.selectById(id);
        ops.set("RecordData", JSON.toJSONString(recordEntity), 1, TimeUnit.DAYS);
        return recordEntity;
    }
}

// redis distributed lock
public RecordEntity getEntityByIdByFbs(Long id) {
    String uuid = UUID.randomUUID().toString();
    ValueOperations<String, String> ops = redisTemplate.opsForValue();
    // atomicity: acquire the lock and set its expiration in one step
    Boolean lock = ops.setIfAbsent("lock", uuid, 30, TimeUnit.SECONDS);
    if (lock) {
        System.out.println("Acquired the distributed lock.....");
        RecordEntity recordEntity = null;
        try {
            recordEntity = getEntityById(id);
        } finally {
            // delete the lock atomically with a Lua script
            String script = "if redis.call(\"get\",KEYS[1]) == ARGV[1]\n" +
                    "then\n" +
                    "    return redis.call(\"del\",KEYS[1])\n" +
                    "else\n" +
                    "    return 0\n" +
                    "end";
            Long lock1 = redisTemplate.execute(new DefaultRedisScript<Long>(script, Long.class), Arrays.asList("lock"), uuid);
        }
        return recordEntity;
    } else {
        System.out.println("Failed to acquire the distributed lock, retrying....");
        // retry after a short sleep
        try {
            Thread.sleep(200);
        } catch (Exception e) {
        }
        return getEntityByIdByFbs(id);
    }
}

// business code
public RecordEntity getEntityById(Long id) {
    ValueOperations<String, String> ops = redisTemplate.opsForValue();
    String recordData = ops.get("RecordData");
    if (recordData != null) {
        System.out.println("Cache hit before querying the database....");
        return JSON.parseObject(recordData, new TypeReference<RecordEntity>() {});
    }
    System.out.println("Querying the database....");
    RecordEntity recordEntity = baseMapper.selectById(id);
    ops.set("RecordData", JSON.toJSONString(recordEntity), 1, TimeUnit.DAYS);
    return recordEntity;
}

// Redisson distributed lock
public RecordEntity getEntityByIdByFbsRedisson(Long id) {
    // atomicity: acquire the lock and set its expiration in one step
    RLock lock = redissonClient.getLock("RecordData-lock");
    lock.lock();
    RecordEntity recordEntity = null;
    try {
        System.out.println("Acquired the distributed lock.....");
        recordEntity = getEntityById(id);
    } finally {
        System.out.println("Released the lock.....");
        lock.unlock();
    }
    return recordEntity;
}

// caching-annotation approach
@Override
@Cacheable(value = {"record"}, key = "#root.method.name")
public RecordEntity getRecordAllInfoById(Long id) {
    // pre-annotation caching approach
    // RecordEntity recordEntity = null;
    //
    // ValueOperations<String, String> forValue = redisTemplate.opsForValue();
    // String recordData = forValue.get("RecordData");
    // if (recordData == null) {
    //     System.out.println("Cache empty, querying the database....");
    //     recordEntity = getEntityByIdByFbsRedisson(id);
    // } else {
    //     System.out.println("Fetching data from the cache.....");
    //     recordEntity = JSON.parseObject(recordData, new TypeReference<RecordEntity>() {
    //     });
    // }

    // with the caching annotation
    RecordEntity recordEntity = getEntityById(id);
    if (recordEntity != null) {
        Long categoryId = recordEntity.getCategoryId();
        Long[] catelogPath = findCatelogPath(categoryId);
        recordEntity.setCatelogPath(catelogPath);
    }
    return recordEntity;
}
```

Distributed locking via Redis SET NX: Redisson's distributed lock is implemented on top of Redis's set-if-not-exists semantics:

```
SET key value [EX seconds] [PX milliseconds] [NX|XX]
```

The principle is documented under the SET command (NX option) in the official Redis documentation. For more on distributed locks, see the separate post on integrating Redisson with Spring Boot.
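The avalanche fix above (a base TTL plus a random 1-5 minute offset) can be sketched in plain Java; the class and method names here are mine, for illustration only:

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitterDemo {

    /** Base TTL plus a random 60-300 second offset, so hot keys don't all expire together. */
    public static long ttlWithJitterSeconds(long baseSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(60, 301); // 1-5 minutes
    }

    public static void main(String[] args) {
        long base = 24 * 60 * 60; // one day, as in the article's examples
        long ttl = ttlWithJitterSeconds(base);
        System.out.println("expire after " + ttl + " seconds");
        // hypothetical usage with the article's template:
        // ops.set("RecordData", json, ttl, TimeUnit.SECONDS);
    }
}
```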
275 reads · 0 comments · 6 likes