Redis Learning Integration

Keywords: Redis Database Jedis Spring


PS: Before studying Redis I always took notes locally. Recently my computer was reformatted and I spent half a day hunting for the backup files, so I'm now transferring them all to CSDN in one go.

Redis

1. The CAP principle

CAP

C: Consistency (strong consistency)

A: Availability

P: Partition tolerance

Core idea: a distributed system cannot satisfy consistency, availability, and partition tolerance all at once; at most two of the three can be met at the same time.

Three principles

  • CA: single-node clusters satisfying consistency and availability; usually not very scalable. (traditional databases)
  • CP: systems satisfying consistency and partition tolerance; usually lower performance. (Redis, MongoDB, etc.)
  • AP: systems satisfying availability and partition tolerance; generally have lower consistency requirements. (most website architectures)

Two out of three in CAP

CAP theory says that a distributed storage system can realize at most two of the three properties above.

Since real network hardware inevitably suffers packet loss and other failures, partition tolerance must be provided.

So the trade-off can only be made between consistency and availability.

BASE

BASE is a solution to the loss of availability caused by the strong consistency requirement of relational databases.

Basically Available

Soft state

Eventually consistent

Idea: let the system relax its data-consistency requirements at certain times in exchange for overall scalability and performance. Large-scale systems are geographically distributed and have high performance requirements, so distributed transactions cannot meet these targets; hence BASE is used.

2. Distributed systems and clusters

Distributed

Different service modules (projects) are deployed on different servers; they communicate and call each other through RPC/RMI to provide external services and cooperate within the group.

Cluster

The same service module is deployed on different servers, and a distributed scheduler performs unified scheduling to provide external services and access.

3. Introduction to redis

What is it?

REmote DIctionary Server

A high-performance key/value database that runs in memory.

An open-source database written in C.

Characteristics

  • Supports persistence: data in memory can be saved to disk and loaded back on restart
  • Supports not only simple key-value data but also list, set, zset, hash and other data structures
  • Supports data backup, i.e. master-slave replication of data
  • Internally single-threaded
  • No necessary relationships between data items
  • High performance.

Applications

  • Accelerating queries on hot data, e.g. hot products, hot news, promotions and other high-traffic information.
  • Task queues, e.g. seckill/flash sales, panic buying, ticket queues, etc.
  • Real-time information, e.g. leaderboards, website visit statistics, bus arrival info, online user counts, device signals, etc.
  • Time-limited information, e.g. verification codes, stock control, etc.
  • Distributed data sharing, e.g. session separation in a distributed cluster architecture
  • Message queues
  • Distributed locks

Basic operations

Set

set key value

Get

get key

If the key does not exist, (nil) is returned.
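
A quick example session (key names are arbitrary):

127.0.0.1:6379> set name wjb
OK
127.0.0.1:6379> get name
"wjb"
127.0.0.1:6379> get nokey
(nil)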

4. Redis data types

String

set, get

mset (set multiple key-value pairs at once), mget (get multiple values at once)

mset key1 value1 key2 value2...

mget key1 key2

strlen (get the length of the string value)

strlen key

append (append a string to the end of the existing value)

append key str //Append the string str to the end of the value of key

incr (increment by 1)

incr key //Add 1 to the value of key

decr (decrement by 1)

decr key //Subtract 1 from the value of key

incrby (increment by a specified step)

incrby key len //Add len to the value of key

decrby (decrement by a specified step)

decrby key len //Subtract len from the value of key

getrange (extract a substring)

getrange key1 L R //Return the substring of key1's value over the interval [L,R]

getrange key 0 -1 //Get the whole string

setrange (overwrite part of the string)

setrange key x str //Overwrite the value of key with str, starting at offset x
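
A short example session (key names arbitrary); note that setrange returns the length of the string after modification:

127.0.0.1:6379> set key1 "Hello World"
OK
127.0.0.1:6379> getrange key1 0 4
"Hello"
127.0.0.1:6379> setrange key1 6 Redis
(integer) 11
127.0.0.1:6379> get key1
"Hello Redis"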

setex (set with expire)

setnx (set if not exists), often used for distributed locks

setex key second value //Set key to value with an expiration of second seconds. If the key exists, the old value is overwritten.

setnx key value //Assign value to key only if the key does not exist
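
An example session (keys are illustrative):

127.0.0.1:6379> setex k1 30 v1   #k1 expires in 30 seconds
OK
127.0.0.1:6379> ttl k1
(integer) 30
127.0.0.1:6379> setnx k2 v2      #succeeds: k2 did not exist
(integer) 1
127.0.0.1:6379> setnx k2 v3      #fails: k2 already exists
(integer) 0
127.0.0.1:6379> get k2
"v2"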

#Object

set user:1 {name:wjb , age:1} //Pass in the json string. The key here is user:1

mset user:1:name wjb user:1:age 1 //set multi value, the key here is user:1:name and user:1:age

getset (get the old value, then set the new one)

getset key value //If the key does not exist, (nil) is returned; if it exists, the old value is returned and the new one is set

Choosing between single-value and multi-value operations:

For example, a set command goes through three phases: sending the command, Redis processing it, and returning the result. All three take time.

There is no universal winner between single commands and multi commands, because the sending and returning cost depends on how much data you send.

List

In redis, a list can implement a stack, a queue, a blocking queue, and so on.

#######################################
127.0.0.1:6379> lpush list one  //Insert from left
(integer) 1
127.0.0.1:6379> lpush list two
(integer) 2
127.0.0.1:6379> lpush list three
(integer) 3
127.0.0.1:6379> lrange list 0 -1 //Get all list values
1) "three"
2) "two"
3) "one"
127.0.0.1:6379> lrange list 0 1 //Get interval list value
1) "three"
2) "two"
127.0.0.1:6379> rpush list rone //Insert from right
(integer) 4
127.0.0.1:6379> lrange list 0 -1
1) "three"
2) "two"
3) "one"
4) "rone"
###########################################
pop (remove elements)

127.0.0.1:6379> lpop list //Remove from left
"three"
127.0.0.1:6379> rpop list //Remove from right
"rone"
127.0.0.1:6379> lrange list 0 -1
1) "two"
2) "one"
###########################################
lindex //Get the specified subscript value

127.0.0.1:6379> lindex list 1 
"one"
127.0.0.1:6379> lindex list 0
"two"
###########################################
lset //Set the value at the specified index; the key must already exist

lset key index value
###########################################
llen //Get list length
127.0.0.1:6379> llen list
(integer) 2
###########################################
lrem //Remove the specified number of values
lrem key count value
127.0.0.1:6379> lrem list 1 one
(integer) 1
127.0.0.1:6379> lrange list 0 -1
1) "two"

###########################################
ltrim //Trim the list to the given range
ltrim key start stop ## the list keeps only the values in [start, stop]
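# example:
127.0.0.1:6379> rpush letters a b c d
(integer) 4
127.0.0.1:6379> ltrim letters 1 2   ## keep only indexes 1..2
OK
127.0.0.1:6379> lrange letters 0 -1
1) "b"
2) "c"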
###########################################
rpoplpush #Remove the last element of a list and push it onto another list, creating the destination list if it does not exist
rpoplpush source destination

127.0.0.1:6379> lpush list one
(integer) 1
127.0.0.1:6379> lpush list two
(integer) 2
127.0.0.1:6379> lpush list three
(integer) 3
127.0.0.1:6379> rpoplpush list mylist
"one"
127.0.0.1:6379> lrange mylist 0 -1
1) "one"
###########################################
linsert #Insert a value before or after an existing element (the pivot) in the list

linsert key BEFORE|AFTER pivot value #Insert value relative to pivot

Summary

  • It is actually a linked list; values can be inserted from the left or the right

  • If the key does not exist, a new linked list is created

  • If the key exists, new content is added

  • If all values are removed, the empty list means the key no longer exists

  • Inserting or updating at either end is most efficient; operating on middle elements is relatively inefficient.

Typical uses: message queues (lpush + rpop) and stacks (lpush + lpop).
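
A minimal blocking-queue sketch using these commands (key and job names arbitrary): a producer pushes with lpush, a consumer pops with brpop, which blocks until an element arrives or the timeout expires.

127.0.0.1:6379> lpush tasks job1
(integer) 1
127.0.0.1:6379> brpop tasks 5   #block up to 5 seconds waiting for a job
1) "tasks"
2) "job1"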

Set

Values in a set are unique; no duplicates.

127.0.0.1:6379> sadd set "hello" #insert
(integer) 1
127.0.0.1:6379> sadd set "wjb"
(integer) 1
127.0.0.1:6379> smembers set  #View elements
1) "wjb"
2) "hello"
127.0.0.1:6379> sismember set hello  #Judge whether the value exists
(integer) 1
127.0.0.1:6379> sismember set hh
(integer) 0
127.0.0.1:6379> scard set  #Number of viewing elements
(integer) 2
127.0.0.1:6379> srem set hello  #Removing Elements 
(integer) 1
127.0.0.1:6379> sadd set "A"
(integer) 1
127.0.0.1:6379> sadd set "B"
(integer) 1
127.0.0.1:6379> srandmember set  #Random elements
"B"
127.0.0.1:6379> srandmember set
"A"
127.0.0.1:6379> smembers set
1) "wjb"
2) "A"
3) "B"
127.0.0.1:6379> spop set   #Randomly remove elements
"B"
127.0.0.1:6379> flushdb
OK
127.0.0.1:6379> sadd set1 "A"
(integer) 1
127.0.0.1:6379> sadd set1 "b"
(integer) 1
127.0.0.1:6379> sadd set1 "c"
(integer) 1
127.0.0.1:6379> sadd set2 "set2"
(integer) 1
127.0.0.1:6379> smove set1 set2 "c" #Move elements from set1 to set2
(integer) 1
127.0.0.1:6379> smembers set1
1) "b"
2) "A"
127.0.0.1:6379> smembers set2
1) "set2"
2) "c"

############################################

#Common follows, as on Weibo
127.0.0.1:6379> sadd set1 a
(integer) 1
127.0.0.1:6379> sadd set1 b
(integer) 1
127.0.0.1:6379> sadd set1 c
(integer) 1
127.0.0.1:6379> sadd set2 a
(integer) 1
127.0.0.1:6379>  sadd set2 d
(integer) 1
127.0.0.1:6379>  sadd set2 e
(integer) 1
127.0.0.1:6379> sdiff set1 set2   #Difference set
1) "b"
2) "c"
127.0.0.1:6379> sinter set1 set2  #intersection
1) "a"
127.0.0.1:6379> sunion set1 set2  #Union
1) "b"
2) "a"
3) "c"
4) "e"
5) "d"

#A user puts everyone they follow in one set and their fans in another set
#This enables common follows, shared interests, second-degree friends, friend recommendation

Hash

A map structure that stores field-value pairs.

A hash in Redis is a key whose value is itself a map of field-value pairs (key -> {field: value}).

127.0.0.1:6379> hset hash k1 wjb  #insert
(integer) 1
127.0.0.1:6379> hget hash k1  #obtain
"wjb"
127.0.0.1:6379> hmset hash k2 a k3 b k4 c #Insert multiple
OK
127.0.0.1:6379> hmget hash k1 k2 k3 k4  #Get multiple
1) "wjb"
2) "a"
3) "b"
4) "c"
127.0.0.1:6379> hgetall hash  #Get all
1) "k1"
2) "wjb"
3) "k2"
4) "a"
5) "k3"
6) "b"
7) "k4"
8) "c"
127.0.0.1:6379> hdel hash k1  #Delete the field; its value disappears with it
(integer) 1
127.0.0.1:6379> hlen hash  #Get the number of hash fields
(integer) 3
127.0.0.1:6379> hexists hash k1  #Check whether the key exists
(integer) 0
127.0.0.1:6379> hexists hash k2
(integer) 1
127.0.0.1:6379> hkeys hash #Get all keys
1) "k2"
2) "k3"
3) "k4"
127.0.0.1:6379> hvals hash #Get all values
1) "a"
2) "b"
3) "c"
127.0.0.1:6379> hset hash k5 5
(integer) 1
127.0.0.1:6379> hincrby hash k5 2 #increase
(integer) 7

Hashes suit changing data, especially frequently modified information such as user info; they are well suited to object storage.

e.g. hash: user -> {name: xx}

Zset

Ordered set

On top of set, a score value is added as the sort key.

zadd key score1 v1

127.0.0.1:6379> zadd zset 1 one  #add to
(integer) 1
127.0.0.1:6379> zadd zset 3 two 2 three
(integer) 2
127.0.0.1:6379> zrange zset 0 -1
1) "one"
2) "three"
3) "two"
127.0.0.1:6379> zadd zset 200 A 100 B 300 C
(integer) 3
127.0.0.1:6379> zrange zset 0 -1
1) "one"
2) "three"
3) "two"
4) "B"
5) "A"
6) "C"
127.0.0.1:6379> zrangebyscore zset -inf inf  #Sorting score from small to large range from negative infinity to positive infinity
1) "one"
2) "three"
3) "two"
4) "B"
5) "A"
6) "C"
127.0.0.1:6379> zrangebyscore zset -inf 100  #Values with score <= 100, ascending
1) "one"
2) "three"
3) "two"
4) "B"
127.0.0.1:6379> zrevrangebyscore zset 100 -inf #Values with score <= 100, descending
1) "B"
2) "two"
3) "three"
4) "one"
127.0.0.1:6379> zcount zset 100 300  #Get the number of members in the specified interval
(integer) 3

Use cases: storing class grade tables, sorted salary tables, etc.

Weighted messages: e.g. ordinary message = 1, important message = 2.

Leaderboard applications: take the Top N.
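
A small Top-N sketch (members and scores illustrative); zrevrange returns members ordered by score from high to low:

127.0.0.1:6379> zadd board 90 alice 75 bob 88 carol
(integer) 3
127.0.0.1:6379> zrevrange board 0 1 withscores   #Top 2, highest score first
1) "alice"
2) "90"
3) "carol"
4) "88"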

5. Three special data types of redis

geospatial location

Location of friends, nearby people, taxi distance calculation

Related commands:

# geoadd: add a location
# Rule: the two poles cannot be added (valid latitude is roughly -85.05 to 85.05 degrees)
# Parameters: key longitude latitude member
127.0.0.1:6379> geoadd china:city 116.40 39.90 beijing
(integer) 1
127.0.0.1:6379> geoadd china:city 121.47 31.23 shanghai
(integer) 1
127.0.0.1:6379> geoadd china:city 106.50 29.53 chongqing 114.05 22.52 shenzhen
(integer) 2
127.0.0.1:6379> geoadd china:city 120.16 30.24 hangzhou 108.96 34.26 xian
(integer) 2
# geopos gets the longitude and latitude of the specified City
127.0.0.1:6379> geopos china:city beijing
1) 1) "116.39999896287918"
   2) "39.900000091670925"
#Distance between geodist
127.0.0.1:6379> geodist china:city shanghai beijing
"1067378.7564"
127.0.0.1:6379> geodist china:city shanghai beijing km
"1067.3788"

Units:
m meters
km kilometers
mi miles
ft feet
# georadius: find elements within a given radius around the given longitude/latitude
127.0.0.1:6379> georadius china:city 110 30 500 km
1) "chongqing"
2) "xian"

127.0.0.1:6379> georadius china:city 110 30 500 km withcoord  #Get the latitude and longitude of the elements in the radius
1) 1) "chongqing"
   2) 1) "106.49999767541885"
      2) "29.529999579006592"
2) 1) "xian"
   2) 1) "108.96000176668167"
      2) "34.2599996441893"

127.0.0.1:6379> georadius china:city 110 30 500 km withdist  #Get the distance from the element within the radius to the center
1) 1) "chongqing"
   2) "341.9374"
2) 1) "xian"
   2) "483.8340"
   
127.0.0.1:6379> GEORADIUSBYMEMBER china:city beijing 1000 km #Find elements within the given radius around an existing member
1) "beijing"
2) "xian"

127.0.0.1:6379> geohash china:city beijing chongqing  #Get the geohash of the members: converts two-dimensional longitude/latitude into a one-dimensional hash string
1) "wx4fbxxfke0"
2) "wm5xzrybty0"

#You can compare the hash values to determine whether the two locations are within a certain range

Geo is implemented on top of zset, so zset commands can be used to operate on geo data.

"People nearby": how to implement it?

Put the locations of all nearby people into one geo set, then query by radius.

Hyperloglog cardinality statistics

What is the cardinality?

A{1,3,5,7,8,7}

B{1,3,5,7,8}

Cardinality (the number of distinct elements) of A = 5; a small error is acceptable.

Redis Hyperloglog is an algorithm for cardinality estimation.

Advantage: the memory footprint is fixed; counting up to 2^64 distinct elements consumes only 12 KB of memory.

Web page UV (unique visitors: a person who visits a site many times still counts once).

The traditional way is to save user ids in a set and then count the set's size.

Saving large numbers of user ids this way is wasteful: the goal is to count, not to store the ids.

127.0.0.1:6379> PFadd mykey a b c d e f g h i j #Create the first set of elements
(integer) 1
127.0.0.1:6379> PFcount mykey  #Count the cardinality
(integer) 10
127.0.0.1:6379> PFadd mykey2 i j z x c v b n m #Create a second set of elements
(integer) 1
127.0.0.1:6379> PFcount mykey2
(integer) 9
127.0.0.1:6379> PFmerge mykey3 mykey mykey2  #Merge two groups, union
OK
127.0.0.1:6379> PFcount mykey3
(integer) 15

Bitmaps

Bit storage

For user-status statistics: active/inactive, logged in/not logged in. Bitmaps fit any two-state data.

A bitmap is a data structure that records information bit by bit; each bit has only two states, 0 and 1.

# Use a bitmap to record clock-ins from Monday to Sunday.
127.0.0.1:6379> setbit sign 0 0
(integer) 0
127.0.0.1:6379> setbit sign 1 0
(integer) 0
127.0.0.1:6379> setbit sign 2 1
(integer) 0
127.0.0.1:6379> setbit sign 3 0
(integer) 0
127.0.0.1:6379> setbit sign 4 1
(integer) 0
127.0.0.1:6379> setbit sign 5 0
(integer) 0
127.0.0.1:6379> setbit sign 6 0
(integer) 0

#Check whether a given day was clocked in
127.0.0.1:6379> getbit sign 6
(integer) 0
127.0.0.1:6379> getbit sign 4
(integer) 1

#Count the number of days clocked in (bits set to 1)
127.0.0.1:6379> bitcount sign
(integer) 2

6. Redis transactions

Transactions

Regarding ACID: a single Redis command is atomic, but a Redis transaction does not guarantee atomicity!

The essence of a Redis transaction: a set of commands. All commands in a transaction are serialized and executed in order while the transaction runs.

One-off, ordered, exclusive.

Redis transactions have no concept of isolation levels.

Commands in a transaction are not executed immediately; they only run when the exec command is issued.

Redis transactions:

  • Open transaction (Multi)
  • Queue commands (...)
  • Execute transaction (exec)
127.0.0.1:6379> multi  #Open transaction
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> get k2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> exec  #Executing transactions, displaying results
1) OK
2) OK
3) "v2"
4) OK 

#Cancel transaction
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> discard  #Cancel the transaction, and the commands in the transaction queue will not be executed
OK
127.0.0.1:6379> get k4
(nil)

Exceptions

Compile-time exception (a wrong command): none of the commands in the transaction will be executed.

In a Redis transaction, if the command queue contains a compile-time exception, exec will also report an error.

Runtime exception (e.g. incrementing a non-numeric value): when exec runs, the failing command reports an error, but the other commands still execute normally.
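
For example (illustrative session):

127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> incr k1   #queues fine, but v1 is not a number
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> exec
1) (error) ERR value is not an integer or out of range
2) OK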

7. Redis lock

Pessimistic lock

I'm pessimistic. I think there will always be problems, so I will lock whatever I do

Optimistic lock

I am optimistic that there will be no problem at any time, so I will not lock it. When updating data, judge whether someone has modified the version number of the data during this period.

  • Get version
  • Compare version on update

Redis watch test

127.0.0.1:6379> set money 100
OK
127.0.0.1:6379> set out 0
OK
127.0.0.1:6379> watch money  #Monitoring money objects
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 20
QUEUED
127.0.0.1:6379> incrby out 20
QUEUED
127.0.0.1:6379> exec
1) (integer) 80
2) (integer) 20

#Normal execution

Test the multithreaded modification value, and use watch as the optimistic lock operation of redis.

127.0.0.1:6379> watch money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 10
QUEUED
127.0.0.1:6379> incrby out 10
QUEUED
127.0.0.1:6379> exec #Before execution, another thread modified the value, causing the transaction execution to fail
(nil)

#Monitoring failed
127.0.0.1:6379> unwatch # When transaction execution fails, unlock first
OK
127.0.0.1:6379> watch money  #Monitor again to get the latest value
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 10
QUEUED
127.0.0.1:6379> incrby out 10
QUEUED
127.0.0.1:6379> exec  #Reexecution
1) (integer) 990
2) (integer) 30

Summary: Redis's watch command acts as an optimistic lock, so Redis can implement optimistic locking.

8. Jedis

Using Java to operate Redis.

What is Jedis? Jedis is the official Java client recommended by Redis, i.e. Java middleware for operating Redis. To operate Redis from Java, you need to be familiar with Jedis.

Usage

Import dependency

<!--    jedis-->
        <!-- https://mvnrepository.com/artifact/redis.clients/jedis -->
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>3.2.0</version>
        </dependency>

        <!--  fastjson      -->
        <!-- https://mvnrepository.com/artifact/com.alibaba/fastjson -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.68</version>
        </dependency>

Connect to redis database

    public static void main(String []args){
        // 1. Connect to the server
        Jedis jedis = new Jedis("127.0.0.1",6379);

        // 2. All the commands of jedis are set get and so on
        System.out.println(jedis.ping()); //Connect successfully output PONG
    }

Specific API

It mirrors the commands in sections 4 and 5, so it is not repeated here.

(figure: Jedis API overview; image unavailable)
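
For reference, a minimal sketch of a few representative Jedis calls mirroring the commands above (assumes a local server on port 6379; keys and values are arbitrary):

import redis.clients.jedis.Jedis;

public class JedisApiDemo {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // String
        jedis.set("name", "wjb");
        System.out.println(jedis.get("name"));            // wjb
        // List
        jedis.lpush("list", "one", "two");
        System.out.println(jedis.lrange("list", 0, -1));  // [two, one]
        // Hash
        jedis.hset("hash", "k1", "v1");
        System.out.println(jedis.hget("hash", "k1"));     // v1
        // Set
        jedis.sadd("set", "a", "b");
        System.out.println(jedis.smembers("set"));        // [a, b]
        jedis.close();
    }
}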

Transactions with Jedis

public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1",6379);

        JSONObject jsonObject = new JSONObject();
        jsonObject.put("hello","world");
        jsonObject.put("name","wjb");
        //Open transaction
        Transaction multi = jedis.multi();
        String result = jsonObject.toJSONString();
        try{
            multi.set("user1", result);
            multi.set("user2",result);
            //int x = 1/0;
            multi.exec();  //Execute transaction
        }catch (Exception e){
            multi.discard();  //Discard transaction
            e.printStackTrace();
        }finally {
            System.out.println(jedis.get("user1"));
            System.out.println(jedis.get("user2"));


            jedis.close();
        }

    }

9. Spring boot integrates Redis

Spring Boot operates data through Spring Data.

Note: from Spring Boot 2.x on, the default client jedis was replaced with lettuce.

jedis: direct connection; unsafe when multiple threads operate on the same instance. To avoid this, use a JedisPool connection pool. BIO-based.

lettuce: built on netty; instances can be shared across multiple threads without thread-safety problems, reducing the number of threads. NIO-based.

Source code analysis

@Bean 
@ConditionalOnMissingBean(name = {"redisTemplate"}) //You can customize redisTemplate to replace the default
public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) throws UnknownHostException {
    //The default RedisTemplate does not have too many configurations. All redis objects need to be serialized
    //Both generic types are Object; later use requires casting to <String, Object>
    RedisTemplate<Object, Object> template = new RedisTemplate();
    template.setConnectionFactory(redisConnectionFactory);
    return template;
}

@Bean
@ConditionalOnMissingBean //string is the most commonly used data type in redis, which is created separately
public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) throws UnknownHostException {
    StringRedisTemplate template = new StringRedisTemplate();
    template.setConnectionFactory(redisConnectionFactory);
    return template;
}

Integration

  1. Import dependency

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    
  2. Configure the configuration file

#Configure redis
spring.redis.host=127.0.0.1
spring.redis.port=6379

  3. API

    /*
    redisTemplate is equivalent to jedis, which is used to operate instructions
     opsForValue() operation string
    opsForList()
    opsForSet()
    opsForHash()
    opsForZSet()
    opsForGeo()
    opsForHyperLogLog()
    */
    
    In addition to the basic operations, common methods such as transactions and basic CRUD can be invoked directly through redisTemplate
        
    /*
    Get the connection object of redis
    RedisConnection connection = redisTemplate.getConnectionFactory().getConnection();
            connection.flushDb();
            connection.flushAll();
    */
    
  4. Write your own RedisTemplate

    @Configuration
    public class RedisConfig {
    
        @Bean
        public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) throws UnknownHostException {
            RedisTemplate<String, Object> template = new RedisTemplate<>();
            template.setConnectionFactory(redisConnectionFactory);
            //json serialization configuration
            Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
            ObjectMapper om = new ObjectMapper();
            om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
            om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
    
            //String serialization
            StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
    
            //Configure specific serialization methods
            //key set to string serialization
            template.setKeySerializer(stringRedisSerializer);
            //Hash key set to string serialization
            template.setHashKeySerializer(stringRedisSerializer);
            //value set to jackson serialization
            template.setValueSerializer(jackson2JsonRedisSerializer);
            //Hash value set to jackson serialization
            template.setHashValueSerializer(jackson2JsonRedisSerializer);
            template.afterPropertiesSet();
    
            return template;
        }
    }
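
With the bean above registered, a sketch of how it might be injected and used (the class name and keys are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class RedisDemo {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    public void demo() {
        // the value goes through the Jackson serializer configured above
        redisTemplate.opsForValue().set("k1", "v1");
        Object v = redisTemplate.opsForValue().get("k1");
        System.out.println(v);
    }
}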
    

10. Redis.conf in detail

Units

(figure: units section of redis.conf; image unavailable)

Units in redis.conf are case-insensitive.

Includes

(figure: include section of redis.conf; image unavailable)

Multiple configuration files can be included.

Network

bind 127.0.0.1 # Bound ip
protected-mode yes  #Protected mode: yes = on, no = off
port 6379  #Port setting

GENERAL

daemonize no #Run as a daemon; no by default, set yes to enable
pidfile /var/run/redis.pid   #When running as a daemon, a pid file must be specified

#Logging
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably) 
# warning (only very important / critical messages are logged)
loglevel notice  #log level

logfile ""  #Log file location
databases 16  #Number of databases, 16 by default

Snapshot

Persistence: if at least a given number of write operations occur within a given time window, the data is persisted to a file (.rdb / .aof).

redis is a memory database. If there is no persistence, the data will be lost in case of power failure.

# If at least 1 key is modified within 900 s, persistence is performed
save 900 1
# If at least 10 keys are modified within 300 s, persistence is performed
save 300 10
# If at least 10000 keys are modified within 60 s, persistence is performed
save 60 10000

stop-writes-on-bgsave-error yes  # Whether to stop accepting writes when a bgsave fails
rdbcompression yes    # Whether to compress rdb files; compression costs some CPU
rdbchecksum yes       # Whether to checksum rdb files when saving
dir ./                # Directory where rdb files are saved

REPLICATION (master-slave replication)

SECURITY

requirepass xxx  #Set a password; empty by default. It can also be set from the command line

CLIENTS (limits)

maxclients 10000   #Set the maximum number of clients that can connect to redis
maxmemory <bytes>  #redis configuration maximum memory capacity
maxmemory-policy noeviction   #Eviction policy when memory reaches the limit; possible values:
    1. volatile-lru: apply LRU only to keys with an expiration time set
    2. allkeys-lru: apply LRU to all keys
    3. volatile-random: randomly remove keys that are about to expire
    4. allkeys-random: randomly remove any keys
    5. volatile-ttl: remove the keys closest to expiration
    6. noeviction: never evict; return an error (the default)

APPEND ONLY MODE (aof configuration)

appendonly no  #aof is off by default; rdb persistence is the default and is sufficient in most cases
appendfilename "appendonly.aof"   #Persistence file name

# appendfsync always    #sync on every modification; costs performance
appendfsync everysec    #sync once per second; up to 1 s of data may be lost
# appendfsync no        #never sync; the operating system flushes on its own; fastest

The specific configuration is discussed in redis persistence.

11. Redis persistence (important)

Redis is an in-memory database. If the database state in memory is not saved to disk, it is lost as soon as the server process exits. So redis provides persistence.

RDB

What is RDB? Redis DataBase: snapshot files.

(figure: RDB save process; image unavailable)

In the specified time interval, the snapshot of the data set in memory is written to disk. When it is recovered, the snapshot file is read directly into memory.

Redis forks a child process for persistence and writes the data to a temporary file; when the persistence process ends, the temporary file replaces the previous persisted file. The main process performs no IO during the whole procedure, which ensures very high performance. If large-scale data recovery is needed and integrity is not very sensitive, RDB is more efficient than AOF. RDB's drawback is that data modified after the last persistence may be lost (e.g. if the server goes down).

In redis, the default is RDB. In general, this configuration does not need to be modified.

The file RDB saves is dump.rdb

Mechanisms that trigger redis to create dump.rdb

1. The save rules in the configuration file are met

2. The flushall command is executed

3. Redis exits (shutdown)

In each case a dump.rdb backup is generated automatically

How to recover rdb files

Just put the rdb file in the rdb directory set in the configuration file; when redis starts it automatically checks for dump.rdb and restores the data in it.
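
You can ask redis for that directory with config get dir (the path shown is illustrative):

127.0.0.1:6379> config get dir
1) "dir"
2) "/usr/local/bin"  #put dump.rdb here and it will be loaded on startup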

Advantages

  • Suitable for large-scale data recovery
  • Suitable when data-integrity requirements are not strict (a crash may lose the most recent changes)

Disadvantages

  • It works at intervals; if redis crashes unexpectedly, the data modified since the last snapshot is lost.
  • The fork'ed child process consumes some extra memory

AOF

What is AOF? append only file

Records all our write commands; effectively a history file. On recovery, all write commands in the file are executed again.

(figure: AOF process; image unavailable)

Every write operation is recorded in a log. redis records every executed write instruction (reads are not recorded); the file may only be appended to, never rewritten. When redis starts, it reads the file and rebuilds the data; that is, on restart, redis replays the write instructions from front to back to restore the data.

The files saved by AOF are appendonly.aof

Just change the appendonly no in the configuration file to yes, and restart redis to use AOF

What if AOF goes wrong

The server may shut down while the program is writing to the AOF file. If the shutdown corrupts the AOF file, Redis refuses to load it on restart, ensuring data consistency is not damaged.

When this happens, you can fix the failed AOF file by using the following methods:

  1. Create a backup of the existing AOF file.
  2. Use the redis-check-aof program shipped with Redis to repair the original AOF file:
$ redis-check-aof --fix appendonly.aof
  3. (optional) Use diff -u to compare the repaired AOF file with the backup and inspect the differences.
  4. Restart the Redis server, wait for it to load the repaired AOF file, and recover the data.

Advantages

  • Syncing on every modification gives better file integrity
  • Syncing once per second loses at most one second of data

Disadvantages

  • aof files are far larger than the corresponding rdb files, and repairing them is slower than rdb
  • aof also runs slower than rdb, which is why redis's default persistence is rdb

12. Redis publishing and subscription

Omitted here.

Usage scenario:

1. Real time message system.

2. Live chat (chat room)

3. Subscription and attention functions

13. Redis master-slave replication

concept

Master-Slave replication refers to copying the data of one redis server to other redis servers. The former is called master/leader, and the latter is called slave/follower; data replication is unidirectional and can only be from master to Slave. Master focuses on writing while Slave focuses on reading.

effect

1. Data backup: master-slave replication realizes the hot backup of data, which is a way of data backup other than persistence.

2. Fault recovery: when there is a problem in the primary node, the secondary node can provide services to achieve rapid fault recovery. It's actually a backup of a service.

3. Load balancing: on the basis of master-slave replication, with read-write separation, the master node can provide write services, and the slave node can provide read services and share the server load. Especially in the scenario of less write and more read, by sharing the read load among multiple slave nodes, the concurrency of Redis server can be greatly increased.

4. High availability cornerstone: master-slave replication is the basis of sentinel mechanism and cluster implementation, so master-slave replication is the basis of high availability of redis.

Master-slave replication enables read-write separation. Since 80% of operations are reads, replication relieves pressure on a single server. It is commonly used in real architectures; the minimum configuration is one master and two slaves.

Environment configuration

You only need to configure the slave library, not the master library.

127.0.0.1:6379> info replication   #View current library information
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

Copy three redis.conf files; in each, modify the port, pidfile, logfile, and dump.rdb name, and enable daemonize.

Then start

(figure: three redis instances started; image unavailable)

One master and two slaves

By default, each redis node is the primary node. In general, we only need to configure the slave node.

One master (port 6379) and two slaves (ports 6380 and 6381)

127.0.0.1:6380> SLAVEOF 127.0.0.1 6379  #Make this node a slave of the master on port 6379
OK
127.0.0.1:6380> info replication
# Replication
role:slave      #The current role is slave
master_host:127.0.0.1
master_port:6379
. . . . . . 

Port 6381 is configured the same way.

127.0.0.1:6379> info replication  #View host information
# Replication
role:master
connected_slaves:2   #Two slaves
slave0:ip=127.0.0.1,port=6380,state=online,offset=196,lag=0   #Slave information
slave1:ip=127.0.0.1,port=6381,state=online,offset=196,lag=1

Configuring the master-slave relationship on the command line is only temporary; to make it permanent, set it in the configuration file.

Note

The master can write; slaves can only read.

127.0.0.1:6379> set k1 v1  #Host write
OK
 
#Slave trying to read
127.0.0.1:6380> get k1
"v1"   #Normal results

#Slave attempts to write
127.0.0.1:6380> set k2 v2
(error) READONLY You can't write against a read only replica.  #report errors


Testing a master outage

If the master goes down, the slaves stay connected to it, but there are no write operations. When the master comes back, the slaves can immediately read what it writes again.

In practice, when the master disconnects, one of the remaining slaves should be promoted to master, rather than leaving the slaves waiting and wasting resources.

Testing a slave outage

If a slave goes down and the master-slave relationship is not permanent (i.e. not set in the configuration file), it rejoins as a master and does not have the values the original master wrote in the meantime. If the master-slave relationship with the original master is configured again, the slave regains all of the original master's data.

Principle of replication

After a slave starts successfully and connects to the master, it sends a sync command.

On receiving the command, the master starts its background save process and collects all commands that modify the data set. When the background process finishes, the master transfers the entire data file to the slave, completing one full synchronization.

Full replication: after receiving the database file, the slave saves it to disk and loads it into memory.

Incremental replication: the master then passes each newly collected modification command on to the slave, completing synchronization.

Chained replication (layer by layer)

(figure: chained replication topology; image unavailable)

Here, although the 6380 slave is the master node of the 6381 slave, it is still essentially a slave.

If the 6379 master disconnects, the 6380 slave can run slaveof no one to make itself the master, and the other nodes can then connect to this latest master. If 6379 reconnects, the master-slave relationships of the other nodes are unaffected!

Sentinel mode

(automatic election host)

It is too cumbersome to manually switch the master and slave computers, so redis 2.8 began to provide Sentinel architecture to solve this problem.

Sentinel mode is a special mode. First, Redis provides sentinel commands. Sentinel is an independent process. As a process, it will run independently.

The principle: the sentinel monitors multiple Redis instances by sending commands and waiting for the Redis servers to respond.

(figure: sentinel monitoring; image unavailable)

Sentinel role

1. It sends commands and has the Redis servers report their running state.

2. When the sentinel detects that the master is down, it will automatically switch the slave to the master, and then notify other slave servers through publish and subscribe mode, modify the configuration files, and let them switch the host.

In order to prevent the Sentinels from hanging up, multiple sentinels can be set up to monitor each other.

In multi-sentinel mode, if the master goes down and sentinel 1 detects it first, the system does not immediately fail over, because at that point sentinel 1 only subjectively considers the master unavailable (subjectively down). Only when enough other sentinels also detect the master as unavailable, reaching the configured quorum, do the sentinels hold a vote; one sentinel is chosen to initiate the failover, and after the switch succeeds each sentinel switches its monitored slaves to the new master via publish-subscribe. This state is called objectively down.

Test

My current setup is one master and two slaves.

1. Configure the sentinel configuration file sentinel.conf

# sentinel monitor <monitored-name> <host> <port> <quorum>
sentinel monitor myredis 127.0.0.1 6379 1

The trailing 1 is the quorum: the number of sentinels that must agree the master is down before a failover is triggered; the sentinels then elect which slave replaces the master.

2. Start the sentinel

wayjasy@wayjasy-virtual-machine:~/redis-5.0.8/src$ redis-sentinel ../myconfig/sentinel.conf 
13576:X 22 Apr 2020 22:25:52.377 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
13576:X 22 Apr 2020 22:25:52.377 # Redis version=5.0.8, bits=64, commit=00000000, modified=0, pid=13576, just started
13576:X 22 Apr 2020 22:25:52.377 # Configuration loaded
13576:X 22 Apr 2020 22:25:52.378 * Increased maximum number of open files to 10032 (it was originally set to 1024).
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 5.0.8 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in sentinel mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 26379
 |    `-._   `._    /     _.-'    |     PID: 13576
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

13576:X 22 Apr 2020 22:25:52.456 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
13576:X 22 Apr 2020 22:25:52.465 # Sentinel ID is debaca00730683e6abc58b232e4de78f88086b03
13576:X 22 Apr 2020 22:25:52.466 # +monitor master myredis 127.0.0.1 6379 quorum 1
13576:X 22 Apr 2020 22:25:52.551 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ myredis 127.0.0.1 6379
13576:X 22 Apr 2020 22:25:52.552 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ myredis 127.0.0.1 6379

3. When the master goes down, the system fails over

(figure: sentinel failover log; image unavailable)

Sentinel mode benefits

  • Sentinel cluster, based on master-slave replication mode, has all the advantages of master-slave replication.
  • The master and slave can be switched, the failure can be switched, and the availability of the system will be better.
  • Sentinel mode is the upgrade of master-slave mode, manual to automatic, more robust.

Disadvantages of sentinel mode

  • Redis online capacity expansion is not easy. Once the cluster capacity reaches the upper limit, online capacity expansion will be very troublesome.
  • In fact, it's very troublesome to configure the sentinel mode. There are many options.

Write at the end

Note that the master-slave replication and sentinel setups above are pseudo-clusters on a single Linux virtual machine; real configurations are considerably more involved!

14. Redis cache penetration and avalanche (important)

High availability of services

concept

A user queries for data that is not in the Redis in-memory database (a cache miss), so the query falls through to the persistence-layer database, which has nothing either, and the query fails. When there are many such users (e.g. a seckill scenario) and the cache never hits, every request lands on the persistence-layer database, putting it under great pressure. This is cache penetration.

Solution

Bloom filter

A Bloom filter is a data structure that stores all possible query parameters (i.e. all valid keys) in hashed form and checks them at the control layer first, discarding requests that cannot match. This avoids query pressure on the underlying storage system.

The great power of the Bloom filter is judging whether an element may exist in a set (false positives are possible, but false negatives never are). Bloom filters are therefore unsuitable for "zero error" applications.

Compared with other lookup structures such as hash tables or binary search, a Bloom filter saves a great deal of memory in low-error-rate scenarios.

Cache empty objects

When the underlying database has no data for a user's query, cache the result as an empty object with an expiration time; subsequent accesses are then served from the cache, protecting the underlying database.

But there are two problems:

  • If empty values are cached, the cache stores many keys whose values are empty, which is meaningless and wastes space.
  • Even with an expiration time on the null value, the cache layer and the storage layer will be inconsistent for a period, which affects data consistency.
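
A minimal cache-aside sketch with null-object caching, using Jedis; the Dao interface and findById are hypothetical stand-ins for the persistence layer, and the TTLs are arbitrary:

import redis.clients.jedis.Jedis;

public class NullCachingDemo {

    // hypothetical persistence-layer lookup
    interface Dao { String findById(String id); }

    public static String getUser(Jedis jedis, Dao dao, String id) {
        String key = "user:" + id;
        String cached = jedis.get(key);
        if (cached != null) {
            return "".equals(cached) ? null : cached;  // "" marks a cached miss
        }
        String value = dao.findById(id);               // hit the database
        if (value == null) {
            jedis.setex(key, 60, "");                  // cache the empty object with a short TTL
        } else {
            jedis.setex(key, 3600, value);
        }
        return value;
    }
}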

Cache breakdown

concept

A very hot key receives continuous, massive concurrent access. Because the cached key has an expiration time, the moment it expires, the sustained concurrency breaks through the cache and hits the database directly, which can bring the server down.

Solution

Set hotspot data never to expire (not recommended)

At the cache level, no expiration time is set, so the hot key's cache can absorb heavy concurrency indefinitely.

Add a mutex lock

Using a distributed lock, only one thread at a time may query the database for a given key; all other threads wait. This shifts the high-concurrency pressure onto the distributed lock, which is itself a great challenge.
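
A simplified sketch of the mutex idea with setnx, again with a hypothetical Dao; a production lock would also need a unique owner token and an atomic unlock (e.g. a Lua script), and this sketch assumes the database does have the value (the null case is the penetration problem above):

import redis.clients.jedis.Jedis;

public class MutexRebuildDemo {

    interface Dao { String findById(String key); }   // hypothetical DB lookup

    public static String get(Jedis jedis, Dao dao, String key) throws InterruptedException {
        String value;
        while ((value = jedis.get(key)) == null) {
            String lockKey = "lock:" + key;
            if (jedis.setnx(lockKey, "1") == 1) {    // only one thread wins the lock
                jedis.expire(lockKey, 10);           // guard against a stuck lock
                try {
                    value = dao.findById(key);       // rebuild from the database
                    jedis.setex(key, 3600, value);   // assumes a non-null DB value
                } finally {
                    jedis.del(lockKey);
                }
                return value;
            }
            Thread.sleep(50);                        // losers wait, then re-check the cache
        }
        return value;
    }
}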

Cache avalanche

concept

Cache avalanche: a large set of cached keys expires within the same window (or Redis itself goes down).

One cause of a cache avalanche: say the hot goods for a Double Eleven midnight rush are loaded into the cache in bulk with a one-hour expiration. At 1:00 a.m. a large swath of product caches expires at once, and all queries for those products fall on the database, producing a periodic pressure spike. All requests reach the storage layer, whose call volume surges, possibly bringing the storage layer down.

During the double 11: Ali will stop some services to ensure the normal operation of main services.

Solution

redis high availability

Build a redis cluster, with instances deployed in multiple locations (remote multi-active).

Rate limiting and degradation

After the cache fails, lock or queue is used to control the number of threads that read and write the cache.

Data preheating

Before the project goes live, access the likely-hot data once so it is loaded into the cache in advance. Before a large concurrency event, manually trigger loading of the various cache keys and set different expiration times, so that cache-invalidation moments are spread out rather than concentrated at one point. This is essentially the usual pre-event warm-up.

15. Redis expiration and elimination strategy

Redis expiration policy

  • Scheduled deletion
    • Meaning: when setting the expiration time of a key, create a timer for the key to delete the key when the expiration time of the key comes
    • Advantages: ensure that the memory is released as soon as possible
    • Disadvantages:
      • If there are a lot of expired keys, deleting these keys will take up a lot of CPU time. When CPU time is tight, CPU can't use all the time to do the important things, and it needs to take time to delete these keys
      • Timer creation takes a long time. If you create a timer for each key with expiration time set (there will be a large number of timers), the performance will be seriously affected
      • Nobody uses it
  • Lazy delete
    • Meaning: when the key expires, it will not be deleted. Each time you get the key from the database, check whether it expires. If it expires, delete it and return null.
    • Advantage: the deletion only occurs when the key is removed from the database, and only the current key is deleted, so the CPU time consumption is relatively small, and the deletion at this time has reached the point where it must be done (if it is not deleted at this time, we will get the expired key)
    • Disadvantages: if a large number of keys are not accessed for a long time after expiring, memory may leak (useless garbage occupies a large amount of memory)
  • Periodic deletion
    • Meaning: delete expired keys at regular intervals
    • Advantages:
      • Limiting the duration and frequency of the delete operation reduces its CPU cost (addresses the drawback of timed deletion)
      • Expired keys are deleted regularly (addresses the drawback of lazy deletion)
    • Disadvantages
      • Less memory-friendly than timed deletion
      • Less CPU-friendly than lazy deletion
    • Difficulty
      • Reasonably setting the duration (how long each deletion pass runs) and frequency (how often passes run); this depends on the server's workload

Main expiration policies adopted by Redis

Lazy deletion + periodic deletion

  • Lazy delete process
    • When performing get or setnx operations, first check whether the key has expired,
    • If it expires, delete the key, and then perform the corresponding operation;
    • If it does not expire, perform the corresponding operation directly
  • Periodic deletion process (roughly: for each database, randomly check a limited number of keys and delete the expired ones)
    • Traverse each database (the number of databases configured in redis.conf, 16 by default)
      • Check the specified number of keys in the current database (by default 20 keys per database, i.e. the steps below run up to 20 times)
        • If no key in the current database has an expiration time set, move straight on to the next database
        • Randomly pick a key that has an expiration time set; check whether it has expired, and delete it if so
        • Check whether the periodic deletion pass has reached its allotted time; if so, exit the pass.

RDB's handling of expired keys

Expired keys have no effect on RDB

  • Persistent data from memory database to RDB file
    • Before the key is persisted, it will be checked whether it is expired. The expired key does not enter the RDB file
  • Recover data from RDB file to memory database
    • Before the data is loaded into the database, the key will be checked for expiration. If the key expires, it will not be imported into the database (main database)

AOF's handling of expired keys

Expired keys have no effect on AOF

  • From memory database persistent data to AOF file:

    • When the key has expired and has not been deleted, perform the persistence operation (the key will not enter the aof file because there is no modification command)
    • When the key expires, in case of deletion, the program will append a del command to the aof file (the expired key will be deleted in the future when the data is recovered with the aof file)
  • AOF rewrite

    • When rewriting, it will first determine whether the key has expired, and the expired key will not be rewritten to the aof file

Redis's LRU eviction strategy

There are three LRU-related Redis settings:

  • maxmemory: the memory limit for stored data, e.g. 100mb. When the cache exceeds this value, eviction is triggered. Configured as 0, there is no limit on cached data, i.e. LRU does not take effect; 0 is the default on 64-bit systems, while 32-bit systems default to an implicit 3 GB limit
  • maxmemory-policy: the eviction policy applied when data must be evicted
  • maxmemory-samples: the sampling precision, i.e. how many keys are sampled at a time. The larger the value, the closer to the true LRU algorithm, but the higher the cost, with some impact on performance. The default sample value is 5.

The possible values of maxmemory-policy are as follows:

  • noeviction: if cached data exceeds the maxmemory limit and the client executes a command that would allocate memory (most write commands, excluding DEL and a few others), an error response is returned to the client
  • allkeys-lru: apply LRU eviction to all keys
  • volatile-lru: apply LRU eviction only to keys with an expiration time set
  • allkeys-random: evict random keys among all keys
  • volatile-random: evict random keys among those with an expiration time set
  • volatile-ttl: evict only keys with an expiration time set, preferring those with a smaller TTL (time to live)

volatile-lru, volatile-random and volatile-ttl do not consider the full key space and may fail to free enough memory. When there are no expired keys, or no keys with a timeout set, these three policies behave like noeviction.

General rules of thumb:

  • Use allkeys-lru: when requests are expected to follow a power-law distribution (the 80/20 rule, etc.), i.e. some subset of elements is accessed far more than the rest.
  • Use allkeys-random: when keys are accessed cyclically, or the expected request distribution is uniform (all elements are roughly equally likely to be accessed)
  • Use volatile-ttl: for this policy to help, cached objects should have varied TTL values

volatile-lru and volatile-random are very useful when a single Redis instance must serve both as an evicting cache and as persistent storage for a set of frequently used keys: keys without an expiration time are persisted, while keys with one participate in eviction. However, running two separate instances is a better way to solve this problem.

Setting an expiration time on a key also costs memory, so the allkeys-lru strategy saves more space: under it there is no need to set expiration times at all.
