Redis Persistence and Data Types
Redis is an in-memory data store: it keeps data in memory for fast access, but it can save that in-memory data to disk according to configurable strategies, achieving data persistence. Redis supports two different persistence mechanisms, namely RDB and AOF.
RDB: point-in-time snapshots, of which only the latest is kept. Its advantage is fast execution; its disadvantage is that data written between the last snapshot and the moment of a crash may be lost.
The RDB process works as follows: Redis forks a child process from the main process. Thanks to the copy-on-write mechanism, the child process can save the in-memory data to a temporary file, such as dump.rdb.temp. When the save completes, the temporary file replaces the previous RDB file and the child process exits. This ensures that the data in every saved RDB file is complete: if Redis wrote directly into the RDB file, a sudden power failure mid-write could leave a partially written, corrupt file. Since each new snapshot replaces the previous one, the generated RDB files can also be backed up manually to preserve historical data.
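The "write to a temporary file, then atomically replace the old file" step described above can be sketched in a few lines of Python. This is a minimal illustration of the pattern only; Redis serializes to its own binary RDB format, and the JSON encoding and function name here are assumptions:

```python
import json
import os
import tempfile


def save_snapshot(data: dict, path: str) -> None:
    """Write a snapshot to a temp file, then atomically swap it in.

    Toy model of the RDB write-then-rename pattern: readers always see
    either the complete old file or the complete new file, never a
    half-written one.
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes reach the disk
        os.replace(tmp_path, path)  # atomic replace on POSIX
    except BaseException:
        os.remove(tmp_path)  # never leave a stray temp file behind
        raise
```

Because the rename is atomic, a crash during the save leaves the previous snapshot untouched, which is exactly why each RDB file stays complete.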
Advantages and disadvantages of the RDB mode:
- An RDB snapshot saves the data at a specific point in time. Backups can be scripted at chosen times with the bgsave (non-blocking) or save (blocking) command, and multiple backups can be retained, so when problems arise you can restore to versions from different points in time.
- Performance is maximized: the only thing the parent process does when saving an RDB file is fork a child process; the child then performs all the saving work, so the parent needs no disk I/O at all.
- RDB recovers faster than AOF when the data set is large, e.g. several GB of data.
- Because snapshots are taken only periodically, a crash loses all writes made since the last RDB backup.
- When the data set is very large, the fork itself takes some time, anywhere from milliseconds to seconds, and the subsequent save depends on disk I/O performance.
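RDB snapshotting is driven by `save` directives in redis.conf. A typical configuration might look like the fragment below; the exact default thresholds vary between Redis versions, and the `dir` path shown is an assumption:

```conf
# Snapshot if at least 1 key changed within 3600 s,
# 100 keys within 300 s, or 10000 keys within 60 s.
save 3600 1
save 300 100
save 60 10000

# File name and directory for the RDB file
dbfilename dump.rdb
dir /var/lib/redis
```

A `bgsave` can also be triggered manually at any time, independent of these thresholds.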
AOF: appends write operations to a designated log file in the order they were executed. Its advantage is higher data safety; the disadvantage is that every operation is recorded, including ones that later become redundant, so the file grows larger.
Like RDB, AOF rewriting uses the copy-on-write mechanism of a forked child. By default, AOF calls fsync once per second: the executed commands are appended to the AOF file, so even if the Redis server crashes, at most one second of data is lost. A different fsync policy can be configured, including fsync on every command. With the default policy, fsync runs in a background thread, so the main thread can keep serving normal user requests without being blocked by the I/O of writing the AOF file.
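The append-log idea can be sketched in Python. This is a toy model of command logging and replay, not Redis's actual AOF format; the class and function names are illustrative, and fsync-per-write here corresponds to the `always` policy rather than the `everysec` default:

```python
import os


class CommandLog:
    """Toy append-only command log: every write command is appended to a
    file and fsync'd, so the state can be rebuilt after a crash."""

    def __init__(self, path: str):
        self.path = path
        self.log = open(path, "a")

    def set(self, key: str, value: str) -> None:
        self.log.write(f"SET {key} {value}\n")
        self.log.flush()
        os.fsync(self.log.fileno())  # flush to disk on every write

    def close(self) -> None:
        self.log.close()


def replay(path: str) -> dict:
    """Rebuild the in-memory state by re-executing the logged commands."""
    state = {}
    with open(path) as f:
        for line in f:
            op, key, value = line.split(maxsplit=2)
            if op == "SET":
                state[key] = value.rstrip("\n")
    return state
```

Note that the log keeps every write, even superseded ones, which mirrors the AOF disadvantage mentioned above: replay re-executes all of them, and only the final value survives in memory.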
Advantages and disadvantages of the AOF mode:
AOF files are larger than the equivalent RDB file. Performance depends on the fsync policy used (fsync flushes modified in-memory data to the storage device); the default is appendfsync everysec, i.e. fsync once per second.
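AOF is enabled in redis.conf. A minimal configuration might look like the following fragment (directive names as in the standard redis.conf; the file name shown is the long-time default):

```conf
# Enable AOF persistence
appendonly yes
appendfilename "appendonly.aof"

# fsync policy: always (every write), everysec (default), or no (OS decides)
appendfsync everysec
```

The `always` policy trades throughput for maximum durability, while `no` leaves flushing entirely to the operating system.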
String is the most common and widely used data type in all programming languages, and it is also one of the most basic data types in Redis; all key names in Redis are themselves strings.
- Set and get a key:
# 127.0.0.1:6379> set key1 value1
OK
# 127.0.0.1:6379> get key1
"value1"
# 127.0.0.1:6379> TYPE key1
string
# 127.0.0.1:6379> SET name2 jack2 EX 3
OK
EX 3 sets an expiration time of 3 seconds.
- Get the content of a key:
# 127.0.0.1:6379> get key1
"value1"
- Delete a key:
# 127.0.0.1:6379> DEL key1
(integer) 1
- Set multiple keys in one batch:
# 127.0.0.1:6379> MSET key1 value1 key2 value2
OK
- Get multiple keys in one batch:
# 127.0.0.1:6379> MGET key1 key2
1) "value1"
2) "value2"
- Append data:
# 127.0.0.1:6379> APPEND key1 append
(integer) 12
# 127.0.0.1:6379> get key1
"value1append"
- Increment a numeric value:
# 127.0.0.1:6379> set num 10
OK
# 127.0.0.1:6379> INCR num
(integer) 11
# 127.0.0.1:6379> get num
"11"
- Decrement a numeric value:
# 127.0.0.1:6379> set num 10
OK
# 127.0.0.1:6379> DECR num
(integer) 9
# 127.0.0.1:6379> get num
"9"
- Return the length of a string key:
# 127.0.0.1:6379> STRLEN key1
(integer) 12
A list is a pipeline that can be read from and written to at both ends, with the head on the left and the tail on the right. A list can contain up to 2^32-1 elements, i.e. 4,294,967,295 elements.
- Create a list and insert data:
# 127.0.0.1:6379> LPUSH list1 jack tom jhon
(integer) 3
# 127.0.0.1:6379> TYPE list1
list
- Add data to the list:
# 127.0.0.1:6379> LPUSH list1 tom
(integer) 2
# 127.0.0.1:6379> RPUSH list1 jack
(integer) 3
- Get the list length:
# 127.0.0.1:6379> LLEN list1
(integer) 3
- Remove list data:
# 127.0.0.1:6379> RPOP list1  # the last element
"jack"
# 127.0.0.1:6379> LPOP list1  # the first element
"tom"
Set is an unordered collection of strings. Set members are unique, which means duplicate values cannot occur in a set.
- Generate set key:
# 127.0.0.1:6379> SADD set1 v1
(integer) 1
# 127.0.0.1:6379> SADD set2 v2 v4
(integer) 2
# 127.0.0.1:6379> TYPE set1
set
# 127.0.0.1:6379> TYPE set2
set
- Append values:
Adding a value that already exists in the set has no effect.
# 127.0.0.1:6379> SADD set1 v2 v3 v4
(integer) 3
# 127.0.0.1:6379> SADD set1 v2  # not added; already exists
(integer) 0
- View all data in the collection:
# 127.0.0.1:6379> SMEMBERS set1
1) "v4"
2) "v1"
3) "v3"
4) "v2"
# 127.0.0.1:6379> SMEMBERS set2
1) "v4"
2) "v2"
- Get the difference set of the set:
Difference: the elements that belong to A but not to B form the difference of A and B.
# 127.0.0.1:6379> SDIFF set1 set2
1) "v1"
2) "v3"
- Get the intersection of collections:
Intersection: the elements that belong to both A and B form the intersection of A and B.
# 127.0.0.1:6379> SINTER set1 set2
1) "v4"
2) "v2"
- Get the union of sets:
Union: the elements that belong to A or B form the union of A and B.
# 127.0.0.1:6379> SUNION set1 set2
1) "v2"
2) "v4"
3) "v1"
4) "v3"
A Redis sorted set is also a collection of string elements with no duplicate members allowed. The difference is that each element is associated with a score of type double (double-precision floating point); Redis uses the score to order the members of the set from smallest to largest. Members of a sorted set are unique, but scores may repeat. Because member lookup is backed by a hash table, the complexity of adding, deleting and finding a member is O(1). The maximum number of members in a set is 2^32 - 1 (4,294,967,295, i.e. each set can hold more than 4 billion members).
- Generate ordered sets:
# 127.0.0.1:6379> ZADD zset1 1 v1
(integer) 1
# 127.0.0.1:6379> ZADD zset1 2 v2
(integer) 1
# 127.0.0.1:6379> ZADD zset1 2 v3
(integer) 1
# 127.0.0.1:6379> ZADD zset1 3 v4
(integer) 1
# 127.0.0.1:6379> TYPE zset1
zset
# 127.0.0.1:6379> TYPE zset2
zset
A leaderboard example: ZREVRANGE with withscores displays all keys and scores in the specified set, ordered by score from high to low:

# 127.0.0.1:6379> ZADD paihangbang 10 key1 20 key2 30 key3
(integer) 3
# 127.0.0.1:6379> ZREVRANGE paihangbang 0 -1 withscores
1) "key3"
2) "30"
3) "key2"
4) "20"
5) "key1"
6) "10"
- Add multiple values in batches:
# 127.0.0.1:6379> ZADD zset2 1 v1 2 v2 4 v3 5 v5
(integer) 4
- Get the number of members in the set:
# 127.0.0.1:6379> ZCARD zset1
(integer) 4
# 127.0.0.1:6379> ZCARD zset2
(integer) 4
- Return values by index range:
# 127.0.0.1:6379> ZRANGE zset1 1 3
1) "v2"
2) "v3"
3) "v4"
# 127.0.0.1:6379> ZRANGE zset1 0 2
1) "v1"
2) "v2"
3) "v3"
# 127.0.0.1:6379> ZRANGE zset1 2 2
1) "v3"
- Return the index (rank) of a member:
# 127.0.0.1:6379> ZRANK zset1 v2
(integer) 1
# 127.0.0.1:6379> ZRANK zset1 v3
(integer) 2
Hash is a mapping table between string fields and string values; hashes are especially suitable for storing objects. Each hash in Redis can store 2^32-1 field-value pairs (more than 4 billion).
- Generate hash key:
# 127.0.0.1:6379> HSET hset1 name tom age 18
(integer) 2
# 127.0.0.1:6379> TYPE hset1
hash
- Get the hash key field value:
# 127.0.0.1:6379> HGET hset1 name
"tom"
# 127.0.0.1:6379> HGET hset1 age
"18"
- Delete a hash key field:
# 127.0.0.1:6379> HDEL hset1 age
(integer) 1
- Get all fields in a hash:
# 127.0.0.1:6379> HSET hset1 name tom age 19
(integer) 1
# 127.0.0.1:6379> HKEYS hset1
1) "name"
2) "age"