Redis Note 8: Parse the configuration file redis.conf

Keywords: Redis Database socket Linux

A professional DBA usually starts an instance with many parameters so that the system runs stably, which for Redis means adding an argument after the executable at startup to specify the path of the configuration file, just as MySQL reads its startup options from a file. After compiling the source code there is a redis.conf file in the Redis directory; this is the Redis configuration file. We can start Redis with a configuration file using the following command:

[root@localhost ~]# ./redis-server /opt/redis/redis.conf

Redis configuration is not case sensitive about units: 1gb, 1Gb and 1GB are all the same. Also note that Redis only supports bytes as units, not bits.

# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes

Redis can include external configuration files, much like the include directive in C/C++. When the same option appears in multiple configuration files, Redis always uses the last one loaded, so if you do not want the included settings to be overridden, include them at the end of the main configuration file.

include /path/to/other.conf

==================== Redis Configuration - General ====================

# Redis does not run as a daemon by default. Change this option to yes to enable daemon mode. Note that when running as a daemon, Redis writes its process ID to the file /var/run/redis.pid
daemonize yes
#
# When Redis runs as a daemon, it writes its pid to /var/run/redis.pid by default. The location can be changed with pidfile
pidfile /var/run/redis.pid
#
# Specify the port Redis listens on; the default is 6379. Why 6379? Because 6379 spells MERZ on a phone keypad, and MERZ comes from the name of the Italian showgirl Alessia Merz
port 6379
#
# In high-concurrency environments you need a high backlog value to avoid slow-client connection problems.
# Note that the Linux kernel silently truncates this value to the value of /proc/sys/net/core/somaxconn,
# so make sure to raise both somaxconn and tcp_max_syn_backlog to get the desired effect.
tcp-backlog 511
#
# By default Redis listens on all available network interfaces of the server. You can make it listen on one or more specific interfaces with the "bind" directive, followed by one or more IP addresses
bind 192.168.1.100 10.0.0.1
bind 127.0.0.1
#
# If Redis does not listen on a TCP port, how can it communicate with the outside world? Redis also supports receiving requests through a Unix socket.
# The path of the Unix socket file is specified with the unixsocket option, and the permissions of the socket file with unixsocketperm.
# There is no default value, so Redis does not listen on a Unix socket unless a path is specified
unixsocket /tmp/redis.sock
unixsocketperm 755
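# As a quick sanity check (a minimal sketch; the socket path simply reuses the example above), redis-cli can connect through the Unix socket with its -s option instead of a TCP port:
# ./redis-cli -s /tmp/redis.sock ping
# PONG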
#
# When a client has not sent any request to the server for a while, the server has the right to close the connection. The idle timeout is set with "timeout"; 0 means never close.
# Set the timeout of client connections in seconds; when the client issues no command during this period, the connection is closed.
# Default value: 0, meaning the feature is disabled and connections are never closed
timeout 0
#
# TCP keepalive.
#
# If non-zero, use the SO_KEEPALIVE option to send TCP ACKs to idle clients. This is useful for two reasons:
#
# 1) Detect dead peers that have stopped responding
# 2) Let intermediate network devices know that the connection is still alive
#
# On Linux, this value (in seconds) is the interval at which the ACKs are sent.
# Note that closing a dead connection takes up to twice this value.
# On other kernels the interval depends on the kernel configuration.
#
# The TCP keepalive policy is set with the tcp-keepalive option, in seconds.
# If set to 60, the server sends an ACK to idle clients every 60 seconds to check whether they are still alive.
# Unresponsive clients have their connections closed, so closing a dead connection takes up to 120 seconds. If set to 0, no keepalive probing is performed.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0
#
# Specify logging level 
# Redis supports four levels: debug, verbose, notice and warning. The value set below is notice.
# debug logs a lot of information, useful for development and testing
# verbose logs many useful but less detailed messages than debug
# notice is moderately verbose, appropriate for production environments
# warning logs only very important or critical messages
loglevel notice
#
# Specify the log file name. You can also use an empty string to force Redis to log to standard output.
# The default is standard output. Note that if Redis runs in daemon mode with the log set to standard output, logs are sent to /dev/null.
logfile ""
#
# To use the system logger, just set "syslog-enabled" to "yes".
# Then set some other syslog parameters as needed.
syslog-enabled no
#
# Specify the linux system log syslog identifier, which is invalid if "syslog-enabled=no"
syslog-ident redis
#
# Specify the linux syslog facility, which must be USER or one of LOCAL0-LOCAL7
syslog-facility local0 
#
# Set the number of databases. The default is 16, and the default database is DB 0. If you have no special requirement, setting databases 1 (a single database) is fine.
# Switch databases with SELECT <dbid>
# dbid is between 0 and 'databases'-1
databases 16 
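# As a small illustration, databases are selected per connection with SELECT; a redis-cli session might look like this:
# 127.0.0.1:6379> SELECT 1
# OK
# 127.0.0.1:6379[1]> SELECT 0
# OK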

==================== Redis Configuration - Snapshotting ====================

#
# Store the database on disk:
#
#   save <seconds> <changes>
#  
#   The database is saved to disk if the specified number of seconds has elapsed and at least the specified number of changes occurred.
#
#   The following example will perform the operation of writing data to disk:
#   After 900 seconds (15 minutes), with at least one key change
#   After 300 seconds (5 minutes), and at least 10 key changes
#   After 60 seconds, there are at least 10,000 key changes
#
#   Note: If you don't need to write to disk at all, comment out all the "save" lines to get a pure in-memory server.
# To disable the RDB persistence policy you can also pass an empty string argument, i.e. save ""
save 900 1
save 300 10
save 60 10000 
#
# If the RDB snapshot feature is enabled and the latest background save failed, Redis by default stops accepting write requests.
# The advantage is that users will clearly know that the data in memory and the data on disk are inconsistent.
# If Redis kept accepting write requests despite this inconsistency, it could lead to disastrous consequences.
# Once the next RDB save succeeds, Redis automatically resumes accepting writes.
# Of course, if you do not care about this inconsistency, or have other means of detecting and handling it, you can turn this off so that Redis keeps accepting new writes even when snapshot saving fails.
stop-writes-on-bgsave-error yes
#
#
#
# Whether to compress string objects with LZF when dumping the .rdb file.
# The default setting is yes.
# If you want to save CPU you can set it to "no", but the dump file will be larger if your data is compressible.
rdbcompression yes
#
# Since RDB version 5 a CRC64 checksum is placed at the end of the file. This makes the format more resistant to corruption, but
# there is a performance cost (around 10%) when saving and loading RDB files, so it can be disabled for maximum performance.
# RDB files created with checksumming disabled have a checksum of zero, which tells the loading code to skip the check.
rdbchecksum yes
#
# The file name of the RDB database file
dbfilename dump.rdb
#
# working directory
# The database will be written inside this directory, with the file name specified above by "dbfilename".
# The append-only file (AOF) will also be created inside this directory.
# Note that you must specify a directory here, not a file name.
dir ./ 

==================== Redis Configuration - Replication ====================

# Master-slave replication. Use slaveof to make a Redis instance a copy of another Redis server.
# Note that the configuration is local to the slave: the slave can have its own database file, bind to a different IP and listen on a different port.
# When this machine is a slave, set the IP and port of the master here; Redis will automatically synchronize data from the master at startup.
slaveof <masterip> <masterport> 
#
# If the master has a password set (with its "requirepass" option, described below), the slave must authenticate before replication starts, otherwise its synchronization requests will be refused.
# When this machine is a slave, set the password used to connect to the master here.
masterauth <master-password> 
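# A concrete slave configuration might look like the following sketch; the master address is hypothetical and the password reuses the "requirepass" example further down in this file:
# slaveof 192.168.1.100 6379
# masterauth foobared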
#
# When a slave loses its connection to the master, or while replication is still in progress, the slave can behave in two ways:
# 1) If slave-serve-stale-data is "yes" (the default), the slave keeps replying to client requests, possibly with stale data, or with empty values if the data has not been obtained yet.
# 2) If slave-serve-stale-data is "no", the slave replies with the error "SYNC with master in progress" to all requests except INFO and SLAVEOF.
slave-serve-stale-data yes
#
# You can configure whether a slave instance accepts write operations. A writable slave can be useful for storing temporary data (since data written to a slave is easily deleted after resynchronization with the master),
# but it may cause problems if clients write to it because of a misconfiguration.
# Since Redis 2.6 all slaves are read-only by default.
#
# Note: read-only slaves are not designed to be exposed to untrusted clients on the Internet. They are just a protection layer against misuse of the instance.
# A read-only slave still accepts all administrative commands such as CONFIG, DEBUG and so on. To limit this you can use 'rename-command'
# to hide all administrative and dangerous commands and improve the security of read-only slaves.
slave-read-only yes
#
# Replication sync strategy: disk or socket.
# New slaves, and reconnecting slaves that cannot continue with a partial resync, need a full synchronization: an RDB file is dumped and then transferred from the master to the slaves. This can happen in two ways:
# 1) Disk-backed: the master forks a child process that writes the RDB file to disk, and the parent process then transfers it incrementally to the slaves.
# 2) Diskless (socket-based): the master forks a child process that writes the RDB directly to the slave sockets, without touching the disk.
# With disk-backed replication, once the RDB file has been created it can serve more slaves at the same time. With diskless replication, once a transfer has started, newly arriving slaves have to queue and wait for the current transfer to finish before the next one starts.
# With diskless replication the master waits repl-diskless-sync-delay seconds before starting the transfer, so that if more slaves arrive during that window they can all be served in parallel; slaves arriving after the transfer starts have to wait for the next one.
# Diskless replication is preferable when disks are slow and the network is fast (disk-backed by default).
repl-diskless-sync no
#
# When set to 0, transmission starts ASAP
repl-diskless-sync-delay 5
#
# Slaves send PING requests to the master at a configurable interval.
# The interval is set with repl-ping-slave-period.
# Default: 10 seconds
repl-ping-slave-period 10
#
# The following option sets the replication timeout for:
#
# 1) Bulk data transfer during SYNC, from the slave's point of view
# 2) Master timeout from the slave's point of view (data, pings)
# 3) Slave timeout from the master's point of view (REPLCONF ACK pings)
#
# Make sure this value is greater than the specified repl-ping-slave-period, otherwise a timeout will be detected whenever traffic between master and slave is low.
repl-timeout 60
#
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you choose "yes", Redis will use fewer TCP packets and less bandwidth to send data to slaves.
# But this adds a delay before data appears on the slave, up to 40 milliseconds with the default Linux kernel configuration.
# If you choose "no", the delay of data transfer to slaves is reduced at the cost of more bandwidth. By default we optimize for low latency,
# but in very high traffic conditions, or when master and slaves are many hops away, turning this to "yes" may be a good choice.
repl-disable-tcp-nodelay no
#
# Set the replication backlog size.
# The backlog is a buffer that accumulates data written to the master while slaves are disconnected for some time,
# so that when a slave reconnects a full sync is often not needed: a partial resync is enough, transferring only the portion of data the slave missed while disconnected.
# The bigger the replication backlog, the longer a slave can be disconnected and still be able to resynchronize incrementally.
# The backlog is allocated only once, and only when there is at least one slave connected.
repl-backlog-size 1mb
#
# After the master has had no connected slaves for some time, the backlog is freed. The following option configures how many seconds after the last slave disconnects the backlog buffer is released.
# 0 means never releasing backlog
repl-backlog-ttl 3600
#
# The slave priority is an integer published by Redis in its INFO output. If the master is no longer working correctly,
# Sentinel uses it to select a slave to promote to master; slaves with a lower priority number are considered better candidates.
# For example, given three slaves with priorities 10, 100 and 25, Sentinel will pick the one with the lowest priority, 10.
# A priority of 0 is special: it marks the slave as unable to become master, so a slave with priority 0 will never be promoted by Sentinel.
# The default priority is 100
slave-priority 100
#
# The master can be configured to stop accepting writes if fewer than N slaves are connected with a lag of at most M seconds.
# The N slaves must be in the "online" state.
# The lag, in seconds, must be less than or equal to the specified value and is measured from the last ping received from the slave (normally sent every second).
# For example, to require at least 3 slaves with a lag of no more than 10 seconds, use the directives below.
# Setting either value to 0 disables the feature.
# By default min-slaves-to-write is 0 (feature disabled) and min-slaves-max-lag is 10.
min-slaves-to-write 3
min-slaves-max-lag 10

==================== Redis Configuration - Security ====================

# Require clients to authenticate with a password (AUTH) before processing any other command.
# This is useful in environments where you do not trust other clients that can access the Redis server.
# For backward compatibility this option is commented out by default, and most people do not need authentication (e.g. they run Redis on their own servers).
# Warning: since Redis is very fast, an outside attacker can try up to 150k passwords per second against a good box. That means you need
# a very strong password, otherwise it will be very easy to break.
requirepass foobared
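# Once a password is set, clients must authenticate before issuing other commands. A small redis-cli illustration, assuming the example password "foobared" above (error text as printed by 3.0-era servers):
# 127.0.0.1:6379> GET mykey
# (error) NOAUTH Authentication required.
# 127.0.0.1:6379> AUTH foobared
# OK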
#
# Command renaming
# In a shared environment you can change the names of dangerous commands. For example, CONFIG can be renamed to something hard to guess,
# so that it is still available to internal tools but not to ordinary clients.
# For example:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# Note: renaming commands that are logged to the AOF file or transmitted to slaves may cause problems.
# You can also completely disable a command by renaming it as an empty string
rename-command CONFIG ""

==================== Redis Configuration - Limitations====================

# Set the maximum number of simultaneously connected clients. The default limit is 10,000 clients. If the Redis server cannot raise the process file-descriptor limit to that value,
# the maximum number of clients is set to the current file limit minus 32 (because Redis reserves a few file descriptors for internal use).
# Once the limit is reached, Redis refuses all new connections with the error 'max number of clients reached'.
maxclients 10000
#
# Do not use more memory than the specified limit. Once the limit is reached, Redis removes keys according to the selected eviction policy (see maxmemory-policy).
# If Redis cannot remove keys according to the policy, or if the policy is set to "noeviction", Redis replies with an
# out-of-memory error to commands that would use more memory, such as SET, LPUSH, etc., but keeps answering read-only commands like GET.
# This option is usually useful when using Redis as an LRU cache, or to set a hard memory limit for an instance (with the "noeviction" policy).
# Warning: when slaves are attached to an instance with maxmemory set, the output buffers needed to feed the slaves
# are not counted in the used memory. That way an evicted key will not trigger a loop in which a network problem / resync event
# fills the slave output buffers with DEL commands for the evicted keys, which in turn triggers the eviction of more keys,
# and so on until the database is completely emptied.
#
# In short, if you have slaves attached it is suggested to set a somewhat lower maxmemory limit, so that there is free
# memory on the system for the slave output buffers (not needed if the policy is "noeviction").
maxmemory <bytes>
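# Since <bytes> is only a placeholder, a real setting uses a byte count or one of the units listed at the top of this file, for example (an illustrative value, not a recommendation):
# maxmemory 2gb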
#
# Maxmemory policy: how Redis selects keys to remove when the memory limit is reached. You can choose among the following behaviors:
#
# volatile-lru -> remove keys with an expire set, using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key among those with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (smallest TTL)
# noeviction -> don't evict anything, just return an error on write operations
# 
# Note: For all policies, if Redis fails to find the appropriate key to delete, an error will be returned during the write operation.
#       Commands so far involved: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default values are as follows:
maxmemory-policy volatile-lru
#
# The LRU and minimal-TTL algorithms are not exact but approximated (to save memory), so you can tune the sample size used for the check.
# For instance, by default Redis checks three keys and picks the one that was used least recently. You can change the sample size with the directive below.
maxmemory-samples 3
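# Both maxmemory and maxmemory-policy can also be changed at runtime with CONFIG SET; a small redis-cli sketch with illustrative values:
# ./redis-cli config set maxmemory 100mb
# OK
# ./redis-cli config set maxmemory-policy allkeys-lru
# OK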

==================== Redis Configuration - Append-Only Mode ====================

# By default Redis dumps data to disk asynchronously. This mode is good enough for many applications, but if the Redis
# process crashes or there is a power outage, the writes of the last few minutes can be lost (depending on the configured save points).
#
# AOF is an alternative, more reliable persistence mode. With the default fsync policy (see the configuration below),
# Redis loses at most one second of writes in a dramatic event such as a server power outage, or a single write if the Redis process itself crashes while the operating system keeps running normally.
#
# AOF and RDB persistence can be started at the same time without any problems.
# If AOF is turned on, then Redis will load the AOF file at startup, which can better guarantee the reliability of the data.
# See http://redis.io/topics/persistence for more information.
appendonly no
#
# Added file name (default: "appendonly.aof")
appendfilename "appendonly.aof"
#
# The fsync() system call tells the operating system to actually write the data to disk instead of waiting for more data in the output buffer.
# Some operating systems really flush the data to disk at once; others just try to do so as soon as possible.
#
# Redis supports three different modes:
#
# no: never fsync, just let the operating system flush the data when it wants. Fastest.
# always: fsync after every write to the append-only file. Slow, but safest.
# everysec: fsync once per second. A good compromise.
#
# The default "everysec" usually strikes a good balance between speed and data security. According to your understanding
# Decide that if you can relax the configuration to "no" for better performance (but if you can tolerate some data loss, consider using it
# The default snapshot persistence mode), or vice versa, is slower but safer than everysec.
#
# See the following article for more details
# http://antirez.com/post/redis-persistence-demystified.html 
# 
# If you're not sure, use "everysec"
appendfsync everysec
#
# If the AOF fsync policy is set to "always" or "everysec" and a background saving process (a background save or an AOF
# rewrite) performs a lot of disk I/O, on some Linux configurations Redis may block for too long on the fsync() call.
# Note that there is currently no fix for this; even performing fsync() in a different thread will block our synchronous write(2) call.
#
# To mitigate this problem, the following option prevents fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child process is saving, Redis is effectively in an "unsynchronized" state.
# In practice it means that in the worst case you may lose up to 30 seconds of log data (with the default Linux settings).
#
# If you have latency problems, set this to "yes"; otherwise leave it as "no", which is the safest choice from the point of view of durability.
no-appendfsync-on-rewrite no
#
# Automatic rewrite of the append-only file.
# Redis can automatically rewrite the AOF log file with BGREWRITEAOF when it grows by a given percentage.
#
# How it works: Redis remembers the size of the AOF file after the latest rewrite (if no rewrite has happened since the restart, the size of the AOF at startup is used).
#
# This base size is compared with the current size; if the current size exceeds the base by the specified percentage, a rewrite is triggered. You also need to specify
# a minimum size for the AOF to be rewritten, which avoids rewriting the file when the percentage is reached but the file is still small. With the values below, for example, a rewrite is triggered once the AOF is at least 64mb and has doubled since the last rewrite.
#
# Setting the percentage to 0 disables the automatic AOF rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
#
# The AOF file may be truncated at the end (for example after a system crash, especially with the ext4 filesystem mounted without the data=ordered option). This only happens when the OS dies; it does not happen if Redis itself crashes while the OS keeps working correctly.
# When Redis restarts and loads the AOF into memory, it can either abort with an error or load as much data as possible.
# If aof-load-truncated is yes (the default), a truncated AOF is loaded and Redis logs a message to notify the user. If no, the server aborts with an error and the user must repair the AOF file with redis-check-aof before restarting.
aof-load-truncated yes
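# If you choose "no" and the server refuses to start because of a truncated AOF, the file can be repaired from the shell before restarting (a minimal sketch; the file name follows the appendfilename setting above):
# ./redis-check-aof --fix appendonly.aof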

==================== Redis configuration - LUA script====================

# Maximum execution time of a Lua script in milliseconds. If this limit is reached, Redis logs that the script is still running and starts replying with an error.
# When a script exceeds the maximum time limit, only SCRIPT KILL and SHUTDOWN NOSAVE are available. The first can stop a script that has not yet executed any write command; if the script has already performed writes, the only way to stop it is SHUTDOWN NOSAVE.
# Set it to zero or a negative value for an unlimited execution time.
lua-time-limit 5000
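# The two commands mentioned above are issued from another connection; a short redis-cli sketch (SCRIPT KILL only succeeds if the script has not performed any write yet):
# 127.0.0.1:6379> SCRIPT KILL
# OK
# 127.0.0.1:6379> SHUTDOWN NOSAVE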

==================== Redis Configuration - Cluster====================

WARNING: Redis Cluster is not yet considered a stable feature in the 3.0.x versions.

# Enable cluster support
cluster-enabled yes
#
# Each cluster node has a cluster configuration file
cluster-config-file nodes-6379.conf
#
# Cluster node timeout, in milliseconds
cluster-node-timeout 15000
#
# Controls whether a slave node will attempt a failover of its master.
# If set to 0, a slave will always try to fail over.
# If set to a positive number, a slave whose link with the master has been down for longer than (factor * node timeout) will no longer attempt a failover.
cluster-slave-validity-factor 10
#
# Minimum number of working slaves a master must retain before one of its slaves is allowed to migrate to another master (migration barrier)
cluster-migration-barrier 1
#
# By default (yes), the cluster stops accepting queries when part of the key space is not covered (for example because a node is unreachable or down).
# Set it to no to keep serving queries for the portion of the key space that is still covered.
cluster-require-full-coverage yes
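# After starting a node with cluster-enabled yes, the cluster state can be checked from redis-cli (a minimal check; output abbreviated):
# ./redis-cli -p 6379 cluster info
# cluster_enabled:1
# cluster_state:ok
# ...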

==================== Redis Configuration - Slow Logging====================

# The Redis slow log records queries that exceed a specified execution time. The execution time does not include I/O such as talking with the client or
# sending the reply; it measures only the time actually needed to execute the command (the phase during which the thread is blocked and cannot serve other requests).
# 
# You can configure the slow log with two parameters: one tells Redis the execution time, in microseconds, above which a command is logged,
# and the other is the length of the slow log. When a new command is logged, the oldest one is removed from the queue.
#
# The time unit below is microseconds, so 1000000 is one second. Note that a negative value disables the slow log, while a value
# of zero forces the logging of every command.
slowlog-log-slower-than 10000
#
# There is no limit to this length, except for available memory. You can reclaim the memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
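# The slow log is inspected and cleared with the SLOWLOG command; for example, in a redis-cli session:
# 127.0.0.1:6379> SLOWLOG GET 10
# 127.0.0.1:6379> SLOWLOG LEN
# (integer) 0
# 127.0.0.1:6379> SLOWLOG RESET
# OK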

==================== Redis Configuration - Delay Monitoring====================

# Latency monitoring is disabled by default (threshold 0), since it is mostly not needed; the threshold is expressed in milliseconds.
latency-monitor-threshold 0

==================== Redis configuration - event notification====================

# Redis notifies Pub/Sub clients about key space events
# This feature document is located at http://redis.io/topics/keyspace-events
#
# For example, if keyspace event notification is enabled and a client performs a DEL operation on key "foo" stored in database 0,
# two messages will be published via Pub/Sub:
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# You can select the type of event Redis wants to notify in the table below. Event types are identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix
#  E     Keyevent events, published with __keyevent@<db>__ prefix
#  g     Generic commands (non type-specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (generated every time a key expires)
#  e     Evicted events (generated when a key is evicted due to maxmemory)
#  A     Alias for "g$lshzxe", so the string "AKE" means all events
#
# notify-keyspace-events takes as argument a string composed of zero or more of the characters above. The empty string disables notifications.
#
# Example: Enable List and General Event Notification:
# notify-keyspace-events Elg
#
# Example 2: to get notifications of expired keys, subscribe to the channel named __keyevent@0__:expired and configure as follows:
# notify-keyspace-events Ex
#
# Notifications are disabled by default because most users do not need this feature and it has some performance overhead.
# Note that if you do not specify at least one of K or E, no event will be sent.
notify-keyspace-events ""
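# As a quick test of Example 2 above (a sketch: the first command enables expired-key events at runtime, the second subscribes to the channel for database 0):
# ./redis-cli config set notify-keyspace-events Ex
# OK
# ./redis-cli psubscribe '__keyevent@0__:expired'
# Reading messages... (press Ctrl-C to quit)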

==================== Redis Configuration - Advanced Configuration====================

# Hashes are encoded in a memory-efficient compact structure when they stay small, up to the limits below; above them a real hash table is used, which needs more memory.
# A Redis hash is a HashMap inside a value. When the map has few members it is stored in a compact, roughly linear format,
# which saves a lot of the memory overhead of pointers. Once either of the two limits below is exceeded, the value is automatically converted into a real HashMap.
# hash-max-zipmap-entries sets the maximum number of members for which the compact format is used.
hash-max-zipmap-entries 512
#
# hash-max-zipmap-value sets the maximum length, in bytes, of a member value for the compact format to be used.
hash-max-zipmap-value 64
#
# Similar to hash-max-zipmap-entries: lists with few elements are encoded in a special, much more space-efficient way.
# list-max-ziplist-entries is the maximum number of list nodes for which the compact, pointer-free format is used.
list-max-ziplist-entries 512
#
# list-max-ziplist-value is the maximum size, in bytes, of a list node value for the compact format to be used.
list-max-ziplist-value 64
#
# Sets have a special encoding in just one case: when the set is composed entirely of strings that are base-10 integers within the range of 64-bit signed integers.
# The following setting limits the maximum number of elements for which this special, memory-saving encoding is used.
set-max-intset-entries 512 
#
# Similarly, sorted sets are specially encoded to save a lot of space.
# This encoding is only used when the length and element size of the sorted set stay below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64 
#
# For an introduction to HyperLogLog see: http://www.redis.io/topics/data-types-intro#hyperloglogs
# hll-sparse-max-bytes limits the size of the HyperLogLog sparse representation; values greater than 16000 are pointless, because at that size the dense representation is more memory efficient.
# The recommended value is about 3000.
hll-sparse-max-bytes 3000
#
# Active rehashing uses 1 millisecond out of every 100 CPU milliseconds to rehash the main Redis hash table (the top-level key-to-value mapping).
# The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run against a table that is rehashing, the more rehashing steps are performed;
# so if the server is idle the rehashing never completes and the hash table keeps using some extra memory.
# The default is to use this millisecond 10 times every second in order to actively rehash the main dictionaries and free memory as soon as possible.
# Recommendation:
# If you have hard latency requirements and occasional 2-millisecond delays on requests are not acceptable, use "activerehashing no".
# Use "activerehashing yes" if you don't have such strict latency requirements and want to free memory as soon as possible.
activerehashing yes 
#
# The client output buffer limits can be used to force the disconnection of clients that, for some reason, are not reading data from the server fast enough
# (a common reason is that a Pub/Sub client cannot consume messages as fast as the publisher produces them).
#
# Different restrictions can be set for three different clients:
# normal -> normal clients
# slave  -> slave and MONITOR clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
# A client is disconnected immediately once the hard limit is reached, or if the soft limit is exceeded continuously for the specified number of seconds.
# For example, with a hard limit of 32 megabytes and a soft limit of 16 megabytes / 10 seconds,
# the client is disconnected immediately if the output buffer reaches 32 megabytes, or if it stays above 16 megabytes for 10 seconds in a row.
#
# By default normal clients are not limited, because they only receive data when they ask for it (in a request/reply fashion); only asynchronous clients
# may create a scenario where data is produced faster than it can be read.
#
# The pubsub and slave clients have a default limit instead, since subscribers and slaves receive data in a push fashion.
#
# Set both hard and soft limits to 0 to disable this feature
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
#
# Redis calls internal functions to perform many background tasks, such as closing client timeout connections, clearing expired keys that have not been requested, and so on.
#
# Not all tasks are performed at the same frequency, but Redis performs the checking task according to the specified "hz" value.
#
# By default, "hz" is set to 10. Increasing this value will use more CPU s when Redis is idle, but at the same time when there are multiple key s
# At the same time, expiration will make Redis's response more sensitive, and timeouts can be handled more accurately.
#
# The range is between 1 and 500, but a value of more than 100 is usually not a good idea.
# Most users should use the default value of 10, and it is only necessary to increase it to 100 at very low latency requirements.
hz 10
#
# When a child process rewrites the AOF file, if the following option is enabled the file is fsynced every 32 MB of data generated.
# This is useful to commit the file to disk incrementally and avoid large latency spikes.
aof-rewrite-incremental-fsync yes

(end)

Reference resources:
http://blog.csdn.net/thinkercode/article/details/46580871
http://lizhenliang.blog.51cto.com/7876557/1656305
