Principles and solutions of Redis cache penetration, cache breakdown and cache avalanche

Keywords: Database Redis Cache

1, Preface:

In the era of big data, highly concurrent network requests impose a huge I/O load on the database. To relieve that pressure, caching technology is essential, and Redis is one of the most common server-side caching services. Although caching is very easy to use, it brings a variety of problems of its own. Here we analyze and solve the three most common ones; I hope this helps you.

  • Cache penetration: a request asks for a key whose data exists neither in the cache nor in the database, so every such request falls through to the database and multiplies its load

  • Cache breakdown: the moment a hot key expires in Redis, a large number of users request that same cached data at once, so all of those requests hit the database and multiply its load. This concerns a single key

  • Cache avalanche: the cache server goes down, or a large set of cached keys expires within a short period, so all requests go to the database and multiply its load. This concerns many keys

2, Cache penetration solution

A common defense is a Bloom filter, which intercepts requests for keys that cannot exist. A second solution: when the database query comes back empty, cache the empty result anyway, so subsequent requests for that key no longer penetrate to the database.

 

<?php
class getPrizeList {
    /**
     * Redis instance
     * @var \Redis
     */
    private $redis;
 
    /**
     * @var string
     */
    private $redis_key = '|prize_list';
 
    /**
     * Expiration time
     * @var int
     */
    private $expire = 30;
 
    /**
     * getPrizeList constructor.
     * @param $redis
     */
    public function __construct($redis)
    {
        $this->redis = $redis;
    }
 
    /**
     * @return array
     */
    public function fetch()
    {
        $result = $this->redis->get($this->redis_key);
        if ($result === false) { //phpredis returns false when the key is missing
            //Database query should be performed here
            //If the query result is empty, cache the empty array as well,
            //so that repeated misses no longer reach the database
            $result = [];
            $this->redis->set($this->redis_key, json_encode($result), $this->expire);
        } else {
            $result = json_decode($result, true);
        }

        return $result;
    }
}
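The Bloom-filter approach mentioned above can be sketched as follows. This is a hypothetical, minimal class that keeps the bit array in plain PHP to illustrate the principle; in production the bits would typically live in Redis (via SETBIT/GETBIT) so that all web workers share one filter. The sizes and the salted-crc32 hashing are illustrative choices, not part of the original article.

```php
<?php
// Minimal Bloom-filter sketch (hypothetical class, plain PHP bit array).
class BloomFilter
{
    private $bits;
    private $size;
    private $hashes;

    public function __construct($size = 65536, $hashes = 3)
    {
        $this->size   = $size;
        $this->hashes = $hashes;
        $this->bits   = array_fill(0, $size, 0);
    }

    /** Derive $hashes bit positions from one key using salted crc32. */
    private function positions($key)
    {
        $positions = [];
        for ($i = 0; $i < $this->hashes; $i++) {
            $positions[] = (crc32($i . ':' . $key) & 0x7fffffff) % $this->size;
        }
        return $positions;
    }

    public function add($key)
    {
        foreach ($this->positions($key) as $p) {
            $this->bits[$p] = 1;
        }
    }

    /** false => key is definitely absent; true => key is possibly present. */
    public function mayContain($key)
    {
        foreach ($this->positions($key) as $p) {
            if (!$this->bits[$p]) {
                return false;
            }
        }
        return true;
    }
}

// Load all valid IDs at startup; reject unknown IDs before touching Redis/DB.
$filter = new BloomFilter();
$filter->add('prize:1');
$filter->add('prize:2');
var_dump($filter->mayContain('prize:1'));   // true
var_dump($filter->mayContain('prize:999')); // false, barring a hash collision
```

Because a Bloom filter can return false positives but never false negatives, a `false` answer lets you reject the request immediately without querying either the cache or the database.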

3, Cache breakdown solution

Using a mutex key means that when a hot key expires, only one of the concurrent requests is allowed to query the database; the other requests wait until that first request has successfully rebuilt the cache.

<?php
class getPrizeList {
    /**
     * Redis instance
     * @var \Redis
     */
    private $redis;
 
    /**
     * @var string
     */
    private $redis_key = '|prize_list';
 
    /**
     * @var string
     */
    private $setnx_key = '|prize_list_setnx';
 
    /**
     * Expiration time
     * @var int
     */
    private $expire = 30;
 
    /**
     * getPrizeList constructor.
     * @param $redis
     */
    public function __construct($redis)
    {
        $this->redis = $redis;
    }
 
    /**
     * @return array|bool|string
     */
    public function fetch()
    {
        $result = $this->redis->get($this->redis_key);
        if ($result === false) {
            //NX + EX: only one request acquires the mutex, and the TTL
            //prevents a crashed worker from holding it forever
            if ($this->redis->set($this->setnx_key, 1, ['nx', 'ex' => $this->expire])) {
                //Database query should be performed here
                //$result = database query result;
                $this->redis->set($this->redis_key, $result, $this->expire);
                $this->redis->del($this->setnx_key); //Release the mutex
            } else {
                //Other requests retry every 10 milliseconds
                usleep(10000);
                return $this->fetch();
            }
        }

        return $result;
    }
}

4, Cache avalanche solution

  • This situation arises when many keys expire at the same time and the resulting misses all hit the database. One method is to add a random offset to each key's expiration time, spreading the expirations out and reducing the chance that many caches expire together

  • Another method is locking and queuing, much like the cache-breakdown solution above; but when the number of requests is very large, e.g. 4999 requests queued behind a 5000th, this treats the symptom rather than the cause. The user experience is poor, and it gets even more complicated in a distributed environment, so it is rarely used in high-concurrency scenarios

  • The best solution is a cache tag: check whether the tag has expired, and if so, let that request refresh the data from the database, while the cached data itself is given a longer expiration time than the tag. That way, while one request is rebuilding the cache, all other requests keep reading the previously cached data

<?php
class getPrizeList {
    /**
     * Redis instance
     * @var \Redis
     */
    private $redis;
 
    /**
     * @var string
     */
    private $redis_key = '|prize_list';
 
    /**
     * Cache tag key
     * @var string
     */
    private $cash_key = '|prize_list_cash';
 
    /**
     * Expiration time
     * @var int
     */
    private $expire = 30;
 
    /**
     * getPrizeList constructor.
     * @param $redis
     */
    public function __construct($redis)
    {
        $this->redis = $redis;
    }
 
    /**
     * @return array|bool|string
     */
    public function fetch()
    {
        $cash_result = $this->redis->get($this->cash_key);
        $result = $this->redis->get($this->redis_key);
        if ($cash_result === false) {
            //The tag has expired: this request rebuilds the cache while
            //other requests keep reading the still-valid cached data
            $this->redis->set($this->cash_key, 1, $this->expire);
            //Database query should be performed here
            //$result = database query result; its TTL must be longer than
            //the tag's, so it is set to twice the tag TTL here
            $this->redis->set($this->redis_key, $result, $this->expire * 2);
        }

        return $result;
    }
}
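The expiry-jitter idea from the first bullet above takes only a few lines. The base TTL and the 300-second spread below are illustrative values, not part of the original article:

```php
<?php
// Spread expiry times so that keys warmed at the same moment do not all
// expire together. Base TTL and jitter window are illustrative values.
function jitteredExpire($base = 30, $spread = 300)
{
    return $base + mt_rand(0, $spread);
}

// e.g. $redis->set($key, $value, jitteredExpire());
$ttl = jitteredExpire();
var_dump($ttl >= 30 && $ttl <= 330); // true
```

Because each key now expires at a slightly different moment, a burst of simultaneous cache misses becomes a trickle that the database can absorb.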



Author: programmer Li Hui
Article source: https://www.jianshu.com/p/78baeb435c08

Posted by davidjwest on Fri, 17 Sep 2021 15:29:23 -0700