A Spring Cache shortcoming, and I think I have a solution

Keywords: Spring, GitHub, Redis, Database

The Spring Cache shortcoming

Spring Cache is a very good caching component.

But while using Spring Cache, Xiao Hei ran into a few pain points.

For example, take this requirement: fetch user information in batch for a list of user ids.

Option 1

Our code might look like this:

List<User> users = ids.stream()
        .map(this::getUserById)
        .collect(Collectors.toList());

@Cacheable(key = "#p0", unless = "#result == null")
public User getUserById(Long id) {
    // ···
}

The drawbacks of this approach:

Redis is queried inside the loop, once per id. That is fine as long as every id hits the cache, but every cache miss turns into a separate database query.

Option 2

Some developers may try this instead:

@Cacheable(key = "#ids.hashCode()")
public Collection<User> getUsersByIds(Collection<Long> ids) {
    // ···
}

The problem with this approach is:

The cache key is the hash code of the id list, so a request only hits the cache when its id list has exactly the same hash code as a previous one. Worse, once any single item in the list changes, the cache entry for the whole list has to be evicted.

For example:

The first request asks for ids 1, 2, 3; the second request asks for ids 1, 2, 4.

Even though the two lists overlap, the two requests cannot share any cached data.

And if the user with id 1 is modified, both cached entries have to be evicted.
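
To make the point concrete, here is a small standalone snippet (not part of the library) showing that the two id lists never produce the same cache key:

List<Long> firstRequest = Arrays.asList(1L, 2L, 3L);
List<Long> secondRequest = Arrays.asList(1L, 2L, 4L);

// List#hashCode depends on every element, so the two keys never match and
// the second request can never reuse the first request's cached entry.
System.out.println(firstRequest.hashCode() == secondRequest.hashCode()); // false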

What the Spring team says

Related Spring issues:

https://github.com/spring-projects/spring-framework/issues/24139

https://github.com/spring-projects/spring-framework/issues/23221

Here is a rough translation; interested readers can read the issues themselves for the full discussion.

Translation:

Thanks for the report. The cache abstraction has no concept of this state. If you return a collection, then that is what needs to be stored in the cache. Nothing forces you to keep the same item type for a given cache, so such an assumption is not appropriate for a high-level abstraction like this one.

My understanding: as a high-level abstraction, Spring Cache works at the method level. If a method returns a Collection, the whole Collection is what gets cached; the individual elements are not cached separately.

My solution

After a long struggle, Xiao Hei decided to roll his own.

What do I want to achieve?

For this kind of batch lookup across multiple keys, I want each key to be looked up in the cache individually first; only the keys that miss should trigger a data load, and the loaded values should then be written back into the cache.

Without further ado, here is the source code:

https://github.com/shenjianeng/easy-cache

A brief overview of the overall design:

  • Core interfaces

    • com.github.shenjianeng.easycache.core.Cache

    • com.github.shenjianeng.easycache.core.MultiCacheLoader

Cache interface

The Cache interface defines the common cache operations. Unlike most cache frameworks, it supports batch operations across multiple keys.

/**
 * Look up the given keys in the cache. Keys that have no cached value are simply absent from the returned map.
 */
@NonNull
Map<K, V> getIfPresent(@NonNull Iterable<K> keys);


/**
 * Look up the given keys in the cache. For keys that are not cached, load the data via
 * {@link MultiCacheLoader#loadCache(java.util.Collection)} and add it to the cache.
 */
@NonNull
Map<K, V> getOrLoadIfAbsent(@NonNull Iterable<K> keys);
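
A minimal usage sketch, assuming a Cache<Long, User> instance named userCache has already been built (the variable name and type parameters are illustrative):

// Read-only lookup: ids without a cached value are simply absent from the result.
Map<Long, User> cached = userCache.getIfPresent(Arrays.asList(1L, 2L, 3L));

// Lookup plus load: ids that miss are loaded via the configured MultiCacheLoader
// and the loaded values are written back into the cache.
Map<Long, User> users = userCache.getOrLoadIfAbsent(Arrays.asList(1L, 2L, 4L));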

MultiCacheLoader interface

@FunctionalInterface
public interface MultiCacheLoader<K, V> {

    /**
     * Load the values for the given keys from the underlying data source, keyed by cache key.
     */
    @NonNull
    Map<K, V> loadCache(@NonNull Collection<K> keys);

    /**
     * Convenience single-key variant, implemented on top of the batch method.
     */
    default V loadCache(K key) {
        Map<K, V> map = loadCache(Collections.singleton(key));
        if (CollectionUtils.isEmpty(map)) {
            return null;
        }
        return map.get(key);
    }
}

MultiCacheLoader is a functional interface. When Cache#getOrLoadIfAbsent is called and some keys are missing from the cache, the data for those keys is loaded through MultiCacheLoader and then put into the cache.
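
Because it is a functional interface, a loader can be supplied as a lambda. A minimal sketch, assuming a hypothetical userRepository with a findByIdIn query (neither is part of easy-cache):

// Illustrative loader: fetch all missed ids from the database in one query
// and return the results keyed by user id.
MultiCacheLoader<Long, User> userLoader = missedIds ->
        userRepository.findByIdIn(missedIds).stream()
                .collect(Collectors.toMap(User::getId, user -> user));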

RedisCache

RedisCache is currently the only implementation of the Cache interface. As the name suggests, it is backed by Redis.

First, the general implementation idea:

  1. Use the Redis MGET command to fetch cached values in batch. To keep each call efficient, at most 20 keys are fetched per batch.
  2. For keys that miss the cache, decide whether the data should be loaded automatically; if so, load it in batch through MultiCacheLoader.
  3. Put the loaded data into the cache. At the same time, a Redis ZSET of all known cache keys is maintained, which is later used to evict the cache (a sketch of this bookkeeping follows the code below).

Again, straight to the source code:

private Map<K, V> doGetOrLoadIfAbsent(Iterable<K> keys, boolean loadIfAbsent) {
    List<String> cacheKeyList = buildCacheKey(keys);
    // fetch at most MAX_BATCH_KEY_SIZE keys per MGET call
    List<List<String>> partitions = Lists.partition(cacheKeyList, MAX_BATCH_KEY_SIZE);

    List<V> valueList = Lists.newArrayListWithExpectedSize(cacheKeyList.size());

    for (List<String> partition : partitions) {
        // MGET: values come back in the same order as the requested keys
        List<V> values = (List<V>) redisTemplate.opsForValue().multiGet(partition);
        valueList.addAll(values);
    }

    List<K> keysList = Lists.newArrayList(keys);
    List<K> missedKeyList = Lists.newArrayList();

    Map<K, V> map = Maps.newHashMapWithExpectedSize(keysList.size());

    for (int i = 0; i < valueList.size(); i++) {
        V v = valueList.get(i);
        K k = keysList.get(i);
        if (v != null) {
            map.put(k, v);
        } else {
            missedKeyList.add(k);
        }
    }

    if (loadIfAbsent && !missedKeyList.isEmpty()) {
        // load the missed keys in one batch, write them back to Redis, and merge them into the result
        Map<K, V> missValueMap = multiCacheLoader.loadCache(missedKeyList);

        put(missValueMap);

        map.putAll(missValueMap);
    }

    return map;
}
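
The put call used above is not shown in this post. Here is a rough sketch of what it needs to do according to step 3 above: write the values to Redis and record each key in the known-keys ZSET. The single-key buildCacheKey overload and the timeout field are assumptions for illustration, not necessarily the actual repository code:

private void put(Map<K, V> map) {
    if (CollectionUtils.isEmpty(map)) {
        return;
    }
    map.forEach((key, value) -> {
        // buildCacheKey(K) and timeout (a Duration) are assumed here for illustration
        String cacheKey = buildCacheKey(key);
        // cache the value itself, with an expiration time
        redisTemplate.opsForValue().set(cacheKey, value, timeout);
        // record the key with score 0 so that evictAll can find and delete it later
        redisTemplate.opsForZSet().add(knownKeysName, cacheKey, 0);
    });
}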

The cache-eviction implementation:

public void evictAll() {
    // all known cache keys are stored in the ZSET with score 0
    Set<Serializable> serializables = redisTemplate.opsForZSet().rangeByScore(knownKeysName, 0, 0);

    if (!CollectionUtils.isEmpty(serializables)) {
        List<String> cacheKeys = Lists.newArrayListWithExpectedSize(serializables.size());
        serializables.forEach(serializable -> {
            if (serializable instanceof String) {
                cacheKeys.add((String) serializable);
            }
        });
        // delete the cached values, then remove the keys from the known-keys ZSET
        redisTemplate.delete(cacheKeys);
        redisTemplate.opsForZSet().remove(knownKeysName, cacheKeys.toArray());
    }
}
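
Tying this back to the requirement at the very beginning: with a Cache<Long, User> built around a loader like the one sketched earlier (the userCache variable is illustrative), the batch lookup from Option 1 collapses into a single call:

// Cached ids are fetched from Redis in batches via MGET; only the ids that miss
// are loaded through the MultiCacheLoader and written back to the cache.
Map<Long, User> usersById = userCache.getOrLoadIfAbsent(ids);
List<User> users = new ArrayList<>(usersById.values());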

A few closing words

For more details, interested readers can browse the source code themselves: easy-cache

Feel free to fork it and try it out, or leave a comment to discuss. If anything is poorly written, I'd appreciate your advice~~

Future plans:

  • Support for caching null values
  • Declarative caching via annotations
