Deep Exploration of Glide Caching Mechanism


From Guo Lin's blog: http://blog.csdn.net/guolin_blog/article/details/54895665

Introduction to Glide Cache

Glide's cache design is very advanced, and it covers a comprehensive range of scenarios. Glide divides caching into two modules: a memory cache and a hard disk cache.
The two modules serve different purposes. The main job of the memory cache is to keep the application from repeatedly reading image data into memory, while the main job of the hard disk cache is to keep the application from repeatedly downloading and reading data from the network or other sources.
Together, the memory cache and the hard disk cache make up Glide's excellent image caching behavior. Next, we will analyze how these two caches are used and how they are implemented.

Cache Key

Since there is a caching function, there must be cache keys. So how is Glide's cache key generated? I have to say that Glide's cache key generation rule is quite involved: no fewer than ten parameters go into the key. Tedious as that is, the logic is at least fairly simple. Let's first look at how the cache key is generated. The code that creates the cache key is in the load() method of the Engine class.
```
public class Engine implements EngineJobListener,
MemoryCache.ResourceRemovedListener,
EngineResource.ResourceListener {

public <T, Z, R> LoadStatus load(Key signature, int width, int height, DataFetcher<T> fetcher,
        DataLoadProvider<T, Z> loadProvider, Transformation<Z> transformation, ResourceTranscoder<Z, R> transcoder,
        Priority priority, boolean isMemoryCacheable, DiskCacheStrategy diskCacheStrategy, ResourceCallback cb) {
    Util.assertMainThread();
    long startTime = LogTime.getLogTime();

    final String id = fetcher.getId();
    EngineKey key = keyFactory.buildKey(id, signature, width, height, loadProvider.getCacheDecoder(),
            loadProvider.getSourceDecoder(), transformation, loadProvider.getEncoder(),
            transcoder, loadProvider.getSourceEncoder());

    ...
}

...

}
```
As you can see, the load() method first calls fetcher.getId() to obtain an id string, which is the unique identifier of the image we want to load. For a network image, the id is the image's url address.
Next, the id is passed into the buildKey() method of EngineKeyFactory together with the signature, width, height, and the remaining parameters (ten in all), producing an EngineKey object. This EngineKey is Glide's cache key.
As you can see, many factors go into the cache key: even just changing the width or height of the image with override() will produce a completely different cache key.

The source code of the EngineKey class is not posted here; you can read it yourself if you are interested. The main thing it does is override equals() and hashCode() so that two EngineKey objects are considered equal only when every parameter passed into them is the same.
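To illustrate the idea with a hedged example (a simplified sketch of the pattern, not Glide's actual EngineKey source), a key class built from several components would typically override the two methods like this:

```
// Simplified sketch of an EngineKey-style class (not Glide's actual source).
// Two keys are equal only when every component that went into them is equal.
class SimpleEngineKey {
    private final String id;
    private final int width;
    private final int height;

    SimpleEngineKey(String id, int width, int height) {
        this.id = id;
        this.width = width;
        this.height = height;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SimpleEngineKey)) return false;
        SimpleEngineKey other = (SimpleEngineKey) o;
        return width == other.width && height == other.height && id.equals(other.id);
    }

    @Override
    public int hashCode() {
        int result = id.hashCode();
        result = 31 * result + width;
        result = 31 * result + height;
        return result;
    }
}
```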

Memory cache

With the cache Key, you can start caching, so let's start with the memory cache.

First of all, you need to know that by default, Glide automatically turns on memory caching. That is to say, when we use Glide to load an image, it will be cached in memory. As long as it has not been cleared from memory, the next time we use Glide to load this image it will be read directly from memory instead of being re-read from the network or the hard disk. This undoubtedly improves image loading efficiency a great deal. For example, if you scroll up and down repeatedly in a RecyclerView, any image loaded by Glide can be read from memory and displayed immediately, which greatly improves the user experience.

Best of all, you don't even need to write any extra code to enjoy this extremely convenient memory caching, because Glide has it turned on by default.

Since this feature is enabled by default, is there anything left for us to do? Just one thing: Glide provides an interface for disabling the memory cache if, for whatever reason, you don't want it:

Glide.with(this)
     .load(url)
     .skipMemoryCache(true)
     .into(imageView);

As you can see, simply calling the skipMemoryCache() method and passing in true disables Glide's memory cache.
Next, let's read the source code to see how Glide's memory cache is implemented.
Glide's memory cache is, naturally, built on the LruCache algorithm (Least Recently Used). Its core idea is to hold strong references to recently used objects in a LinkedHashMap and evict the least recently used objects before the cache grows beyond its preset size. LruCache is also easy to use. In addition to LruCache, Glide combines a weak reference mechanism to complete its memory caching.
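As a quick refresher on LruCache itself (this uses the standard android.util.LruCache class, not Glide's LruResourceCache), a bitmap cache sized in kilobytes can be set up roughly like this:

```
import android.graphics.Bitmap;
import android.util.LruCache;

// A plain android.util.LruCache keyed by url; when the total size exceeds the
// limit, the least recently used entries are evicted automatically.
public class BitmapMemoryCache {
    private final LruCache<String, Bitmap> cache;

    public BitmapMemoryCache() {
        // Use roughly one eighth of the app's max heap, measured in kilobytes.
        int maxSizeKb = (int) (Runtime.getRuntime().maxMemory() / 1024 / 8);
        cache = new LruCache<String, Bitmap>(maxSizeKb) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // Measure each entry by its byte count so eviction tracks real memory use.
                return value.getByteCount() / 1024;
            }
        };
    }

    public void put(String url, Bitmap bitmap) {
        cache.put(url, bitmap);
    }

    public Bitmap get(String url) {
        return cache.get(url);
    }
}
```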
Let's look at the source code first:
```
public class Glide {

public static <T, Y> ModelLoader<T, Y> buildModelLoader(Class<T> modelClass, Class<Y> resourceClass,
        Context context) {
     if (modelClass == null) {
        if (Log.isLoggable(TAG, Log.DEBUG)) {
            Log.d(TAG, "Unable to load null model, setting placeholder only");
        }
        return null;
    }
    return Glide.get(context).getLoaderFactory().buildModelLoader(modelClass, resourceClass);
}

public static Glide get(Context context) {
    if (glide == null) {
        synchronized (Glide.class) {
            if (glide == null) {
                Context applicationContext = context.getApplicationContext();
                List<GlideModule> modules = new ManifestParser(applicationContext).parse();
                GlideBuilder builder = new GlideBuilder(applicationContext);
                for (GlideModule module : modules) {
                    module.applyOptions(applicationContext, builder);
                }
                glide = builder.createGlide();
                for (GlideModule module : modules) {
                    module.registerComponents(applicationContext, glide);
                }
            }
        }
    }
    return glide;
}

...

}
```
When building the ModelLoader object here, a Glide.get() method is called first, and this method is the key. You can see that get() implements a singleton, and the Glide instance itself is created by calling GlideBuilder's createGlide() method. So let's follow that method:

public class GlideBuilder {
    ...

    Glide createGlide() {
        if (sourceService == null) {
            final int cores = Math.max(1, Runtime.getRuntime().availableProcessors());
            sourceService = new FifoPriorityThreadPoolExecutor(cores);
        }
        if (diskCacheService == null) {
            diskCacheService = new FifoPriorityThreadPoolExecutor(1);
        }
        MemorySizeCalculator calculator = new MemorySizeCalculator(context);
        if (bitmapPool == null) {
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
                int size = calculator.getBitmapPoolSize();
                bitmapPool = new LruBitmapPool(size);
            } else {
                bitmapPool = new BitmapPoolAdapter();
            }
        }
        if (memoryCache == null) {
            memoryCache = new LruResourceCache(calculator.getMemoryCacheSize());
        }
        if (diskCacheFactory == null) {
            diskCacheFactory = new InternalCacheDiskCacheFactory(context);
        }
        if (engine == null) {
            engine = new Engine(memoryCache, diskCacheFactory, diskCacheService, sourceService);
        }
        if (decodeFormat == null) {
            decodeFormat = DecodeFormat.DEFAULT;
        }
        return new Glide(engine, memoryCache, bitmapPool, context, decodeFormat);
    }
}

This is where the Glide object is constructed. Notice that a new LruResourceCache is created here and assigned to the memoryCache field; this is the LruCache object that Glide uses to implement its memory cache.
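The calculator.getMemoryCacheSize() call above decides how large this LruResourceCache should be. MemorySizeCalculator's exact formula is not shown here; as a rough, hypothetical sketch of the general idea only, a budget derived from the device's per-app heap limit might look like this:

```
import android.app.ActivityManager;
import android.content.Context;

// Hypothetical sketch only: this is NOT MemorySizeCalculator's actual formula,
// just an illustration of sizing caches from the per-app heap limit.
public class RoughSizeCalculator {
    private final int memoryCacheSize;
    private final int bitmapPoolSize;

    public RoughSizeCalculator(Context context) {
        ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int memoryClassBytes = am.getMemoryClass() * 1024 * 1024;
        // Assume a fixed fraction of the heap for each cache.
        memoryCacheSize = memoryClassBytes / 8;
        bitmapPoolSize = memoryClassBytes / 8;
    }

    public int getMemoryCacheSize() {
        return memoryCacheSize;
    }

    public int getBitmapPoolSize() {
        return bitmapPoolSize;
    }
}
```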
We have already seen the code that generates the cache key in Engine's load() method, and the memory cache is actually implemented in that same method. So let's take a fresh look at the complete source code of Engine's load() method:
```
public class Engine implements EngineJobListener,
MemoryCache.ResourceRemovedListener,
EngineResource.ResourceListener {
...

public <T, Z, R> LoadStatus load(Key signature, int width, int height, DataFetcher<T> fetcher,
        DataLoadProvider<T, Z> loadProvider, Transformation<Z> transformation, ResourceTranscoder<Z, R> transcoder,
        Priority priority, boolean isMemoryCacheable, DiskCacheStrategy diskCacheStrategy, ResourceCallback cb) {
    Util.assertMainThread();
    long startTime = LogTime.getLogTime();

    final String id = fetcher.getId();
    EngineKey key = keyFactory.buildKey(id, signature, width, height, loadProvider.getCacheDecoder(),
            loadProvider.getSourceDecoder(), transformation, loadProvider.getEncoder(),
            transcoder, loadProvider.getSourceEncoder());

    EngineResource<?> cached = loadFromCache(key, isMemoryCacheable);
    if (cached != null) {
        cb.onResourceReady(cached);
        if (Log.isLoggable(TAG, Log.VERBOSE)) {
            logWithTimeAndKey("Loaded resource from cache", startTime, key);
        }
        return null;
    }

    EngineResource<?> active = loadFromActiveResources(key, isMemoryCacheable);
    if (active != null) {
        cb.onResourceReady(active);
        if (Log.isLoggable(TAG, Log.VERBOSE)) {
            logWithTimeAndKey("Loaded resource from active resources", startTime, key);
        }
        return null;
    }

    EngineJob current = jobs.get(key);
    if (current != null) {
        current.addCallback(cb);
        if (Log.isLoggable(TAG, Log.VERBOSE)) {
            logWithTimeAndKey("Added to existing load", startTime, key);
        }
        return new LoadStatus(cb, current);
    }

    EngineJob engineJob = engineJobFactory.build(key, isMemoryCacheable);
    DecodeJob<T, Z, R> decodeJob = new DecodeJob<T, Z, R>(key, width, height, fetcher, loadProvider, transformation,
            transcoder, diskCacheProvider, diskCacheStrategy, priority);
    EngineRunnable runnable = new EngineRunnable(engineJob, decodeJob, priority);
    jobs.put(key, engineJob);
    engineJob.addCallback(cb);
    engineJob.start(runnable);

    if (Log.isLoggable(TAG, Log.VERBOSE)) {
        logWithTimeAndKey("Started new load", startTime, key);
    }
    return new LoadStatus(cb, engineJob);
}

...

}
```
As you can see, the loadFromCache() method is called first to look up the cached image; if one is found, cb.onResourceReady() is called back directly. If not, loadFromActiveResources() is called to look up the cache, and a hit there is likewise called back directly. Only when neither method finds a cached image does execution continue, eventually starting a thread to load the image.

That is to say, Glide's image loading process calls two methods to get the memory cache, loadFromCache() and loadFromActiveResources(). One of these two methods uses the LruCache algorithm and the other uses weak references. Let's look at their source code:
```
public class Engine implements EngineJobListener,
MemoryCache.ResourceRemovedListener,
EngineResource.ResourceListener {

private final MemoryCache cache;
private final Map<Key, WeakReference<EngineResource<?>>> activeResources;
...

private EngineResource<?> loadFromCache(Key key, boolean isMemoryCacheable) {
    if (!isMemoryCacheable) {
        return null;
    }
    EngineResource<?> cached = getEngineResourceFromCache(key);
    if (cached != null) {
        cached.acquire();
        activeResources.put(key, new ResourceWeakReference(key, cached, getReferenceQueue()));
    }
    return cached;
}

private EngineResource<?> getEngineResourceFromCache(Key key) {
    Resource<?> cached = cache.remove(key);
    final EngineResource result;
    if (cached == null) {
        result = null;
    } else if (cached instanceof EngineResource) {
        result = (EngineResource) cached;
    } else {
        result = new EngineResource(cached, true /*isCacheable*/);
    }
    return result;
}

private EngineResource<?> loadFromActiveResources(Key key, boolean isMemoryCacheable) {
    if (!isMemoryCacheable) {
        return null;
    }
    EngineResource<?> active = null;
    WeakReference<EngineResource<?>> activeRef = activeResources.get(key);
    if (activeRef != null) {
        active = activeRef.get();
        if (active != null) {
            active.acquire();
        } else {
            activeResources.remove(key);
        }
    }
    return active;
}

...

}
```
At the beginning of the loadFromCache() method, isMemoryCacheable is checked first, and null is returned directly if it is false. What does that mean? It's very simple: remember the skipMemoryCache() method we just learned about? If true is passed to it, isMemoryCacheable here becomes false, which means the memory cache has been disabled.

Moving on, the getEngineResourceFromCache() method is called to look up the cache. In that method, the cache key is used to fetch the value from the cache, and the cache object here is the LruResourceCache created when the Glide object was constructed, so this is indeed where the LruCache algorithm is used.

But notice that when a cached image is fetched from the LruResourceCache, it is removed from that cache (cache.remove(key) in getEngineResourceFromCache()) and then stored into activeResources. activeResources is a HashMap of weak references that caches the images currently in use, and the loadFromActiveResources() method reads from this same map. Caching in-use images in activeResources protects them from being evicted by the LruCache algorithm.
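The activeResources idea can be boiled down to a map of weak references. Here is a simplified sketch of the pattern (hypothetical helper names, not Glide's actual field or its ReferenceQueue cleanup logic):

```
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the activeResources pattern: images that are currently
// in use are held through weak references, so they are never evicted by the
// LruCache, yet can still be garbage collected once nothing references them.
public class ActiveResourcesSketch<K, V> {
    private final Map<K, WeakReference<V>> active = new HashMap<>();

    public void activate(K key, V resource) {
        active.put(key, new WeakReference<>(resource));
    }

    public V get(K key) {
        WeakReference<V> ref = active.get(key);
        if (ref == null) {
            return null;
        }
        V resource = ref.get();
        if (resource == null) {
            // The referent was garbage collected; drop the stale entry.
            active.remove(key);
        }
        return resource;
    }

    public void deactivate(K key) {
        active.remove(key);
    }
}
```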

That is roughly the logic for reading from the memory cache. In short, if the image to be loaded can be read from the memory cache, it is called back directly; if not, a thread is started to run the image loading logic that follows.

Now that we know how the memory cache is read, the next question is where the memory cache is written. Here we need to recall the previous article: we analyzed that once the image has finished loading, EngineJob sends a message through a Handler to switch execution back to the main thread, where the handleResultOnMainThread() method runs. Let's look at this method again. The code is as follows:

class EngineJob implements EngineRunnable.EngineRunnableManager {

    private final EngineResourceFactory engineResourceFactory;
    ...

    private void handleResultOnMainThread() {
        if (isCancelled) {
            resource.recycle();
            return;
        } else if (cbs.isEmpty()) {
            throw new IllegalStateException("Received a resource without any callbacks to notify");
        }
        engineResource = engineResourceFactory.build(resource, isCacheable);
        hasResource = true;
        engineResource.acquire();
        listener.onEngineJobComplete(key, engineResource);
        for (ResourceCallback cb : cbs) {
            if (!isInIgnoredCallbacks(cb)) {
                engineResource.acquire();
                cb.onResourceReady(engineResource);
            }
        }
        engineResource.release();
    }

    static class EngineResourceFactory {
        public <R> EngineResource<R> build(Resource<R> resource, boolean isMemoryCacheable) {
            return new EngineResource<R>(resource, isMemoryCacheable);
        }
    }
    ...
}

Here, an EngineResource object wrapping the image resource is built through EngineResourceFactory, and it is then passed to Engine's onEngineJobComplete() method via the listener callback, as follows:

public class Engine implements EngineJobListener,
        MemoryCache.ResourceRemovedListener,
        EngineResource.ResourceListener {
    ...    

    @Override
    public void onEngineJobComplete(Key key, EngineResource<?> resource) {
        Util.assertMainThread();
        // A null resource indicates that the load failed, usually due to an exception.
        if (resource != null) {
            resource.setResourceListener(key, this);
            if (resource.isCacheable()) {
                activeResources.put(key, new ResourceWeakReference(key, resource, getReferenceQueue()));
            }
        }
        jobs.remove(key);
    }

    ...
}

Now it's obvious: the EngineResource delivered in the callback is put into activeResources, which is where this cache gets written.

So far this is only the weak reference cache; where is the LruCache cache written? This brings in the reference counting mechanism inside EngineResource. Look back at handleResultOnMainThread(): it calls EngineResource's acquire() method (once when the resource is ready and once more for each callback) and calls its release() method at the end. EngineResource uses an acquired variable to record how many places are referencing the image: acquire() increments it and release() decrements it. The code is as follows:

class EngineResource<Z> implements Resource<Z> {

    private int acquired;
    ...

    void acquire() {
        if (isRecycled) {
            throw new IllegalStateException("Cannot acquire a recycled resource");
        }
        if (!Looper.getMainLooper().equals(Looper.myLooper())) {
            throw new IllegalThreadStateException("Must call acquire on the main thread");
        }
        ++acquired;
    }

    void release() {
        if (acquired <= 0) {
            throw new IllegalStateException("Cannot release a recycled or not yet acquired resource");
        }
        if (!Looper.getMainLooper().equals(Looper.myLooper())) {
            throw new IllegalThreadStateException("Must call release on the main thread");
        }
        if (--acquired == 0) {
            listener.onResourceReleased(key, this);
        }
    }
}

In other words, while acquired is greater than 0 the image is in use and should stay in the activeResources weak reference cache. Once release() brings acquired down to 0, the image is no longer in use, and the listener's onResourceReleased() method is called to release the resource. This listener is the Engine object, so let's look at its onResourceReleased() method:

public class Engine implements EngineJobListener,
        MemoryCache.ResourceRemovedListener,
        EngineResource.ResourceListener {

    private final MemoryCache cache;
    private final Map<Key, WeakReference<EngineResource<?>>> activeResources;
    ...    

    @Override
    public void onResourceReleased(Key cacheKey, EngineResource resource) {
        Util.assertMainThread();
        activeResources.remove(cacheKey);
        if (resource.isCacheable()) {
            cache.put(cacheKey, resource);
        } else {
            resourceRecycler.recycle(resource);
        }
    }

    ...
}

As you can see, the cached image is first removed from activeResources and then put into the LruResourceCache. This completes the picture: images that are in use are cached with weak references, and images that are no longer in use are cached with LruCache.
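To summarize the flow, here is a rough, hypothetical sketch of how the two tiers hand resources back and forth (eviction and reference counting are omitted; this is not Glide's actual code):

```
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Rough sketch of the two-tier memory cache: released resources sit in an
// LRU-ordered map, a cache hit promotes them into the weak-reference map of
// in-use resources, and releasing them demotes them back again.
public class TwoTierCacheSketch<K, V> {
    private final Map<K, V> lru = new LinkedHashMap<>(16, 0.75f, true); // eviction omitted
    private final Map<K, WeakReference<V>> active = new HashMap<>();

    // Memory-cache lookup: try the LRU tier first and promote a hit, then try the in-use tier.
    public V load(K key) {
        V cached = lru.remove(key);
        if (cached != null) {
            active.put(key, new WeakReference<>(cached));
            return cached;
        }
        WeakReference<V> ref = active.get(key);
        return (ref != null) ? ref.get() : null;
    }

    // Called when the last user releases the resource: demote it to the LRU tier.
    public void release(K key, V resource) {
        active.remove(key);
        lru.put(key, resource);
    }
}
```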

This is how Glide implements memory caching.

Hard disk cache

Next we begin to learn about hard disk caching.

You may remember that we already used the hard disk caching feature in the first article of this series. The following code was used to prevent Glide from caching images on the hard disk:

Glide.with(this)
     .load(url)
     .diskCacheStrategy(DiskCacheStrategy.NONE)
     .into(imageView);

By calling the diskCacheStrategy() method and passing in DiskCacheStrategy.NONE, Glide's hard disk caching function can be disabled.

This diskCacheStrategy() method is basically everything there is to Glide's hard disk caching. It accepts one of four values:

DiskCacheStrategy.NONE: caches nothing.
DiskCacheStrategy.SOURCE: caches only the original image.
DiskCacheStrategy.RESULT: caches only the transformed image (the default).
DiskCacheStrategy.ALL: caches both the original image and the transformed image.
The four values above are not hard to understand, but there is one concept to clarify first. When we use Glide to load an image, Glide does not display the original image directly by default; it first compresses and transforms it (we will cover this in a later article). The image produced by that series of operations is what we call the transformed image. By default, Glide caches only the transformed image on the hard disk, and we can change this default behavior by calling the diskCacheStrategy() method, as shown below.
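For example, if you want Glide to keep both the original and the transformed image on disk, you can pass DiskCacheStrategy.ALL, using the same Glide 3 API as the earlier snippets:

Glide.with(this)
     .load(url)
     .diskCacheStrategy(DiskCacheStrategy.ALL)
     .into(imageView);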
That is really all there is to using Glide's hard disk cache, so let's take the usual approach and read the source code to analyze how it is implemented.

First of all, like the memory cache, the hard disk cache is also based on the LRU algorithm.
Next, let's see where Glide reads the hard disk cache. Here we need to recall the previous article: when Glide starts a thread to load an image, it runs EngineRunnable's run() method, which in turn calls a decode() method. So let's look at the source code of the decode() method again:

private Resource<?> decode() throws Exception {
    if (isDecodingFromCache()) {
        return decodeFromCache();
    } else {
        return decodeFromSource();
    }
}

As you can see, there are two cases: one calls decodeFromCache() to read the image from the hard disk cache, and the other calls decodeFromSource() to read and decode the original image. By default, Glide reads from the cache first, and only reads the original image when nothing usable is found in the cache. Now let's look at the source code of the decodeFromCache() method, as follows:

private Resource<?> decodeFromCache() throws Exception {
    Resource<?> result = null;
    try {
        result = decodeJob.decodeResultFromCache();
    } catch (Exception e) {
        if (Log.isLoggable(TAG, Log.DEBUG)) {
            Log.d(TAG, "Exception decoding result from cache: " + e);
        }
    }
    if (result == null) {
        result = decodeJob.decodeSourceFromCache();
    }
    return result;
}

As you can see, DecodeJob's decodeResultFromCache() method will be invoked to get the cache first, and if not, the decodeSourceFromCache() method will be invoked to get the cache. The difference between the two methods is actually the difference between DiskCacheStrategy.RESULT and DiskCacheStrategy.SOURCE. I don't think I need to explain any more.

So let's look at the source code of these two methods, as follows:

public Resource<Z> decodeResultFromCache() throws Exception {
    if (!diskCacheStrategy.cacheResult()) {
        return null;
    }
    long startTime = LogTime.getLogTime();
    Resource<T> transformed = loadFromCache(resultKey);
    startTime = LogTime.getLogTime();
    Resource<Z> result = transcode(transformed);
    return result;
}

public Resource<Z> decodeSourceFromCache() throws Exception {
    if (!diskCacheStrategy.cacheSource()) {
        return null;
    }
    long startTime = LogTime.getLogTime();
    Resource<T> decoded = loadFromCache(resultKey.getOriginalKey());
    return transformEncodeAndTranscode(decoded);
}

As you can see, both methods call loadFromCache() to read data from the cache. decodeResultFromCache() simply transcodes the cached data and returns it, while decodeSourceFromCache() first has to transform the data (via transformEncodeAndTranscode()) before transcoding and returning it.

Notice, however, that the two methods pass different arguments to loadFromCache(): one passes resultKey, while the other passes resultKey.getOriginalKey(). This is easy to understand. As explained earlier, Glide's cache key is made up of ten parameters, including the width, height, and so on. But for caching the original image we don't need that many parameters, because the image has not been modified in any way. Let's look at the source code of the getOriginalKey() method:

public Key getOriginalKey() {
    if (originalKey == null) {
        originalKey = new OriginalKey(id, signature);
    }
    return originalKey;
}

As you can see, most of the parameters are discarded here, and only the id and signature are used to construct the cache key. Since the signature parameter is unused in most cases, the original image's cache key is essentially determined by the id alone (that is, the image url).
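As a side note, if you ever need the signature to take part in the cache key (for instance, to invalidate the cache when the content behind an unchanged url is updated), Glide 3 exposes it through the signature() method. A minimal sketch, assuming a version string that you manage yourself:

Glide.with(this)
     .load(url)
     .signature(new StringSignature("image-version-1"))
     .into(imageView);

Bumping the version string produces a different cache key, which forces Glide to reload and re-cache the image.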

Now that we understand the difference between these two cache keys, let's look at the source code of the loadFromCache() method.

private Resource<T> loadFromCache(Key key) throws IOException {
    File cacheFile = diskCacheProvider.getDiskCache().get(key);
    if (cacheFile == null) {
        return null;
    }
    Resource<T> result = null;
    try {
        result = loadProvider.getCacheDecoder().decode(cacheFile, width, height);
    } finally {
        if (result == null) {
            diskCacheProvider.getDiskCache().delete(key);
        }
    }
    return result;
}

The logic of this method is very simple: getDiskCache() returns the instance of the DiskLruCache utility class that Glide ships with, and its get() method is called with the cache key to obtain the cached file on the hard disk. If no file is found, null is returned; if a file is found, it is decoded into a Resource object and returned.

That covers how the hard disk cache is read. So where is the hard disk cache written? Let's strike while the iron is hot and continue the analysis.

As we just analyzed, when there is no usable cache, the decodeFromSource() method is called to read and decode the original image. So let's look at this method:

public Resource<Z> decodeFromSource() throws Exception {
    Resource<T> decoded = decodeSource();
    return transformEncodeAndTranscode(decoded);
}

There are only two lines of code in this method, decodeSource() is used to parse the original picture as its name implies, and transformEncodeAndTranscode() is used to convert and transcode the picture. Let's first look at the decodeSource() method:

private Resource<T> decodeSource() throws Exception {
    Resource<T> decoded = null;
    try {
        long startTime = LogTime.getLogTime();
        final A data = fetcher.loadData(priority);
        if (isCancelled) {
            return null;
        }
        decoded = decodeFromSourceData(data);
    } finally {
        fetcher.cleanup();
    }
    return decoded;
}

private Resource<T> decodeFromSourceData(A data) throws IOException {
    final Resource<T> decoded;
    if (diskCacheStrategy.cacheSource()) {
        decoded = cacheAndDecodeSourceData(data);
    } else {
        long startTime = LogTime.getLogTime();
        decoded = loadProvider.getSourceDecoder().decode(data, width, height);
    }
    return decoded;
}

private Resource<T> cacheAndDecodeSourceData(A data) throws IOException {
    long startTime = LogTime.getLogTime();
    SourceWriter<A> writer = new SourceWriter<A>(loadProvider.getSourceEncoder(), data);
    diskCacheProvider.getDiskCache().put(resultKey.getOriginalKey(), writer);
    startTime = LogTime.getLogTime();
    Resource<T> result = loadFromCache(resultKey.getOriginalKey());
    return result;
}

Inside decodeSource(), the fetcher's loadData() method is called to read the image data, and decodeFromSourceData() is then called to decode it. decodeFromSourceData() checks whether caching of the original image is allowed, and if so calls cacheAndDecodeSourceData(). That method likewise obtains the DiskLruCache instance via getDiskCache() and calls its put() method to write the hard disk cache. Note that the cache key used for the original image is resultKey.getOriginalKey().

Caching the original image is that simple. Next, let's analyze the source code of the transformEncodeAndTranscode() method to see how the transformed image is written to the cache. The code is as follows:

private Resource<Z> transformEncodeAndTranscode(Resource<T> decoded) {
    long startTime = LogTime.getLogTime();
    Resource<T> transformed = transform(decoded);
    writeTransformedToCache(transformed);
    startTime = LogTime.getLogTime();
    Resource<Z> result = transcode(transformed);
    return result;
}

private void writeTransformedToCache(Resource<T> transformed) {
    if (transformed == null || !diskCacheStrategy.cacheResult()) {
        return;
    }
    long startTime = LogTime.getLogTime();
    SourceWriter<Resource<T>> writer = new SourceWriter<Resource<T>>(loadProvider.getEncoder(), transformed);
    diskCacheProvider.getDiskCache().put(resultKey, writer);
}

The logic here is even simpler and clearer. First the transform() method is called to transform the image, and then writeTransformedToCache() writes the transformed image to the hard disk cache. The same put() method is called on the DiskLruCache instance, but this time the cache key used is resultKey.

With that, we have analyzed how Glide's hard disk cache is implemented. The source code may look complex, but thanks to Glide's excellent encapsulation, the whole caching behavior can be controlled with just skipMemoryCache() and diskCacheStrategy().

Now that you understand how Glide caching works, let's learn some advanced techniques for Glide caching.

Advanced Skills

Although Glide's highly encapsulated caching makes usage very simple, it also brings some problems.

For example, a reader in my chat group told me that their project's image resources are hosted on Qiniu Cloud, and to protect the images Qiniu Cloud appends a token parameter to the image url. That is to say, the url of an image may look like this:

http://url.com/image.jpg?token=d9caa6e02c990b0a

If Glide is used to load the image, the url address is used to compose the cache Key.

But here comes the problem: the token, being an authentication parameter, is not fixed and may well keep changing. If the token changes, the image url changes; and if the url changes, the cache key changes. As a result, even though it is clearly the same image, the constantly changing token causes Glide's caching to fail completely.

This is actually a tricky problem, and I believe it is not limited to Qiniu Cloud; you may well run into the same situation yourself when using Glide.

So how to solve this problem? We will also analyze it from the source level. First, let's look at Glide's code for creating cache keys.

public class Engine implements EngineJobListener,
        MemoryCache.ResourceRemovedListener,
        EngineResource.ResourceListener {

    public <T, Z, R> LoadStatus load(Key signature, int width, int height, DataFetcher<T> fetcher,
            DataLoadProvider<T, Z> loadProvider, Transformation<Z> transformation, ResourceTranscoder<Z, R> transcoder,
            Priority priority, boolean isMemoryCacheable, DiskCacheStrategy diskCacheStrategy, ResourceCallback cb) {
        Util.assertMainThread();
        long startTime = LogTime.getLogTime();

        final String id = fetcher.getId();
        EngineKey key = keyFactory.buildKey(id, signature, width, height, loadProvider.getCacheDecoder(),
                loadProvider.getSourceDecoder(), transformation, loadProvider.getEncoder(),
                transcoder, loadProvider.getSourceEncoder());

        ...
    }

    ...
}

As I said earlier, this id is actually the image's url address, and it is obtained here by calling the fetcher.getId() method. From the previous article we already know that the fetcher is an instance of HttpUrlFetcher, so let's look at the source code of its getId() method, as follows:

public class HttpUrlFetcher implements DataFetcher<InputStream> {

    private final GlideUrl glideUrl;
    ...

    public HttpUrlFetcher(GlideUrl glideUrl) {
        this(glideUrl, DEFAULT_CONNECTION_FACTORY);
    }

    HttpUrlFetcher(GlideUrl glideUrl, HttpUrlConnectionFactory connectionFactory) {
        this.glideUrl = glideUrl;
        this.connectionFactory = connectionFactory;
    }

    @Override
    public String getId() {
        return glideUrl.getCacheKey();
    }

    ...
}

As you can see, the getId() method simply calls GlideUrl's getCacheKey() method. So where does this GlideUrl object come from? It is actually the image url we passed into the load() method; Glide wraps that url address into a GlideUrl object internally.

Obviously, let's look at the source code for GlideUrl's getCacheKey() method, as follows:

public class GlideUrl {

    private final URL url;
    private final String stringUrl;
    ...

    public GlideUrl(URL url) {
        this(url, Headers.DEFAULT);
    }

    public GlideUrl(String url) {
        this(url, Headers.DEFAULT);
    }

    public GlideUrl(URL url, Headers headers) {
        ...
        this.url = url;
        stringUrl = null;
    }

    public GlideUrl(String url, Headers headers) {
        ...
        this.stringUrl = url;
        this.url = null;
    }

    public String getCacheKey() {
        return stringUrl != null ? stringUrl : url.toString();
    }

    ...
}

Here I've simplified the code a little bit to make it look simpler and clearer. The constructor of the GlideUrl class receives two types of parameters: a URL string and a URL object. Then the judgment logic in the getCacheKey() method is very simple. If the URL string is passed in, the string itself is returned directly. If the URL object is passed in, the result after the object toString() is returned.

By now I'm sure you can guess the solution, because the logic in getCacheKey() is so straightforward: it just returns the image's url address to serve as the cache key. All we need to do is override the getCacheKey() method and add a little logic of our own to solve the problem.

Create a MyGlideUrl that inherits from GlideUrl. The code is as follows:

public class MyGlideUrl extends GlideUrl {

    private String mUrl;

    public MyGlideUrl(String url) {
        super(url);
        mUrl = url;
    }

    @Override
    public String getCacheKey() {
        // Use the url with the token parameter stripped out as the cache key.
        return mUrl.replace(findTokenParam(), "");
    }

    private String findTokenParam() {
        String tokenParam = "";
        // Locate the token, whether it is the first ("?token=") or a later ("&token=") parameter.
        int tokenKeyIndex = mUrl.indexOf("?token=") >= 0 ? mUrl.indexOf("?token=") : mUrl.indexOf("&token=");
        if (tokenKeyIndex != -1) {
            int nextAndIndex = mUrl.indexOf("&", tokenKeyIndex + 1);
            if (nextAndIndex != -1) {
                // The token is followed by other parameters: cut out "token=...&".
                tokenParam = mUrl.substring(tokenKeyIndex + 1, nextAndIndex + 1);
            } else {
                // The token is the last parameter: cut from its separator to the end of the url.
                tokenParam = mUrl.substring(tokenKeyIndex);
            }
        }
        return tokenParam;
    }

}

As you can see, we override the getCacheKey() method and add logic that removes the token parameter from the image url. getCacheKey() therefore returns a url address without the token, so no matter how the token changes, Glide's cache key stays fixed.

Of course, with MyGlideUrl defined, we still have to use it. We can change the code to load the image to the following way:

Glide.with(this)
     .load(new MyGlideUrl(url))
     .into(imageView);

That is to say, we need to pass in this custom MyGlideUrl object in the load() method, instead of directly passing in the url string as before. Otherwise, Glide will use the original GlideUrl class internally, rather than our custom MyGlideUrl class.
