Preface
LruCache is a memory-based cache class provided by Android. LRU is an abbreviation for Least Recently Used: when an entry in the cache has not been used recently, it is the first to be evicted when space is needed. In this article, we'll start with a brief introduction to the use of LruCache, and then analyze its source code.
Catalog
1. Basic usage examples
2. LruCache Source Analysis
3. Summary
1. Basic usage examples
First, let's briefly introduce how to use LruCache to implement a memory cache. Below is an example of using LruCache.
What we're doing here is taking a screenshot of an entire RecyclerView list. We need to capture a Bitmap for each item in the list, and once the Bitmaps for all items are available, draw them onto one complete Bitmap in order, at the right positions. If we didn't use LruCache, we could of course achieve the same thing by placing the Bitmaps of all the list items in a single List. However, that approach has a drawback: because a List holds strong references, it can cause an OOM when memory is low.
In the following method, we first take one-eighth of the available memory as the size of the cache space and use it to initialize the LruCache object. We then create a ViewHolder for each position of the RecyclerView adapter, render its Bitmap, and place it in the LruCache as a key-value pair. Once every list item has a Bitmap, we create the final Bitmap and draw each cached Bitmap onto it in turn:
```java
public static Bitmap shotRecyclerView(RecyclerView view) {
    RecyclerView.Adapter adapter = view.getAdapter();
    Bitmap bigBitmap = null;
    if (adapter != null) {
        int size = adapter.getItemCount();
        int height = 0;
        Paint paint = new Paint();
        int iHeight = 0;
        final int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024);
        // Use one-eighth of the available memory as the cache space
        final int cacheSize = maxMemory / 8;
        LruCache<String, Bitmap> bitmapCache = new LruCache<>(cacheSize);
        for (int i = 0; i < size; i++) {
            RecyclerView.ViewHolder holder =
                    adapter.createViewHolder(view, adapter.getItemViewType(i));
            adapter.onBindViewHolder(holder, i);
            holder.itemView.measure(
                    View.MeasureSpec.makeMeasureSpec(view.getWidth(), View.MeasureSpec.EXACTLY),
                    View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
            holder.itemView.layout(0, 0,
                    holder.itemView.getMeasuredWidth(),
                    holder.itemView.getMeasuredHeight());
            holder.itemView.setDrawingCacheEnabled(true);
            holder.itemView.buildDrawingCache();
            Bitmap drawingCache = holder.itemView.getDrawingCache();
            if (drawingCache != null) {
                bitmapCache.put(String.valueOf(i), drawingCache);
            }
            height += holder.itemView.getMeasuredHeight();
        }
        bigBitmap = Bitmap.createBitmap(view.getMeasuredWidth(), height, Bitmap.Config.ARGB_8888);
        Canvas bigCanvas = new Canvas(bigBitmap);
        Drawable lBackground = view.getBackground();
        if (lBackground instanceof ColorDrawable) {
            ColorDrawable lColorDrawable = (ColorDrawable) lBackground;
            int lColor = lColorDrawable.getColor();
            bigCanvas.drawColor(lColor);
        }
        for (int i = 0; i < size; i++) {
            Bitmap bitmap = bitmapCache.get(String.valueOf(i));
            bigCanvas.drawBitmap(bitmap, 0f, iHeight, paint);
            iHeight += bitmap.getHeight();
            bitmap.recycle();
        }
    }
    return bigBitmap;
}
```
To sum up, the basic usage of LruCache is as follows.
First, you need to declare the size of the cache space. Here we use one-eighth of the runtime memory as the size of the cache space:
```java
LruCache<String, Bitmap> bitmapCache = new LruCache<>(cacheSize);
```
However, one issue you should be aware of is the unit of the cache space. Because the values of LruCache's key-value pairs may be of any type, you need to specify how the size of the type you pass in is calculated. (We will come back to the question of units when we analyze the source code.) The LruCache API provides a method, sizeOf(), for calculating the size of the inserted values; we only need to override it when instantiating an LruCache. In the example above we did not override it, so each Bitmap object is counted as a size of 1, i.e. 1 KB in our unit.
We can then use its put() and get() methods to insert data into and retrieve data from the cache, just as with an ordinary Map:
```java
bitmapCache.put(String.valueOf(i), drawingCache);
Bitmap bitmap = bitmapCache.get(String.valueOf(i));
```
2. LruCache Source Analysis
2.1 Before analysis: What do we need to consider when we implement an LruCache ourselves
Before we analyze the source code of LruCache, let's think about what we would need to consider if we implemented an LruCache ourselves, so that we can read the source code with questions in mind.
Because we need to store data and retrieve it from the cache by a specified key, we need a hash table structure. Alternatively, we could use two arrays, one for keys and one for values, mapped to each other by index; however, that approach is less efficient than a hash table.
In addition, we need to keep the inserted elements in order, because we have to remove those that were used least recently. We can use a linked list for this: whenever an entry is used, we move it to one end of the list. This way, when inserting an element would exceed the cache's maximum space, we remove the element at the other end of the list to free up room.
Combining these two points, we need a data structure that has both hash-table and queue characteristics. The Java collections framework provides exactly that: LinkedHashMap.
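The property we need can be demonstrated in plain Java: LinkedHashMap's third constructor argument, accessOrder, makes iteration follow access order instead of insertion order, which is exactly what an LRU cache requires. A minimal sketch:

```java
import java.util.LinkedHashMap;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration goes from least recently used
        // (head) to most recently used (tail)
        LinkedHashMap<String, Integer> map =
                new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // touching "a" moves it to the tail (most recently used)
        // The eldest (head) entry is now "b"
        System.out.println(map.keySet()); // → [b, c, a]
    }
}
```

With accessOrder set to false (the default), the output would instead be the insertion order [a, b, c].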
In fact, LruCache in Android is also implemented on top of LinkedHashMap. LinkedHashMap extends HashMap; if you understand HashMap, its source code is not difficult to read. LinkedHashMap simply builds on HashMap by additionally linking each node into a doubly linked list; when access order is enabled, every element that is accessed or inserted is moved to the tail of that list. Android's LruCache is in turn built on LinkedHashMap, and its implementation has some clever details worth learning from.
2.2 LruCache Source Code Analysis
From the analysis above, we know why LinkedHashMap was chosen as the underlying data structure. Below we will analyze some of LruCache's methods. The implementation of this class takes many details into account and is well worth studying.
2.2.1 Maximum available space of the cache
LruCache has two fields, size and maxSize. maxSize is assigned in LruCache's constructor and represents the maximum available space of the cache:
```java
int cacheSize = 4 * 1024 * 1024; // 4 MiB; with this sizeOf(), the unit is bytes
LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getByteCount();
    }
};
```
Here we set the cache space to 4 MiB. We know the principle of LruCache: once the size of the space is specified, if inserting new elements would exceed that size, old elements are removed to make room for the new ones. Since the type of the inserted values is not fixed, calculating the size of each inserted object is left to the user.
In the code above, we used Bitmap's getByteCount() method directly to get the size of each Bitmap in bytes. Note that in the initial example we didn't do that; in that case, each Bitmap is simply counted as 1 KB.
sizeOf() is a protected method, clearly intended for the user to implement the calculation logic themselves. Its default return value is 1, and the unit it returns must be consistent with the unit used for maxSize when setting the cache size:
```java
protected int sizeOf(K key, V value) {
    return 1;
}
```
It is also worth mentioning that although this method is left to the user to override, the LruCache source never calls it directly; instead it calls the following wrapper:
```java
private int safeSizeOf(K key, V value) {
    int result = sizeOf(key, value);
    if (result < 0) {
        throw new IllegalStateException("Negative size: " + key + "=" + value);
    }
    return result;
}
```
This adds a check that guards against invalid return values. The consideration is very thoughtful: if an illegal value were silently accepted, it would lead to an unexpected error later that is difficult to track down. If we design an API for others to use and provide methods they can override, this design is worth borrowing.
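The pattern is easy to reuse outside of LruCache. A hypothetical sketch (the class name SizedCache and its methods are illustrative, not from the Android source, though the wrapper mirrors safeSizeOf()):

```java
// Hypothetical base class showing the "private safe wrapper around a
// user-overridable method" pattern used by LruCache.safeSizeOf().
abstract class SizedCache<K, V> {

    // Subclasses override this to report the size of a value; the
    // default counts every entry as size 1, as LruCache does.
    protected int sizeOf(K key, V value) {
        return 1;
    }

    // The framework only ever calls this wrapper, so an ill-behaved
    // override fails fast with a clear message instead of silently
    // corrupting the cache's size bookkeeping later.
    final int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }
}
```

The framework keeps control of the call site while still letting users customize the calculation, which is why the validation cannot be bypassed.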
2.2.2 The get() method of LruCache
Now let's analyze its get() method, which fetches the value corresponding to a specified key from the LruCache:
```java
/**
 * 1. Gets the element corresponding to the specified key; if it does not
 *    exist, creates one using the create() method.
 * 2. When an element is returned, it is moved to the tail of the queue.
 * 3. Returns null if the value is not in the cache and cannot be created.
 */
public final V get(K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        // If this returns non-null, the element is moved to the tail of
        // the access-order list; that logic lives in LinkedHashMap
        mapValue = map.get(key);
        if (mapValue != null) {
            // Cache hit
            hitCount++;
            return mapValue;
        }
        // Cache miss, possibly because the key-value pair was evicted
        missCount++;
    }

    // create() runs without holding the lock and may take a long time;
    // by the time it returns, the key may have been mapped by another thread
    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    // If the key was mapped to another value while we were creating,
    // undo our insertion to resolve the conflict
    synchronized (this) {
        createCount++;
        // put() returns the value previously mapped to the key, or null
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // There was a conflict, so undo the previous insertion
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}
```
Here the current instance is locked to ensure thread safety while the value is fetched. The create() method comes into play when the map's get() cannot find the data: the lookup may fail either because the key-value pair never existed, or because it was evicted when the cache was full. So LruCache provides a method the user can override to handle this situation; it returns null by default. If the user overrides create() and returns a non-null value, that value needs to be inserted into the hash table.
The insertion logic is also in a synchronized block. This is because create() may take a long time and runs outside the lock: by the time we insert the created value for the key, the key may already have a value. So if the map's put() returns non-null, it means the key already had a corresponding value, and the insertion needs to be undone. Finally, when mapValue is not null, the entryRemoved() method is also called; this method is invoked every time a key-value pair is removed from (or replaced in) the hash table.
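The undo trick relies on the Map.put() contract: put() returns the previous value mapped to the key, or null if there was none. The same conflict-handling logic can be sketched with a plain HashMap (the variable names here are illustrative, standing in for LruCache's createdValue and mapValue):

```java
import java.util.HashMap;
import java.util.Map;

public class PutUndoDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("key", "existing"); // another thread inserted this meanwhile

        String created = "freshly-created"; // result of a slow create()
        // Map.put() returns the previous mapping for the key, or null
        String previous = map.put("key", created);
        if (previous != null) {
            // Conflict: someone else won the race, so undo our insertion
            // and keep the value that was already in the map.
            map.put("key", previous);
        }
        System.out.println(map.get("key")); // → existing
    }
}
```

Because the check-and-undo happens inside one synchronized block in LruCache, the window between the two put() calls is never visible to other threads.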
Finally, the trimToSize() method is called to ensure that after the new value is inserted, the cache size does not exceed the value we specified. When the space in use exceeds the maximum cache size, the least recently used entries are removed from the hash table.
So how do we tell which entry is the least recently used? Let's first look at the definition of trimToSize():
```java
public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(
                        getClass().getName() + ".sizeOf() is reporting inconsistent results!");
            }
            if (size <= maxSize) {
                break;
            }
            // Get the least recently used entry and remove it
            Map.Entry<K, V> toEvict = map.eldest();
            if (toEvict == null) {
                break;
            }
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }
        entryRemoved(true, key, value, null);
    }
}
```
Clearly, the key is LinkedHashMap's eldest() method, which returns the following value:
```java
public Map.Entry<K, V> eldest() {
    return head;
}
```
That is the head node of LinkedHashMap's doubly linked list. But why remove the head node? At first glance, removing the head directly doesn't seem to follow the LRU principle at all. In fact it does: the magic happens in the get() method. LruCache's get() calls LinkedHashMap's get(), which in turn calls the following method after fetching the value:
```java
void afterNodeAccess(Node<K,V> e) { // move node to last
    LinkedHashMapEntry<K,V> last;
    if (accessOrder && (last = tail) != e) {
        LinkedHashMapEntry<K,V> p =
                (LinkedHashMapEntry<K,V>) e, b = p.before, a = p.after;
        p.after = null;
        if (b == null)
            head = a;
        else
            b.after = a;
        if (a != null)
            a.before = b;
        else
            last = b;
        if (last == null)
            head = p;
        else {
            p.before = last;
            last.after = p;
        }
        tail = p;
        ++modCount;
    }
}
```
The logic here moves the accessed node to the tail of the doubly linked list (when accessOrder is true). Therefore, the head node is always the least recently used one.
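Putting the pieces together, the whole eviction mechanism can be sketched in a few lines of plain Java. This is not the Android implementation (it counts entries rather than sizes), just a minimal illustration of access order plus head eviction, using LinkedHashMap's removeEldestEntry() hook:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: a LinkedHashMap in access order, evicting from the
// head (the eldest entry) whenever the entry count exceeds maxSize.
class MiniLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    MiniLruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the head when over capacity
    }

    public static void main(String[] args) {
        MiniLruCache<String, Integer> cache = new MiniLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // "a" becomes the most recently used entry
        cache.put("c", 3); // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // → [a, c]
    }
}
```

Without the get("a") call, "a" would have been the eldest entry and would have been evicted instead, which is exactly the behavior the afterNodeAccess() logic above provides.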
3. Summary
That's our summary of the usage and source code of LruCache. We only analyzed the get() flow here, because it is the core of LruCache: it covers both inserting values and moving recently used entries. As for the put() and remove() methods, they essentially delegate to LinkedHashMap internally, so we won't analyze them here. When using this framework, however, be sure to keep the units of the cached objects consistent!