Summary
The previous chapter covered the YYCache file structure and the methods of the YYCache class. This chapter analyzes the memory cache class YYMemoryCache. This object maintains a dictionary to store cached data and supports limiting cache capacity: when the cached data exceeds the specified memory limit, some entries are evicted. This is implemented with an LRU algorithm.
LRU
LRU stands for Least Recently Used, a cache eviction algorithm. The idea is to maintain a doubly linked list. Whenever new data is cached, it is wrapped in a node and inserted at the head of the list; whenever cached data is accessed, its node is moved to the head. This guarantees that recently used (stored or accessed) data is always near the front of the list. When the total amount of cached data exceeds capacity, nodes are removed from the tail first, because the tail holds the least recently used data.
YYMemoryCache maintains a _YYLinkedMap object, which implements both the cache storage and the LRU behavior. The annotated code:
@interface _YYLinkedMap : NSObject {
@package
CFMutableDictionaryRef _dic; //Hash dictionary to store cached data
NSUInteger _totalCost; //Total cache size
NSUInteger _totalCount; //Number of cached nodes
_YYLinkedMapNode *_head; //head node
_YYLinkedMapNode *_tail; //Tail node
BOOL _releaseOnMainThread; //Whether to release nodes on the main thread
BOOL _releaseAsynchronously;//Whether to release nodes asynchronously
}
@end
_dic is a hash dictionary that stores the cached nodes. _head and _tail point to the head and tail of the doubly linked list. Each node in the list is a _YYLinkedMapNode object, which wraps one piece of cached data.
@interface _YYLinkedMapNode : NSObject {
@package
__unsafe_unretained _YYLinkedMapNode *_prev; //Pointer to the previous node
__unsafe_unretained _YYLinkedMapNode *_next; //Pointer to the next node
id _key; //Cached data key
id _value; //Cached data value
NSUInteger _cost; //Cost (size) of this node
NSTimeInterval _time; //Last access timestamp
}
@end
The following analyzes the main methods of _YYLinkedMap:
- insertNodeAtHead: method
This method first stores the new node in the dictionary, using the node's key as the dictionary key. It then updates the total size _totalCost and the total node count _totalCount, and places the node at the head of the list.
- (void)insertNodeAtHead:(_YYLinkedMapNode *)node {
    CFDictionarySetValue(_dic, (__bridge const void *)(node->_key), (__bridge const void *)(node)); //Store in the dictionary
    _totalCost += node->_cost; //Update total size
    _totalCount++; //Update total count
    if (_head) { //Place the node at the head of the list
        node->_next = _head;
        _head->_prev = node;
        _head = node;
    } else {
        _head = _tail = node;
    }
}
- bringNodeToHead: method
This method moves a node to the head of the list. It is only called when the node already exists in the dictionary, so no dictionary insertion is needed.
- (void)bringNodeToHead:(_YYLinkedMapNode *)node {
    if (_head == node) return;
    if (_tail == node) {
        _tail = node->_prev;
        _tail->_next = nil;
    } else {
        node->_next->_prev = node->_prev;
        node->_prev->_next = node->_next;
    }
    node->_next = _head;
    node->_prev = nil;
    _head->_prev = node;
    _head = node;
}
- removeNode: and removeTailNode methods
The removeNode: method removes a node from both the dictionary and the linked list, and updates the total size _totalCost and the total node count _totalCount. The removeTailNode method removes the tail node from the list and from the dictionary.
- removeAll method
This method removes all nodes from the list and clears the dictionary.
YYMemoryCache
YYMemoryCache implements memory caching. Here are the member variables it maintains:
pthread_mutex_t _lock;
_YYLinkedMap *_lru;
dispatch_queue_t _queue;
_lock is a mutex. When code may run on multiple threads, wrapping a block between pthread_mutex_lock(&_lock) and pthread_mutex_unlock(&_lock) makes it mutually exclusive: any other thread that tries to acquire the lock blocks until pthread_mutex_unlock(&_lock) is called. For example:
pthread_mutex_lock(&_lock);
//Code block 1
pthread_mutex_unlock(&_lock);
pthread_mutex_lock(&_lock);
//Code block 2
pthread_mutex_unlock(&_lock);
Suppose thread A executes code block 1 and thread B executes code block 2. If thread A runs first and acquires _lock, thread B blocks until thread A finishes block 1 and calls pthread_mutex_unlock(&_lock); only then does thread B begin block 2. Because cache operations are frequently called from multiple threads, they must be protected with pthread_mutex_lock. YYCache's author, ibireme, benchmarked the performance of various locks in a post on his blog.
_lru stores the cached data and implements the LRU algorithm. The following analyzes YYMemoryCache's main methods:
- Initialization
Initialization happens in the init method: it creates the _lru object and sets default parameters, including the node count limit, the total cost limit, the age limit, and the auto-trim interval:
- (instancetype)init {
    self = super.init;
    pthread_mutex_init(&_lock, NULL);
    _lru = [_YYLinkedMap new];
    _queue = dispatch_queue_create("com.ibireme.cache.memory", DISPATCH_QUEUE_SERIAL);
    _countLimit = NSUIntegerMax;
    _costLimit = NSUIntegerMax;
    _ageLimit = DBL_MAX;
    _autoTrimInterval = 5.0;
    ...
    [self _trimRecursively];
    return self;
}
If these limits are not set explicitly, they default to their maximum values. Together with the _trimRecursively method, they drive the cache's boundary checks, described below.
- Storing data
Call setObject:forKey: (which forwards to setObject:forKey:withCost:) to store cached data:
- (void)setObject:(id)object forKey:(id)key withCost:(NSUInteger)cost {
    if (!key) return;
    if (!object) {
        [self removeObjectForKey:key];
        return;
    }
    pthread_mutex_lock(&_lock); //Lock
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key)); //Look up the node in the dictionary
    NSTimeInterval now = CACurrentMediaTime();
    if (node) { //Found: cached data for this key already exists in the list
        //Update the total cost
        _lru->_totalCost -= node->_cost;
        _lru->_totalCost += cost;
        node->_cost = cost;
        node->_time = now; //Update the node's access time
        node->_value = object; //Update the cached value stored in the node
        [_lru bringNodeToHead:node]; //Move the node to the head of the list
    } else { //Not found: no cached data for this key yet
        node = [_YYLinkedMapNode new]; //Create a new node
        node->_cost = cost;
        node->_time = now; //Set the node's access time
        node->_key = key; //Set the node's key
        node->_value = object; //Set the cached value stored in the node
        [_lru insertNodeAtHead:node]; //Insert the new node at the head of the list
    }
    if (_lru->_totalCost > _costLimit) {
        dispatch_async(_queue, ^{
            [self trimToCost:_costLimit];
        });
    }
    if (_lru->_totalCount > _countLimit) {
        _YYLinkedMapNode *node = [_lru removeTailNode];
        ...
    }
    pthread_mutex_unlock(&_lock); //Unlock
}
The method first checks key and object; if object is nil, the data for that key is removed from the cache. It then looks up the node for key in the dictionary, giving two cases. If a node is found, the cached data already exists, so following the LRU principle the node is moved to the head of the list and its access time is updated. If no node is found, this is the first time the key is stored: a new node is created, added to the dictionary, and inserted at the head of the list. cost is caller-specified and defaults to 0.
- Reading data
Call objectForKey: to read cached data:
- (id)objectForKey:(id)key {
    if (!key) return nil;
    pthread_mutex_lock(&_lock);
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key)); //Look up the node for key in the dictionary
    if (node) {
        node->_time = CACurrentMediaTime(); //Update the node's access time
        [_lru bringNodeToHead:node]; //Move the node to the head of the list
    }
    pthread_mutex_unlock(&_lock);
    return node ? node->_value : nil;
}
This method fetches the cached data from the dictionary. If a node exists for the key, its access time is updated and, following the LRU principle, it is moved to the head of the list. Otherwise the method returns nil.
- Boundary detection
YYMemoryCache enforces its capacity limits with the LRU algorithm. At initialization, the _trimRecursively method is called; by default it reschedules itself every five seconds via dispatch_after:
- (void)_trimRecursively {
    __weak typeof(self) _self = self;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(_autoTrimInterval * NSEC_PER_SEC)), dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        __strong typeof(_self) self = _self;
        if (!self) return;
        [self _trimInBackground]; //Run boundary checks on an asynchronous queue
        [self _trimRecursively]; //Schedule the next check recursively
    });
}
_trimInBackground in turn calls _trimToCost:, _trimToCount:, and _trimToAge: to perform the checks.
The _trimToCost: method checks whether the total cost of all nodes, _totalCost, exceeds costLimit; if so, it removes nodes from the tail of the list until _totalCost is less than or equal to costLimit. Removed nodes are collected in the holder array so their objects can be released outside the lock. The annotated code:
- (void)_trimToCost:(NSUInteger)costLimit {
    BOOL finish = NO;
    ...
    NSMutableArray *holder = [NSMutableArray new];
    while (!finish) {
        if (pthread_mutex_trylock(&_lock) == 0) {
            if (_lru->_totalCost > costLimit) {
                _YYLinkedMapNode *node = [_lru removeTailNode]; //Remove the tail node
                if (node) [holder addObject:node];
            } else {
                finish = YES; //_totalCost <= costLimit, done
            }
            pthread_mutex_unlock(&_lock);
        } else {
            usleep(10 * 1000); //10 ms
        }
    }
    ...
}
Each node's cost is caller-specified and defaults to 0, and costLimit defaults to NSUIntegerMax, so by default _trimToCost: removes no nodes.
The _trimToCount: method checks whether the total node count exceeds countLimit; if so, it removes nodes from the tail of the list until the count is less than or equal to countLimit. The annotated code:
- (void)_trimToCount:(NSUInteger)countLimit {
    BOOL finish = NO;
    ...
    NSMutableArray *holder = [NSMutableArray new];
    while (!finish) {
        if (pthread_mutex_trylock(&_lock) == 0) {
            if (_lru->_totalCount > countLimit) {
                _YYLinkedMapNode *node = [_lru removeTailNode]; //Remove the tail node
                if (node) [holder addObject:node];
            } else {
                finish = YES; //_totalCount <= countLimit, done
            }
            pthread_mutex_unlock(&_lock);
        } else {
            usleep(10 * 1000); //10 ms
        }
    }
    ...
}
At initialization, countLimit defaults to NSUIntegerMax. If it is never set, the node count can never exceed the limit, so _trimToCount: removes no nodes.
The _trimToAge: method removes nodes whose interval since last access is greater than ageLimit. The code:
- (void)_trimToAge:(NSTimeInterval)ageLimit {
    BOOL finish = NO;
    ...
    NSMutableArray *holder = [NSMutableArray new];
    while (!finish) {
        if (pthread_mutex_trylock(&_lock) == 0) {
            if (_lru->_tail && (now - _lru->_tail->_time) > ageLimit) { //Tail older than ageLimit
                _YYLinkedMapNode *node = [_lru removeTailNode]; //Remove the tail node
                if (node) [holder addObject:node];
            } else {
                finish = YES;
            }
            pthread_mutex_unlock(&_lock);
        } else {
            usleep(10 * 1000); //10 ms
        }
    }
    ...
}
Because the nodes are ordered from most recently to least recently accessed, head to tail, the tail node has the largest interval from now, so removal starts at the tail. ageLimit defaults to DBL_MAX; if it is never set, no node is removed for age.
- Thread synchronization
YYMemoryCache keeps cached data consistent across threads by wrapping each operation in the mutex. For example, in the setObject:forKey:withCost: and objectForKey: methods:
- (void)setObject:(id)object forKey:(id)key withCost:(NSUInteger)cost {
    pthread_mutex_lock(&_lock);
    //Update the linked list, write cached data
    pthread_mutex_unlock(&_lock);
}

- (id)objectForKey:(id)key {
    pthread_mutex_lock(&_lock);
    //Read cached data
    pthread_mutex_unlock(&_lock);
}
Given threads A and B: if thread A holds the lock while writing to the cache, thread B blocks when it tries to read. Thread B can read only after thread A finishes writing and calls pthread_mutex_unlock; by then the new data has already been written, so reads and writes stay consistent.
YYCache thus uses mutual exclusion to keep multithreaded data access synchronized and code execution safe.
Summary
YYMemoryCache implements the in-memory cache. Compared with the disk cache, which requires I/O operations, it is much faster, so YYCache reads from the memory cache first when accessing cached data.