Local caching is commonly used in real-world services to reduce the number of round trips to the underlying storage. Different caching modes fit different usage scenarios, but the basic building block is always a large map/table (the name varies by language). The examples below use Go. The general approach is to wrap the map entry in a struct that records whatever bookkeeping the eviction policy (LFU, LRU, etc.) needs; the following sections illustrate this from several usage angles.
Local caching of commonly used configuration items
In practice, some mid-tier services and service-discovery components need to pull a fair number of configuration entries from etcd (and possibly other storage), but the total volume is still bounded. Key eviction is not a concern here, and the local memory footprint is small. The goals are simply to avoid hitting etcd (or the storage) on every access and to make sure the configuration is refreshed in a timely manner.
There are two common approaches:
- Pull periodically
  - The local side has to keep the full set of configurations and must know every key in advance, which is inflexible.
  - Pulling all configurations at the same time is a burden on storage, so the pulls may need to be randomly staggered.
- Cache each value after it is read and attach an expiration time
  - The first access after expiration fetches a fresh value.
  - If a single fetch fails, return the expired cached value.
The second approach is generally better, and its implementation is quite simple:
import ( "sync" "time" ) var localCache *Cache = &Cache{} type Cache struct { m sync.Map } type Entry struct { Expire time.Time Value interface{} } func (c *Cache) Set(key string, value interface{}, expiration time.Duration) { entry := &Entry{ Expire: time.Now().Add(expiration), Value: value, } c.m.Store(key, entry) } func (c *Cache) Get(key string) (value interface{}, expired bool) { v, ok := c.m.Load(key) e, ok := v.(*Entry) if !ok { return nil, false } return e.Value, e.Expire.Before(time.Now()) }
Core services with a large number of repetitive requests
The underlying core services are mostly CRUD operations, and access tends to concentrate on certain resources, such as core UGC content and users, e.g. Weibo/Twitter celebrity accounts (the feed itself is a push-pull combination; the case here is users clicking into the detail page). If their data sits directly on Redis/Memcached, it becomes a hot key and puts the storage in an unhealthy state. Whether to use LFU or LRU should be decided by the usage scenario.
- If it is hot content, such as financial news, freshness matters a lot: nobody cares about news from half an hour ago. That calls for LRU.
- If it is head-user profile data, freshness matters less, but access frequency matters: a celebrity's profile page gets uniformly high traffic most of the time (trending searches don't capture it, but it stays high-frequency). That is frequency-sensitive, so LFU fits better.
For LRU, the most widely used implementation is groupCache.
Others have built a simplified, ready-to-use version on top of it: golang-lru.
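A quick usage sketch, assuming this refers to hashicorp's golang-lru and its v1-style API, where `New` takes the maximum number of entries:

```go
import lru "github.com/hashicorp/golang-lru"

func example() {
	// An LRU cache holding at most 128 entries; the least recently
	// used entry is evicted when the limit is reached.
	cache, err := lru.New(128)
	if err != nil {
		panic(err)
	}

	cache.Add("user:123", "profile-data")

	if v, ok := cache.Get("user:123"); ok {
		_ = v // cache hit
	}
}
```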
There are fewer LFU implementations to choose from; lfu-go is the one used more commonly.
However, these caches are fairly generic. In a real environment we may need to modify them, or even combine LFU and LRU. At a minimum, the parameters worth exposing are a maximum length and an expiration time (some business data is time-sensitive, and size-based eviction alone may never remove it).
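A minimal sketch of what such a customization could look like: an LRU bounded by a maximum length whose entries also carry a TTL, so stale values are dropped even when the cache never fills up. Names such as `TTLLRU`, `maxLen`, and `ttl` are illustrative and not taken from any of the libraries above; locking is omitted for brevity.

```go
import (
	"container/list"
	"time"
)

type item struct {
	key    string
	value  interface{}
	expire time.Time
}

// TTLLRU is an LRU cache bounded by maxLen whose entries also expire after ttl.
// Note: not goroutine-safe; a real implementation would add a mutex.
type TTLLRU struct {
	maxLen int
	ttl    time.Duration
	ll     *list.List               // front = most recently used
	index  map[string]*list.Element // key -> list element
}

func NewTTLLRU(maxLen int, ttl time.Duration) *TTLLRU {
	return &TTLLRU{maxLen: maxLen, ttl: ttl, ll: list.New(), index: make(map[string]*list.Element)}
}

func (c *TTLLRU) Set(key string, value interface{}) {
	if el, ok := c.index[key]; ok {
		c.ll.MoveToFront(el)
		el.Value.(*item).value = value
		el.Value.(*item).expire = time.Now().Add(c.ttl)
		return
	}
	el := c.ll.PushFront(&item{key: key, value: value, expire: time.Now().Add(c.ttl)})
	c.index[key] = el
	if c.ll.Len() > c.maxLen {
		// Over capacity: evict the least recently used entry.
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.index, oldest.Value.(*item).key)
	}
}

func (c *TTLLRU) Get(key string) (interface{}, bool) {
	el, ok := c.index[key]
	if !ok {
		return nil, false
	}
	it := el.Value.(*item)
	if time.Now().After(it.expire) {
		// Expired: remove it even though the cache may not be full.
		c.ll.Remove(el)
		delete(c.index, key)
		return nil, false
	}
	c.ll.MoveToFront(el)
	return it.value, true
}
```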
Later we also found a more purpose-built cache implementation: BigCache.
Its sharding design greatly improves how well a big map scales. The principle and performance analysis can be found in the author's write-up: https://allegro.tech/2016/03/writing-fast-cache-service-in-go.html.
The comments in the README are clear enough to serve as a reference:
```go
config := bigcache.Config{
	// number of shards (must be a power of 2)
	Shards: 1024,
	// time after which entry can be evicted
	LifeWindow: 10 * time.Minute,
	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed.
	// Setting to < 1 second is counterproductive — bigcache has a one second resolution.
	CleanWindow: 5 * time.Minute,
	// rps * lifeWindow, used only in initial memory allocation
	MaxEntriesInWindow: 1000 * 10 * 60,
	// max entry size in bytes, used only in initial memory allocation
	MaxEntrySize: 500,
	// prints information about additional memory allocation
	Verbose: true,
	// cache will not allocate more memory than this limit, value in MB
	// if value is reached then the oldest entries can be overridden for the new ones
	// 0 value means no size limit
	HardMaxCacheSize: 8192,
	// callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A bitmask representing the reason will be returned.
	// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
	OnRemove: nil,
	// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A constant representing the reason will be passed through.
	// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
	// Ignored if OnRemove is specified.
	OnRemoveWithReason: nil,
}
```
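A short usage sketch, assuming the `bigcache.NewBigCache` constructor (newer versions also offer `bigcache.New` with a context argument); note that bigcache stores values as raw `[]byte`, so serialization is the caller's job:

```go
cache, err := bigcache.NewBigCache(config)
if err != nil {
	panic(err)
}

// Values must be serialized to bytes before being stored.
if err := cache.Set("user:123", []byte(`{"name":"alice"}`)); err != nil {
	panic(err)
}

if data, err := cache.Get("user:123"); err == nil {
	_ = data // cache hit; deserialize as needed
}
```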
If the service is mostly IO-bound, such as reading and writing a database, the cache size can be set fairly large: this reduces network IO, speeds up calls, and lowers the pressure on storage.