Go provides an object pool, Pool, in the sync package; it is usually just called an object pool. Go is automatically garbage collected, which greatly reduces the programming burden, but GC is a double-edged sword: it is friendly to beginning programmers, yet as a project grows larger, memory maintenance issues gradually surface.
sync.Pool is a pool of temporary objects that can be stored and retrieved, exposing the New, Get, and Put APIs. This article analyzes how sync.Pool works.
What is the purpose of Pool design?
Pool saves and reuses temporary objects, reducing memory allocations and GC pressure.
Code implementation
The best way to understand the implementation is to read the source code (src/pkg/sync/pool.go).
The data structure is defined as follows:
Pool structure:
type Pool struct {
	noCopy noCopy

	local     unsafe.Pointer // local fixed-size per-P pool, actual type is [P]poolLocal
	localSize uintptr        // size of the local array

	victim     unsafe.Pointer // local from previous cycle
	victimSize uintptr        // size of victims array

	// New optionally specifies a function to generate
	// a value when Get would otherwise return nil.
	// It may not be changed concurrently with calls to Get.
	New func() interface{}
}
Interpret the meaning of each member:
noCopy: prevents sync.Pool from being copied
local: pointer to the per-P poolLocal array
localSize: size of the local array
victim: pointer to the victim poolLocal array left over from the previous GC cycle
victimSize: size of the victim array
New: user-supplied function that creates a new object of the desired type when the pool has nothing to return
poolLocalInternal structure:
// Local per-P Pool appendix.
type poolLocalInternal struct {
	private interface{} // Can be used only by the respective P.
	shared  poolChain   // Local P can pushHead/popHead; any P can popTail.
}
Interpret the meaning of each member:
private: can only be used by the corresponding P
shared: the local P can pushHead/popHead; any P can popTail
The poolLocal structure:
type poolLocal struct {
	poolLocalInternal

	// Pad poolLocal out to a multiple of 128 bytes, i.e. two 64-byte
	// cache lines, to prevent false sharing on widespread platforms
	// with 128 mod (cache line size) = 0.
	pad [128 - unsafe.Sizeof(poolLocalInternal{})%128]byte
}
A poolLocal is bound to a P; that is, each P holds one poolLocal. Each poolLocal is padded out to a multiple of two cache lines (128 bytes) so that two poolLocals never share a cache line, which would cause false sharing.
Pool exposes three main interfaces
New func() interface{}
func (p *Pool) Put(x interface{})
func (p *Pool) Get() interface{}
Put
Put places a temporary object into the Pool. The private slot is tried first; if private is already occupied, the object is pushed onto the head of the shared chain instead. The source code is as follows:
// Put adds x to the pool.
func (p *Pool) Put(x interface{}) {
	if x == nil {
		return
	}
	if race.Enabled {
		if fastrand()%4 == 0 {
			// Randomly drop x on floor.
			return
		}
		race.ReleaseMerge(poolRaceAddr(x))
		race.Disable()
	}
	l, _ := p.pin()
	if l.private == nil {
		l.private = x
		x = nil
	}
	if x != nil {
		l.shared.pushHead(x)
	}
	runtime_procUnpin()
	if race.Enabled {
		race.Enable()
	}
}
Get
Get first selects the poolLocal bound to the current P from the per-P poolLocal array. The source code is as follows:
// If Get would otherwise return nil and p.New is non-nil, Get returns
// the result of calling p.New.
func (p *Pool) Get() interface{} {
	if race.Enabled {
		race.Disable()
	}
	l, pid := p.pin()
	// Get from private first
	x := l.private
	l.private = nil
	if x == nil {
		// Try to pop the head of the local shard. We prefer
		// the head over the tail for temporal locality of
		// reuse.
		x, _ = l.shared.popHead()
		if x == nil {
			// If not, get a new cache object
			x = p.getSlow(pid)
		}
	}
	runtime_procUnpin()
	if race.Enabled {
		race.Enable()
		if x != nil {
			race.Acquire(poolRaceAddr(x))
		}
	}
	// If getSlow also failed, fall back to New
	if x == nil && p.New != nil {
		x = p.New()
	}
	return x
}

func (p *Pool) getSlow(pid int) interface{} {
	// See the comment in pin regarding ordering of the loads.
	size := atomic.LoadUintptr(&p.localSize) // load-acquire
	locals := p.local                        // load-consume
	// Try to steal one element from other procs.
	for i := 0; i < int(size); i++ {
		l := indexLocal(locals, (pid+i+1)%int(size))
		if x, _ := l.shared.popTail(); x != nil {
			return x
		}
	}

	// Try the victim cache. We do this after attempting to steal
	// from all primary caches because we want objects in the
	// victim cache to age out if at all possible.
	size = atomic.LoadUintptr(&p.victimSize)
	if uintptr(pid) >= size {
		return nil
	}
	locals = p.victim
	l := indexLocal(locals, pid)
	if x := l.private; x != nil {
		l.private = nil
		return x
	}
	for i := 0; i < int(size); i++ {
		l := indexLocal(locals, (pid+i)%int(size))
		if x, _ := l.shared.popTail(); x != nil {
			return x
		}
	}

	// Mark the victim cache as empty for future gets don't
	// bother with it.
	atomic.StoreUintptr(&p.victimSize, 0)

	return nil
}

// pin pins the current goroutine to P, disables preemption and
// returns poolLocal pool for the P and the P's id.
// Caller must call runtime_procUnpin() when done with the pool.
func (p *Pool) pin() (*poolLocal, int) {
	pid := runtime_procPin()
	// In pinSlow we store to local and then to localSize, here we load in opposite order.
	// Since we've disabled preemption, GC cannot happen in between.
	// Thus here we must observe local at least as large localSize.
	// We can observe a newer/larger local, it is fine (we must observe its zero-initialized-ness).
	s := atomic.LoadUintptr(&p.localSize) // load-acquire
	l := p.local                          // load-consume
	if uintptr(pid) < s {
		return indexLocal(l, pid), pid
	}
	return p.pinSlow()
}

func (p *Pool) pinSlow() (*poolLocal, int) {
	// Retry under the mutex.
	// Can not lock the mutex while pinned.
	runtime_procUnpin()
	allPoolsMu.Lock()
	defer allPoolsMu.Unlock()
	pid := runtime_procPin()
	// poolCleanup won't be called while we are pinned.
	s := p.localSize
	l := p.local
	if uintptr(pid) < s {
		return indexLocal(l, pid), pid
	}
	if p.local == nil {
		allPools = append(allPools, p)
	}
	// If GOMAXPROCS changes between GCs, we re-allocate the array and lose the old one.
	size := runtime.GOMAXPROCS(0)
	local := make([]poolLocal, size)
	atomic.StorePointer(&p.local, unsafe.Pointer(&local[0])) // store-release
	atomic.StoreUintptr(&p.localSize, uintptr(size))         // store-release
	return &local[pid], pid
}
There are three sources from which Get obtains an object:
1. The private slot of the current P is tried first.
2. If that is empty, the shared pools are tried: first the head of the local P's shared chain, then (via getSlow) the tails of other Ps' chains and the victim cache.
3. If all of those fail, a new object is created with the New function.
In other words, objects come first from the private slot, then from the shared pools, and finally, when everything else fails, are freshly allocated from the heap by New.
CleanUp implementation
The package registers the poolCleanup function at init time. The source code is as follows:
func init() { runtime_registerPoolCleanup(poolCleanup) }
Here's how Pool's cleanup function, poolCleanup(), cleans up Pool. The source code is as follows:
func poolCleanup() {
	// This function is called with the world stopped, at the beginning of a garbage collection.
	// It must not allocate and probably should not call any runtime functions.

	// Because the world is stopped, no pool user can be in a
	// pinned section (in effect, this has all Ps pinned).

	// Drop victim caches from all pools.
	for _, p := range oldPools {
		p.victim = nil
		p.victimSize = 0
	}

	// Move primary cache to victim cache.
	for _, p := range allPools {
		p.victim = p.local
		p.victimSize = p.localSize
		p.local = nil
		p.localSize = 0
	}

	// The pools with non-empty primary caches now have non-empty
	// victim caches and no pools have primary caches.
	oldPools, allPools = allPools, nil
}

var (
	allPoolsMu Mutex

	// allPools is the set of pools that have non-empty primary
	// caches. Protected by either 1) allPoolsMu and pinning or 2)
	// STW.
	allPools []*Pool

	// oldPools is the set of pools that may have non-empty victim
	// caches. Protected by STW.
	oldPools []*Pool
)
Because it runs with the world stopped, this function must not allocate memory or call any runtime functions. In effect it only drops references: victim caches are cleared, primary caches are moved to victim, and the abandoned objects are left for the GC to reclaim automatically.
Summary
The Get method of sync.Pool makes no guarantees about the object it returns: any value placed in a local subpool may be removed at any time (for example during GC) without notifying the caller. The main purpose of sync.Pool is to increase the reuse rate of temporary objects and reduce the GC burden.
(WeChat Public Number [Programmed Ape Code])