Original source: http://www.tanjp.com/archives/144 (subject to ongoing corrections and updates)
Double Cache Queue (DoubleCacheQueue)
The idea behind double buffering is to keep two buffers (call them A and B). At any moment one buffer belongs to the producer and the other to the consumer. When a trigger condition is met, the two are swapped: the buffer the producer has been writing into is handed to the consumer to read from, and the buffer the consumer has already drained is handed back to the producer to write into. The trigger condition usually favors fast consumption (to avoid accumulating too much data in memory), so the swap happens when the consumer's buffer becomes empty. Because the producer and the consumer never operate on the same buffer at the same time, there is no conflict, and individual reads and writes of data units need no per-element synchronization or mutual exclusion.
PS: the structure is thread-safe. When production and consumption rates are unbalanced, swaps happen less often, which can noticeably improve performance.
Partial code implementation:
#include <atomic>
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>

template<typename tpType>
class DoubleCacheQueue
{
    typedef std::queue<tpType> Queue;
    typedef spin_mutex Mutex;
public:
    explicit DoubleCacheQueue()
        : mn_count(0)
        , mp_push_queue(0)
        , mp_pop_queue(0)
        , mc_queue_A()
        , mc_queue_B()
    {
        mp_push_queue = &mc_queue_A;
        mp_pop_queue  = &mc_queue_B;
    }

    bool push(const tpType & po_val)
    {
        std::lock_guard<Mutex> lock(mo_mutex_push);
        mp_push_queue->push(po_val);
        ++mn_count;
        return true;
    }

    bool pop(tpType & po_val)
    {
        std::lock_guard<Mutex> lock(mo_mutex_pop);
        if (!mp_pop_queue->empty())
        {
            // Data can be taken out directly
            po_val = std::move(mp_pop_queue->front());
            mp_pop_queue->pop();
            --mn_count;
            return true;
        }
        else
        {
            {
                // Try to exchange the two queues
                std::lock_guard<Mutex> lock(mo_mutex_push);
                if (mp_push_queue->empty())
                {
                    return false; // No data in either queue
                }
                // Data available: swap the push and pop queues
                std::queue<tpType> * zp_tmp = mp_push_queue;
                mp_push_queue = mp_pop_queue;
                mp_pop_queue = zp_tmp;
            }
            // Take out data after the exchange
            po_val = std::move(mp_pop_queue->front());
            mp_pop_queue->pop();
            --mn_count;
            return true;
        }
    }

protected:
    std::atomic<std::uint32_t> mn_count;  // number of elements across both queues
    Queue * mp_push_queue;                // queue the producer writes into
    Queue * mp_pop_queue;                 // queue the consumer reads from
    Queue   mc_queue_A;
    Queue   mc_queue_B;
    Mutex   mo_mutex_push;                // protects the push-side queue
    Mutex   mo_mutex_pop;                 // protects the pop-side queue
};
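To illustrate how the class above might be used, here is a minimal single-producer / single-consumer sketch. The item type, element count, and loop structure are illustrative assumptions of mine, not part of the original code.

// Minimal usage sketch (assumes DoubleCacheQueue and spin_mutex above are in scope).
#include <cstdio>
#include <thread>

int main()
{
    DoubleCacheQueue<int> queue;

    // Producer: pushes 1000 integers.
    std::thread producer([&queue] {
        for (int i = 0; i < 1000; ++i)
        {
            queue.push(i);
        }
    });

    // Consumer: pops until it has received all 1000 items,
    // retrying when pop() reports that both queues are empty.
    std::thread consumer([&queue] {
        int received = 0;
        int value = 0;
        while (received < 1000)
        {
            if (queue.pop(value))
            {
                ++received;
            }
        }
        std::printf("consumed %d items\n", received);
    });

    producer.join();
    consumer.join();
    return 0;
}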
Spin Mutex (SpinMutex)
A spin lock is a kind of lock used to protect resources shared between threads. It differs from a mutex in that, when trying to acquire ownership of the lock, it repeatedly checks in a busy-waiting loop whether the lock has become available. On multiprocessor systems, using a spin lock instead of a mutex can improve performance when the lock is held only for short periods.
#include <atomic>

class spin_mutex
{
public:
    spin_mutex() = default;

    void lock()
    {
        // Busy-wait until the flag is successfully acquired
        while (mn_flag.test_and_set(std::memory_order_acquire));
    }

    void unlock()
    {
        mn_flag.clear(std::memory_order_release);
    }

private:
    std::atomic_flag mn_flag = ATOMIC_FLAG_INIT;
};
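Since spin_mutex provides lock() and unlock(), it satisfies the requirements of std::lock_guard and can be dropped in wherever a standard mutex would be used. The sketch below, with thread and iteration counts chosen only for illustration, increments a shared counter from several threads.

#include <cstdio>
#include <thread>
#include <vector>

spin_mutex g_mutex;
long g_counter = 0;

void add_many()
{
    for (int i = 0; i < 100000; ++i)
    {
        // std::lock_guard works with spin_mutex just like with std::mutex.
        std::lock_guard<spin_mutex> lock(g_mutex);
        ++g_counter;
    }
}

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
    {
        threads.emplace_back(add_many);
    }
    for (auto & t : threads)
    {
        t.join();
    }
    std::printf("counter = %ld\n", g_counter); // expect 400000
    return 0;
}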