Let's start with the actual scenarios. In our development work, we run into the following:
a) Closing idle connections. A server holds many client connections, and each must be closed after it has been idle for some period of time.
b) Caching. Objects in a cache must be evicted once they exceed their allowed idle time.
c) Task timeout handling. In a sliding-window request/response protocol, requests that have not received a response within the timeout must be handled.
One clumsy approach is to use a background thread that traverses all objects and checks them one by one. This is simple and easy to implement, but it has problems: when the number of objects is large, the full scan can hurt performance; the check interval is hard to tune (too large an interval hurts accuracy, too small an interval wastes CPU); and objects cannot be processed in timeout order.
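For contrast, the naive polling approach might be sketched like this (a hypothetical illustration; the class and method names here are made up for this example and do not appear later in the article):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the naive approach: a background thread
// periodically scans every entry and evicts the expired ones.
public class PollingEvictor {
    static class Entry {
        final long expireAtMillis;
        Entry(long ttlMillis) { this.expireAtMillis = System.currentTimeMillis() + ttlMillis; }
        boolean isExpired() { return System.currentTimeMillis() >= expireAtMillis; }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();

    public void put(String key, long ttlMillis) { entries.put(key, new Entry(ttlMillis)); }

    public boolean contains(String key) { return entries.containsKey(key); }

    // O(n) scan on every tick; accuracy is bounded by the scan interval,
    // and entries are NOT processed in expiration order.
    public void startEvictor(final long intervalMillis) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    entries.entrySet().removeIf(e -> e.getValue().isExpired());
                    try {
                        Thread.sleep(intervalMillis);
                    } catch (InterruptedException ie) {
                        return;
                    }
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
```

Every drawback mentioned above is visible here: each tick touches every entry, and the eviction accuracy can never be better than `intervalMillis`.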
For these scenarios, DelayQueue is the best choice.
DelayQueue is an interesting class provided in java.util.concurrent. Very clever, very elegant! However, neither the Javadoc nor the Java SE 5.0 source provides a sample. I first realized how useful DelayQueue is while reading the ScheduledThreadPoolExecutor source code. Later, in practical work, I applied it to session timeout management and to timeout handling of requests in a network communication protocol.
This article introduces DelayQueue, lists its application scenarios, and provides an implementation of the Delayed interface along with sample code.
DelayQueue is a BlockingQueue whose type parameter must implement Delayed. (If you are not familiar with BlockingQueue, read an introduction to it first.)
Delayed extends the Comparable interface; the basis for comparison is the remaining delay time. An implementation's getDelay should return the remaining delay measured against a fixed deadline, so that the relative ordering of elements does not change over time. Internally, DelayQueue is implemented with a PriorityQueue.
DelayQueue = BlockingQueue + PriorityQueue + Delayed
The key elements of DelayQueue are BlockingQueue, PriorityQueue, and Delayed. In other words, DelayQueue is a BlockingQueue implemented on top of a PriorityQueue whose priority baseline is time.
Their basic definitions are as follows:
public interface Comparable<T> {
    public int compareTo(T o);
}

public interface Delayed extends Comparable<Delayed> {
    long getDelay(TimeUnit unit);
}

public class DelayQueue<E extends Delayed> implements BlockingQueue<E> {
    private final PriorityQueue<E> q = new PriorityQueue<E>();
}
Internally, DelayQueue uses a priority queue. When DelayQueue's offer method is called, the Delayed object is added to the priority queue q, as follows:
public boolean offer(E e) {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        E first = q.peek();
        q.offer(e);
        if (first == null || e.compareTo(first) < 0)
            available.signalAll();
        return true;
    } finally {
        lock.unlock();
    }
}
DelayQueue's take method peeks at the head of priority queue q; if that element's delay has not yet expired, the thread awaits on the condition. As follows:
public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        for (;;) {
            E first = q.peek();
            if (first == null) {
                available.await();
            } else {
                long delay = first.getDelay(TimeUnit.NANOSECONDS);
                if (delay > 0) {
                    long tl = available.awaitNanos(delay);
                } else {
                    E x = q.poll();
                    assert x != null;
                    if (q.size() != 0)
                        available.signalAll(); // wake up other takers
                    return x;
                }
            }
        }
    } finally {
        lock.unlock();
    }
}
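To see the blocking, time-ordered behavior end to end, here is a minimal self-contained sketch (Task is a trivial Delayed implementation written just for this demo; it is not part of the sample classes that follow):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Minimal demo: items are offered out of order, but take() returns them
// in delay order, blocking until the head's delay has elapsed.
public class DelayQueueDemo {
    static class Task implements Delayed {
        final String name;
        final long deadlineNanos; // absolute deadline, fixed at construction

        Task(String name, long delayMillis) {
            this.name = name;
            this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        // Remaining delay relative to the fixed deadline.
        public long getDelay(TimeUnit unit) {
            return unit.convert(deadlineNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        public int compareTo(Delayed other) {
            long d = getDelay(TimeUnit.NANOSECONDS) - other.getDelay(TimeUnit.NANOSECONDS);
            return (d == 0) ? 0 : ((d < 0) ? -1 : 1);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<Task> q = new DelayQueue<Task>();
        q.offer(new Task("late", 200));
        q.offer(new Task("early", 50));
        System.out.println(q.take().name); // blocks about 50 ms, then prints "early"
        System.out.println(q.take().name); // blocks until about 200 ms, then prints "late"
    }
}
```

Note that the insertion order ("late" first) does not matter; the priority queue inside DelayQueue always keeps the element with the smallest remaining delay at the head.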
The following sample is a simple cache implementation. It consists of three classes: Pair, DelayItem, and Cache. As follows:
public class Pair<K, V> {
    public K first;

    public V second;

    public Pair() {
    }

    public Pair(K first, V second) {
        this.first = first;
        this.second = second;
    }
}
--------------
The following is the implementation of the Delayed interface:
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class DelayItem<T> implements Delayed {
    /** Base of nanosecond timings, to avoid wrapping */
    private static final long NANO_ORIGIN = System.nanoTime();

    /**
     * Returns nanosecond time offset by origin
     */
    final static long now() {
        return System.nanoTime() - NANO_ORIGIN;
    }

    /**
     * Sequence number to break scheduling ties, and in turn to guarantee FIFO order among tied
     * entries.
     */
    private static final AtomicLong sequencer = new AtomicLong(0);

    /** Sequence number to break ties FIFO */
    private final long sequenceNumber;

    /** The time the task is enabled to execute in nanoTime units */
    private final long time;

    private final T item;

    public DelayItem(T submit, long timeout) {
        this.time = now() + timeout;
        this.item = submit;
        this.sequenceNumber = sequencer.getAndIncrement();
    }

    public T getItem() {
        return this.item;
    }

    public long getDelay(TimeUnit unit) {
        long d = unit.convert(time - now(), TimeUnit.NANOSECONDS);
        return d;
    }

    public int compareTo(Delayed other) {
        if (other == this) // compare zero ONLY if same object
            return 0;
        if (other instanceof DelayItem) {
            DelayItem x = (DelayItem) other;
            long diff = time - x.time;
            if (diff < 0)
                return -1;
            else if (diff > 0)
                return 1;
            else if (sequenceNumber < x.sequenceNumber)
                return -1;
            else
                return 1;
        }
        long d = (getDelay(TimeUnit.NANOSECONDS) - other.getDelay(TimeUnit.NANOSECONDS));
        return (d == 0) ? 0 : ((d < 0) ? -1 : 1);
    }
}
The following is the implementation of Cache, including put and get methods, as well as a runnable main function:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;

public class Cache<K, V> {
    private static final Logger LOG = Logger.getLogger(Cache.class.getName());

    private ConcurrentMap<K, V> cacheObjMap = new ConcurrentHashMap<K, V>();

    private DelayQueue<DelayItem<Pair<K, V>>> q = new DelayQueue<DelayItem<Pair<K, V>>>();

    private Thread daemonThread;

    public Cache() {
        Runnable daemonTask = new Runnable() {
            public void run() {
                daemonCheck();
            }
        };
        daemonThread = new Thread(daemonTask);
        daemonThread.setDaemon(true);
        daemonThread.setName("Cache Daemon");
        daemonThread.start();
    }

    private void daemonCheck() {
        if (LOG.isLoggable(Level.INFO))
            LOG.info("cache service started.");
        for (;;) {
            try {
                DelayItem<Pair<K, V>> delayItem = q.take();
                if (delayItem != null) {
                    // Expired object: evict it only if the mapping is unchanged
                    Pair<K, V> pair = delayItem.getItem();
                    cacheObjMap.remove(pair.first, pair.second); // compare and remove
                }
            } catch (InterruptedException e) {
                if (LOG.isLoggable(Level.SEVERE))
                    LOG.log(Level.SEVERE, e.getMessage(), e);
                break;
            }
        }
        if (LOG.isLoggable(Level.INFO))
            LOG.info("cache service stopped.");
    }

    // Add a cached object
    public void put(K key, V value, long time, TimeUnit unit) {
        cacheObjMap.put(key, value);
        // Any DelayItem already queued for this key becomes stale; it is
        // harmless because the compare-and-remove in daemonCheck only evicts
        // when the map still holds the old value.
        long nanoTime = TimeUnit.NANOSECONDS.convert(time, unit);
        q.put(new DelayItem<Pair<K, V>>(new Pair<K, V>(key, value), nanoTime));
    }

    public V get(K key) {
        return cacheObjMap.get(key);
    }

    // Test entry point
    public static void main(String[] args) throws Exception {
        Cache<Integer, String> cache = new Cache<Integer, String>();
        cache.put(1, "aaaa", 3, TimeUnit.SECONDS);

        Thread.sleep(1000 * 2);
        {
            String str = cache.get(1);
            System.out.println(str);
        }
        Thread.sleep(1000 * 2);
        {
            String str = cache.get(1);
            System.out.println(str);
        }
    }
}
Running the sample, the main function produces two lines of output: the first is aaaa, and the second is null, because the entry has expired by the time of the second get.