As is well known, before JDK 1.5, Java programmers who wanted concurrency in their business logic usually had to implement it themselves; some open-source frameworks offered such facilities, but they were still not easy to use. To write high-quality concurrent multithreaded programs and avoid deadlock with the pre-1.5 primitives wait(), notify(), and synchronized, we often had to adopt fairly complex safety strategies whenever performance, deadlock, fairness, resource management, and thread safety had to be considered. Fortunately, with JDK 1.5 Doug Lea's java.util.concurrent package arrived to make concurrent programming simpler for us, helping developers reduce race conditions and deadlocks. The concurrent package addresses these problems well and gives us a much more practical model for concurrent programming.
Executor: executes submitted Runnable tasks.
ExecutorService: a thread pool manager with several implementations (some of which are introduced below); Runnable and Callable tasks can be submitted to the pool for scheduling.
Semaphore: a counting semaphore.
ReentrantLock: a reentrant mutual-exclusion Lock, similar to synchronized but considerably more powerful.
Future: an interface for interacting with Runnable and Callable tasks, for example to retrieve the result returned after a task finishes; it also supports cancelling a running task.
BlockingQueue: a blocking queue.
CompletionService: an extension of ExecutorService for retrieving the results of completed tasks.
CountDownLatch: a synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
CyclicBarrier: a synchronization aid that allows a group of threads to wait for each other until they all reach a common barrier point.
Future: represents the result of an asynchronous computation.
ScheduledExecutorService: an ExecutorService that can schedule commands to run after a given delay or to execute periodically.
Next, I will introduce them one by one.
Explanation of Executors' Main Methods
newFixedThreadPool
Creates a thread pool with a fixed number of reusable threads that run tasks off a shared unbounded queue (when all threads are busy, new requests simply wait in the queue until a thread is free). If any thread terminates because of a failure before shutdown, a new one takes its place if needed to execute subsequent tasks.
newCachedThreadPool
Creates a thread pool that creates new threads as needed but reuses previously constructed threads when they are available. These pools typically improve the performance of programs that execute many short-lived asynchronous tasks. Calls to execute reuse previously constructed threads if available; if no existing thread is available, a new thread is created and added to the pool. Threads that have not been used for 60 seconds are terminated and removed from the cache, so a pool that remains idle for long enough consumes no resources. Note that pools with similar properties but different details (for example, timeout parameters) can be created with the ThreadPoolExecutor constructor.
newSingleThreadExecutor (a single background thread)
Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note that if this single thread terminates because of a failure before shutdown, a new one will take its place if needed to execute subsequent tasks.) Tasks are guaranteed to execute sequentially, and no more than one task is active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1), the executor returned by this method is guaranteed not to be reconfigurable to use additional threads.
These methods all return an ExecutorService object, which can be thought of as a thread pool.
The thread pool's functionality is fairly complete: tasks can be submitted with submit() (or execute()), and the pool can be terminated with shutdown().
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MyExecutor extends Thread {
    private int index;

    public MyExecutor(int i) {
        this.index = i;
    }

    public void run() {
        try {
            System.out.println("[" + this.index + "] start....");
            Thread.sleep((int) (Math.random() * 1000));
            System.out.println("[" + this.index + "] end.");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        ExecutorService service = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            service.execute(new MyExecutor(i));
            // service.submit(new MyExecutor(i));
        }
        System.out.println("submit finish");
        service.shutdown();
    }
}
The output shows something, but it is not very clear how the thread pool works, so let us make the sleep time ten times longer:
Thread.sleep((int)(Math.random()*10000));
Running it again makes it clear that only four threads execute at a time; when one task finishes, the next one starts. In other words, after we submit all the tasks, the pool keeps working through them and then waits for the final shutdown. We can also see that submitted tasks are placed in an "unbounded queue", which is an ordered (FIFO) queue.
In addition, since the pool was created with an Executors static factory method as a fixed thread pool, its threads are, as the name implies, never released, even when they are idle.
This can cause performance problems: if the pool size is 200, for example, then once all the threads have been used they all stay in the pool, and the corresponding memory and thread-switching overhead remains.
To avoid this, construct the pool directly with the ThreadPoolExecutor() constructor, which, like most general-purpose thread pools, lets you set the maximum number of threads, the core (minimum) number of threads, and the keep-alive time for idle threads.
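A rough sketch of such a construction follows (the pool sizes, keep-alive time, and queue capacity below are arbitrary values chosen only for illustration):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MyThreadPoolExecutor {
    public static void main(String[] args) {
        // Core size 2, maximum size 4; idle threads beyond the core are
        // reclaimed after 60 seconds; waiting tasks are held in a bounded
        // queue of capacity 100.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100));
        for (int i = 0; i < 10; i++) {
            final int index = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("[" + index + "] running");
                }
            });
        }
        pool.shutdown();
    }
}

Note that with a bounded work queue, threads beyond the core size are only created once the queue fills up; that is a ThreadPoolExecutor design detail worth keeping in mind.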
This is the basic use of thread pools.
Semaphore
A counting semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocked acquirer. However, no actual permit objects are used; the Semaphore simply keeps a count of the number available and acts accordingly.
Semaphores are often used to restrict the number of threads that can access some (physical or logical) resource.
Here is a concrete scenario: people queue up to use a toilet that has only two places, and ten people need to use it.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class MySemaphore extends Thread {
    Semaphore position;
    private int id;

    public MySemaphore(int i, Semaphore s) {
        this.id = i;
        this.position = s;
    }

    public void run() {
        try {
            if (position.availablePermits() > 0) {
                System.out.println("customer[" + this.id + "] enters the toilet; there is a vacancy");
            } else {
                System.out.println("customer[" + this.id + "] enters the toilet; no vacancy, queues up");
            }
            position.acquire();
            System.out.println("customer[" + this.id + "] gets a place");
            Thread.sleep((int) (Math.random() * 1000));
            System.out.println("customer[" + this.id + "] has finished");
            position.release();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        ExecutorService list = Executors.newCachedThreadPool();
        Semaphore position = new Semaphore(2);
        for (int i = 0; i < 10; i++) {
            list.submit(new MySemaphore(i + 1, position));
        }
        list.shutdown();
        position.acquireUninterruptibly(2);
        System.out.println("After use, it needs cleaning.");
        position.release(2);
    }
}
ReentrantLock
A reentrant mutual-exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed by synchronized methods and statements, but with extended capabilities.
A ReentrantLock is owned by the thread that last successfully acquired it and has not yet released it. A thread invoking lock() acquires the lock and returns successfully when the lock is not owned by another thread; if the current thread already owns the lock, the method returns immediately. This can be checked with the isHeldByCurrentThread() and getHoldCount() methods.
The constructor accepts an optional fairness parameter.
When set to true, under contention the lock favors granting access to the longest-waiting thread; otherwise the lock does not guarantee any particular access order.
Programs using fair locks accessed by many threads may display lower overall throughput (i.e., run more slowly, often much more slowly) than those using the default (non-fair) setting, but have smaller variance in the time needed to obtain the lock and are less prone to starvation. Note, however, that fairness of the lock does not guarantee fairness of thread scheduling: one of many threads using a fair lock may obtain it several times in succession while other active threads are not progressing and do not currently hold the lock. Also note that the untimed tryLock() method does not honor the fairness setting; it will succeed if the lock is available, even if other threads are waiting.
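A small sketch of those two points, a fair lock plus the untimed tryLock (this class is invented here purely for illustration and is not part of the article's own example):

import java.util.concurrent.locks.ReentrantLock;

public class FairLockSketch {
    // A fair lock: under contention, the longest-waiting thread is favored.
    private final ReentrantLock fairLock = new ReentrantLock(true);

    public void doWork() {
        // The untimed tryLock ignores the fairness setting: it grabs the
        // lock if it is free, even if other threads are already queued.
        if (fairLock.tryLock()) {
            try {
                System.out.println("got the lock, hold count = " + fairLock.getHoldCount());
            } finally {
                fairLock.unlock();
            }
        } else {
            System.out.println("lock busy, doing something else");
        }
    }

    public static void main(String[] args) {
        new FairLockSketch().doWork();
    }
}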
It is recommended practice to always immediately follow a call to lock with a try block, most typically in a before/after construction such as:
class X {
    private final ReentrantLock lock = new ReentrantLock();
    // ...

    public void m() {
        lock.lock(); // block until condition holds
        try {
            // ... method body
        } finally {
            lock.unlock();
        }
    }
}
My example:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantLock;

public class MyReentrantLock extends Thread {
    TestReentrantLock lock;
    private int id;

    public MyReentrantLock(int i, TestReentrantLock test) {
        this.id = i;
        this.lock = test;
    }

    public void run() {
        lock.print(id);
    }

    public static void main(String args[]) {
        ExecutorService service = Executors.newCachedThreadPool();
        TestReentrantLock lock = new TestReentrantLock();
        for (int i = 0; i < 10; i++) {
            service.submit(new MyReentrantLock(i, lock));
        }
        service.shutdown();
    }
}

class TestReentrantLock {
    private ReentrantLock lock = new ReentrantLock();

    public void print(int str) {
        try {
            lock.lock();
            System.out.println(str + " get");
            Thread.sleep((int) (Math.random() * 1000));
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            System.out.println(str + " release");
            lock.unlock();
        }
    }
}
BlockingQueue
A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.
A BlockingQueue does not accept null elements. Implementations throw NullPointerException on attempts to add, put, or offer a null. Null is used as a sentinel value to indicate failure of poll operations.
A BlockingQueue may be capacity bounded. At any given time it has a remainingCapacity beyond which no additional elements can be put without blocking.
A BlockingQueue without any intrinsic capacity constraints always reports a remaining capacity of Integer.MAX_VALUE.
BlockingQueue implementations are designed to be used primarily as producer-consumer queues, but they additionally support the Collection interface, so it is possible, for example, to remove an arbitrary element from a queue using remove(x).
However, such operations are in general not performed very efficiently and are intended for only occasional use, such as when a queued message is cancelled.
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control.
However, the bulk Collection operations addAll, containsAll, retainAll, and removeAll are not necessarily performed atomically unless specified otherwise in an implementation, so it is possible, for example, for addAll(c) to fail (throwing an exception) after adding only some of the elements in c.
A BlockingQueue does not intrinsically support any kind of "close" or "shutdown" operation to indicate that no more items will be added.
The need for and use of such features tend to be implementation-dependent; a common tactic, for example, is for producers to insert special end-of-stream or "poison" objects, which consumers interpret accordingly when they take them.
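A minimal sketch of that poison-pill tactic (the POISON marker below is just an illustrative convention, not a library feature):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PoisonPillSketch {
    // A special marker the producer puts last; the consumer stops when it sees it.
    private static final String POISON = "POISON";

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
        Thread consumer = new Thread() {
            public void run() {
                try {
                    while (true) {
                        String item = queue.take();
                        if (item == POISON) {
                            break; // end-of-stream marker seen, stop consuming
                        }
                        System.out.println("consumed " + item);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        };
        consumer.start();
        for (int i = 0; i < 5; i++) {
            queue.put("item-" + i);
        }
        queue.put(POISON); // signal that no more items will be added
        consumer.join();
    }
}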
The following example demonstrates the basic functionality of this blocking queue.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class MyBlockingQueue extends Thread {
    // A bounded blocking queue with capacity 3
    public static BlockingQueue<String> queue = new LinkedBlockingQueue<String>(3);
    private int index;

    public MyBlockingQueue(int i) {
        this.index = i;
    }

    public void run() {
        try {
            queue.put(String.valueOf(this.index));
            System.out.println("{" + this.index + "} in queue!");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        ExecutorService service = Executors.newCachedThreadPool();
        for (int i = 0; i < 10; i++) {
            service.submit(new MyBlockingQueue(i));
        }
        Thread thread = new Thread() {
            public void run() {
                try {
                    while (true) {
                        Thread.sleep((int) (Math.random() * 1000));
                        if (MyBlockingQueue.queue.isEmpty())
                            break;
                        String str = MyBlockingQueue.queue.take();
                        System.out.println(str + " has take!");
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        service.submit(thread);
        service.shutdown();
    }
}
———— Sample output
{0} in queue!
{1} in queue!
{2} in queue!
{3} in queue!
0 has take!
{4} in queue!
1 has take!
{6} in queue!
2 has take!
{7} in queue!
3 has take!
{8} in queue!
4 has take!
{5} in queue!
6 has take!
{9} in queue!
7 has take!
8 has take!
5 has take!
9 has take!
CompletionService
A service that decouples the production of new asynchronous tasks from the consumption of the results of completed tasks. Producers submit tasks for execution; consumers take completed tasks and process their results in the order in which they complete. A CompletionService can, for example, be used to manage asynchronous I/O: tasks that perform reads are submitted in one part of a program or system, and then acted upon in a different part of the program when the reads complete, possibly in a different order than they were requested.
Typically, a CompletionService relies on a separate Executor to actually execute the tasks, in which case the CompletionService only manages an internal completion queue. The ExecutorCompletionService class provides an implementation of this approach.
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MyCompletionService implements Callable<String> {
    private int id;

    public MyCompletionService(int i) {
        this.id = i;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newCachedThreadPool();
        CompletionService<String> completion = new ExecutorCompletionService<String>(service);
        for (int i = 0; i < 10; i++) {
            completion.submit(new MyCompletionService(i));
        }
        for (int i = 0; i < 10; i++) {
            System.out.println(completion.take().get());
        }
        service.shutdown();
    }

    public String call() throws Exception {
        Integer time = (int) (Math.random() * 1000);
        try {
            System.out.println(this.id + " start");
            Thread.sleep(time);
            System.out.println(this.id + " end");
        } catch (Exception e) {
            e.printStackTrace();
        }
        return this.id + ":" + time;
    }
}
CountDownLatch
A synchronization helper class that allows one or more threads to wait until a set of operations are performed in other threads.
A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This happens only once; the count cannot be reset. If you need a version that resets the count, consider using a CyclicBarrier.
A CountDownLatch is a versatile synchronization tool and can be used for a number of purposes. A CountDownLatch initialized with a count of 1 serves as a simple on/off latch, or gate: all threads invoking await wait at the gate until it is opened by a thread invoking countDown().
A CountDownLatch initialized to N can be used to make one thread wait until N threads have completed some action, or until some action has been completed N times.
A useful property of CountDownLatch is that it does not require threads calling countDown to wait for the count to reach zero before proceeding; it simply prevents any thread that calls await from proceeding until all threads can pass.
The following example, written by someone else, illustrates this vividly.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TestCountDownLatch {
    public static void main(String[] args) throws InterruptedException {
        // Countdown latch signalling the start of the race
        final CountDownLatch begin = new CountDownLatch(1);
        // Countdown latch counting the runners who have finished
        final CountDownLatch end = new CountDownLatch(10);
        // Ten runners
        final ExecutorService exec = Executors.newFixedThreadPool(10);
        for (int index = 0; index < 10; index++) {
            final int NO = index + 1;
            Runnable run = new Runnable() {
                public void run() {
                    try {
                        begin.await(); // block until the start signal is given
                        Thread.sleep((long) (Math.random() * 10000));
                        System.out.println("No." + NO + " arrived");
                    } catch (InterruptedException e) {
                    } finally {
                        end.countDown();
                    }
                }
            };
            exec.submit(run);
        }
        System.out.println("Game Start");
        begin.countDown();
        end.await();
        System.out.println("Game Over");
        exec.shutdown();
    }
}
The most important methods of CountDownLatch are countDown() and await(): the former decrements the count by one, while the latter waits for the count to reach 0, blocking until it does.
CyclicBarrier
A synchronization aid that allows a group of threads to wait for each other until they all reach a common barrier point.
CyclicBarrier is useful in programs involving a fixed-size party of threads that must occasionally wait for each other. The barrier is called cyclic because it can be reused after the waiting threads are released.
A CyclicBarrier supports an optional Runnable command that is run once per barrier point, after the last thread in the party arrives but before any of the threads are released. This barrier action is useful for updating shared state before any of the parties continue.
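A small sketch of that barrier action (the class and messages below are invented purely for illustration):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierActionSketch {
    public static void main(String[] args) {
        // The Runnable passed to the constructor runs once each time all
        // three parties reach the barrier, before any of them is released.
        final CyclicBarrier barrier = new CyclicBarrier(3, new Runnable() {
            public void run() {
                System.out.println("all three arrived, updating shared state");
            }
        });
        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread() {
                public void run() {
                    try {
                        System.out.println("worker " + id + " waiting at the barrier");
                        barrier.await();
                        System.out.println("worker " + id + " released");
                    } catch (InterruptedException e) {
                    } catch (BrokenBarrierException e) {
                    }
                }
            }.start();
        }
    }
}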
Example usage: the classic tour-group example below uses a barrier in a parallel decomposition design:
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TestCyclicBarrier {
    // Walking times between the stops: Shenzhen, Guangzhou, Shaoguan, Changsha, Wuhan
    private static int[] timeWalk = { 5, 8, 15, 15, 10 };
    // Self-driving tour
    private static int[] timeSelf = { 1, 3, 4, 4, 5 };
    // The tour bus
    private static int[] timeBus = { 2, 4, 6, 6, 7 };

    static String now() {
        SimpleDateFormat sdf = new SimpleDateFormat("HH:mm:ss");
        return sdf.format(new Date()) + ": ";
    }

    static class Tour implements Runnable {
        private int[] times;
        private CyclicBarrier barrier;
        private String tourName;

        public Tour(CyclicBarrier barrier, String tourName, int[] times) {
            this.times = times;
            this.tourName = tourName;
            this.barrier = barrier;
        }

        public void run() {
            try {
                Thread.sleep(times[0] * 1000);
                System.out.println(now() + tourName + " Reached Shenzhen");
                barrier.await();
                Thread.sleep(times[1] * 1000);
                System.out.println(now() + tourName + " Reached Guangzhou");
                barrier.await();
                Thread.sleep(times[2] * 1000);
                System.out.println(now() + tourName + " Reached Shaoguan");
                barrier.await();
                Thread.sleep(times[3] * 1000);
                System.out.println(now() + tourName + " Reached Changsha");
                barrier.await();
                Thread.sleep(times[4] * 1000);
                System.out.println(now() + tourName + " Reached Wuhan");
                barrier.await();
            } catch (InterruptedException e) {
            } catch (BrokenBarrierException e) {
            }
        }
    }

    public static void main(String[] args) {
        // Three tour groups
        CyclicBarrier barrier = new CyclicBarrier(3);
        ExecutorService exec = Executors.newFixedThreadPool(3);
        exec.submit(new Tour(barrier, "WalkTour", timeWalk));
        exec.submit(new Tour(barrier, "SelfTour", timeSelf));
        // If the following line is commented out, the program blocks and never
        // finishes, because the barrier is waiting for three parties.
        exec.submit(new Tour(barrier, "BusTour", timeBus));
        exec.shutdown();
    }
}
The most important attribute of CyclicBarrier is the number of parties, and its most important method is await(). Only after await() has been called by all of the parties can the threads continue; until then they wait.
Future
A Future represents the result of an asynchronous computation. Methods are provided to check whether the computation is complete, to wait for its completion, and to retrieve its result.
The result can only be retrieved with the get method once the computation has completed; if necessary, get blocks until the computation finishes. Cancellation is performed by the cancel method, and additional methods are provided to determine whether the task completed normally or was cancelled. Once a computation has completed, it can no longer be cancelled.
If you would like to use a Future for the sake of cancellability but not provide a usable result, you can declare types of the form Future<?> and return null as the result of the underlying task.
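As a quick sketch (not the article's original example), a Callable submitted to an ExecutorService returns a Future whose get() blocks until the result is ready:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newSingleThreadExecutor();
        Future<Integer> future = service.submit(new Callable<Integer>() {
            public Integer call() throws Exception {
                Thread.sleep(500); // simulate some work
                return 42;
            }
        });
        System.out.println("doing other work while the task runs...");
        System.out.println("result = " + future.get()); // blocks until the result is ready
        service.shutdown();
    }
}

The article's final example below uses a ScheduledExecutorService, whose schedule methods return ScheduledFuture handles (a kind of Future) that are later cancelled to stop the periodic tasks: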
import static java.util.concurrent.TimeUnit.SECONDS;
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;

public class TestScheduledThread {
    public static void main(String[] args) {
        final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        final Runnable beeper = new Runnable() {
            int count = 0;
            public void run() {
                System.out.println(new Date() + " beep " + (++count));
            }
        };
        // Start after 1 second, then run every 2 seconds
        final ScheduledFuture<?> beeperHandle = scheduler.scheduleAtFixedRate(beeper, 1, 2, SECONDS);
        // Start after 2 seconds, then wait 5 seconds after each run finishes before starting the next
        final ScheduledFuture<?> beeperHandle2 = scheduler.scheduleWithFixedDelay(beeper, 2, 5, SECONDS);
        // After 30 seconds, cancel both tasks and shut down the scheduler
        scheduler.schedule(new Runnable() {
            public void run() {
                beeperHandle.cancel(true);
                beeperHandle2.cancel(true);
                scheduler.shutdown();
            }
        }, 30, SECONDS);
    }
}
This concludes the summary of the most important facilities in the concurrent package; I hope it helps in understanding them.