A Summary of and Reflection on Java Concurrent Programming
Writing high-quality concurrent code is a very difficult task. The Java language has had built-in support for multithreading since its very first version, which was remarkable in those years. However, as our understanding of concurrent programming has deepened and our practical experience has grown, there are now more schemes and better choices for implementing concurrency. This article is a summary of and reflection on concurrent programming, and it also shares a little experience on how to write concurrent code in Java 5 and later.
Why concurrency
Concurrency is actually a decoupling strategy: it helps us separate what gets done (the goal) from when it gets done (the timing). Doing so can significantly improve both the throughput of an application (more CPU time is put to use) and its structure (the program becomes several cooperating parts). Anyone who has done Java web development knows that Servlets run as a single instance serving multiple threads with the support of the Servlet container, and the container handles the concurrency for you.
Misconceptions and the correct understanding
The most common misconceptions about concurrent programming are the following:
- Concurrency always improves performance (concurrency can significantly improve performance when the CPU has a lot of idle time, but with a large number of threads the frequent scheduling and context switching between them degrades system performance)
- Writing concurrent programs does not require changing the original design (decoupling the goal from the timing often has a great impact on the system structure)
- There is no need to think about concurrency when using a Web or EJB container (only by understanding what the container does can we use it well)
The following statements reflect an objective understanding of concurrency:
- Writing concurrent programs adds extra overhead to the code
- Correct concurrency is very complex, even for very simple problems
- Defects in concurrent code are not easy to reproduce or find
- Concurrency often requires fundamental changes to the design strategy
Principles and skills of concurrent programming
Single responsibility principle
Separate concurrency-related code from other code (concurrency-related code has its own development, modification, and tuning life cycle).
Limit data scope
When two threads modify the same field of a shared object, they may interfere with each other and produce unexpected behavior. One solution is to build critical sections around the shared data, but the number of critical sections must be kept small.
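As a minimal sketch of limiting data scope (the class and field names here are invented for illustration), the shared state is hidden behind a single, small critical section guarded by a private lock object:

```java
/** A counter whose shared state is confined to one small critical section. */
public class HitCounter {

    private final Object lock = new Object();   // private lock object limits where synchronization can happen
    private long hits;                          // shared mutable state, touched only inside the critical section

    public void increment() {
        synchronized (lock) {   // keep the critical section as small as possible
            hits++;
        }
    }

    public long current() {
        synchronized (lock) {
            return hits;
        }
    }
}
```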
Use data copy
Making copies of data is a good way to avoid sharing it. The copied objects are treated as read-only. Java 5 adds a class named CopyOnWriteArrayList to the java.util.concurrent package; it is a subtype of the List interface, so you can think of it as a thread-safe version of ArrayList. It uses copy-on-write, creating a fresh copy of the underlying data for every modification, to avoid the problems caused by concurrent access to shared data.
Threads should be as independent as possible
Let each thread live in its own world and share no data with other threads. Anyone with Java Web development experience knows that Servlets work as single instances serving multiple threads, and the data related to each request is passed in through the parameters of the Servlet's service method (or doGet/doPost). As long as the code in the Servlet uses only local variables, the Servlet will not cause synchronization problems. Spring MVC controllers do the same: objects obtained from the request are passed in as method parameters rather than stored as class members. Struts 2 obviously takes the opposite approach, which is why an Action class serving as a Struts 2 controller is instantiated once per request.
Concurrent programming before Java 5
Java's thread model is based on preemptive thread scheduling, that is:
- All threads can easily share objects in the same process.
- Any thread that can reference these objects can modify them.
- To protect data, objects can be locked.
Thread-and-lock based concurrency in Java is quite low-level, and using locks is often rather evil, because it essentially turns concurrent execution back into queuing.
Before Java 5, the synchronized keyword was the way to get locking. It can be applied to code blocks and methods, and it means that a thread must obtain the appropriate lock before executing the block or the whole method. For a non-static (instance) method of a class, this means acquiring the lock of the object instance; for a static (class) method, the lock of the class's Class object; for a synchronized code block, the programmer specifies which object's lock must be acquired.
Whether it is a synchronized block or a synchronized method, only one thread at a time can enter it. If other threads try to enter (whether the same synchronized block or a different block guarded by the same lock), the JVM suspends them (puts them into the lock pool). In concurrency theory this structure is called a critical section. We can summarize what synchronizing and locking with synchronized gives us in Java as follows (a short sketch follows the list):
- Only objects can be locked, not basic data types
- Locking an array object does not lock the individual elements of the array
- A synchronized method can be thought of as a synchronized(this) {...} code block that contains the entire method
- The static synchronization method locks its Class object
- The synchronization of the inner class is independent of the outer class
- The synchronized modifier is not part of the method signature, so it cannot appear in the method declaration of an interface
- Non-synchronized methods do not care about the state of the lock; they can still run while a synchronized method is running
- The locks implemented by synchronized are reentrant locks.
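As a minimal sketch of the equivalences listed above (the class and method names are invented for illustration):

```java
public class Account {

    private double balance;

    // synchronized instance method: acquires the lock of this Account instance
    public synchronized void deposit(double amount) {
        balance += amount;
    }

    // the equivalent form written as a synchronized(this) block
    public void withdraw(double amount) {
        synchronized (this) {
            balance -= amount;
        }
    }

    // static synchronized method: acquires the lock of Account.class
    public static synchronized void audit() {
        System.out.println("auditing all accounts...");
    }
}
```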
Inside the JVM, for efficiency, each running thread keeps a cached copy of the data it is working on. When we synchronize with synchronized, what is really synchronized is the view of the locked object's memory across threads (the cached copies are kept in sync with main memory, which is why the term "synchronization" is used). In short, after a synchronized block or method finishes, any changes made to the locked object are written back to main memory before the lock is released; and after a thread enters a synchronized block and acquires the lock, the locked object's data is read from main memory, so the lock holder's copy of the data is consistent with the view in main memory.
The very first version of Java already had a keyword called volatile. It is a simple synchronization mechanism, because a variable modified by volatile follows these rules:
- The value of a variable is always read out from main memory before it is used.
- Changes to variable values are always written back to main memory after completion.
Using the volatile keyword prevents the compiler from making incorrect optimization assumptions in a multithreaded environment (the compiler might otherwise turn a variable that never changes within a thread into a constant), but only variables whose new value does not depend on their current value (the value at the time of reading) should be declared volatile.
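A minimal sketch of the typical use case, a stop flag whose new value does not depend on its old value (the class name is invented for illustration):

```java
public class Poller implements Runnable {

    // the flag's new value never depends on its old value, so volatile is sufficient here
    private volatile boolean stopped = false;

    public void stop() {
        stopped = true;         // written by one thread...
    }

    @Override
    public void run() {
        while (!stopped) {      // ...and always re-read from main memory by the polling thread
            // do some periodic work here
        }
    }
}
```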
The immutable (invariant) pattern is another design worth considering in concurrent programming. Keep the state of the object unchanged; whenever you want to modify it, create a copy of the object and write the change into the copy, leaving the original untouched. That way there is never an inconsistent state, so immutable objects are thread-safe. The String class that we use so frequently in Java adopts this design. If you are not familiar with the immutable pattern, you can read chapter 34 of Dr. Yan Hong's book Java and Patterns. At this point you may also realize the significance of the final keyword.
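A minimal sketch of an immutable value class (the class is invented for illustration): every field is final, there are no setters, and "modification" returns a new object instead of changing the original:

```java
public final class Money {

    private final String currency;
    private final long cents;

    public Money(String currency, long cents) {
        this.currency = currency;
        this.cents = cents;
    }

    // "modification" produces a new object; the original is never changed
    public Money add(long moreCents) {
        return new Money(currency, cents + moreCents);
    }

    public String getCurrency() {
        return currency;
    }

    public long getCents() {
        return cents;
    }
}
```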
Concurrent programming in Java 5
No matter what direction Java takes in the future, Java 5 is definitely an extremely important version in the history of Java's development. We will not discuss the various language features provided by this version here (if you are interested, you can read my other article, "The 20th Year of Java: Looking at the Development of Programming Technology from the Evolution of Java Versions"), but we must thank Doug Lea for contributing his landmark java.util.concurrent package in Java 5. Its appearance gives Java concurrent programming more choices and better ways of working. Doug Lea's work mainly includes the following:
- Better thread-safe containers
- Thread pools and related utility classes
- Optional non-blocking solutions
- Explicit lock and semaphore mechanisms
Let's interpret these things one by one.
Atomic classes
Under the java.util.concurrent package in Java 5 there is an atomic subpackage containing several classes whose names start with Atomic, such as AtomicInteger and AtomicLong. They take advantage of features of modern processors (such as compare-and-swap instructions) to perform atomic operations in a non-blocking way. The code is as follows:
```java
import java.util.concurrent.atomic.AtomicLong;

/** ID sequence generator */
public class IdGenerator {

    private final AtomicLong sequenceNumber = new AtomicLong(0);

    public long next() {
        return sequenceNumber.getAndIncrement();
    }
}
```
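If the ready-made methods do not cover the update you need, the same classes also expose compareAndSet, which can be used in a retry loop; a minimal sketch (the bounded-counter idea is invented for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedCounter {

    private final AtomicInteger count = new AtomicInteger(0);

    /** Atomically increments the counter, but never above the given bound. */
    public boolean incrementUpTo(int bound) {
        while (true) {
            int current = count.get();
            if (current >= bound) {
                return false;   // bound reached, give up
            }
            if (count.compareAndSet(current, current + 1)) {
                return true;    // no other thread changed the value in between, update succeeded
            }
            // another thread won the race; re-read the value and try again
        }
    }
}
```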
Explicit locks
The locking mechanism based on the synchronized keyword has the following problems:
- There is only one type of lock, and it works the same way for all synchronized operations
- The lock can only be acquired at the beginning of a code block or method and released at its end
- When trying to acquire a lock, a thread either gets it or blocks; there is no other possibility
Java 5 reworked the locking mechanism and provides explicit locks, which improve on synchronized in the following ways:
- You can add different types of locks, such as read locks and write locks
- You can lock in one method and unlock in another
- You can use tryLock to attempt to acquire the lock; if the lock cannot be obtained immediately, you can wait, back off, or do something else, and you can also give up after a timeout
Explicit locks implement the java.util.concurrent.locks.Lock interface, which mainly has two implementation classes:
- ReentrantLock - a slightly more flexible reentrant lock than synchronized
- ReentrantReadWriteLock - a reentrant lock with better performance when there are many read operations and few write operations
For how to use explicit locks, please refer to the code for question 60 in my Java interview series, "Java Interview Question Set 51-70". There is only one thing to point out: it is best to call the unlock method in a finally block, because a finally block is the best place to release external resources, and of course the best place to release the lock, so that other threads get a chance to run whether the protected code completes normally or throws an exception.
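A minimal sketch of the lock/finally-unlock idiom and of tryLock with a timeout (the class and the protected state are invented for illustration):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Inventory {

    private final Lock lock = new ReentrantLock();
    private int stock = 100;

    public void restock(int amount) {
        lock.lock();
        try {
            stock += amount;
        } finally {
            lock.unlock();      // always release the lock, even if an exception is thrown
        }
    }

    public boolean trySell(int amount) throws InterruptedException {
        // wait at most one second for the lock, then give up instead of blocking forever
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (stock >= amount) {
                    stock -= amount;
                    return true;
                }
                return false;
            } finally {
                lock.unlock();
            }
        }
        return false;           // could not obtain the lock in time
    }
}
```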
CountDownLatch
CountDownLatch provides a simple synchronization pattern. It lets one or more threads wait until the work being done by other threads is finished, avoiding the various problems caused by concurrent access to critical resources. Here is a piece of code borrowed from someone else (I have refactored it) to demonstrate how CountDownLatch works.
```java
import java.util.concurrent.CountDownLatch;

/**
 * Worker
 * @author Luo Hao
 */
class Worker {
    private String name;            // name
    private long workDuration;      // work duration

    /**
     * Constructor
     */
    public Worker(String name, long workDuration) {
        this.name = name;
        this.workDuration = workDuration;
    }

    /**
     * Do the work
     */
    public void doWork() {
        System.out.println(name + " begins to work...");
        try {
            Thread.sleep(workDuration);     // simulate the job execution time with sleep
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
        System.out.println(name + " has finished the job...");
    }
}

/**
 * Test thread
 * @author Luo Hao
 */
class WorkerTestThread implements Runnable {
    private Worker worker;
    private CountDownLatch cdLatch;

    public WorkerTestThread(Worker worker, CountDownLatch cdLatch) {
        this.worker = worker;
        this.cdLatch = cdLatch;
    }

    @Override
    public void run() {
        worker.doWork();        // get the worker to work
        cdLatch.countDown();    // decrease the countdown by 1 when the work is done
    }
}

class CountDownLatchTest {

    private static final int MAX_WORK_DURATION = 5000;     // maximum working time
    private static final int MIN_WORK_DURATION = 1000;     // minimum working time

    // generate a random working time
    private static long getRandomWorkDuration(long min, long max) {
        return (long) (Math.random() * (max - min) + min);
    }

    public static void main(String[] args) {
        CountDownLatch latch = new CountDownLatch(2);   // create a countdown latch with a count of 2

        Worker w1 = new Worker("Luo Hao", getRandomWorkDuration(MIN_WORK_DURATION, MAX_WORK_DURATION));
        Worker w2 = new Worker("Da Chui Wang", getRandomWorkDuration(MIN_WORK_DURATION, MAX_WORK_DURATION));

        new Thread(new WorkerTestThread(w1, latch)).start();
        new Thread(new WorkerTestThread(w2, latch)).start();

        try {
            latch.await();      // wait for the countdown latch to reach 0
            System.out.println("All jobs have been finished!");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
```
ConcurrentHashMap
ConcurrentHashMap is the version of HashMap for concurrent environments. You may ask: since a thread-safe map can already be obtained through Collections.synchronizedMap, why do we still need ConcurrentHashMap? Because the thread-safe map obtained through the Collections utility class locks the entire container object for every read and write, other threads using the container cannot obtain the lock and have to wait until the thread holding it leaves the synchronized code before they get a chance to run. A HashMap, in fact, uses a hash function to determine the bucket in which a key-value pair is stored (buckets exist to resolve hash collisions), so when modifying a ConcurrentHashMap there is no need to lock the whole container; only the bucket being modified needs to be locked. The data structure of HashMap is shown in the following figure.
In addition, ConcurrentHashMap provides atomic operations, listed below (a short usage sketch follows the list):
- putIfAbsent: adds the key-value mapping to the map only if there is no mapping for the key yet
- remove: atomically removes the key-value mapping only if the key is present and its value equals the given one (compared with equals)
- replace: atomically replaces the value of an entry in the map, optionally only when it currently equals an expected value
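A minimal usage sketch of these atomic methods (the map content is invented for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AtomicMapOps {

    public static void main(String[] args) {
        ConcurrentMap<String, Integer> scores = new ConcurrentHashMap<>();

        // add the mapping only if the key is not present yet
        scores.putIfAbsent("Luo Hao", 60);

        // replace the value only if the key is currently mapped to the expected old value
        scores.replace("Luo Hao", 60, 90);

        // remove the entry only if the key is still mapped to the given value
        scores.remove("Luo Hao", 90);

        System.out.println(scores);     // prints {} if all three operations succeeded
    }
}
```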
CopyOnWriteArrayList
CopyOnWriteArrayList is a replacement for ArrayList in concurrent environments. CopyOnWriteArrayList avoids the problems of concurrent access through copy-on-write semantics: every modification creates a copy of the underlying list, which means existing iterators will never see unexpected modifications. This approach is very useful in read-mostly scenarios that do not require strict read-write synchronization, because it gives better read performance. Remember to use locks as little as possible, since they inevitably degrade performance (the same applies to concurrent access to data in a database: if you can, prefer optimistic locking over pessimistic locking). CopyOnWriteArrayList obviously trades space for time (in the computing world, time and space are usually an irreconcilable pair: we can sacrifice space to gain time and improve efficiency, or sacrifice time to reduce the use of space).
You can verify that CopyOnWriteArrayList is a thread-safe container by running the following two pieces of code.
```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AddThread implements Runnable {
    private List<Double> list;

    public AddThread(List<Double> list) {
        this.list = list;
    }

    @Override
    public void run() {
        for (int i = 0; i < 10000; ++i) {
            list.add(Math.random());
        }
    }
}

public class Test05 {
    private static final int THREAD_POOL_SIZE = 2;

    public static void main(String[] args) {
        List<Double> list = new ArrayList<>();
        ExecutorService es = Executors.newFixedThreadPool(THREAD_POOL_SIZE);
        es.execute(new AddThread(list));
        es.execute(new AddThread(list));
        es.shutdown();
    }
}
```
The above code will throw an ArrayIndexOutOfBoundsException at runtime. Try replacing the ArrayList created in the main method with a CopyOnWriteArrayList and run it again.
```java
List<Double> list = new CopyOnWriteArrayList<>();
```
Queue
Queues are everywhere, and they are a wonderful concept. They provide a simple and reliable way to distribute resources to processing units (or to distribute work units to the resources that process them, depending on how you look at the problem). The implementations of many concurrency models rely on queues, because queues can transfer units of work between threads.
The BlockingQueue introduced in Java 5 is a very useful tool in concurrent environments. When the put method is called to insert an element and the queue is full, the inserting thread waits until the queue has room; when the take method is called to fetch an element and the queue is empty, the fetching thread is blocked.
BlockingQueue can be used to implement the producer-consumer concurrency model (described in the next section). Of course, before Java 5 the same thread coordination could be achieved with wait and notify. Compare the two pieces of code below to see the benefit of rebuilding concurrent code on top of the existing concurrency utility classes.
- Implementation based on wait and notify
```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Common constants
 * @author Luo Hao
 */
class Constants {
    public static final int MAX_BUFFER_SIZE = 10;
    public static final int NUM_OF_PRODUCER = 2;
    public static final int NUM_OF_CONSUMER = 3;
}

/**
 * Work task
 * @author Luo Hao
 */
class Task {
    private String id; // task number

    public Task() {
        id = UUID.randomUUID().toString();
    }

    @Override
    public String toString() {
        return "Task[" + id + "]";
    }
}

/**
 * Consumer
 * @author Luo Hao
 */
class Consumer implements Runnable {
    private List<Task> buffer;

    public Consumer(List<Task> buffer) {
        this.buffer = buffer;
    }

    @Override
    public void run() {
        while (true) {
            synchronized (buffer) {
                while (buffer.isEmpty()) {
                    try {
                        buffer.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                Task task = buffer.remove(0);
                buffer.notifyAll();
                System.out.println("Consumer[" + Thread.currentThread().getName() + "] got " + task);
            }
        }
    }
}

/**
 * Producer
 * @author Luo Hao
 */
class Producer implements Runnable {
    private List<Task> buffer;

    public Producer(List<Task> buffer) {
        this.buffer = buffer;
    }

    @Override
    public void run() {
        while (true) {
            synchronized (buffer) {
                while (buffer.size() >= Constants.MAX_BUFFER_SIZE) {
                    try {
                        buffer.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                Task task = new Task();
                buffer.add(task);
                buffer.notifyAll();
                System.out.println("Producer[" + Thread.currentThread().getName() + "] put " + task);
            }
        }
    }
}

public class Test06 {

    public static void main(String[] args) {
        List<Task> buffer = new ArrayList<>(Constants.MAX_BUFFER_SIZE);
        ExecutorService es = Executors.newFixedThreadPool(Constants.NUM_OF_CONSUMER + Constants.NUM_OF_PRODUCER);
        for (int i = 1; i <= Constants.NUM_OF_PRODUCER; ++i) {
            es.execute(new Producer(buffer));
        }
        for (int i = 1; i <= Constants.NUM_OF_CONSUMER; ++i) {
            es.execute(new Consumer(buffer));
        }
    }
}
```
- Implementation based on BlockingQueue
```java
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Common constants
 * @author Luo Hao
 */
class Constants {
    public static final int MAX_BUFFER_SIZE = 10;
    public static final int NUM_OF_PRODUCER = 2;
    public static final int NUM_OF_CONSUMER = 3;
}

/**
 * Work task
 * @author Luo Hao
 */
class Task {
    private String id; // task number

    public Task() {
        id = UUID.randomUUID().toString();
    }

    @Override
    public String toString() {
        return "Task[" + id + "]";
    }
}

/**
 * Consumer
 * @author Luo Hao
 */
class Consumer implements Runnable {
    private BlockingQueue<Task> buffer;

    public Consumer(BlockingQueue<Task> buffer) {
        this.buffer = buffer;
    }

    @Override
    public void run() {
        while (true) {
            try {
                Task task = buffer.take();
                System.out.println("Consumer[" + Thread.currentThread().getName() + "] got " + task);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

/**
 * Producer
 * @author Luo Hao
 */
class Producer implements Runnable {
    private BlockingQueue<Task> buffer;

    public Producer(BlockingQueue<Task> buffer) {
        this.buffer = buffer;
    }

    @Override
    public void run() {
        while (true) {
            try {
                Task task = new Task();
                buffer.put(task);
                System.out.println("Producer[" + Thread.currentThread().getName() + "] put " + task);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

public class Test07 {

    public static void main(String[] args) {
        BlockingQueue<Task> buffer = new LinkedBlockingQueue<>(Constants.MAX_BUFFER_SIZE);
        ExecutorService es = Executors.newFixedThreadPool(Constants.NUM_OF_CONSUMER + Constants.NUM_OF_PRODUCER);
        for (int i = 1; i <= Constants.NUM_OF_PRODUCER; ++i) {
            es.execute(new Producer(buffer));
        }
        for (int i = 1; i <= Constants.NUM_OF_CONSUMER; ++i) {
            es.execute(new Consumer(buffer));
        }
    }
}
```
The code is much more elegant after switching to BlockingQueue.
Concurrency model
Before continuing the discussion, let's review the following concepts:
Concept | Explanation |
---|---|
Critical resource | A resource that exists in a fixed, limited number in a concurrent environment |
Mutual exclusion | Access to a resource is exclusive: only one thread may use it at a time |
Starvation | One thread or a group of threads cannot make progress for a long time, or ever |
Deadlock | Two or more threads wait for each other, so none of them can finish |
Livelock | A thread that wants to run keeps finding other threads in its way, so although it is not blocked it cannot make progress for a long time, or ever |
After reviewing these concepts, we can explore the following concurrency models.
Producer-consumer
One or more producers create work items and put them into a buffer or queue; one or more consumers take work items from the queue and complete them. The buffer or queue is a critical resource: when it is full, producers are blocked; when it is empty, consumers are blocked. Producers and consumers coordinate by signaling each other.
Reader-writer
This model applies when a shared resource mainly serves readers but is occasionally updated by writers. System throughput has to be taken into account, and starvation and stale data have to be prevented. In this concurrency model, balancing readers and writers is the hardest part. It is still an actively discussed problem, and I am afraid there is no universal method: an appropriate solution has to be chosen for each concrete scenario (unlike what we often see in China's research literature).
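One common building block for this model in Java is the ReentrantReadWriteLock mentioned earlier; a minimal sketch (the cached value and the class name are invented for illustration):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedConfig {

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private String value = "default";

    // many readers may hold the read lock at the same time
    public String read() {
        lock.readLock().lock();
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    // the write lock is exclusive: it waits for current readers and blocks new ones
    public void write(String newValue) {
        lock.writeLock().lock();
        try {
            value = newValue;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```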
The Dining Philosophers
In 1965 the Dutch computer scientist and Turing Award winner Edsger Wybe Dijkstra proposed and solved what he called the dining philosophers synchronization problem. The problem can be described simply as follows: five philosophers sit around a round table, and in front of each of them is a plate of spaghetti. Because the spaghetti is slippery, two forks are needed to eat it, and there is one fork between each pair of adjacent plates, as shown in the figure below. A philosopher's life alternates between two activities: eating and thinking. When a philosopher feels hungry, he tries to pick up the forks on his left and right, one at a time, in either order. If he succeeds in getting both forks, he starts eating; when he has finished, he puts the forks down and goes back to thinking.
Replace the philosophers in this problem with threads and the forks with critical resources being competed for, and it becomes the problem of threads contending for resources. Without careful design, the system can run into deadlock, livelock, reduced throughput, and other problems.
The following code solves the dining philosophers problem with semaphore primitives, using the Semaphore class from the Java 5 concurrency toolkit (the code is not beautiful, but it is enough to illustrate the point).
```java
//import java.util.concurrent.ExecutorService;
//import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

/**
 * Application context that stores the semaphores shared by the threads
 * @author Luo Hao
 */
class AppContext {
    public static final int NUM_OF_FORKS = 5;   // number of forks (resources)
    public static final int NUM_OF_PHILO = 5;   // number of threads (philosophers)

    public static Semaphore[] forks;    // semaphores for the forks
    public static Semaphore counter;    // semaphore limiting the philosophers

    static {
        forks = new Semaphore[NUM_OF_FORKS];
        for (int i = 0, len = forks.length; i < len; ++i) {
            forks[i] = new Semaphore(1);    // each fork has a semaphore with one permit
        }
        // with N philosophers, only N-1 are allowed to reach for forks at the same time
        counter = new Semaphore(NUM_OF_PHILO - 1);
    }

    /**
     * Pick up the two forks
     * @param index the philosopher's index
     * @param leftFirst whether to pick up the left fork first
     * @throws InterruptedException
     */
    public static void putOnFork(int index, boolean leftFirst) throws InterruptedException {
        if (leftFirst) {
            forks[index].acquire();
            forks[(index + 1) % NUM_OF_PHILO].acquire();
        } else {
            forks[(index + 1) % NUM_OF_PHILO].acquire();
            forks[index].acquire();
        }
    }

    /**
     * Put the forks back
     * @param index the philosopher's index
     * @param leftFirst whether to put back the left fork first
     * @throws InterruptedException
     */
    public static void putDownFork(int index, boolean leftFirst) throws InterruptedException {
        if (leftFirst) {
            forks[index].release();
            forks[(index + 1) % NUM_OF_PHILO].release();
        } else {
            forks[(index + 1) % NUM_OF_PHILO].release();
            forks[index].release();
        }
    }
}

/**
 * Philosopher
 * @author Luo Hao
 */
class Philosopher implements Runnable {
    private int index;      // number
    private String name;    // name

    public Philosopher(int index, String name) {
        this.index = index;
        this.name = name;
    }

    @Override
    public void run() {
        while (true) {
            try {
                AppContext.counter.acquire();
                boolean leftFirst = index % 2 == 0;
                AppContext.putOnFork(index, leftFirst);
                System.out.println(name + " is eating spaghetti (macaroni)...");    // with two forks one can eat
                AppContext.putDownFork(index, leftFirst);
                AppContext.counter.release();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

public class Test04 {

    public static void main(String[] args) {
        String[] names = { "Luo Hao", "Da Chui Wang", "Zhang Sanfeng", "Guo Yang", "Li Mochou" };   // names of the five philosophers
        // ExecutorService es = Executors.newFixedThreadPool(AppContext.NUM_OF_PHILO);   // create a fixed-size thread pool
        // for (int i = 0, len = names.length; i < len; ++i) {
        //     es.execute(new Philosopher(i, names[i]));   // start thread
        // }
        // es.shutdown();
        for (int i = 0, len = names.length; i < len; ++i) {
            new Thread(new Philosopher(i, names[i])).start();
        }
    }
}
```
Concurrency problems in the real world are basically these three models or variants of them.
Test concurrent code
Testing concurrent code is also very difficult, so difficult that it hardly needs explaining, so here we only discuss how to cope with that difficulty. We suggest writing tests that have the potential to expose problems and running them frequently under different configurations and loads. Do not ignore any failed test: a defect in threaded code may show up only once in tens of thousands of runs. Specifically, there are several things to keep in mind:
- Don't attribute system failures to one-off accidents, just as you can't blame the earth for lacking gravity when you are constipated.
- Let non concurrent code work first. Don't try to find defects in concurrent and non concurrent code at the same time.
- Write thread code that can run in different configuration environments.
- Write thread code that is easy to adjust, so you can adjust threads to optimize performance.
- Use more threads than there are CPUs or CPU cores, so that problems that only show up during scheduling and context switching are exposed.
- Let concurrent code run on different platforms.
- Add some auxiliary test code to the concurrent code by means of automation or hard coding.
Concurrent programming in Java 7
Java 7 introduces TransferQueue. Compared with BlockingQueue, it has one extra method, transfer: if a receiving thread is already waiting, the operation hands the element to it immediately; otherwise the transferring thread blocks until a thread shows up to take it. TransferQueue can be used in place of BlockingQueue because it can achieve better performance.
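A minimal sketch of the hand-off using LinkedTransferQueue, the standard implementation of TransferQueue (the task string and timing are invented for illustration):

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

public class TransferQueueDemo {

    public static void main(String[] args) throws InterruptedException {
        final TransferQueue<String> queue = new LinkedTransferQueue<>();

        Thread consumer = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("got: " + queue.take());     // waits for an element
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        Thread.sleep(100);          // give the consumer a moment to start waiting on take()
        queue.transfer("job-1");    // hands the element directly to a waiting consumer,
                                    // or blocks until some consumer takes it
        consumer.join();
    }
}
```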
One thing I forgot to mention earlier: Java 5 also introduces the Callable interface, the Future interface, and the FutureTask class. They can also be used to build concurrent applications, as in the following code.
```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Test07 {
    private static final int POOL_SIZE = 10;

    static class CalcThread implements Callable<Double> {
        private List<Double> dataList = new ArrayList<>();

        public CalcThread() {
            for (int i = 0; i < 10000; ++i) {
                dataList.add(Math.random());
            }
        }

        @Override
        public Double call() throws Exception {
            double total = 0;
            for (Double d : dataList) {
                total += d;
            }
            return total / dataList.size();
        }
    }

    public static void main(String[] args) {
        List<Future<Double>> fList = new ArrayList<>();
        ExecutorService es = Executors.newFixedThreadPool(POOL_SIZE);
        for (int i = 0; i < POOL_SIZE; ++i) {
            fList.add(es.submit(new CalcThread()));
        }

        for (Future<Double> f : fList) {
            try {
                System.out.println(f.get());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        es.shutdown();
    }
}
```
The Callable interface is also a single-method interface. It is clearly a callback, similar to a callback function in functional programming. Before Java 8 there were no Lambda expressions in Java to simplify this style of programming. Unlike the Runnable interface, the call method of Callable returns a value, and a Future object lets us obtain that result once the thread has finished executing. The call method in the code above returns the average of 10,000 random decimals between 0 and 1, and we get this return value through an object of the Future interface. In current Java versions both the Callable and the Runnable interface are annotated with @FunctionalInterface, which means they can be implemented with Lambda expressions (a short sketch follows the list of Future methods below).
The following are the main methods of the Future interface:
- get(): gets the result; if the result is not ready yet, get blocks until it is obtained, and an overload lets you set a timeout for the wait.
- cancel(): attempts to cancel the operation before it completes.
- isDone(): tells whether the operation has completed.
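A minimal sketch of creating a Callable with a Lambda expression on Java 8 or later (the computation is invented for illustration):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LambdaCallableDemo {

    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newSingleThreadExecutor();

        // Callable is a functional interface, so a Lambda expression can implement it
        Callable<Double> average = () -> {
            double total = 0;
            for (int i = 0; i < 10000; ++i) {
                total += Math.random();
            }
            return total / 10000;
        };

        Future<Double> future = es.submit(average);
        System.out.println(future.get());   // blocks until the result is ready
        es.shutdown();
    }
}
```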
Java 7 also provides the fork/join framework, which schedules tasks in a thread pool automatically and transparently to the user. To achieve this, the task must be decomposed in a way specified by the user, and the results of the decomposed subtasks must then be combined into the result of the original task; this is obviously the idea of divide and conquer. The following code uses the fork/join framework to compute the sum of 1 to 10000. Of course, for such a simple task the fork/join framework is not needed at all, since forking and joining bring some overhead of their own, but the point here is simply to explore how to use the framework so that our code can take full advantage of the computing power of modern multi-core CPUs.
```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;
import java.util.concurrent.RecursiveTask;

class Calculator extends RecursiveTask<Integer> {
    private static final long serialVersionUID = 7333472779649130114L;

    private static final int THRESHOLD = 10;
    private int start;
    private int end;

    public Calculator(int start, int end) {
        this.start = start;
        this.end = end;
    }

    @Override
    public Integer compute() {
        int sum = 0;
        if ((end - start) < THRESHOLD) {    // when the problem is small enough, compute the result directly
            for (int i = start; i <= end; i++) {
                sum += i;
            }
        } else {
            int middle = (start + end) >>> 1;   // split the task in two
            Calculator left = new Calculator(start, middle);
            Calculator right = new Calculator(middle + 1, end);
            left.fork();
            right.fork();
            // note: since the decomposition is recursive, two tasks become four, four become eight, and so on
            sum = left.join() + right.join();   // merge the results of the two subtasks
        }
        return sum;
    }
}

public class Test08 {

    public static void main(String[] args) throws Exception {
        ForkJoinPool forkJoinPool = new ForkJoinPool();
        Future<Integer> result = forkJoinPool.submit(new Calculator(1, 10000));
        System.out.println(result.get());
    }
}
```
With the arrival of Java 7, array sorting in Java also changed: Arrays.sort on primitive arrays now uses a dual-pivot quicksort instead of the classic single-pivot version, and object arrays are sorted with TimSort, a hybrid of merge sort and insertion sort. (Sorting that exploits the multi-core nature of modern processors through the fork/join framework arrived later, in Java 8, as Arrays.parallelSort.)
References
- Benjamin J. Evans et al., The Well-Grounded Java Developer, 2012.
- Robert C. Martin, Clean Code, 2008.
- Doug Lea, Concurrent Programming in Java: Design Principles and Patterns, 1999.