Contents
2.1 what is thread safety?
2.2 atomicity
2.2.1 race conditions
2.2.2 race conditions in lazy initialization
2.2.3 compound actions
2.3 locking mechanism
2.4 guarding state with locks
2.5 liveness and performance
The main synchronization mechanisms in Java are the `synchronized` keyword, `volatile` variables, explicit locks, and atomic variables.
2.1 what is thread safety?
- Thread safety means that when multiple threads access a class, the class always behaves correctly, regardless of how those threads are scheduled or interleaved.
- The following code shows a stateless object: a class with no fields and no references to fields of other classes. A stateless object is always thread safe.
```java
/**
 * One thread calling this method cannot affect the result computed by another
 * thread, so it is thread safe: concurrent threads do not interfere with each
 * other and share no variables. It holds no shared state and no references to
 * fields of other classes; the transient state exists only in local variables
 * on each thread's stack during the computation.
 */
@ThreadSafe
public class StatelessFactorizer implements Servlet {
    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);  // extract the number to factor
        BigInteger[] factors = factor(i);        // factor it
        encodeIntoResponse(resp, factors);       // encode the factors into the response
    }
}
```
2.2 atomicity
Atomicity means that an operation either executes completely, without interference from other threads, or does not execute at all.
```java
/**
 * Counts the number of requests processed.
 * Not thread safe: suppose count is 5 and several threads execute service()
 * concurrently; each may read 5 and write back 6, losing increments.
 */
@NotThreadSafe
public class UnsafeCountingFactorizer implements Servlet {
    private long count = 0;

    public long getCount() { return count; }

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        ++count;                            // not atomic: read-modify-write
        encodeIntoResponse(resp, factors);
    }
}
```
Why the class above is not thread safe:
This class has a shared variable `count`, and `++count` is not atomic: it is a read-modify-write sequence of three independent operations (read the current value, add one, write the new value back), and the result depends on the value that was read. If n (n ≥ 2) threads all read the same value of `count` before any of them writes, each increments that stale value, so the final result is only count+1 when it should be count+n.
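A minimal sketch of the lost-update problem (the class and field names here are illustrative, not from the book): several threads increment a plain counter and an `AtomicLong` side by side. The atomic counter always reaches the expected total; the plain one usually falls short because increments are lost.

```java
import java.util.concurrent.atomic.AtomicLong;

public class LostUpdateDemo {
    static int plain = 0;                            // unsynchronized counter
    static final AtomicLong atomic = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        int threads = 8, iters = 100_000;
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                for (int j = 0; j < iters; j++) {
                    plain++;                         // read-modify-write: increments can be lost
                    atomic.incrementAndGet();        // atomic: never loses an increment
                }
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        // atomic is always 800000; plain is typically less.
        System.out.println("plain=" + plain + " atomic=" + atomic.get());
    }
}
```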
2.2.1 race conditions
- A race condition occurs when the correctness of a computation depends on the relative timing or interleaving of multiple threads.
- The most common types of race condition are check-then-act and read-modify-write. In both, a later action is based on an observation that may have become stale: between checking and acting, another thread may change the state you observed.
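As an aside (this example is mine, not from the book), the check-then-act pattern "if the key is absent, put it" comes up constantly with maps, and `ConcurrentHashMap.putIfAbsent` performs the check and the act as one atomic operation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

        // Broken check-then-act (racy if two threads run it concurrently):
        //   if (!map.containsKey("k")) map.put("k", 1);

        // Atomic check-then-act: only the first call wins.
        Integer prev1 = map.putIfAbsent("k", 1);  // returns null: no previous value
        Integer prev2 = map.putIfAbsent("k", 2);  // returns 1: key already present

        System.out.println(prev1 + " " + prev2 + " " + map.get("k"));  // null 1 1
    }
}
```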
2.2.2 race conditions in lazy initialization
Here is a check-then-act example: an unsynchronized lazily initialized singleton.
```java
/**
 * The if check followed by the assignment is a check-then-act sequence, so
 * there is a race condition: two threads can both see instance == null and
 * each create a different instance.
 */
@NotThreadSafe
public class LazyInitRace {
    private static LazyInitRace instance = null;  // lazily initialized field

    private LazyInitRace() { }                    // private constructor

    public static LazyInitRace getInstance() {
        if (instance == null) {                   // check...
            instance = new LazyInitRace();        // ...then act
        }
        return instance;
    }
}
```
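A sketch of two common fixes (the class name is mine): make the whole check-then-act atomic with the intrinsic lock, or use the initialization-on-demand holder idiom, where the JVM's class-initialization guarantees do the synchronization for us.

```java
public class SafeLazyInit {
    private static SafeLazyInit instance = null;

    private SafeLazyInit() { }

    // Fix 1: synchronized makes check-then-act atomic on the Class object's lock.
    public static synchronized SafeLazyInit getInstance() {
        if (instance == null) {
            instance = new SafeLazyInit();
        }
        return instance;
    }

    // Fix 2: holder idiom; the JVM initializes Holder exactly once, on first use.
    private static class Holder {
        static final SafeLazyInit INSTANCE = new SafeLazyInit();
    }

    public static SafeLazyInit holderInstance() {
        return Holder.INSTANCE;
    }
}
```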
2.2.3 compound actions
- Check-then-act and read-modify-write sequences like the ones above are called compound actions.
- In UnsafeCountingFactorizer (2.2) and LazyInitRace (2.2.2), the read-modify-write and check-then-act sequences must execute as a single atomic (indivisible) unit. To avoid race conditions, we must prevent other threads from using a variable while one thread is modifying it, so that other threads can only observe the state before the modification starts or after it completes, never a partially modified state.
- Here an atomic variable class is used to make the compound action atomic.
```java
/**
 * Uses an atomic variable class from the java.util.concurrent.atomic package.
 */
@ThreadSafe
public class CountingFactorizer implements Servlet {
    private final AtomicLong count = new AtomicLong(0);  // atomic state transitions

    public long getCount() { return count.get(); }

    @Override
    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        count.incrementAndGet();            // atomically increment the current value
        encodeIntoResponse(resp, factors);
    }
}
```
Notes on atomic variable classes:
The java.util.concurrent.atomic package contains atomic variable classes for performing atomic state transitions on numbers and object references. By replacing the long counter with an AtomicLong, all accesses to the counter's state become atomic. Because the counter is the Servlet's only state and the counter is thread safe, the Servlet itself is thread safe.
Under the hood, AtomicLong stores its value in a volatile field and updates it with atomic hardware operations exposed through Unsafe:
```java
public class AtomicLong extends Number implements java.io.Serializable {
    private static final jdk.internal.misc.Unsafe U = jdk.internal.misc.Unsafe.getUnsafe();
    private static final long VALUE =
        U.objectFieldOffset(java.util.concurrent.atomic.AtomicLong.class, "value");

    private volatile long value;

    public final long get() {
        return value;
    }

    // Atomically increments the current value
    public final long incrementAndGet() {
        return U.getAndAddLong(this, VALUE, 1L) + 1L;
    }
}
```
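At the public-API level, the same effect can be written as a compare-and-swap retry loop. This is a sketch of the idea, not the actual JDK implementation (which uses a single fused getAndAddLong): read the current value, try to swap in current+1, and retry if another thread changed the value in between.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasIncrement {
    // Increment via an explicit CAS retry loop.
    static long casIncrementAndGet(AtomicLong counter) {
        while (true) {
            long current = counter.get();
            long next = current + 1;
            if (counter.compareAndSet(current, next)) {
                return next;  // our swap won: no other thread intervened
            }
            // CAS failed: another thread changed the value; loop and retry.
        }
    }

    public static void main(String[] args) {
        AtomicLong c = new AtomicLong(41);
        System.out.println(casIncrementAndGet(c));  // prints 42
    }
}
```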
2.3 locking mechanism
Adding state to the stateless Servlet and managing it with a thread safe object keeps the class thread safe (CountingFactorizer in 2.2.3). But when there are multiple state variables, the class can still be unsafe even if each variable is individually replaced by an atomic one.
```java
/**
 * If the requested number equals the last cached number, return the cached factors.
 * Each variable is individually thread safe, but the class is not, because an
 * invariant relates the two variables. There is still a race condition: the
 * whole sequence must be made atomic (locked).
 */
@NotThreadSafe
public class UnsafeCachingFactorizer implements Servlet {
    // AtomicReference is a thread safe holder for an object reference
    private final AtomicReference<BigInteger> lastNumber = new AtomicReference<>();    // @1
    private final AtomicReference<BigInteger[]> lastFactors = new AtomicReference<>(); // @2

    public void service(ServletRequest req, ServletResponse resp) {  // @3
        BigInteger i = extractFromRequest(req);
        if (i.equals(lastNumber.get())) {                            // @3.1
            encodeIntoResponse(resp, lastFactors.get());
        } else {                                                     // @3.2
            BigInteger[] factors = factor(i);
            lastNumber.set(i);
            lastFactors.set(factors);
            encodeIntoResponse(resp, factors);
        }
    }
}
```
Notes on the code:
- The variables at @1 and @2 are wrapped in atomic classes, which prevents read-modify-write problems on each one individually. But the method at @3 is still not thread safe: if one thread is inside @3.2 and has updated lastNumber but not yet lastFactors when another thread enters @3 and executes @3.1, the second thread sees the new lastNumber paired with the previous lastFactors. The whole sequence must be made atomic (locked).
- To preserve consistency, all related state variables must be updated in a single atomic operation.
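One lock-free way to do this (a sketch of my own; the book develops this idea with immutable holder objects in a later chapter) is to put both values into an immutable pair and publish the pair through a single AtomicReference, so a reader always sees a matching number/factors snapshot:

```java
import java.math.BigInteger;
import java.util.concurrent.atomic.AtomicReference;

public class CachePair {
    // Immutable snapshot: a number and its factors always travel together.
    static final class Cache {
        final BigInteger number;
        final BigInteger[] factors;
        Cache(BigInteger number, BigInteger[] factors) {
            this.number = number;
            this.factors = factors;
        }
    }

    private final AtomicReference<Cache> cache = new AtomicReference<>();

    BigInteger[] lookup(BigInteger i) {
        Cache c = cache.get();            // one atomic read of the whole pair
        return (c != null && c.number.equals(i)) ? c.factors : null;
    }

    void store(BigInteger i, BigInteger[] factors) {
        cache.set(new Cache(i, factors)); // one atomic write of the whole pair
    }
}
```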
2.3.1 built-in locks
- Java provides a built-in locking mechanism to support atomicity: the synchronized block.
- The lock of a synchronized instance method is the object on which the method is invoked; a static synchronized method uses the Class object as its lock.
- Every Java object can act as a lock; these are called intrinsic locks or monitor locks. A thread automatically acquires the lock before entering a synchronized block and automatically releases it on exit. The only way to acquire an intrinsic lock is to enter a synchronized block or method guarded by that lock.
- Java's intrinsic lock acts as a mutex (mutual exclusion lock): at most one thread can hold it at a time, so code blocks guarded by the same lock execute atomically with respect to each other, and threads executing them do not interleave. No thread executing a synchronized block can ever observe another thread midway through a block guarded by the same lock.
```java
/**
 * Uses synchronized to lock the whole method.
 */
@ThreadSafe
public class SynchronizedFactorizer implements Servlet {
    // AtomicReference is a thread safe holder for an object reference
    private final AtomicReference<BigInteger> lastNumber = new AtomicReference<>();
    private final AtomicReference<BigInteger[]> lastFactors = new AtomicReference<>();

    public synchronized void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        if (i.equals(lastNumber.get())) {
            encodeIntoResponse(resp, lastFactors.get());
        } else {
            BigInteger[] factors = factor(i);
            lastNumber.set(i);
            lastFactors.set(factors);
            encodeIntoResponse(resp, factors);
        }
    }
}
```
Notes on the code:
- A thread can execute service() only after acquiring the intrinsic lock of the current Servlet object, and only then can it read and modify the two fields lastNumber and lastFactors.
- Because synchronized modifies the whole service method, only one thread at a time can execute it (one thread holds the object's intrinsic lock; the others must wait), so SynchronizedFactorizer is thread safe. But this approach is too extreme: multiple clients cannot use the factoring Servlet concurrently, and responsiveness suffers badly.
2.3.2 reentrancy
- When a thread requests a lock held by another thread, the requesting thread blocks. But intrinsic locks are reentrant: if a thread tries to acquire a lock it already holds, the request succeeds. "Reentrant" means that locks are acquired per-thread rather than per-invocation.
How reentrancy works:
Each lock is associated with an acquisition count and an owner thread. When the count is 0, the lock is held by no thread. When a thread acquires an unheld lock, the JVM records the owner and sets the count to 1. If the same thread acquires the lock again, the count is incremented; when the thread exits a synchronized block, the count is decremented; and when the count reaches 0, the lock is released.
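java.util.concurrent.locks.ReentrantLock exposes this per-thread acquisition count directly, which makes the counting scheme above observable (intrinsic locks behave the same way but do not expose their count):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();

        lock.lock();                                      // count: 0 -> 1
        lock.lock();                                      // reentrant acquire: 1 -> 2
        System.out.println(lock.getHoldCount());          // prints 2
        lock.unlock();                                    // 2 -> 1, still held
        System.out.println(lock.isHeldByCurrentThread()); // prints true
        lock.unlock();                                    // 1 -> 0, released
        System.out.println(lock.getHoldCount());          // prints 0
    }
}
```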
```java
/**
 * If intrinsic locks were not reentrant, this code would deadlock.
 */
public class Widget {
    public synchronized void doSomething() { }
}

class LoggingWidget extends Widget {
    public synchronized void doSomething() {
        System.out.println(toString() + ": calling doSomething");
        super.doSomething();
    }
}
```
Notes on the code:
The subclass overrides the parent's synchronized method and then calls the parent's version. Without reentrancy this would deadlock: both doSomething methods are synchronized, so each acquires the lock on the Widget instance before executing. If intrinsic locks were not reentrant, the call to super.doSomething() could not acquire the lock, because it is already held, and the thread would block forever waiting for a lock it can never obtain. Reentrancy avoids this deadlock.
Questions:
1. Whose lock does the thread acquire? Suppose we `new` a LoggingWidget object A: does the thread acquire the lock of object A? (My understanding: in Java, locks belong to objects, and every object has its own lock.)
2. The book says every doSomething method acquires the lock on the Widget before executing; shouldn't the subclass acquire the lock of the LoggingWidget?
(Both questions have the same answer: a synchronized instance method locks `this`, the receiver object. There is only one object here, A, whose runtime type is LoggingWidget; the overriding method and the super.doSomething() call both lock that same object A, which is exactly why reentrancy is needed.)
2.4 guarding state with locks
- In the 2.3.1 program, lastNumber and lastFactors are both guarded by the intrinsic lock of the Servlet object itself.
- Every object has a built-in lock simply as a convenience, so that lock objects do not have to be created explicitly.
- Not all data needs lock protection: only mutable data that is accessed by multiple threads needs to be guarded by a lock.
2.5 liveness and performance
- In the UnsafeCachingFactorizer of 2.3, we tried to improve performance by adding a cache to the factoring Servlet. The cache requires shared state, so synchronization is needed to preserve its integrity. But synchronizing the whole method, as SynchronizedFactorizer does in 2.3.1, makes performance very poor: only one thread can execute at a time, which defeats the purpose of a Servlet, namely handling multiple requests concurrently.
```java
@ThreadSafe
public class CachedFactorizer implements Servlet {
    private BigInteger lastNumber;     // last number cached
    private BigInteger[] lastFactors;  // factors of lastNumber
    private long hits;                 // number of requests handled
    private long cacheHits;            // number of requests answered from the cache

    public synchronized long getHits() {
        return hits;
    }

    public synchronized double getCacheHitRatio() {
        return (double) cacheHits / (double) hits;
    }

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);  // number to factor
        BigInteger[] factors = null;             // reset to null on every request

        /*
         * 1. ++hits is a read-modify-write compound action, so it must be atomic.
         * 2. The if statement is a check-then-act compound action, also unsafe
         *    without locking.
         * Both execute only while holding the intrinsic lock of this object.
         */
        synchronized (this) {
            ++hits;
            if (i.equals(lastNumber)) {
                ++cacheHits;
                factors = lastFactors.clone();
            }
        }

        if (factors == null) {
            factors = factor(i);  // long-running: performed without holding the lock
            // Cache miss: refresh lastNumber and lastFactors atomically.
            synchronized (this) {
                lastNumber = i;
                lastFactors = factors.clone();
            }
        }
        encodeIntoResponse(resp, factors);
    }
}
```
- CachedFactorizer uses plain long hit counters rather than AtomicLong. AtomicLong would also work, and atomic variables are very useful for atomic operations on a single variable; but since synchronized blocks are already used to construct the atomic operations, mixing two different synchronization mechanisms would be confusing and would bring no performance or safety benefit. So atomic variables are not used here.
Considerations when using locks:
- The restructured CachedFactorizer strikes a balance between simplicity (synchronizing the entire method) and concurrency (synchronizing the shortest possible code paths). Acquiring and releasing a lock has some overhead, so breaking synchronized blocks down too far (for example, giving ++hits its own block) is usually counterproductive, even though it would not break atomicity. CachedFactorizer holds the lock while accessing state variables and during compound actions, but releases it before the long-running factoring operation. This preserves thread safety without hurting concurrency too much, and the code path inside each synchronized block is "short enough". Deciding on a reasonable size for synchronized blocks requires trading off competing design requirements.
- In general there is a tension between simplicity and performance. When implementing a synchronization policy, resist the temptation to sacrifice simplicity for performance (this may compromise safety).
- Avoid holding a lock during lengthy computations or operations that may not complete quickly, such as network or console I/O.
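That last rule can be sketched generically (the class and method names are mine, and slowCompute stands in for I/O or a long computation): hold the lock only to copy shared state out and to publish the result back, never across the slow step itself.

```java
public class NarrowLocking {
    private final Object lock = new Object();
    private int sharedInput = 7;
    private int sharedResult;

    // Stands in for a long-running operation (factoring, network I/O, ...).
    private static int slowCompute(int x) {
        return x * x;
    }

    public void process() {
        int snapshot;
        synchronized (lock) {                    // short: copy shared state out
            snapshot = sharedInput;
        }
        int result = slowCompute(snapshot);      // slow work, no lock held
        synchronized (lock) {                    // short: publish the result
            sharedResult = result;
        }
    }

    public int getResult() {
        synchronized (lock) {
            return sharedResult;
        }
    }
}
```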