Java lock mechanism

Keywords: Java Back-end

In a multithreaded environment, programs often run into thread safety problems. Java therefore provides several thread synchronization mechanisms to solve them, such as the synchronized keyword and Lock implementations like ReentrantLock.

Pessimistic lock and optimistic lock

We can roughly divide locks into two categories:

  • Pessimistic lock

  • Optimistic lock

As the name suggests, a pessimistic lock always assumes the worst case: every time a thread reads data, it assumes other threads may modify it, so it locks the data on every access, and other threads that want to modify the data are blocked until the lock is released. Examples include table locks, row locks, read locks and write locks in a MySQL database, and synchronized and ReentrantLock in Java.

An optimistic lock always assumes the best case: every time a thread reads data, it assumes other threads will not modify it, so it does not lock. When modifying the data, however, it checks whether another thread has modified it in the meantime. If not, the modification proceeds normally; if so, the modification fails. Common optimistic lock implementations include version-number control and the CAS algorithm.

Pessimistic lock application

Consider the following example:

import java.util.ArrayList;
import java.util.List;

public class LockDemo {

    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                for (int j = 0; j < 1000; ++j) {
                    count++;
                }
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for all threads to complete execution
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

The program starts 50 threads, each of which performs the ++ operation on the shared variable count 1000 times. If there were no thread safety problem, the final result would be 50000, but the program does have one. A sample run prints:

48634

One way to solve the thread safety problem is the synchronized keyword.

Wrap the code that modifies the count variable in a synchronized block. While one thread is performing the ++ operation, no other thread can enter the block; each must wait until the previous thread has finished its 1000 increments before continuing. This guarantees a final result of 50000.
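The synchronized version is not shown in the original post; a minimal sketch of what it might look like (the class name SyncLockDemo and the shared monitor object are mine, for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class SyncLockDemo {

    static int count = 0;
    // shared monitor object that all 50 threads lock on
    static final Object monitor = new Object();

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                // only one thread at a time may enter this block
                synchronized (monitor) {
                    for (int j = 0; j < 1000; ++j) {
                        count++;
                    }
                }
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for all threads to complete execution
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);   // 50000
    }
}
```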

Using ReentrantLock can also solve thread safety problems:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {

    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        Lock lock = new ReentrantLock();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                // Use ReentrantLock to solve the thread safety problem
                lock.lock();
                try {
                    for (int j = 0; j < 1000; ++j) {
                        count++;
                    }
                } finally {
                    lock.unlock();
                }
            });
            thread.start();
            threadList.add(thread);
        }
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

Both mechanisms are concrete implementations of pessimistic locking: regardless of whether other threads actually modify the data at the same time, they lock it outright to guarantee the operation is atomic.

Optimistic lock application

Because thread scheduling consumes significant operating system resources, we want to avoid the thread switching caused by repeated blocking and waking. This is the motivation for optimistic locks.

In database tables, we often add a version field; this is an embodiment of optimistic locking. Suppose a table contains the following data:

+----+------+----------+---------+
| id | name | password | version |
+----+------+----------+---------+
|  1 | zs   | 123456   |       1 |
+----+------+----------+---------+

How does this avoid thread safety problems?

Suppose two threads, A and B, both want to modify this row. Each executes the following SQL statements:

select version from e_user where name = 'zs';

update e_user set password = 'admin', version = version + 1
where name = 'zs' and version = 1;

First, both threads read the version number of user zs as 1. Thread A then performs the update: it changes the user's password to admin and increments the version number to 2. When thread B performs its update, the version number in the table is already 2, so its where condition no longer matches and the update fails. Thread B can only read the version number again and retry. This is an optimistic lock: neither the program nor the database takes an explicit lock, yet thread safety is still guaranteed.
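The read-compare-update-retry flow can be sketched in plain Java. The Row class and updatePassword() method below are illustrative stand-ins for the e_user table and the update statement, not a real database API; the synchronized modifier on the helper only simulates the atomicity of a single SQL update:

```java
public class VersionLockDemo {

    // simulated database row for user 'zs'
    static class Row {
        String password = "123456";
        int version = 1;
    }

    static final Row row = new Row();

    // mimics: update e_user set password = ?, version = version + 1
    //         where name = 'zs' and version = ?
    static synchronized boolean updatePassword(String newPassword, int expectedVersion) {
        if (row.version != expectedVersion) {
            return false;               // another thread updated the row first
        }
        row.password = newPassword;
        row.version++;
        return true;
    }

    public static void main(String[] args) {
        int v = row.version;                        // select version from e_user where name = 'zs'
        updatePassword("other", v);                 // thread B sneaks in and wins
        boolean ok = updatePassword("admin", v);    // our stale-version update fails
        System.out.println(ok + " " + row.version); // false 2
        ok = updatePassword("admin", row.version);  // retry with the fresh version number
        System.out.println(ok + " " + row.password); // true admin
    }
}
```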

CAS

Take the counting program from the beginning again. In Java we can also make it thread safe without an explicit lock:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class LockDemo {

    static AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                for (int j = 0; j < 1000; ++j) {
                    // Use AtomicInteger to solve the thread safety problem
                    count.incrementAndGet();
                }
            });
            thread.start();
            threadList.add(thread);
        }
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

Why can the AtomicInteger class solve thread safety problems?

Let's take a look at the source code:

public final int incrementAndGet() {
    return unsafe.getAndAddInt(this, valueOffset, 1) + 1;
}

When count.incrementAndGet() is called, it actually delegates to the getAndAddInt() method of the Unsafe class:

public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

    return var5;
}

The key to getAndAddInt() is its loop. Suppose thread A enters the method: var1 is the AtomicInteger object (whose initial value is 0), var2 is 12 (a memory offset; its exact value does not matter here), and var4 is 1 (the amount to add to count).

First, the current value in main memory is read through the AtomicInteger object and the memory offset:

var5 = this.getIntVolatile(var1, var2);

This sets var5 to 0. The program then evaluates:

!this.compareAndSwapInt(var1, var2, var5, var5 + var4)

compareAndSwapInt() is a native method that performs a compare-and-swap: it checks whether the value currently stored at the given offset inside var1 is still equal to var5, the value just read from main memory. Here it must be, so the method stores var5 + var4 at that location and returns true; negated, that is false, so the loop ends and getAndAddInt() returns the old value, 0 (incrementAndGet() then adds 1 and returns 1).
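The same compare-and-swap behaviour can be observed through the public compareAndSet() method of AtomicInteger, without touching Unsafe; a small sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(0);
        // expected value (0) matches the current value -> swap succeeds
        boolean first = value.compareAndSet(0, 1);
        // expected value (0) no longer matches the current value (1) -> swap fails
        boolean second = value.compareAndSet(0, 2);
        System.out.println(first + " " + second + " " + value.get()); // true false 1
    }
}
```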

That is the uncontended path. Under concurrency the behaviour differs. Suppose thread A executes getAndAddInt() again:

public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

    return var5;
}

Thread A reads the value stored in var1 (the shared AtomicInteger) and gets var5 = 0. Before thread A can perform its compare-and-swap, thread B runs first: it also reads 0, its compare-and-swap succeeds, and the value becomes 1. Now thread A executes its compare-and-swap: the value in main memory (1) no longer equals its var5 (0), so the swap fails and thread A re-enters the loop. It re-reads the value, getting var5 = 1; this time the comparison succeeds and the value is increased to 2. If yet another thread had modified the value in main memory between the read and the swap, the operation would fail again and the loop would repeat.

This is an optimistic lock implemented by spinning. Because it never blocks, it saves the cost of thread scheduling, but care must be taken that threads do not spin indefinitely.

Writing a spin lock

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class LockDemo {

    private AtomicReference<Thread> atomicReference = new AtomicReference<>();

    public void lock() {
        // Get the current thread object
        Thread thread = Thread.currentThread();
        // Spin until the lock is free
        while (!atomicReference.compareAndSet(null, thread)) {
        }
    }

    public void unlock() {
        // Only the owning thread can release the lock
        Thread thread = Thread.currentThread();
        atomicReference.compareAndSet(thread, null);
    }

    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        LockDemo lockDemo = new LockDemo();
        List<Thread> threadList = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                lockDemo.lock();
                for (int j = 0; j < 1000; j++) {
                    count++;
                }
                lockDemo.unlock();
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for all threads to finish executing
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

A spin lock is easily implemented on top of CAS. The initial value in the AtomicReference is null, so the first thread to call lock() successfully stores its own Thread object in the AtomicReference. If another thread then calls lock(), its compare-and-swap fails because the AtomicReference is no longer null, and it falls into the busy-wait loop. Once the first thread finishes its ++ operations and calls unlock(), the AtomicReference is reset to null and one of the waiting threads can exit the loop.

The CAS mechanism lets us simulate locking without taking an actual lock, but its drawbacks are also obvious:

  • Busy-waiting in a loop consumes CPU resources

  • Atomicity is guaranteed for only one variable at a time

  • It is vulnerable to the ABA problem
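For the ABA problem in particular, the JDK provides AtomicStampedReference, which pairs the value with a stamp (a version number) so that an A → B → A sequence is still detected; a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // value 100 with initial stamp 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();

        // another thread performs A -> B -> A, bumping the stamp each time
        ref.compareAndSet(100, 101, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(101, 100, ref.getStamp(), ref.getStamp() + 1);

        // the value is back to 100, but the stale stamp makes this CAS fail
        boolean success = ref.compareAndSet(100, 102, stamp, stamp + 1);
        System.out.println(success + " " + ref.getReference()); // false 100
    }
}
```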

Posted by hmemnon on Thu, 02 Dec 2021 18:47:56 -0800