I. GIL global interpreter lock
'''
Python interpreters:
- CPython: implemented in C
- Jython: implemented in Java

1. GIL: global interpreter lock
- Among the threads opened in the same process, only one can execute at a time,
  because CPython's memory management is not thread-safe.
- The GIL is essentially a mutex that keeps the interpreter's data safe.

Definition (from the official docs):
    In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple
    native threads from executing Python bytecodes at once. This lock is necessary mainly
    because CPython's memory management is not thread-safe. (However, since the GIL
    exists, other features have grown to depend on the guarantees that it enforces.)

Conclusion: under the CPython interpreter, the threads opened in one process execute
one at a time, so multithreading cannot take advantage of multiple cores.

Advantages and disadvantages of the GIL:
Advantage:
    - keeps interpreter data safe
Disadvantage:
    - multiple threads in a single process trade away execution efficiency:
      they can achieve concurrency, but never parallelism

Rule of thumb:
- IO-intensive work: use multithreading
- Compute-intensive work: use multiprocessing
'''
import time
from threading import Thread, Lock

lock = Lock()
n = 100

def task():
    lock.acquire()
    global n
    m = n
    time.sleep(1)
    n = m - 1
    lock.release()

if __name__ == '__main__':
    list1 = []
    for line in range(10):
        t = Thread(target=task)
        t.start()
        list1.append(t)
    for t in list1:
        t.join()
    print(n)  # 90: the lock serializes the read-modify-write
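For contrast, here is a minimal sketch of the same task with the lock removed (the thread count and sleep duration are illustrative): the sleep releases the GIL, so every thread reads `n == 100` before any thread writes back, and nine of the ten decrements are lost.

```python
import time
from threading import Thread

n = 100

def task_unlocked():
    global n
    m = n            # every thread reads n == 100 before any write lands
    time.sleep(0.2)  # sleeping releases the GIL, letting all threads read first
    n = m - 1        # every thread writes 99: nine decrements are lost

threads = [Thread(target=task_unlocked) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(n)  # 99, not 90
```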
# Check the docs for whether memory can be collected manually
# import gc
# Where to look things up:
# - Domestic (Chinese) sites: OSChina, CSDN, cnblogs, https://www.v2ex.com/
# - Overseas: Stack Overflow, GitHub, Google
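Following up on the note above: CPython does let you trigger the cyclic garbage collector by hand with `gc.collect()`. A minimal sketch (the `Node` class exists only to build a reference cycle):

```python
import gc

class Node:
    """Toy object used only to form a reference cycle."""
    def __init__(self):
        self.ref = None

a, b = Node(), Node()
a.ref, b.ref = b, a       # cycle: a -> b -> a
del a, b                  # refcounts stay above zero because of the cycle
collected = gc.collect()  # run a collection pass manually
print(collected)          # number of unreachable objects found (>= 2 here)
```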
II. Multithreading vs. multiprocessing for efficiency
from threading import Thread
from multiprocessing import Process
import time
'''
Use multithreading for IO-intensive work.
Use multiprocessing for compute-intensive work.

IO-intensive tasks, 4 s per task:
- Single core:
    - multithreading saves resources
- Multi-core:
    - multithreading: start 4 sub-threads -> about 4 s (the sleeps overlap)
    - multiprocessing: start 4 subprocesses -> about 4 s + process start-up overhead

Compute-intensive tasks, 4 s per task:
- Single core:
    - starting a thread is cheaper than starting a process
- Multi-core:
    - multithreading: start 4 sub-threads -> 16 s (the GIL serializes them)
    - multiprocessing: start 4 subprocesses -> about 4 s
'''
# Compute-intensive task
def task1():
    # increment a local counter 10,000,000 times
    i = 10
    for line in range(10000000):
        i += 1

# IO-intensive task
def task2():
    time.sleep(3)
if __name__ == '__main__':
    # 1. Test multiprocessing:
    # compute-intensive
    start_time = time.time()
    list1 = []
    for line in range(6):
        p = Process(target=task1)
        p.start()
        list1.append(p)
    for p in list1:
        p.join()
    end_time = time.time()
    # measured: 5.33872389793396
    print(f'Compute-intensive elapsed time: {end_time - start_time}')

    # IO-intensive
    start_time = time.time()
    list1 = []
    for line in range(6):
        p = Process(target=task2)
        p.start()
        list1.append(p)
    for p in list1:
        p.join()
    end_time = time.time()
    # measured: 4.517091751098633
    print(f'IO-intensive elapsed time: {end_time - start_time}')

    # 2. Test multithreading:
    # compute-intensive
    start_time = time.time()
    list1 = []
    for line in range(6):
        t = Thread(target=task1)
        t.start()
        list1.append(t)
    for t in list1:
        t.join()
    end_time = time.time()
    # measured: 5.988943815231323
    print(f'Compute-intensive elapsed time: {end_time - start_time}')

    # IO-intensive
    start_time = time.time()
    list1 = []
    for line in range(6):
        t = Thread(target=task2)
        t.start()
        list1.append(t)
    for t in list1:
        t.join()
    end_time = time.time()
    # measured: 3.00256085395813
    print(f'IO-intensive elapsed time: {end_time - start_time}')
# Conclusion:
# - Comparing the compute-intensive runs: use multiprocessing for compute-intensive
#   work (on a multi-core machine, processes can use multiple CPUs in parallel).
# - Comparing the IO-intensive runs: use multithreading for IO-intensive work
#   (the threads overlap their waiting, with no process start-up cost).
# - On a single core / single CPU, use multithreading in both cases.
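The same rule of thumb carries over to the standard library's `concurrent.futures` pools. A sketch (worker counts, loop sizes, and sleep durations are illustrative; on Windows/macOS the process pool must be driven from under an `if __name__ == '__main__':` guard):

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_task(_):
    # compute-intensive: a tight counting loop
    i = 0
    for _ in range(1_000_000):
        i += 1
    return i

def io_task(_):
    # IO-intensive: simulated blocking wait
    time.sleep(0.2)
    return 'done'

def run_cpu_batch():
    # compute-intensive work goes to a process pool (sidesteps the GIL)
    with ProcessPoolExecutor(max_workers=4) as pool:
        return list(pool.map(cpu_task, range(4)))

def run_io_batch():
    # IO-intensive work goes to a thread pool (threads overlap the waiting)
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(io_task, range(4)))

if __name__ == '__main__':
    print(run_cpu_batch())  # [1000000, 1000000, 1000000, 1000000]
    print(run_io_batch())   # ['done', 'done', 'done', 'done']
```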
III. Coroutines
'''
1. What is a coroutine?
- Process: the unit of resource allocation
- Thread: the unit of execution
- Coroutine: concurrency within a single thread
- For IO-intensive work, coroutines give the biggest efficiency gain.

Note: a coroutine is not an OS-level unit of any kind; it is purely an
abstraction invented by programmers.

Summary: multiprocessing > multithreading > coroutines inside each thread
(concurrency within a single thread).

Purpose of coroutines:
- Implement "switch on IO + save state" by hand, fooling the operating system
  into thinking no IO is happening, so it does not take the CPU time slice away.
'''
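Before reaching for gevent, the "save state + switch" idea can be sketched with plain generators: `yield` suspends a function at the simulated IO point while preserving its local state, and a toy scheduler switches between tasks (all names here are illustrative):

```python
def worker(name, steps, log):
    for i in range(steps):
        log.append(f'{name}:{i}')
        yield  # "hit IO": suspend here; local state (name, i) is saved

def run_round_robin(gens):
    # naive scheduler: advance each generator one step, then switch to the next
    queue = list(gens)
    while queue:
        gen = queue.pop(0)
        try:
            next(gen)
        except StopIteration:
            continue  # this task is finished; drop it
        queue.append(gen)

log = []
run_round_robin([worker('a', 2, log), worker('b', 2, log)])
print(log)  # ['a:0', 'b:0', 'a:1', 'b:1'] -- interleaved, single thread
```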
import time

def task1():
    time.sleep(1)

def task2():
    time.sleep(3)

def task3():
    time.sleep(5)

def task4():
    time.sleep(7)

def task5():
    time.sleep(9)
# "switch on IO + save state", implemented with gevent
from gevent import monkey  # monkey patch
monkey.patch_all()  # patch blocking calls so gevent can detect IO in any task
from gevent import spawn  # spawn(task) creates and starts a greenlet
from gevent import joinall
import time

def task1():
    print('start from task1....')
    time.sleep(1)
    print('end from task1....')

def task2():
    print('start from task2....')
    time.sleep(1)
    print('end from task2....')

def task3():
    print('start from task3....')
    time.sleep(1)
    print('end from task3....')

if __name__ == '__main__':
    start_time = time.time()
    sp1 = spawn(task1)
    sp2 = spawn(task2)
    sp3 = spawn(task3)
    # sp1.join()
    # sp2.join()
    # sp3.join()
    joinall([sp1, sp2, sp3])  # equivalent to joining each greenlet as above
    end_time = time.time()
    print(f'Elapsed time: {end_time - start_time}')
# start from task1....
# start from task2....
# start from task3....
# end from task1....
# end from task2....
# end from task3....
# Elapsed time: 1.0085582733154297
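`spawn` also accepts positional arguments for the task, and each greenlet's return value is available as `.value` once it has been joined. A small sketch (the `fetch` function and its doubling logic are illustrative):

```python
from gevent import monkey
monkey.patch_all()
from gevent import spawn, joinall
import time

def fetch(n):
    time.sleep(0.2)  # patched by monkey.patch_all(): yields to the gevent hub
    return n * 2

greenlets = [spawn(fetch, i) for i in range(3)]
joinall(greenlets)                    # the sleeps overlap: ~0.2 s total
print([g.value for g in greenlets])   # [0, 2, 4]
```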