1.0 task_io_service
In boost.asio source reading (1) we walked through task_io_service and saw that specific operations end up calling void task_io_service::init_task().
Starting from init_task(), this article examines what creating a basic_socket_acceptor does to task_io_service.
1.1 init_task
Straight to the code:
```cpp
void task_io_service::init_task()
{
  mutex::scoped_lock lock(mutex_);
  if (!shutdown_ && !task_)
  {
    task_ = &use_service<reactor>(this->get_io_service());  // 1
    op_queue_.push(&task_operation_);                       // 2
    wake_one_thread_and_unlock(lock);                       // 3
  }
}
```
- (1) Create task_; on Linux, task_ is in fact the epoll_reactor.
- (2) Push the member task_operation_ onto the queue.
- (3) Wake up one thread and unlock.
It is clear here that io_service drives its synchronized queue op_queue_ with a lock -> enqueue -> wake -> unlock sequence. When a basic_socket_acceptor is created, task_operation_ is enqueued, and io_service::run() will later act on that entry.
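The lock -> enqueue -> wake -> unlock sequence above can be sketched with standard C++ primitives. This is a minimal sketch with invented names, not asio's actual types (asio uses an intrusive operation queue and its own event class):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Hypothetical stand-in for the pattern used by init_task():
// lock the mutex, enqueue an operation, wake one waiting thread, unlock.
struct op_queue_sketch
{
  std::mutex mutex_;
  std::condition_variable wakeup_event_;
  std::deque<int> op_queue_;  // stand-in for asio's intrusive operation queue

  void post(int op)
  {
    {
      std::lock_guard<std::mutex> lock(mutex_);  // lock
      op_queue_.push_back(op);                   // enqueue
    }                                            // unlock
    wakeup_event_.notify_one();                  // wake exactly one thread
  }

  int run_one()
  {
    std::unique_lock<std::mutex> lock(mutex_);
    // Sleep until an operation is available, then dequeue it.
    wakeup_event_.wait(lock, [this] { return !op_queue_.empty(); });
    int op = op_queue_.front();
    op_queue_.pop_front();
    return op;
  }
};
```

Waking only one thread (notify_one rather than notify_all) avoids a thundering herd when a single operation is enqueued.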
1.2 task_io_service::run
Straight to the point:
```cpp
std::size_t task_io_service::run(boost::system::error_code& ec)
{
  // ......
  thread_info this_thread;
  this_thread.private_outstanding_work = 0;
  // ......
  for (; do_run_one(lock, this_thread, ec); lock.lock())
  // ......
}
```
thread_info contains a queue of its own; in other words, when io_service::run() is called from multiple threads, each thread gets its own private queue.
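A minimal, std-only sketch of this arrangement (illustrative names; asio's real thread_info is an intrusive per-thread structure): the shared queue is locked only to pop work, while each call to run() records its progress in private, per-thread state that needs no locking.

```cpp
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical sketch: a shared queue drained by several threads, each of
// which keeps "thread_info"-like private state alongside the shared lock.
struct pool_sketch
{
  std::mutex mutex_;
  std::deque<int> op_queue_;  // shared (public) queue

  // Each invocation of run() owns its private_done container, so work can
  // be recorded without holding the shared lock.
  std::size_t run(std::vector<int>& private_done)
  {
    for (;;)
    {
      int op;
      {
        std::lock_guard<std::mutex> lock(mutex_);
        if (op_queue_.empty())
          return private_done.size();
        op = op_queue_.front();
        op_queue_.pop_front();
      }
      private_done.push_back(op);  // private state, no lock needed
    }
  }
};
```

Every queued operation is handled by exactly one of the threads, however the work happens to be distributed between them.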
```cpp
std::size_t task_io_service::do_run_one(mutex::scoped_lock& lock,
    task_io_service::thread_info& this_thread,
    const boost::system::error_code& ec)
{
  while (!stopped_)
  {
    if (!op_queue_.empty())
    {
      // Prepare to execute first handler from queue.
      operation* o = op_queue_.front();
      op_queue_.pop();
      bool more_handlers = (!op_queue_.empty());

      if (o == &task_operation_)  // 1
      {
        task_interrupted_ = more_handlers;

        if (more_handlers && !one_thread_)
          wakeup_event_.unlock_and_signal_one(lock);
        else
          lock.unlock();

        // 2
        task_cleanup on_exit = { this, &lock, &this_thread };
        (void)on_exit;

        task_->run(!more_handlers, this_thread.private_op_queue);
      }
      else
      {
        std::size_t task_result = o->task_result_;

        if (more_handlers && !one_thread_)
          wake_one_thread_and_unlock(lock);
        else
          lock.unlock();

        // Ensure the count of outstanding work is decremented on block exit.
        // 4
        work_cleanup on_exit = { this, &lock, &this_thread };
        (void)on_exit;

        // Complete the operation. May throw an exception. Deletes the object.
        o->complete(*this, ec, task_result);

        return 1;
      }
    }
    else
    {
      wakeup_event_.clear(lock);
      // 5
      wakeup_event_.wait(lock);
    }
  }

  return 0;
}
```
- If the operation popped from the queue is task_operation_ (mark 1), control ends up in task_->run(), which on Linux means epoll_wait. Meanwhile, if the queue is not empty and run() is being called from multiple threads, another thread is woken to keep executing handlers, avoiding the latency that blocking in epoll_wait would otherwise add (even though the wait is interruptible). From here you can see that it is the creation of basic_socket_acceptor that lets io_service::run() enter epoll_wait and handle the related events.
- At marks 2 and 4, a cleanup object is defined so that the corresponding teardown runs when the scope exits (RAII). task_cleanup appends the contents of the thread's private queue to the public op_queue_ and pushes the just-dequeued task_operation_ back onto the end of the queue, so that io_service re-enters task_->run() after it has dealt with the handlers ahead of it in the queue.
- Mark 5 shows that when the queue is empty, the thread goes to sleep and waits to be woken.
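The scope-exit behaviour of task_cleanup can be imitated with a small RAII struct. This is a sketch with invented names; the real task_cleanup also updates outstanding-work counters and re-acquires the lock, which is omitted here:

```cpp
#include <deque>
#include <string>

// Hypothetical stand-ins for the scheduler and its queues.
struct scheduler_sketch
{
  std::deque<std::string> op_queue_;  // shared (public) queue
  std::string task_operation_ = "task_operation_";
};

// On destruction: splice the thread's private queue onto the shared queue,
// then re-enqueue task_operation_ so a later iteration of the run loop
// reaches task_->run() (i.e. epoll_wait) again.
struct task_cleanup_sketch
{
  scheduler_sketch* owner_;
  std::deque<std::string>* private_op_queue_;

  ~task_cleanup_sketch()
  {
    for (const std::string& op : *private_op_queue_)
      owner_->op_queue_.push_back(op);
    private_op_queue_->clear();
    owner_->op_queue_.push_back(owner_->task_operation_);
  }
};
```

Declaring such an object (and silencing the unused-variable warning with `(void)on_exit;`) guarantees the splice happens even if task_->run() throws.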
2.0 Summary
- As this article shows, creating a basic_socket_acceptor is what enables io_service to enter epoll_wait during its event loop.
- Destroying the acceptor does not change the logic of io_service::run() itself. So if the acceptor is no longer needed, either stop or restart the io_service to avoid wasting cycles in epoll_wait.
- task_io_service has one public queue, and each thread executing run() has its own private queue. Before each round of event processing, the contents of the private queue are first appended to the public queue; in that sense the public queue takes priority over the private ones.
- Every event in asio eventually comes back to the public queue and waits there for execution.
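The ordering claim above can be checked with a tiny splice helper (hypothetical, std-only): entries already in the public queue stay ahead of anything spliced in from a private queue.

```cpp
#include <deque>
#include <string>

// Append the private queue to the back of the public queue, as the cleanup
// objects do on scope exit; existing public entries keep their place at
// the front and therefore execute first.
inline void splice_private(std::deque<std::string>& public_q,
                           std::deque<std::string>& private_q)
{
  while (!private_q.empty())
  {
    public_q.push_back(private_q.front());
    private_q.pop_front();
  }
}
```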