swoole in Practice 2018, Part 7: Process Details

Keywords: PHP, Redis, brew, Mac

This follows the previous chapter, swoole in Practice 2018, Part 6: Asynchronous Redis.

This chapter introduces swoole's process management module, swoole_process.

Creating a child process

Create a new file, process.php:

<?php
$process = new swoole_process(function(swoole_process $pro) {
    echo 'Process created by swoole' . PHP_EOL;
}, false); // Second argument: redirect stdin/stdout. If true, the child's standard output goes to the pipe instead of the terminal

$pid = $process->start(); // Fork the child process
echo $pid . PHP_EOL; // Child process id

swoole_process::wait(); // Reap the child so it does not become a zombie
☁  process  php process.php
67540
Process created by swoole
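For reference, swoole_process::wait() blocks until one child exits and returns its status. A minimal sketch of inspecting that return value (the variable name here is illustrative):

$status = swoole_process::wait(); // blocks until one child exits
// $status is an array with the keys pid, code (exit code) and signal
echo "child {$status['pid']} exited with code {$status['code']}" . PHP_EOL;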

Calling an external program

Modify process.php:

<?php
$process = new swoole_process(function(swoole_process $pro) {
    // Replace the child process with an external program (here: the PHP CLI running http_server.php)
    $pro->exec("/usr/local/opt/php@7.1/bin/php", [__DIR__.'/http_server.php']);
}, false); // Second argument: redirect stdin/stdout. If true, the child's standard output goes to the pipe instead of the terminal

$pid = $process->start(); // Fork the child process
echo $pid . PHP_EOL; // Child process id

swoole_process::wait(); // Reap the child

Create http_server.php:

<?php
$http = new swoole_http_server('0.0.0.0', 9502);

$http->on('request', function ($request, $response) {
    $response->header("Content-Type", "text/html; charset=utf-8");
    $time = date('Y-m-d H:i:s', time());
    $response->end("<h1>{$time} -- This is the http service provided by swoole. After modifying the code, restart the service for changes to take effect</h1>");
});

$http->start();
☁  process  php process.php
68526
[2018-07-27 16:38:53 @68526.0]  TRACE   Create swoole_server host=0.0.0.0, port=9502, mode=3, type=1

Visit http://127.0.0.1:9502/ in a browser.
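Note that exec() behaves like the underlying execv(): it replaces the child's process image, so nothing after the call runs. A minimal sketch illustrating this (the /bin/echo path is an assumption for a typical Unix system):

<?php
$process = new swoole_process(function (swoole_process $pro) {
    $pro->exec('/bin/echo', ['hello from the child']); // replaces the child's image
    echo 'never reached' . PHP_EOL; // exec() does not return on success
}, false);

$process->start();
swoole_process::wait();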

Viewing the process tree

The pstree tool shows the parent/child relationships between processes:

brew install pstree # Install pstree on macOS
ps aux | grep process.php  # Get the process id
pstree -p 69932 # Show the process tree
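Alternatively, assuming pgrep is available on your system, you can combine the two steps into one:

pstree -p $(pgrep -f process.php | head -1)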

A practical multi-process example

Suppose you use PHP to crawl web pages. The traditional approach is a for loop that fetches the URLs one by one; if each URL takes one second, five URLs take five seconds, which is inefficient. With swoole's process management module we can crawl in parallel: each child process handles one URL, and because the children run concurrently the whole job finishes in about one second.

Create a new file, process_curl.php:

<?php
$startTime = time();
echo "Program start time:" . date("H:i:s") . PHP_EOL;
$workers = [];
$urls = [
    'http://www.zhihu.com',
    'http://www.baidu.com',
    'http://www.jianshu.com',
    'http://www.huxiu.com',
    'http://www.qq.com',
];

for ($i = 0; $i < count($urls); $i++) {
    // Start one child process per URL
    $process = new swoole_process(function (swoole_process $worker) use($i, $urls) {
        $content = getContent($urls[$i]);
        $worker->write($content); // Send the result to the parent through the pipe
    }, true);
    $pid = $process->start();
    $workers[$pid] = $process;
}

foreach ($workers as $process) {
    echo $process->read(); // Blocks until this child has written its result
}

// Reap all children to avoid zombie processes
for ($i = 0; $i < count($urls); $i++) {
    swoole_process::wait();
}

// Simulate fetching a page: each request takes 1 second
function getContent($url) {
    sleep(1);
    return $url . " execution completed..." . PHP_EOL;
}

$runTime = time() - $startTime;
echo "Program execution time is {$runTime} second(s)" . PHP_EOL;

Execution result (all five children sleep concurrently, so the total wall-clock time stays at about one second):

☁  process  php process_curl.php
Program start time: 17:13:54
http://www.zhihu.com execution completed...
http://www.baidu.com execution completed...
http://www.jianshu.com execution completed...
http://www.huxiu.com execution completed...
http://www.qq.com execution completed...
Program execution time is 1 second(s)
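The pipe created for each swoole_process is bidirectional, so the parent can also push work to the children instead of capturing it with use. A minimal sketch under that assumption (the URL here is just an example):

<?php
$process = new swoole_process(function (swoole_process $worker) {
    $url = trim($worker->read());               // block until the parent writes a URL
    $worker->write("fetched {$url}" . PHP_EOL); // send the result back
}, true);

$process->start();
$process->write("http://www.baidu.com\n"); // parent -> child through the pipe
echo $process->read();                     // child -> parent
swoole_process::wait();                    // reap the child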

If you found this article helpful, please like it or buy me a coffee; your support means a lot to me.

Posted by sandsquid on Fri, 31 Jan 2020 12:04:36 -0800