Three I/O multiplexing modes of Linux: epoll

Keywords: Linux socket programming

I/O multiplexing functions

select
poll
epoll (unique to Linux)

Next we look at the last of the three I/O multiplexing mechanisms, epoll, which is unique to Linux, and then briefly compare all three.

epoll

epoll is not a single function, but a set of functions.

int epoll_create(int size);//Creates a kernel event table, maintained by the kernel rather than by the user
int epoll_ctl(int epfd,int op,int fd,struct epoll_event *event);//Registers, modifies, or removes events in the kernel event table
int epoll_wait(int epfd,struct epoll_event *events,int maxevents,int timeout);//Waits for events to become ready

epoll_create

Kernel event table: epoll creates a table in the kernel that records the events the user cares about on each file descriptor. With select and poll, the event set lives in user space, so every call must copy it from user space into the kernel and copy results back out; epoll registers events in the kernel once, saving those repeated copies.
Return value of epoll_create: -1 on failure.
On success, it returns a file descriptor identifying the kernel event table.

epoll_ctl

int epfd //identifier of the kernel event table, obtained from epoll_create
int op //a macro specifying the type of operation, so one function implements several operations. op takes one of three values:
EPOLL_CTL_ADD registers events on fd in the event table
EPOLL_CTL_MOD modifies the events registered on fd
EPOLL_CTL_DEL deletes the events registered on fd
int fd //the file descriptor to operate on, e.g. a listening socket or a connected socket
struct epoll_event *event //specifies the event. struct epoll_event is defined as follows:

struct epoll_event
{
uint32_t events;//events the user cares about
epoll_data_t data;//a union; typically holds the fd the user cares about
};

epoll supports essentially the same events as poll, with an E prefixed to the poll event names (POLLIN becomes EPOLLIN, and so on). In addition, epoll has EPOLLET and EPOLLONESHOT, which underpin much of epoll's efficiency. The LT and ET modes of epoll deserve their own discussion; here we focus on getting to know the three I/O multiplexing mechanisms.
epoll_data_t is defined as follows:

typedef union epoll_data
{
void *ptr;
int fd;//the file descriptor the user cares about
uint32_t u32;
uint64_t u64;
} epoll_data_t;

The most commonly used member is fd, which identifies the file descriptor of interest; the others are used less often. Because this is a union, only one member can be stored at a time: if you want to keep the fd together with other data, you must make ptr point to a structure of your own that holds both. This leaves an extensible interface for future programming.

epoll_wait

events: an array supplied by the user; maxevents gives the number of elements in the array.
The kernel fills this array with information about the ready file descriptors when epoll_wait returns, so maxevents limits how many ready events are reported per call.
Return value of epoll_wait: 0 on timeout, -1 on failure, and the number of ready file descriptors when > 0.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <string.h>

#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>

#define FDMAXNUM 100

void DealClientData(int fd,int epfd,short events)
{
	if(events & EPOLLRDHUP)
	{
		epoll_ctl(epfd,EPOLL_CTL_DEL,fd,NULL);
		close(fd);
	}
	else if(events & EPOLLIN)
	{
		char buff[128] = {0};
		int n = recv(fd,buff,127,0);
		if(n <= 0)
		{
			printf("client disconnected\n");
			epoll_ctl(epfd,EPOLL_CTL_DEL,fd,NULL);//remove fd from the kernel event table before closing
			close(fd);
			return;
		}
		printf("%s\n", buff);
		send(fd,"OK",2,0);
	}
}

void GetClientLink(int fd,int epfd,struct epoll_event *event,struct sockaddr_in cli)
{
	socklen_t len = sizeof(cli);
	int c = accept(fd,(struct sockaddr*)&cli,&len);
	assert(c != -1);

	event->events = EPOLLIN | EPOLLRDHUP;
	event->data.fd = c;
	epoll_ctl(epfd,EPOLL_CTL_ADD,c,event);
}

void DealFinshEvent(int sockfd,int epfd,struct epoll_event *events,
					int n,struct sockaddr_in cli,struct epoll_event *event)
{
	int i = 0;
	for(;i < n;++i)
	{
		int fd = events[i].data.fd;
		if(fd == sockfd)
		{
			GetClientLink(fd,epfd,event,cli);
		}
		else
		{
			DealClientData(fd,epfd,events[i].events);
		}
	}
}



int main(int argc, char const *argv[])
{
	int sockfd = socket(AF_INET,SOCK_STREAM,0);
	assert(sockfd != -1);

	struct sockaddr_in ser,cli;
	memset(&ser,0,sizeof(ser));
	
	ser.sin_family = AF_INET;
	ser.sin_port = htons(6000);
	ser.sin_addr.s_addr = inet_addr("127.0.0.1");


	int res = bind(sockfd,(struct sockaddr*)&ser,sizeof(ser));
	assert(res != -1);

	listen(sockfd,5);

	int epfd = epoll_create(5);
	assert(epfd != -1);

	struct epoll_event event;
	
	event.data.fd = sockfd;
	event.events = EPOLLIN;

	epoll_ctl(epfd,EPOLL_CTL_ADD,sockfd,&event);

	while(1)
	{
		struct epoll_event events[FDMAXNUM];
		int n = epoll_wait(epfd,events,FDMAXNUM,-1);//block until at least one event is ready
		if(n <= 0)
		{
			printf("epoll_wait failed\n");//with timeout -1, n <= 0 means an error, not a timeout
			continue;
		}
		DealFinshEvent(sockfd,epfd,events,n,cli,&event);
	}
	close(sockfd);
	return 0;
}

The handling of EPOLLIN and EPOLLRDHUP is similar to poll, but epoll is considerably more efficient.
Where epoll is more efficient than poll:

  1. The events the user cares about are stored directly in the kernel event table, so epoll_wait does not need to copy them from user space into the kernel on every call.
  2. When epoll_wait returns, only the ready file descriptors are copied into the user-space array. select and poll return the entire descriptor set, ready or not, which the user must then scan.
  3. Finding the ready descriptors in user space is O(1) per event with epoll, since only ready events are returned; with poll and select the user must scan all registered descriptors. epoll also inherits poll's advantages over select, such as having no hard-coded descriptor limit.
  4. epoll uses a callback mechanism in the kernel: ready events are appended to a ready list that backs the returned events array, whereas poll and select poll every descriptor on each call.

The differences between the three kinds of I/O multiplexing:

  1. epoll's callback mechanism suits the case of many descriptors with only a few ready at a time; if events trigger too frequently, the per-callback overhead starts to cost.
  2. The polling of poll suits the case where many of the monitored descriptors are ready at once.
  3. In the kernel, epoll keeps its event table in a red-black tree, poll uses a linked list, and select uses a fixed-size bitmap (fd_set).

Posted by ddemore on Wed, 24 Apr 2019 19:51:34 -0700