Java IO learning note 8: multiplexing to Netty

Keywords: Java Netty Back-end

Author: Grey

Original address: Java IO learning note 8: multiplexing to Netty

The multiplexing-with-multithreading approach from the previous note is still somewhat cumbersome. Netty encapsulates it for us and greatly reduces coding complexity. Next, let's get familiar with the basics of Netty.

We will first implement client-server communication with Netty in the simplest, blocking style, and then refactor it into the style officially recommended by Netty.

The first step is to add the Netty dependency:

<dependency>
   <groupId>io.netty</groupId>
   <artifactId>netty-all</artifactId>
   <version>4.1.65.Final</version>
</dependency>

Prepare the client:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

import java.net.InetSocketAddress;

import static java.nio.charset.StandardCharsets.UTF_8;

/**
 * @author <a href="mailto:410486047@qq.com">Grey</a>
 * @since
 */
public class NettyClientSync {
    public static void main(String[] args) throws Exception {
        NioEventLoopGroup thread = new NioEventLoopGroup(1);
        NioSocketChannel client = new NioSocketChannel();
        thread.register(client);
        ChannelPipeline p = client.pipeline();
        p.addLast(new MyInHandler());
        ChannelFuture connect = client.connect(new InetSocketAddress("192.168.205.138", 9090));
        ChannelFuture sync = connect.sync();
        ByteBuf buf = Unpooled.copiedBuffer("hello server".getBytes());
        ChannelFuture send = client.writeAndFlush(buf);
        send.sync();
        sync.channel().closeFuture().sync();
        System.out.println("client over....");
    }

    static class MyInHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRegistered(ChannelHandlerContext ctx) {
            System.out.println("client  register...");
        }

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            System.out.println("client active...");
        }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ByteBuf buf = (ByteBuf) msg;
            CharSequence str = buf.getCharSequence(0, buf.readableBytes(), UTF_8);
            System.out.println(str);
            ctx.writeAndFlush(buf);
        }
    }
}

This client sends data to the server at 192.168.205.138:9090. First start a mock server with nc:

[root@io ~]# nc -l 192.168.205.138 9090

Then start the client, and the server can receive the data sent by the client:

[root@io ~]# nc -l 192.168.205.138 9090
hello server

That is a client implemented with Netty. Now let's look at the server:

import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

import java.net.InetSocketAddress;

/**
 * @author <a href="mailto:410486047@qq.com">Grey</a>
 * @since
 */
public class NettyServerSync {
    public static void main(String[] args) throws Exception {
        NioEventLoopGroup thread = new NioEventLoopGroup(1);
        NioServerSocketChannel server = new NioServerSocketChannel();
        thread.register(server);
        ChannelPipeline p = server.pipeline();
        p.addLast(new MyAcceptHandler(thread, new NettyClientSync.MyInHandler()));
        ChannelFuture bind = server.bind(new InetSocketAddress("192.168.205.1",9090));
        bind.sync().channel().closeFuture().sync();
        System.out.println("server close....");
    }

    static class MyAcceptHandler extends ChannelInboundHandlerAdapter {


        private final EventLoopGroup selector;
        private final ChannelHandler handler;

        public MyAcceptHandler(EventLoopGroup thread, ChannelHandler myInHandler) {
            this.selector = thread;
            this.handler = myInHandler;
        }

        @Override
        public void channelRegistered(ChannelHandlerContext ctx) {
            System.out.println("server registered...");
        }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            SocketChannel client = (SocketChannel) msg;
            ChannelPipeline p = client.pipeline();
            p.addLast(handler);
            selector.register(client);
        }
    }
}

Start the server, connect to it from a client, and send some data:

[root@io ~]# nc 192.168.205.1 9090
hello 
hello

The server detects the client connection and receives the data it sends:

client  register...
client active...
hello

However, if this server accepts a second client connection and that client sends some data, the server reports an error:

An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.ChannelPipelineException: git.snippets.io.netty.NettyClientSync$MyInHandler is not a @Sharable handler, so can't be added or removed multiple times.

The reason is explained in this blog post: Netty ChannelHandler error.

Notice that whenever a new connection becomes readable, the accept handler adds a handler to the new channel's pipeline. The childHandler we passed in at initialization is a single instance, which means different channels share the same handler. Netty's design discourages this: a major advantage of Netty is that each channel is bound to its own event loop and its own channel handlers, which guarantees serial execution of handler code without any concurrent-synchronization concerns. That is why Netty has a checkMultiplicity() method that checks for exactly this problem. So, given that Netty's own code, child.pipeline().addLast(childHandler), reuses the same instance, how can we create a distinct handler for each channel?

It is very simple: write a class that extends ChannelInitializer. ChannelInitializer is a special handler, itself annotated @Sharable, that you can think of as a factory assembling many ChannelHandlers. By extending it, you can create a separate handler, or even several handlers, for each channel.
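The multiplicity check can be reproduced in isolation with EmbeddedChannel, a loopback test transport that ships with netty-all (the class and handler names below are illustrative, not from the article):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPipelineException;
import io.netty.channel.embedded.EmbeddedChannel;

public class NotSharableDemo {
    // A plain echo handler, deliberately NOT annotated with @Sharable.
    static class PlainHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.writeAndFlush(msg);
        }
    }

    public static void main(String[] args) {
        PlainHandler shared = new PlainHandler();
        // First pipeline accepts the handler and marks the instance as "added".
        EmbeddedChannel first = new EmbeddedChannel(shared);
        try {
            // Second pipeline: checkMultiplicity() rejects the reused instance.
            new EmbeddedChannel(shared);
        } catch (ChannelPipelineException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        first.finish();
    }
}
```

This is the same ChannelPipelineException the server above logs when a second client connects.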

One quick fix is to add the @Sharable annotation to the handler passed to the server:

@ChannelHandler.Sharable
static class MyInHandler extends ChannelInboundHandlerAdapter {
    ...
}
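With the annotation in place, a single instance may sit in several pipelines at once. A minimal sketch to convince yourself, again using Netty's EmbeddedChannel test transport (class names here are hypothetical):

```java
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;

public class SharableDemo {
    // Echo handler marked @Sharable: one instance may serve many channels,
    // so it must not keep any per-connection state.
    @ChannelHandler.Sharable
    static class SharableEchoHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.writeAndFlush(msg);
        }
    }

    public static void main(String[] args) {
        SharableEchoHandler shared = new SharableEchoHandler();
        // Both constructions succeed; without @Sharable the second one
        // would throw ChannelPipelineException from checkMultiplicity().
        EmbeddedChannel ch1 = new EmbeddedChannel(shared);
        EmbeddedChannel ch2 = new EmbeddedChannel(shared);
        System.out.println("shared handler accepted by two pipelines");
        ch1.finish();
        ch2.finish();
    }
}
```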

However, adding @Sharable to every business handler scales poorly, because a shared instance then serves many channels and must avoid per-connection state. Netty instead provides a class that is already marked @Sharable and carries no business logic: ChannelInitializer. We only need to override its initChannel() method and create the business handlers there, so each channel gets fresh instances. Let's rewrite NettyClientSync and NettyServerSync in the style Netty recommends.

The client becomes:

import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

import java.net.InetSocketAddress;

/**
 * @author <a href="mailto:410486047@qq.com">Grey</a>
 * @since
 */
public class NettyClient {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        Bootstrap bs = new Bootstrap();
        ChannelFuture fu = bs
                .group(group).channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<NioSocketChannel>() {
                    @Override
                    protected void initChannel(NioSocketChannel nioSocketChannel) throws Exception {
                        ChannelPipeline pipeline = nioSocketChannel.pipeline();
                        pipeline.addLast(new NettyClientSync.MyInHandler());
                    }
                }).connect(new InetSocketAddress("192.168.205.138", 9090));
        Channel client = fu.channel();
        ByteBuf buf = Unpooled.copiedBuffer("Hello Server".getBytes());
        ChannelFuture future = client.writeAndFlush(buf);
        future.sync();
    }
}

Start an nc server, then run the client code above; the server receives the message:

[root@io ~]# nc -l 192.168.205.138 9090
Hello Server

Next, modify the server code:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

import java.net.InetSocketAddress;

/**
 * @author <a href="mailto:410486047@qq.com">Grey</a>
 * @since
 */
public class NettyServer {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        ServerBootstrap bs = new ServerBootstrap();
        ChannelFuture bind = bs
                .group(group, group)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<NioSocketChannel>() {
                    @Override
                    protected void initChannel(NioSocketChannel nioSocketChannel) throws Exception {
                        ChannelPipeline pipeline = nioSocketChannel.pipeline();
                        pipeline.addLast(new NettyClientSync.MyInHandler());
                    }
                }).bind(new InetSocketAddress("192.168.205.1", 9090));
        bind.sync().channel().closeFuture().sync();
    }
}

Start the server code, then connect to the server through the client and send some data:

[root@io ~]# nc 192.168.205.1 9090
sdfasdfas
sdfasdfas

The data is received normally.
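If you want to check the echo logic without nc and real sockets, EmbeddedChannel (bundled with netty-all) can drive a handler in-process. This sketch mirrors MyInHandler's echo behavior; the class names are illustrative:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;

import static java.nio.charset.StandardCharsets.UTF_8;

public class EchoDemo {
    // Same echo logic as MyInHandler: write the inbound buffer back out.
    static class EchoHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.writeAndFlush(msg);
        }
    }

    public static void main(String[] args) {
        EmbeddedChannel ch = new EmbeddedChannel(new EchoHandler());
        // Feed a message in; the handler echoes it to the outbound queue.
        ch.writeInbound(Unpooled.copiedBuffer("ping", UTF_8));
        ByteBuf out = ch.readOutbound();
        System.out.println(out.toString(UTF_8)); // the echoed payload
        out.release();
        ch.finish();
    }
}
```

This is also a convenient way to unit-test handlers before wiring them into a Bootstrap.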

Source code: Github

Posted by Biocide on Wed, 10 Nov 2021 07:12:50 -0800