I found that there is no reasonably complete source-level analysis of Netty 3 on the Internet, so I read the official documentation and, to reinforce my memory, translated it into Chinese with appropriate simplification.
Original Document Address: Netty3 Document
Chapter 1 Begins
1. Before you start
The demos have two prerequisites: the latest version of Netty 3 and JDK 1.5 or later.
2. Write a Discard Server
The simplest protocol is Discard, which ignores all received data and never responds. Let's start with the handler implementation that processes Netty's I/O events:
```java
public class DiscardServerHandler extends SimpleChannelHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();

        Channel ch = e.getChannel();
        ch.close();
    }
}
```
- DiscardServerHandler extends SimpleChannelHandler, an implementation of ChannelHandler;
- The messageReceived method receives a MessageEvent parameter that contains the data received from the client;
- The exceptionCaught method is called when an I/O error occurs or when a handler throws an exception while processing an event; it usually logs the error and closes the channel.
Next, write a main method to turn on services using DiscardServerHandler:
```java
public class DiscardServer {

    public static void main(String[] args) throws Exception {
        ChannelFactory factory =
            new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool());

        ServerBootstrap bootstrap = new ServerBootstrap(factory);

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new DiscardServerHandler());
            }
        });

        bootstrap.setOption("child.tcpNoDelay", true);
        bootstrap.setOption("child.keepAlive", true);

        bootstrap.bind(new InetSocketAddress(8080));
    }
}
```
- ChannelFactory is the factory that creates and manages Channels and their associated resources. It handles all I/O requests and performs the I/O that generates ChannelEvents. It does not create I/O threads on its own, however; it takes threads from the thread pools specified in its constructor. Server-side applications use NioServerSocketChannelFactory;
- ServerBootstrap is a helper class for setting up the server side;
- When the server accepts a new connection, the specified ChannelPipelineFactory creates a new ChannelPipeline, which here contains a DiscardServerHandler object;
- You can set implementation-specific options on the Channel; options prefixed with "child." apply to the Channels accepted by the server, not to the server's own ServerSocketChannel;
- All that remains is binding a port to start the service; you can bind to several different ports.
3. Study the data received
We can test the service with the "telnet localhost 8080" command, but because it is a Discard service we cannot tell whether it is working properly. So let's modify the handler to print the data it receives.
```java
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    ChannelBuffer buf = (ChannelBuffer) e.getMessage();
    while (buf.readable()) {
        System.out.println((char) buf.readByte());
        System.out.flush();
    }
}
```
- ChannelBuffer is Netty's basic data structure for storing bytes. It is similar to NIO's ByteBuffer but easier to use and more flexible; for example, Netty lets you combine multiple ChannelBuffers into one with as few memory copies as possible.
4. Write an Echo service
A service usually responds to requests. Next, we write a service implementing the Echo protocol, which sends received data back to the client:
```java
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    Channel ch = e.getChannel();
    ch.write(e.getMessage());
}
```
- MessageEvent extends ChannelEvent, and a ChannelEvent holds a reference to its associated Channel. We can get this Channel and call its write method to send the data back to the client.
5. Write a Time Service
This time we implement a time protocol, which sends a 32-bit integer without requiring any request data and closes the connection once the message has been sent. Because we ignore incoming data and only need to send a message when a connection is established, we cannot use the messageReceived method this time; instead we override channelConnected:
```java
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();

    ChannelBuffer time = ChannelBuffers.buffer(4);
    time.writeInt((int) (System.currentTimeMillis() / 1000L));

    ChannelFuture f = ch.write(time);

    f.addListener(new ChannelFutureListener() {
        public void operationComplete(ChannelFuture future) {
            Channel ch = future.getChannel();
            ch.close();
        }
    });
}
```
- The channelConnected method is called at the moment the connection is established; there we write a 32-bit integer representing the current time in seconds.
- We use the ChannelBuffers utility class to allocate a ChannelBuffer with a capacity of 4 bytes to hold this 32-bit integer.
- Then we write the ChannelBuffer into the Channel... wait, where is the flip method? In NIO, don't we call ByteBuffer's flip method before writing to a channel? ChannelBuffer does not need such a method because it has two indexes, one for reading and one for writing. The writer index advances when data is written to a ChannelBuffer while the reader index stays put, and the two indexes are independent of each other. This makes Netty's ChannelBuffer easier to use than NIO's buffer.
- Another thing to note is that the write method returns a ChannelFuture object. It represents an I/O operation that may not have happened yet, because every operation in Netty is asynchronous. Therefore we must wait until the ChannelFuture notifies us that the write has completed before closing the Channel. (And yes, the close method also returns a ChannelFuture...)
- So how do we get notified that the operation is complete? Simply add a ChannelFutureListener to the returned ChannelFuture; here we create an anonymous inner ChannelFutureListener class that closes the Channel when the operation completes.
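The two-index idea is easy to model in plain Java. The TwoIndexBuffer class below is a hypothetical toy, not Netty's actual implementation; it only illustrates why no flip() call is needed when reads and writes use separate indexes.

```java
// Toy two-index buffer: a hypothetical sketch, NOT Netty's ChannelBuffer.
public class TwoIndexBuffer {

    private final byte[] data;
    private int readerIndex = 0; // advances only when reading
    private int writerIndex = 0; // advances only when writing

    public TwoIndexBuffer(int capacity) {
        data = new byte[capacity];
    }

    public void writeByte(byte b) {
        data[writerIndex++] = b;    // the reader index is untouched
    }

    public byte readByte() {
        return data[readerIndex++]; // the writer index is untouched
    }

    public int readableBytes() {
        return writerIndex - readerIndex;
    }

    public static void main(String[] args) {
        TwoIndexBuffer buf = new TwoIndexBuffer(4);
        buf.writeByte((byte) 1);
        buf.writeByte((byte) 2);
        // No flip() needed: reading simply starts at readerIndex.
        System.out.println(buf.readByte());      // 1
        System.out.println(buf.readableBytes()); // 1
    }
}
```

Netty's real ChannelBuffer builds on the same idea with many more operations.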
6. Write a time client
We also need a client that follows the time protocol and translates the integer into a date. The only difference between a Netty server and a Netty client is that they require different Bootstrap and ChannelFactory implementations:
```java
public static void main(String[] args) throws Exception {
    String host = args[0];
    int port = Integer.parseInt(args[1]);

    ChannelFactory factory =
        new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool());

    ClientBootstrap bootstrap = new ClientBootstrap(factory);

    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        public ChannelPipeline getPipeline() {
            return Channels.pipeline(new TimeClientHandler());
        }
    });

    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    bootstrap.connect(new InetSocketAddress(host, port));
}
```
- NioClientSocketChannelFactory, used to create a client Channel;
- ClientBootstrap is the client-side counterpart of ServerBootstrap;
- Note that the "child." prefix is not used when setting options here, because a client-side SocketChannel has no parent Channel;
- Corresponding to the bind method on the server side, here we need to call the connect method.
In addition, we need a ChannelHandler implementation that translates the 32-bit integer returned by the server into a date, prints it, and closes the connection:
```java
public class TimeClientHandler extends SimpleChannelHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer buf = (ChannelBuffer) e.getMessage();
        long currentTimeMillis = buf.readInt() * 1000L;
        System.out.println(new Date(currentTimeMillis));
        e.getChannel().close();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
```
It looks simple, doesn't it? However, this handler sometimes throws an IndexOutOfBoundsException at runtime. In the next section we discuss why.
7. Processing Stream-based Transport
7.1. A Small Warning About Socket Buffers
In a stream-based transport such as TCP/IP, received data is stored in a socket receive buffer. This buffer is not a queue of packets but a queue of bytes. This means that even if you send two messages separately, the operating system treats them as a single string of bytes, so there is no guarantee that what you read matches what the peer wrote on the other end. Therefore, whether you are writing a client or a server, the received data must be reassembled into the frames your application logic expects.
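The loss of message boundaries can be demonstrated without any networking at all. In the sketch below, a ByteArrayOutputStream stands in for the kernel's receive buffer (a hypothetical simplification): two separate writes arrive as one undifferentiated byte sequence, and a single read may span both.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class StreamBoundaries {

    // Stand-in for the kernel's socket receive buffer: a plain byte queue.
    public static byte[] wire() {
        ByteArrayOutputStream socketBuffer = new ByteArrayOutputStream();
        byte[] first = "ABC".getBytes(StandardCharsets.US_ASCII);  // first send()
        byte[] second = "DEF".getBytes(StandardCharsets.US_ASCII); // second send()
        socketBuffer.write(first, 0, first.length);
        socketBuffer.write(second, 0, second.length);
        return socketBuffer.toByteArray(); // the message boundary is already gone
    }

    public static void main(String[] args) {
        ByteArrayInputStream in = new ByteArrayInputStream(wire());
        byte[] chunk = new byte[4];
        int n = in.read(chunk, 0, 4); // a single read may span both messages
        System.out.println(new String(chunk, 0, n, StandardCharsets.US_ASCII)); // ABCD
    }
}
```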
7.2. First Solution
Back to the earlier time client problem: a 32-bit integer is small, but it can still be split, and as traffic increases, so does the likelihood of splitting. A simple solution is to keep an internal cumulative buffer and wait until four bytes have been received.
```java
private final ChannelBuffer buf = dynamicBuffer();

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    ChannelBuffer m = (ChannelBuffer) e.getMessage();
    buf.writeBytes(m);

    if (buf.readableBytes() >= 4) {
        long currentTimeMillis = buf.readInt() * 1000L;
        System.out.println(new Date(currentTimeMillis));
        e.getChannel().close();
    }
}
```
- ChannelBuffers.dynamicBuffer() returns an auto-expanding ChannelBuffer;
- All received data is accumulated in this dynamic cache;
- The handler must check whether the buffer holds at least 4 bytes before continuing with the business logic; otherwise, Netty will call messageReceived again as more data arrives.
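The accumulate-then-decode logic can be sketched in plain Java without Netty. The Accumulator class below is hypothetical; it mirrors the handler above by buffering fragments until four bytes are available and only then decoding the integer.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class Accumulator {

    private final ByteArrayOutputStream buf = new ByteArrayOutputStream(); // cumulative cache

    // Feed one received fragment; returns the decoded int once 4 bytes
    // have accumulated, or null while data is still missing.
    public Integer feed(byte[] fragment) {
        buf.write(fragment, 0, fragment.length);
        if (buf.size() < 4) {
            return null; // wait for the rest, exactly like the handler above
        }
        return ByteBuffer.wrap(buf.toByteArray()).getInt();
    }

    public static void main(String[] args) {
        byte[] whole = ByteBuffer.allocate(4).putInt(1234567890).array();
        Accumulator acc = new Accumulator();
        System.out.println(acc.feed(new byte[] { whole[0] }));                     // null
        System.out.println(acc.feed(new byte[] { whole[1], whole[2], whole[3] })); // 1234567890
    }
}
```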
7.3. Second Solution
The first solution has many problems. For example, a complex protocol may consist of multiple variable-length fields, and the handler of the first solution cannot cope with that.
You may have noticed that multiple ChannelHandlers can be added to a ChannelPipeline. Using this feature, you can split one bloated ChannelHandler into several modular ChannelHandlers, which reduces the complexity of your application. For example, you can split TimeClientHandler into two handlers:
- TimeDecoder, which deals with the fragmentation issue;
- the original, now much simpler, TimeClientHandler.
Netty provides an extensible class to help you implement TimeDecoder:
```java
public class TimeDecoder extends FrameDecoder {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null;
        }
        return buffer.readBytes(4);
    }
}
```
- FrameDecoder is a ChannelHandler implementation designed to deal with fragmentation;
- FrameDecoder calls the decode method whenever new data arrives, passing in its internally maintained cumulative buffer;
- If decode returns null, it means not enough data has been received yet; FrameDecoder will call it again when more data arrives;
- If decode returns a non-null object, FrameDecoder discards the read portion of its cumulative buffer. You do not need to decode multiple frames yourself; FrameDecoder keeps calling decode until it returns null.
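The contract described above can be sketched as a small stand-alone model. MiniFrameDecoder below is a hypothetical toy, not Netty's FrameDecoder: decode returns null while data is missing, and a framework-style loop keeps calling it until it returns null, so several frames arriving in one read are all decoded.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class MiniFrameDecoder {

    private final ArrayDeque<Byte> cumulation = new ArrayDeque<Byte>(); // internal cache

    // One decode call: null means "not enough data yet";
    // otherwise a fixed 4-byte frame is consumed and returned.
    private byte[] decode() {
        if (cumulation.size() < 4) {
            return null;
        }
        byte[] frame = new byte[4];
        for (int i = 0; i < 4; i++) {
            frame[i] = cumulation.poll();
        }
        return frame;
    }

    // Mimics the framework loop: on new data, call decode until it returns null.
    public List<byte[]> onDataReceived(byte[] data) {
        for (byte b : data) {
            cumulation.add(b);
        }
        List<byte[]> frames = new ArrayList<byte[]>();
        byte[] frame;
        while ((frame = decode()) != null) {
            frames.add(frame);
        }
        return frames;
    }

    public static void main(String[] args) {
        MiniFrameDecoder d = new MiniFrameDecoder();
        System.out.println(d.onDataReceived(new byte[] {1, 2, 3}).size());       // 0
        System.out.println(d.onDataReceived(new byte[] {4, 5, 6, 7, 8}).size()); // 2
    }
}
```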
After the split, we need to modify the ChannelPipelineFactory implementation of TimeClient:
```java
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
            new TimeDecoder(),
            new TimeClientHandler());
    }
});
```
Netty also provides ReplayingDecoder to simplify decoding even further:
```java
public class TimeDecoder extends ReplayingDecoder<VoidEnum> {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buffer, VoidEnum state) {
        return buffer.readBytes(4);
    }
}
```
In addition, Netty provides a set of out-of-the-box decoders that allow you to easily implement most protocols:
- org.jboss.netty.example.factorial for binary protocols;
- org.jboss.netty.example.telnet for line-based text protocols.
8. Replace ChannelBuffer with POJO
The demos above use ChannelBuffer as the basic data structure of the protocol message. In this section we use a POJO instead of a ChannelBuffer. Separating the code that extracts information from a ChannelBuffer out of the handler makes the handler more maintainable and reusable. This advantage is hard to see in the demos above, but the separation is necessary in practice.
First, we define a type of UnixTime:
```java
public class UnixTime {

    private final int value;

    public UnixTime(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    @Override
    public String toString() {
        return new Date(value * 1000L).toString();
    }
}
```
Now we can modify the TimeDecoder to return a UnixTime instead of ChannelBuffer:
```java
@Override
protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
    if (buffer.readableBytes() < 4) {
        return null;
    }
    return new UnixTime(buffer.readInt());
}
```
With the decoder changed, the corresponding TimeClientHandler no longer uses ChannelBuffer:
```java
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    UnixTime m = (UnixTime) e.getMessage();
    System.out.println(m);
    e.getChannel().close();
}
```
The same technique can be applied to the server-side TimeServerHandler:
```java
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    UnixTime time = new UnixTime((int) (System.currentTimeMillis() / 1000));
    ChannelFuture f = e.getChannel().write(time);
    f.addListener(ChannelFutureListener.CLOSE);
}
```
This only works if there is an encoder that translates a UnixTime object into a ChannelBuffer:
```java
public class TimeEncoder extends SimpleChannelHandler {

    @Override
    public void writeRequested(ChannelHandlerContext ctx, MessageEvent e) {
        UnixTime time = (UnixTime) e.getMessage();

        ChannelBuffer buf = buffer(4);
        buf.writeInt(time.getValue());

        Channels.write(ctx, e.getFuture(), buf);
    }
}
```
- An encoder overrides the writeRequested method to intercept write requests. Note that although writeRequested takes a MessageEvent parameter just like messageReceived in the client's TimeClientHandler, the two are interpreted differently. A ChannelEvent can be either an upstream or a downstream event, depending on the direction in which it flows: the MessageEvent in messageReceived is an upstream event, while the one in writeRequested is a downstream event.
- After converting the POJO into a ChannelBuffer, you should forward the new buffer to the previous ChannelDownstreamHandler in the ChannelPipeline. The Channels class provides several helper methods for creating and sending ChannelEvents.
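The upstream/downstream distinction can be illustrated with a toy pipeline. MiniPipeline below is purely hypothetical and far simpler than Netty's ChannelPipeline; it only shows that upstream events traverse the handler list in one direction and downstream events in the other.

```java
import java.util.ArrayList;
import java.util.List;

public class MiniPipeline {

    // Toy handler: transforms a message and passes it along.
    public interface Handler {
        String on(String msg);
    }

    private final List<Handler> handlers = new ArrayList<Handler>();

    public void add(Handler h) {
        handlers.add(h);
    }

    // Upstream (e.g. messageReceived): first handler to last.
    public String fireUpstream(String msg) {
        for (Handler h : handlers) {
            msg = h.on(msg);
        }
        return msg;
    }

    // Downstream (e.g. writeRequested): last handler to first.
    public String fireDownstream(String msg) {
        for (int i = handlers.size() - 1; i >= 0; i--) {
            msg = handlers.get(i).on(msg);
        }
        return msg;
    }

    public static void main(String[] args) {
        MiniPipeline p = new MiniPipeline();
        p.add(new Handler() { public String on(String m) { return m + "->A"; } });
        p.add(new Handler() { public String on(String m) { return m + "->B"; } });
        System.out.println(p.fireUpstream("read"));    // read->A->B
        System.out.println(p.fireDownstream("write")); // write->B->A
    }
}
```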
Similarly, TimeEncoder must be added to the server-side ChannelPipeline:
bootstrap.setPipelineFactory(new ChannelPipelineFactory() { public ChannelPipeline getPipeline() { return Channels.pipeline( new TimeServerHandler(), new TimeEncoder()); } });
9. Shut down your application
In order to close the I/O threads and gracefully exit the application, we need to release the resources allocated by ChannelFactory.
A typical network application is shut down in three steps:
- Close all server socket connections;
- Close all non-server socket connections (including client sockets and sockets received by the server);
- Release all resources used by ChannelFactory.
Apply to TimeClient:
```java
ChannelFuture future = bootstrap.connect(...);
future.awaitUninterruptibly();
if (!future.isSuccess()) {
    future.getCause().printStackTrace();
}
future.getChannel().getCloseFuture().awaitUninterruptibly();
factory.releaseExternalResources();
```
- ClientBootstrap's connect method returns a ChannelFuture, which is notified when the connection attempt succeeds or fails. It also holds a reference to the Channel associated with the attempt;
- ChannelFuture.awaitUninterruptibly() blocks until the ChannelFuture knows whether the connection attempt succeeded;
- If the connection failed, we print the reason for the failure. ChannelFuture.getCause() returns the cause of failure when the operation neither succeeded nor was cancelled;
- After handling the connection attempt, we still need to wait until the connection is closed. Every Channel has its own closeFuture, which notifies you when the Channel is closed so you can react to it. The closeFuture is notified even if the connection attempt fails, because a Channel closes automatically when the connection fails;
- Once all connections are closed, all that remains is releasing the resources used by ChannelFactory. This is simple: call its releaseExternalResources() method, and all associated NIO Selectors and thread pools will be shut down automatically.
Closing a client is easy; what about a server? You need to unbind from the port and close all accepted connections. That requires a data structure that tracks the active connections, and Netty provides one: ChannelGroup.
ChannelGroup is a special extension of the Java Collections API that represents a set of open Channels. If a Channel that belongs to a ChannelGroup is closed, it is removed from the group automatically. You can perform batch operations on all Channels in a group, such as closing every Channel when the service shuts down.
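The automatic-removal behavior is easy to model. MiniChannelGroup below is a hypothetical sketch, not Netty's ChannelGroup: each toy channel removes itself from its group when closed, and closeAll plays the role of the batch close used at shutdown.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class MiniChannelGroup {

    // Thread-safe set of open "channels", like DefaultChannelGroup.
    private final Set<ToyChannel> channels =
        Collections.synchronizedSet(new HashSet<ToyChannel>());

    public ToyChannel open() {
        ToyChannel ch = new ToyChannel(this);
        channels.add(ch);
        return ch;
    }

    public int size() {
        return channels.size();
    }

    // Batch operation: close every channel still in the group.
    public void closeAll() {
        for (ToyChannel ch : new HashSet<ToyChannel>(channels)) {
            ch.close();
        }
    }

    // A closed channel removes itself from its group, mirroring
    // ChannelGroup's automatic removal.
    public static class ToyChannel {
        private final MiniChannelGroup group;

        ToyChannel(MiniChannelGroup group) {
            this.group = group;
        }

        public void close() {
            group.channels.remove(this);
        }
    }

    public static void main(String[] args) {
        MiniChannelGroup group = new MiniChannelGroup();
        ToyChannel a = group.open();
        group.open();
        a.close();                        // removed automatically
        System.out.println(group.size()); // 1
        group.closeAll();                 // e.g. at server shutdown
        System.out.println(group.size()); // 0
    }
}
```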
To track open sockets, we modify TimeServerHandler to add each newly opened Channel to a global ChannelGroup variable. ChannelGroup is thread-safe.
```java
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
    TimeServer.allChannels.add(e.getChannel());
}
```
Now that a list of all active Channels is maintained automatically, shutting down the server is as easy as shutting down the client.
```java
public class TimeServer {

    static final ChannelGroup allChannels = new DefaultChannelGroup("time-server");

    public static void main(String[] args) throws Exception {
        ...
        ChannelFactory factory = ...;
        ...
        ServerBootstrap bootstrap = ...;
        ...
        Channel channel = bootstrap.bind(new InetSocketAddress(8080));
        allChannels.add(channel);
        waitForShutdownCommand();
        ChannelGroupFuture future = allChannels.close();
        future.awaitUninterruptibly();
        factory.releaseExternalResources();
    }
}
```
- The DefaultChannelGroup construction method receives a group name as a parameter, which is its unique identity;
- The bind method of ServerBootstrap returns a Channel bound to the specified local address on the server side; calling that Channel's close method unbinds it from the local address;
- Any type of Channel can be added to a ChannelGroup: client-side Channels, server-side Channels, and Channels accepted by the server. This matters because at shutdown you can close the bound Channel together with all the accepted Channels;
- waitForShutdownCommand() is a fictitious method that waits for a shutdown signal;
- We can operate on the Channels in the ChannelGroup all at once. Here we call the close method, which unbinds the server-side Channel and asynchronously closes all accepted Channels. close returns a ChannelGroupFuture, analogous to ChannelFuture, which notifies us when all connections have been closed.
10. Summary
In this section, we take a quick look at Netty and demonstrate how to write a working network application using Netty.
The next section describes Netty in more detail.