Network in Docker

Keywords: Operation & Maintenance network Docker Tomcat Linux

Talking about the network in Linux
  • Connectivity between namespaces

Namespaces are a feature of the Linux kernel since 2.6.x, used mainly for resource isolation. With network namespaces, one Linux system can present multiple independent network subsystems; each subsystem has its own network devices, protocol stack, and so on, and they do not affect each other.

  • What if namespaces need to communicate with each other?
# View network card information
ip link show
# or
ip a
# Network card configuration directory
cd /etc/sysconfig/network-scripts/

# Add a temporary ip to the network card
ip addr add 192.168.xxx.xxx/24 dev eth0
# Delete the ip
ip addr delete 192.168.xxx.xxx/24 dev eth0
Isolating a network card
# Create a network namespace
ip netns add ns1
# List namespaces
ip netns list
# View the network card information of this namespace
ip netns exec ns1 ip a
# Bring up the loopback interface (or: ip netns exec ns1 ip link set lo up)
ip netns exec ns1 ifup lo
veth pair technology (virtual Ethernet pair)

A veth pair is a pair of virtual device interfaces. Unlike tap/tun devices, veth devices always appear in pairs: each end is attached to a protocol stack, and a packet sent into one end comes out of the other. Because of this property, a veth pair often acts as a bridge between virtual network devices, e.g. "the connection between two namespaces", "the connection between a bridge and OVS", or "the connection between Docker containers", and so can be used to build very complex virtual network topologies.

ip link add veth-ns1 type veth peer name veth-ns2
# Bind one end to the network namespace
ip link set veth-ns1 netns ns1
# Add an ip to the network card
ip netns exec ns1 ip addr add 192.xxx.xxx.10/24 dev veth-ns1
# Bring up the network card
ip netns exec ns1 ip link set veth-ns1 up
# the same for ns2 (omitted)
# ping ns2 from ns1
ip netns exec ns1 ping 192.xxx.xxx.11

# Corresponding network card information in the two namespaces
veth-ns1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
   inet 192.xxx.xxx.10/24 scope global veth-ns1
      valid_lft forever preferred_lft forever
----
veth-ns2@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
   inet 192.xxx.xxx.11/24 scope global veth-ns2
      valid_lft forever preferred_lft forever

veth-ns1@if4 <---> veth-ns2@if5 (a pair)
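The `@ifN` suffix that `ip a` prints is the interface index of the other end of the pair, which is how the pairing above can be read off directly. A minimal parsing sketch (pure shell string handling; the sample line mirrors the listing above):

```shell
# Each veth end is shown as "<idx>: <name>@if<peer-idx>": the number after
# "@if" is the ifindex of the other end of the pair.
parse_veth_peer() {
    # $1: one line of `ip -o link` style output, e.g. "5: veth-ns1@if4: <...>"
    local line=$1
    local idx=${line%%:*}                       # index before the first ":"
    local rest=${line#*: }                      # drop the leading "5: "
    local name=${rest%%@*}                      # name before "@"
    local peer=${rest#*@if}; peer=${peer%%:*}   # peer index after "@if"
    echo "$name (index $idx) <--> peer index $peer"
}

parse_veth_peer "5: veth-ns1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"
# -> veth-ns1 (index 5) <--> peer index 4
```

Matching each interface's own index against another interface's `@if` number is enough to reconstruct every pair on a host.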
Back to Docker

With the theory above, we can check whether Docker also uses the veth pair technique.

Start two Docker containers
 docker run -d --name mytomcat01 -p 8001:8080 tomcat
 docker run -d --name mytomcat02 -p 8002:8080 tomcat
# Network card information on the physical host
8: veth131fbae@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
   inet6 aaaa::aaaa:aaaa:aaaa:aaaa/64 scope link 
      valid_lft forever preferred_lft forever
10: veth61cf0b5@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
   inet6 aaaa::aaaa:aaaa:aaa:aaa/64 scope link 
      valid_lft forever preferred_lft forever
      
# Inside each docker container, respectively
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
   inet 172.xxx.xxx.2/16 brd 172.xxx.255.255 scope global eth0
      valid_lft forever preferred_lft forever


9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
   link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
   inet 172.xxx.xxx.3/16 brd 172.xxx.255.255 scope global eth0
      valid_lft forever preferred_lft forever

As you can see, the network cards on the physical host and those inside the Docker containers also appear in pairs: veth131fbae@if7 <--> eth0@if8, veth61cf0b5@if9 <--> eth0@if10
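A common way to confirm such a pairing on a real host is to read `/sys/class/net/eth0/iflink` inside the container (e.g. `docker exec mytomcat01 cat /sys/class/net/eth0/iflink`) and then find the host interface whose `ifindex` matches. A sketch of that matching logic, run here against a mocked-up sysfs tree so it does not need a Docker host:

```shell
find_host_veth() {
    # $1: iflink value reported inside the container
    # $2: directory that mimics /sys/class/net on the host
    local iflink=$1 netdir=$2 dev
    for dev in "$netdir"/*; do
        # the host-side veth's ifindex equals the container eth0's iflink
        if [ "$(cat "$dev/ifindex")" = "$iflink" ]; then
            basename "$dev"
            return 0
        fi
    done
    return 1
}

# Mock sysfs tree with the indices from the listing above (8 and 10).
mock=$(mktemp -d)
mkdir -p "$mock/veth131fbae" "$mock/veth61cf0b5"
echo 8  > "$mock/veth131fbae/ifindex"
echo 10 > "$mock/veth61cf0b5/ifindex"

peer=$(find_host_veth 8 "$mock")
echo "$peer"   # -> veth131fbae
rm -rf "$mock"
```

On a real host you would pass `/sys/class/net` as the second argument instead of the mock directory.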

Network modes in Docker
docker network ls

NETWORK ID          NAME                DRIVER               SCOPE
15ea9d89d616        bridge              bridge               local
1c62b0fd4212        host                host                 local
375f1ab17a8b        none                null                 local

1. bridge
 - The default mode
2. host
 - The container shares the host's network stack and can be accessed directly, without port mapping
3. none
 - Only the local loopback network card
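As a quick illustration of how the mode choice changes the `docker run` line (the name `web` and the image `tomcat` are just examples; the commands are printed rather than executed):

```shell
# Sketch: how the same Tomcat container would be started under each mode.
run_cmd_for_mode() {
    local mode=$1
    case "$mode" in
        # bridge (the default): ports must be mapped with -p to be reachable
        bridge) echo "docker run -d --name web -p 8080:8080 tomcat" ;;
        # host: shares the host's stack, so -p is unnecessary (and ignored)
        host)   echo "docker run -d --name web --network host tomcat" ;;
        # none: only loopback inside the container, no external connectivity
        none)   echo "docker run -d --name web --network none tomcat" ;;
        *)      return 1 ;;
    esac
}

run_cmd_for_mode host
# -> docker run -d --name web --network host tomcat
```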

# Create a network (the driver defaults to bridge)
docker network create  net01
# View details
docker network inspect net01
[
   {
       "Name": "net01",
       "Id": "b9fee626fdee",
       "Created": "2019-12-19T10:50:59.641869093Z",
       "Scope": "local",
       "Driver": "bridge",
       "EnableIPv6": false,
       "IPAM": {
           "Driver": "default",
           "Options": {},
           "Config": [
               {
                   "Subnet": "x.0.0.0/16",
                   "Gateway": "x.x.x.x" #Gateway information
               }
           ]
       },
       "Internal": false,
       "Attachable": false,
       "Ingress": false,
       "ConfigFrom": {
           "Network": ""
       },
       "ConfigOnly": false,
       "Containers": { # Container information
           "c0f8db51792b9b394b": {
               "Name": "tomcat03",
               "EndpointID": "9a966183a87",
               "MacAddress": "",
               "IPv4Address": "x.x.x.x/16",
               "IPv6Address": ""
           }
       },
       "Options": {},
       "Labels": {}
   }
]



# Use the custom network for a container
docker run -d --name tomcat03 --network net01 -p 8082:8080 tomcat
# You can compare the ip of a container on the custom network with one that is not

# But at this moment, containers created on the default docker0 network cannot ping tomcat03:
# the custom network is not on the same network segment as docker0. tomcat03 is on the custom
# network while tomcat01 uses the default one, so of course tomcat01 cannot ping tomcat03.
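The failure is purely a subnet question: docker0 and a custom network get different subnets (by default 172.17.0.0/16 for docker0, and the next free /16 for the first custom network). A tiny sketch of the comparison, with illustrative addresses:

```shell
# Compare the /16 network prefixes of two dotted IPv4 addresses.
same_slash16() {
    local a=${1%.*.*} b=${2%.*.*}   # keep only the first two octets
    [ "$a" = "$b" ]
}

# tomcat01 on docker0 vs tomcat03 on net01 (illustrative addresses):
if same_slash16 172.17.0.2 172.18.0.2; then
    echo "same segment: ping would work"
else
    echo "different segments: ping fails without a route"
fi
# -> different segments: ping fails without a route
```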
# View the default bridge of the current docker
docker inspect bridge

[
   {
       "Name": "bridge",
       "Id": "15ea9d89d6165304b561b",
       "Created": "2019-12-19T10:43:46.750789678Z",
       "Scope": "local",
       "Driver": "bridge",
       "EnableIPv6": false,
       "IPAM": {
           "Driver": "default",
           "Options": null,
           "Config": [
               {
                   "Subnet": "x.x.0.0/16",
                   "Gateway": "x.x.x.x"
               }
           ]
       },
       "Internal": false,
       "Attachable": false,
       "Ingress": false,
       "ConfigFrom": {
           "Network": ""
       },
       "ConfigOnly": false,
       "Containers": { #You can see the container information of the default network
           "44371744ca1a": {
               "Name": "tomcat01",
               "EndpointID": "7005e8d9f9aab442",
               "MacAddress": "",
               "IPv4Address": "x.x.x.x/16",
               "IPv6Address": ""
           }
       },
       "Options": {
           "com.docker.network.bridge.default_bridge": "true",
           "com.docker.network.bridge.enable_icc": "true",
           "com.docker.network.bridge.enable_ip_masquerade": "true",
           "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
           "com.docker.network.bridge.name": "docker0",
           "com.docker.network.driver.mtu": "1500"
       },
       "Labels": {}
   }
]
# Add tomcat01 to the custom network
docker network connect net01 tomcat01
# Inspect net01 again: tomcat01 is now in it as well
# docker inspect tomcat01: the container now has an additional ip address
# Now tomcat01 and tomcat03 can reach each other

Conclusion: containers can connect to each other because Docker sets up this bridge mode, which in fact uses the veth pair technique; that is why containers can communicate with each other.

Can containers communicate with each other by alias?

If they can, we no longer need to worry about ip addresses changing, just as microservices call each other by the service name registered in a registry. Unfortunately, on the default Docker network, containers cannot ping each other by name.

  • Specify a link at startup: docker run -d --name tomcat05 --link tomcat01 -p 8085:8080 tomcat
  • Use a custom network, where containers can reach each other by name

Posted by benwilhelm on Thu, 19 Dec 2019 04:52:04 -0800