docker container network configuration
Creating network namespaces in the Linux kernel
ip netns command
The ip netns command lets you perform various operations on network namespaces. It is provided by the iproute package, which is usually installed by default; if it is missing, install it yourself.
Note: sudo permission is required when the ip netns command modifies the network configuration.
You can view the command's help information with ip netns help:
```text
[root@localhost ~]# ip netns help
Usage:  ip netns list
        ip netns add NAME
        ip netns set NAME NETNSID
        ip [-all] netns delete [NAME]
        ip netns identify [PID]
        ip netns pids NAME
        ip [-all] netns exec [NAME] cmd ...
        ip netns monitor
        ip netns list-id
```
By default, there are no named network namespaces on a Linux system, so the ip netns list command returns nothing.
Create a Network Namespace
Create a namespace named ns0 with the following command:
```text
[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns list
ns0
```
The newly created network namespace appears in the /var/run/netns/ directory. If a namespace with the same name already exists, the command reports the error Cannot create namespace file "/var/run/netns/ns0": File exists.
```text
[root@localhost ~]# ls /var/run/netns/
ns0
[root@localhost ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
```
Each network namespace has its own independent network interfaces, routing table, ARP table, iptables rules, and other network-related resources.
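Each of these resources can be inspected from inside the namespace. A minimal sketch (output omitted; with no interfaces configured yet, the routing table is empty and the iptables rule set is independent of the host's):

```text
[root@localhost ~]# ip netns exec ns0 ip route
[root@localhost ~]# ip netns exec ns0 iptables -nL
```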
Operating in a Network Namespace
The ip command provides the ip netns exec subcommand, which runs a command inside the specified network namespace.
View the network interfaces of the newly created network namespace:
```text
[root@localhost ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
```
A lo loopback interface is created by default in the new network namespace, but it is down at this point. If you try to ping it, you are told that the network is unreachable:
```text
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable
```
Bring up the lo loopback interface with the following command:
```text
[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.073 ms
```
Transferring devices
Devices (such as a veth) can be moved between different network namespaces. Because a device can belong to only one network namespace at a time, it is no longer visible in its original namespace after the transfer.
Veth devices are transferable; many other devices (such as lo, vxlan, ppp, and bridge devices) cannot be moved between namespaces.
veth pair
A veth pair (Virtual Ethernet Pair) is a pair of connected ports: any packet that enters one end comes out the other end, and vice versa.
Veth pairs were introduced to allow direct communication between different network namespaces; a pair can be used to connect two namespaces directly.
Create veth pair
```text
[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 56:6f:db:6a:03:be brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 9a:20:35:7a:c5:eb brd ff:ff:ff:ff:ff:ff
```
A veth pair has been added to the system, connecting the two virtual interfaces veth0 and veth1. At this point, both ends of the pair are still down.
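If you prefer to choose the interface names yourself instead of accepting the auto-generated veth0/veth1, you can name both ends explicitly. A minimal sketch (the names ve-a and ve-b are arbitrary and are not used in the rest of this walkthrough):

```text
[root@localhost ~]# ip link add ve-a type veth peer name ve-b
```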
Enable communication between network namespaces
Next, we use the veth pair to enable communication between two different network namespaces. We have already created a namespace named ns0; now we create another one named ns1:
```text
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0
```
Then we move veth0 into ns0 and veth1 into ns1:
```text
[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1
```
Then we assign IP addresses to the two ends of the veth pair and bring them up:
```text
[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 10.0.0.1/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip link set lo up
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
```
View the status of the veth pair:
```text
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 56:6f:db:6a:03:be brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 10.0.0.1/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::546f:dbff:fe6a:3be/64 scope link
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:20:35:7a:c5:eb brd ff:ff:ff:ff:ff:ff link-netns ns0
    inet 10.0.0.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::9820:35ff:fe7a:c5eb/64 scope link
       valid_lft forever preferred_lft forever
```
As shown above, the veth pair is up and each end has been assigned its IP address. Now we try to reach the address configured in ns0 from inside ns1:
```text
[root@localhost ~]# ip netns exec ns1 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.029 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.029 ms
```
The veth pair successfully provides network connectivity between the two different network namespaces.
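Connectivity works in both directions; as a quick check (a minimal sketch, output omitted), ping the other end from ns0 as well:

```text
[root@localhost ~]# ip netns exec ns0 ping -c 3 10.0.0.2
```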
veth device rename
```text
[root@localhost ~]# ip netns exec ns0 ip link set veth0 down
[root@localhost ~]# ip netns exec ns0 ip link set dev veth0 name eth0
[root@localhost ~]# ip netns exec ns0 ifconfig -a
eth0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 56:6f:db:6a:03:be  txqueuelen 1000  (Ethernet)
        RX packets 16  bytes 1244 (1.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1244 (1.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@localhost ~]# ip netns exec ns0 ip link set eth0 up
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 56:6f:db:6a:03:be brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 10.0.0.1/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::546f:dbff:fe6a:3be/64 scope link
       valid_lft forever preferred_lft forever
```
Four network mode configurations
bridge mode configuration
```text
[root@localhost ~]# docker run -it --name test --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2236 (2.1 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit
[root@localhost ~]# docker container ls -a

# Adding --network bridge when creating a container has the same effect as omitting the --network option
[root@localhost ~]# docker run -it --name test --network bridge --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2236 (2.1 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit
```
none mode configuration
```text
[root@localhost ~]# docker run -it --name test --network none --rm busybox
/ # ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit
```
container mode configuration
Start the first container
```text
[root@localhost ~]# docker run -it --name test --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1479 (1.4 KiB)  TX bytes:0 (0.0 B)
```
Start the second container
```text
[root@localhost ~]# docker run -it --name test1 --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2156 (2.1 KiB)  TX bytes:0 (0.0 B)
```
The container named test1 has the IP address 172.17.0.3, which differs from the first container's address, so the two containers do not share a network. If we start the second container differently, test1 can share the network of the test container: the two containers then have the same IP address, but they still do not share a file system.
```text
[root@localhost ~]# docker run -it --name test1 --rm --network container:test busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2516 (2.4 KiB)  TX bytes:0 (0.0 B)
```
At this point, we create a directory on the test container
```text
/ # mkdir /tmp/data
/ # ls /tmp/
data
```
Check the /tmp directory in the test1 container and you will find that the directory does not exist there, because the file systems are isolated and only the network is shared (a quick check is sketched below).
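A minimal sketch of that check, run in test1's shell (the listing is expected to come back empty, since /tmp/data exists only in test's file system):

```text
/ # ls /tmp/
```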
Deploy a site on the test1 container
```text
/ # echo 'hello world' > /tmp/index.html
/ # ls /tmp
index.html
/ # httpd -h /tmp
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN
```
Access this site from the test container using the loopback address:
```text
/ # wget -O - -q 127.0.0.1:80
hello world
```
In container mode, the relationship between the two containers is equivalent to two different processes running on the same host.
host mode configuration
Specify host mode directly when starting the container:
```text
[root@localhost ~]# docker run -it --name test --rm --network host busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:7F:8D:B3:8D
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:7fff:fe8d:b38d/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:49 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:6228 (6.0 KiB)

ens160    Link encap:Ethernet  HWaddr 00:0C:29:1F:EB:07
          inet addr:192.168.153.139  Bcast:192.168.153.255  Mask:255.255.255.0
          inet6 addr: fe80::fded:f7d3:4269:f476/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:365889 errors:0 dropped:0 overruns:0 frame:0
          TX packets:542487 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:30804932 (29.3 MiB)  TX bytes:140902766 (134.3 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2040 (1.9 KiB)  TX bytes:2040 (1.9 KiB)

/ # echo 'hello world' > /tmp/index.html
/ # httpd -h /tmp
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN
tcp        0      0 192.168.153.139:22      192.168.153.1:61431     ESTABLISHED
tcp        0      0 192.168.153.139:22      192.168.153.1:60069     ESTABLISHED
tcp        0      0 192.168.153.139:22      192.168.153.1:60068     ESTABLISHED
tcp        0      0 192.168.153.139:22      192.168.153.1:61437     ESTABLISHED
tcp        0      0 :::80                   :::*                    LISTEN
tcp        0      0 :::22                   :::*                    LISTEN
tcp        0      0 ::1:631                 :::*                    LISTEN
[root@localhost ~]# curl 192.168.153.139:80
hello world
```
Because the container shares the host's network, an HTTP site started inside the container can be accessed in a browser directly via the host's IP address.
Common operations of containers
View the host name of the container
```text
[root@localhost ~]# docker run -it --name test --network bridge --rm busybox
/ # hostname
e396cafc9e5a
```
Inject hostname when container starts
```text
[root@localhost ~]# docker run -it --name test --network bridge --hostname bravealove --rm busybox
/ # hostname
bravealove
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      bravealove
# A hostname-to-IP mapping is created automatically when the hostname is injected
/ # cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.153.2
# DNS is also automatically configured to the host's DNS
/ # ping baidu.com
PING baidu.com (220.181.38.148): 56 data bytes
64 bytes from 220.181.38.148: seq=0 ttl=127 time=55.371 ms
64 bytes from 220.181.38.148: seq=1 ttl=127 time=53.319 ms
64 bytes from 220.181.38.148: seq=2 ttl=127 time=55.472 ms
```
Manually specify the DNS to be used by the container
```text
[root@localhost ~]# docker run -it --name test --network bridge --hostname bravealove --dns 114.114.114.114 --rm busybox
/ # cat /etc/resolv.conf
search localdomain
nameserver 114.114.114.114
/ # nslookup -type=a www.baidu.com
Server:         114.114.114.114
Address:        114.114.114.114:53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com
Name:   www.a.shifen.com
Address: 110.242.68.4
Name:   www.a.shifen.com
Address: 110.242.68.3
```
Manually inject a hostname-to-IP address mapping into the /etc/hosts file
```text
[root@localhost ~]# docker run -it --name test --network bridge --hostname bravealove --add-host www.a.com:1.1.1.1 --rm busybox
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
1.1.1.1 www.a.com
172.17.0.2      bravealove
```
Open container port
When docker run is executed, the -p option maps an application port in the container to the host, so that external hosts can reach the application in the container by accessing a port on the host.

The -p option can be used multiple times, and the port it exposes must be a port the container is actually listening on.

The -p option can be used in the following formats:

- -p <containerPort>
  - Maps the specified container port to a dynamic port on all addresses of the host
- -p <hostPort>:<containerPort>
  - Maps the container port to the specified host port
- -p <ip>::<containerPort>
  - Maps the specified container port to a dynamic port on the specified host IP
- -p <ip>:<hostPort>:<containerPort>
  - Maps the specified container port to the specified port on the specified host IP
Dynamic ports refer to random ports. The specific mapping results can be viewed using the docker port command.
```text
[root@localhost ~]# docker run --name web --rm -p 80 nginx
```
The command above keeps running in the foreground. Open a new terminal to see which host port container port 80 has been mapped to:
```text
[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:49157
80/tcp -> :::49157
```
Container port 80 is exposed on host port 49157. We can now access that port on the host to check whether the site in the container is reachable:
```text
[root@localhost ~]# curl http://127.0.0.1:49157
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
iptables firewall rules will be generated automatically with the creation of the container and deleted automatically with the deletion of the container.
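You can see these rules on the host by inspecting the DOCKER chain in the nat table. A minimal sketch (output omitted; expect a DNAT rule forwarding the mapped host port to port 80 of the container's IP, and note that chain contents vary with the Docker version and the containers currently running):

```text
[root@localhost ~]# iptables -t nat -nL DOCKER
```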
Map the container port to a random port on a specified IP
```text
[root@localhost ~]# docker run --name web --rm -p 192.168.153.139::80 nginx
```
View the port mapping on another terminal
```text
[root@localhost ~]# docker port web
80/tcp -> 192.168.153.139:49153
[root@localhost ~]# curl http://192.168.153.139:49153
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Map the container port to the specified port of the host
```text
[root@localhost ~]# docker run --name web --rm -p 80:80 nginx
```
View the port mapping on another terminal
```text
[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:80
80/tcp -> :::80
```
Customizing the network attributes of the docker0 bridge
Refer to the official documentation for the related configuration options.
To customize the network attributes of the docker0 bridge, modify the /etc/docker/daemon.json configuration file:
```text
[root@localhost ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://in3617d8.mirror.aliyuncs.com"],
    "bip": "192.168.150.130/24"
}
[root@localhost ~]# systemctl restart docker.service
The core option is bip (bridge IP), which specifies the IP address of the docker0 bridge itself; the other network settings can be derived from this address.
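After restarting Docker, you can verify the change on the host; a minimal sketch (the address shown should match the bip you configured):

```text
[root@localhost ~]# ip addr show docker0
```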
docker remote connection
By default, the dockerd daemon (the server side of Docker's C/S architecture) listens only on a Unix socket (/var/run/docker.sock). To accept TCP connections as well, modify the /etc/docker/daemon.json configuration file, add the following content, and then restart the docker service:
"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
Pass the "- H | - host" option directly to dockerd on the client to specify which host to control the docker container on
```text
docker -H 192.168.10.145:2375 ps
```
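Alternatively, you can set the DOCKER_HOST environment variable so that every subsequent docker command targets the remote daemon (a minimal sketch, reusing the same example address):

```text
export DOCKER_HOST="tcp://192.168.10.145:2375"
docker ps
```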
docker create custom bridge
Create an additional custom bridge, which is different from docker0
```text
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
40bbed525de7   bridge    bridge    local
fe12c0b00ead   host      host      local
0d613f7ed2d5   none      null      local
[root@localhost ~]# docker network create -d bridge --subnet "192.168.5.0/24" --gateway "192.168.5.1" br0
2ecd2a5f2c06615912914755244cbafb730ec7c5c998ae35d1604838eadfe137
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
2ecd2a5f2c06   br0       bridge    local
40bbed525de7   bridge    bridge    local
fe12c0b00ead   host      host      local
0d613f7ed2d5   none      null      local
```
Create a container using the newly created custom bridge:
```text
[root@localhost ~]# docker run -it --name test --network br0 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:05:02
          inet addr:192.168.5.2  Bcast:192.168.5.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:42 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5770 (5.6 KiB)  TX bytes:0 (0.0 B)
```
Create another container. The session below attaches it to br0 as well; to attach it to the default docker0 bridge instead, simply omit the --network option:
```text
[root@localhost ~]# docker run -it --name test1 --network br0 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:05:03
          inet addr:192.168.5.3  Bcast:192.168.5.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1750 (1.7 KiB)  TX bytes:0 (0.0 B)
```
Since both containers shown above are attached to br0, test and test1 can reach each other directly. Now imagine that test1 had been attached to the default docker0 bridge instead: could the two containers still communicate? If not, how could communication be achieved? One approach is sketched below.
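A hedged sketch of one approach, assuming test1 is attached only to the default docker0 bridge: connect it to br0 as an additional network with docker network connect, which gives it a second interface with an address in 192.168.5.0/24 so it can reach containers on br0 directly.

```text
# attach test1 to br0 in addition to its current network
[root@localhost ~]# docker network connect br0 test1
# verify: test1 should now also have an interface on the 192.168.5.0/24 subnet
[root@localhost ~]# docker exec test1 ifconfig
```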