The Linux kernel implements namespace creation natively; a Network Namespace gives a group of processes its own isolated view of the network stack.
ip netns command
You can use the ip netns command to perform various operations on a Network Namespace. The ip netns command comes from the iproute package, which is usually installed by default; if it is not, install it manually.
Note: The ip netns command requires sudo privileges when modifying network configuration.
You can view the command's help information with ip netns help:
[root@localhost ~]# ip netns help
Usage:  ip netns list
        ip netns add NAME
        ip netns attach NAME PID
        ip netns set NAME NETNSID
        ip [-all] netns delete [NAME]
        ip netns identify [PID]
        ip netns pids NAME
        ip [-all] netns exec [NAME] cmd ...
        ip netns monitor
        ip netns list-id
NETNSID := auto | POSITIVE-INT
By default, there is no Network Namespace on Linux, so the ip netns list command does not return any information.
Create Network Namespace
Create a namespace named aabb7 with the following command:
[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add aabb7
[root@localhost ~]# ip netns list
aabb7
The newly created Network Namespace appears in the /var/run/netns/ directory. If a namespace with the same name already exists, the command reports a Cannot create namespace file "/var/run/netns/aabb7": File exists error.
[root@localhost ~]# ls /var/run/netns/
aabb7
[root@localhost ~]# ip netns add aabb7
Cannot create namespace file "/var/run/netns/aabb7": File exists
Each Network Namespace has its own independent network interfaces, routing table, ARP table, iptables rules, and other network-related resources.
Operating Network Namespace
The ip command provides the ip netns exec subcommand to execute commands in the corresponding Network Namespace.
View the network card information for the newly created Network Namespace
[root@localhost ~]# ip netns exec aabb7 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
You can see that a lo loopback interface is created by default in the new Network Namespace, and that it is down. Trying to ping the lo loopback interface at this point prompts Network is unreachable:
[root@localhost ~]# ip netns exec aabb7 ping 127.0.0.1
connect: Network is unreachable
Enable the lo loopback network card with the following command:
[root@localhost ~]# ip netns exec aabb7 ip link set lo up
[root@localhost ~]# ip netns exec aabb7 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.054 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.051 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.049 ms
64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.049 ms
64 bytes from 127.0.0.1: icmp_seq=6 ttl=64 time=0.053 ms
64 bytes from 127.0.0.1: icmp_seq=7 ttl=64 time=0.058 ms
^C
--- 127.0.0.1 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 159ms
rtt min/avg/max/mdev = 0.043/0.051/0.058/0.004 ms
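As noted above, the routing table, ARP table, and iptables rules are isolated per namespace as well. A quick way to see this (a sketch; it assumes iptables is installed, and all three outputs are simply empty for a fresh namespace):

[root@localhost ~]# ip netns exec aabb7 ip route      # routing table: empty, no routes yet
[root@localhost ~]# ip netns exec aabb7 ip neigh      # ARP table: empty
[root@localhost ~]# ip netns exec aabb7 iptables -nL  # its own iptables rule set, separate from the host's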
Transferring devices
We can transfer devices (such as veth) between different Network Namespaces. Since a device can belong to only one Network Namespace at a time, it is no longer visible in the original Network Namespace after the transfer.
veth devices are transferable devices, while many other devices (such as lo, vxlan, ppp, bridge, etc.) are not transferable.
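One way to check whether a particular device can be moved is the netns-local flag shown by ethtool -k (a hedged check, assuming ethtool is installed; a device with netns-local on is pinned to its namespace):

[root@localhost ~]# ethtool -k lo | grep netns-local
netns-local: on [fixed]     # lo cannot be transferred; a veth device reports off here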
veth pair
veth pair is short for Virtual Ethernet Pair: a pair of connected ports, where every packet entering one end of the pair comes out of the other end, and vice versa.
The veth pair was introduced to allow direct communication between different Network Namespaces; it can be used to connect two Network Namespaces directly.
Create veth pair
[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:07:67:13 brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.141/24 brd 192.168.111.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::386d:70f:1bdf:e99e/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:58:45:af:c9 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether da:c0:fa:53:36:d9 brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 1e:09:0d:6f:b5:9c brd ff:ff:ff:ff:ff:ff
As you can see, a veth pair has been added to the system, connecting the two virtual interfaces veth0 and veth1. The pair is not enabled yet.
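ip link add type veth lets the kernel choose the names veth0 and veth1. If you want to pick the names yourself, both ends can be named explicitly when the pair is created (an equivalent variant):

[root@localhost ~]# ip link add veth0 type veth peer name veth1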
Implement Network Namespace Communication
Here we use the veth pair to communicate between two different Network Namespaces. We already created a Network Namespace named aabb7; next we create another Network Namespace named jjyy1:
[root@localhost ~]# ip netns add jjyy1
[root@localhost ~]# ip netns list
jjyy1
aabb7
Then we move veth0 into aabb7 and veth1 into jjyy1:
[root@localhost ~]# ip link set veth0 netns aabb7
[root@localhost ~]# ip link set veth1 netns jjyy1
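As described in the section on transferring devices, the two devices are now invisible in the original (host) namespace. A quick check (the exact error text may differ between iproute versions):

[root@localhost ~]# ip link show veth0
Device "veth0" does not exist.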
Then we configure IP addresses on the two veth devices and bring them up:
[root@localhost ~]# ip netns exec aabb7 ip link set veth0 up
[root@localhost ~]# ip netns exec aabb7 ip addr add 192.168.111.155/24 dev veth0
[root@localhost ~]# ip netns exec jjyy1 ip link set lo up
[root@localhost ~]# ip netns exec jjyy1 ip link set veth1 up
[root@localhost ~]# ip netns exec jjyy1 ip addr add 192.168.111.156/24 dev veth1
View the status of the veth pair:
[root@localhost ~]# ip netns exec aabb7 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:c0:fa:53:36:d9 brd ff:ff:ff:ff:ff:ff link-netns jjyy1
    inet 192.168.111.155/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d8c0:faff:fe53:36d9/64 scope link
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec jjyy1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1e:09:0d:6f:b5:9c brd ff:ff:ff:ff:ff:ff link-netns aabb7
    inet 192.168.111.156/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1c09:dff:fe6f:b59c/64 scope link
       valid_lft forever preferred_lft forever
As you can see above, we have successfully enabled the veth pair and assigned an IP address to each veth device. Now we try to reach the address in aabb7 from jjyy1, and vice versa:
[root@localhost ~]# ip netns exec jjyy1 ping 192.168.111.155
PING 192.168.111.155 (192.168.111.155) 56(84) bytes of data.
64 bytes from 192.168.111.155: icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from 192.168.111.155: icmp_seq=2 ttl=64 time=0.054 ms
64 bytes from 192.168.111.155: icmp_seq=3 ttl=64 time=0.056 ms
64 bytes from 192.168.111.155: icmp_seq=4 ttl=64 time=0.058 ms
^C
--- 192.168.111.155 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 79ms
rtt min/avg/max/mdev = 0.044/0.053/0.058/0.005 ms
[root@localhost ~]# ip netns exec aabb7 ping 192.168.111.156
PING 192.168.111.156 (192.168.111.156) 56(84) bytes of data.
64 bytes from 192.168.111.156: icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from 192.168.111.156: icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from 192.168.111.156: icmp_seq=3 ttl=64 time=0.056 ms
64 bytes from 192.168.111.156: icmp_seq=4 ttl=64 time=0.061 ms
^C
--- 192.168.111.156 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 91ms
rtt min/avg/max/mdev = 0.026/0.050/0.061/0.015 ms
You can see that the veth pair successfully enables network communication between two different Network Namespaces.
Rename a veth device (the device must be brought down before it can be renamed):
[root@localhost ~]# ip netns exec aabb7 ip link set veth0 down
[root@localhost ~]# ip netns exec aabb7 ip link set dev veth0 name ens0
[root@localhost ~]# ip netns exec aabb7 ip link set ens0 up
[root@localhost ~]# ip netns exec aabb7 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: ens0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:c0:fa:53:36:d9 brd ff:ff:ff:ff:ff:ff link-netns jjyy1
    inet 192.168.111.155/24 scope global ens0
       valid_lft forever preferred_lft forever
    inet6 fe80::d8c0:faff:fe53:36d9/64 scope link
       valid_lft forever preferred_lft forever
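When you are finished experimenting, the namespaces can be deleted; removing a namespace also destroys the veth end inside it, which takes the peer down with it (a cleanup sketch):

[root@localhost ~]# ip netns delete jjyy1
[root@localhost ~]# ip netns delete aabb7
[root@localhost ~]# ip netns list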
Four network mode configurations
bridge mode configuration
[root@localhost ~]# docker images
REPOSITORY        TAG       IMAGE ID       CREATED        SIZE
harry1004/mysql   v1        bfa74e3222e9   46 hours ago   3.67GB
harry1004/nginx   v1.1      7a837f83b9a7   2 days ago     576MB
centos            latest    5d0da3dc9764   2 months ago   231MB
[root@localhost ~]# docker run -itd --name by --rm 5d0da3dc9764
7777965592a749b69c5025cb2b21e68d7a2ca2dd7ef6358668d80ceacf567ab4
[root@localhost ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS                        PORTS     NAMES
7777965592a7   5d0da3dc9764   "/bin/bash"   8 seconds ago    Up 6 seconds                            by
373efde4145b   centos         "/bin/bash"   46 hours ago     Exited (0) 40 hours ago                 php8
73481e8f4c33   centos         "/bin/bash"   47 hours ago     Exited (255) 22 minutes ago             mysql
3c7370935253   centos         "/bin/bash"   2 days ago       Exited (0) 47 hours ago                 nginx
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS          PORTS     NAMES
7777965592a7   5d0da3dc9764   "/bin/bash"   19 seconds ago   Up 17 seconds             by
[root@localhost ~]# docker exec -it 7777965592a7 /bin/sh
sh-4.4# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
sh-4.4# exit
exit
Adding --network bridge when creating a container has the same effect as omitting the network option entirely, since bridge is the default mode:
[root@localhost ~]# docker run -itd --name by --network bridge d23834f29b38 /bin/sh
14938b7bdab6dce3ef69d777607184b7803e4bbc5020817e8c876099e5942b72
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE          COMMAND     CREATED          STATUS         PORTS     NAMES
14938b7bdab6   d23834f29b38   "/bin/sh"   10 seconds ago   Up 9 seconds             by
[root@localhost ~]# docker exec -it 14938b7bdab6 /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit
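You can also read the assigned address from the host side without entering the container (a sketch using docker inspect's Go template; the template path assumes the container sits on the default bridge network):

[root@localhost ~]# docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' by
172.17.0.2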
none mode
[root@localhost ~]# docker images
REPOSITORY        TAG       IMAGE ID       CREATED        SIZE
harry1004/mysql   v1        bfa74e3222e9   47 hours ago   3.67GB
harry1004/nginx   v1.1      7a837f83b9a7   2 days ago     576MB
busybox           latest    d23834f29b38   5 days ago     1.24MB
centos            latest    5d0da3dc9764   2 months ago   231MB
[root@localhost ~]# docker run -it --name by --network none --rm d23834f29b38
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
container mode configuration
Start the first container
[root@localhost ~]# docker run -it --name by1 --rm d23834f29b38
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Start the second container
[root@localhost ~]# docker images
REPOSITORY        TAG       IMAGE ID       CREATED        SIZE
harry1004/mysql   v1        bfa74e3222e9   47 hours ago   3.67GB
harry1004/nginx   v1.1      7a837f83b9a7   2 days ago     576MB
busybox           latest    d23834f29b38   5 days ago     1.24MB
centos            latest    5d0da3dc9764   2 months ago   231MB
[root@localhost ~]# docker run -it --name by2 --rm bfa74e3222e9
[root@21a3fc8ee3d8 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
You can see that the IP address of the container named by2 is 172.17.0.3, which differs from the first container's address; the two do not share a network. If we change the way the second container is started, we can give by2 the same IP as by1, that is, they share an IP but not a file system.
[root@localhost ~]# docker run -it --name by2 --rm --network container:by1 d23834f29b38
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Here we create a data directory under /tmp on by1:
/ # mkdir /tmp/data
/ # ls /tmp
data
Checking the /tmp directory on the by2 container reveals no such directory, because the file systems are isolated; only the network is shared.
/ # ls /tmp
/ #
Deploy a site on the by2 container:
/ # echo "This is a jjyy" > /tmp/index.html / # ls /tmp/ index.html / # httpd -h /tmp / # netstat -antl Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 :::80 :::* LISTEN
Visit this site with a local address on the by1 container
/ # wget -qO - 127.0.0.1
This is a jjyy
Thus, in container mode the relationship between the two containers is equivalent to two different processes running on the same host.
host mode configuration
Start a container, directly specifying host as the network mode:
[root@localhost ~]# docker run -it --name test --rm --network host busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:7F:8D:B3:8D
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:7fff:fe8d:b38d/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:49 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:6228 (6.0 KiB)

ens160    Link encap:Ethernet  HWaddr 00:0C:29:1F:EB:07
          inet addr:192.168.153.139  Bcast:192.168.153.255  Mask:255.255.255.0
          inet6 addr: fe80::fded:f7d3:4269:f476/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:365889 errors:0 dropped:0 overruns:0 frame:0
          TX packets:542487 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:30804932 (29.3 MiB)  TX bytes:140902766 (134.3 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2040 (1.9 KiB)  TX bytes:2040 (1.9 KiB)

/ # echo 'hello world' > /tmp/index.html
/ # httpd -h /tmp
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN
tcp        0      0 192.168.111.141:22      192.168.111.1:61431     ESTABLISHED
tcp        0      0 192.168.111.141:22      192.168.111.1:60069     ESTABLISHED
tcp        0      0 192.168.111.141:22      192.168.111.1:60068     ESTABLISHED
tcp        0      0 192.168.111.141:22      192.168.111.1:61437     ESTABLISHED
tcp        0      0 :::80                   :::*                    LISTEN
tcp        0      0 :::22                   :::*                    LISTEN
tcp        0      0 ::1:631                 :::*                    LISTEN
[root@localhost ~]# curl 192.168.111.141:80
hello world
Common operations for containers
View the host name of the container
[root@localhost ~]# docker run -it --name test --network bridge --rm busybox
/ # hostname
3aab638df1a9
Inject a host name when the container starts
[root@localhost ~]# docker run -it --name test --network bridge --hostname bravealove --rm busybox
/ # hostname
bravealove
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      bravealove    # the host-name-to-IP mapping is created automatically when a host name is injected
/ # cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.111.2      # DNS is also automatically configured from the host's DNS
/ # ping baidu.com
PING baidu.com (220.181.38.148): 56 data bytes
64 bytes from 220.181.38.148: seq=0 ttl=127 time=55.371 ms
64 bytes from 220.181.38.148: seq=1 ttl=127 time=53.319 ms
64 bytes from 220.181.38.148: seq=2 ttl=127 time=55.472 ms
Manually inject a host-name-to-IP-address mapping into the /etc/hosts file
[root@localhost ~]# docker run -it --name by --network bridge --hostname lyw --add-host www.jjyy.com:1.1.1.1 --rm d23834f29b38
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
1.1.1.1 www.jjyy.com
172.17.0.2      lyw
Open container port
docker run has a -p option that maps an application port in the container to the host, so that an external host can reach the application in the container by accessing a port on the host.
The -p option can be used multiple times and must expose ports that the container is actually listening on.
Usage formats of the -p option:
- -p <containerPort>
- Maps the specified container port to a dynamic port on all addresses of the host
- -p <hostPort>:<containerPort>
- Maps the specified container port to the specified host port
- -p <ip>::<containerPort>
- Maps the specified container port to a dynamic port on the specified host IP
- -p <ip>:<hostPort>:<containerPort>
- Maps the specified container port to the specified port on the specified host IP
Dynamic ports are random ports, and the specific mapping results can be viewed using the docker port command.
Map the specified container port to a dynamic port on all host addresses
[root@localhost ~]# docker run --name web --rm -p 80 nginx
This command occupies the foreground after execution. Open a new terminal connection and check which host port container port 80 was mapped to:
[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:49157
80/tcp -> :::49157
Thus, port 80 of the container is exposed on port 49157 of the host. Visit this port on the host to see whether we can reach the site inside the container:
[root@localhost ~]# curl http://127.0.0.1:49157
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The iptables firewall rules are automatically generated with the creation of the container and deleted with the deletion of the container.
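You can observe this yourself: while the web container is running, the DNAT rule for the published port appears in the nat table's DOCKER chain (illustrative output; your chain will differ):

[root@localhost ~]# iptables -t nat -nL DOCKER
Chain DOCKER (2 references)
target     prot opt source       destination
RETURN     all  --  0.0.0.0/0    0.0.0.0/0
DNAT       tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:49157 to:172.17.0.2:80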
Map the container port to a dynamic port on a specified IP
[root@localhost ~]# docker run --name web --rm -p 192.168.111.141::80 nginx
View port mappings on another terminal
[root@localhost ~]# docker port web
80/tcp -> 192.168.111.141:49153
[root@localhost ~]# curl http://192.168.111.141:49153
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Map container ports to specified ports of the host
[root@localhost ~]# docker run --name web --rm -p 80:80 nginx
View port mappings on another terminal
[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:80
80/tcp -> :::80
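The fourth form from the list above, mapping the container port to a specified port on a specified IP, works the same way (a sketch; 8080 is an arbitrary free port chosen here):

[root@localhost ~]# docker run --name web --rm -p 192.168.111.141:8080:80 nginx

Then, on another terminal:

[root@localhost ~]# docker port web
80/tcp -> 192.168.111.141:8080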
Customize network attributes of the docker0 bridge
Official Document Configuration
To customize the network attributes of the docker0 bridge, modify the /etc/docker/daemon.json configuration file:
[root@localhost ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://in3617d8.mirror.aliyuncs.com"],
    "bip": "192.168.111.150/24"
}
[root@localhost ~]# systemctl restart docker.service
The core option is bip, meaning bridge ip, which specifies the IP address of the docker0 bridge itself. Other options can be calculated from this address.
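After the restart, you can confirm that docker0 picked up the new address (a quick check; flags and secondary lines in the output will vary):

[root@localhost ~]# ip addr show docker0 | grep 'inet '
    inet 192.168.111.150/24 brd 192.168.111.255 scope global docker0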
docker remote connection
Docker uses a C/S architecture. The dockerd daemon by default only listens on a Unix socket (/var/run/docker.sock); if you want to use a TCP socket, modify the /etc/docker/daemon.json configuration file, add the following, and then restart the docker service:
"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
Pass the -H (or --host) option to the docker client to specify which host's containers you want to control:
docker -H 192.168.111.143:2375 ps
This method is rarely used now; controlling Docker through a remote connection like this is uncommon (the plain TCP socket is unauthenticated), so it is enough to know that the capability exists.
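For completeness: instead of typing -H every time, the docker client also honors the DOCKER_HOST environment variable (an equivalent sketch):

[root@localhost ~]# export DOCKER_HOST="tcp://192.168.111.143:2375"
[root@localhost ~]# docker ps    # now talks to the remote dockerd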
docker create custom bridge
Create an additional custom bridge, different from docker0
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
66d6eade5c82   bridge    bridge    local
c81f7ace7dc6   host      host      local
621ead08fa64   none      null      local
[root@localhost ~]# docker network create -d bridge --subnet "192.168.1.0/24" --gateway "192.168.1.1" br0
1bf080ab11a845efa3eb687b1d81089460c28fb24d9b5ce0e00a6811c658e47a
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
1bf080ab11a8   br0       bridge    local
66d6eade5c82   bridge    bridge    local
c81f7ace7dc6   host      host      local
621ead08fa64   none      null      local
Use the newly created custom bridge to create the container:
[root@localhost ~]# docker run -it --name by --network br0 --rm d23834f29b38
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
Create another container using the default bridge:
[root@localhost ~]# docker run -it --name by1 --rm d23834f29b38
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Can by and by1 communicate with each other at this point? They cannot, since they sit on different bridges. So how do we make them communicate?
We can connect a container to an additional network. Here we connect a container on the default bridge to the br0 bridge we just created:
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
1bf080ab11a8   br0       bridge    local
66d6eade5c82   bridge    bridge    local
c81f7ace7dc6   host      host      local
621ead08fa64   none      null      local
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS          PORTS     NAMES
024a9207d378   busybox        "sh"          52 minutes ago   Up 52 minutes             test
7777965592a7   5d0da3dc9764   "/bin/bash"   2 hours ago      Up 2 hours                by
[root@localhost ~]# docker network connect br0 024a9207d378
Then, inside the connected container, we can reach the 192.168.1.0/24 address:

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # hostname
d90e81025b20
/ # ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes
64 bytes from 192.168.1.2: seq=0 ttl=64 time=0.142 ms
64 bytes from 192.168.1.2: seq=1 ttl=64 time=0.077 ms
64 bytes from 192.168.1.2: seq=2 ttl=64 time=0.077 ms
^C
--- 192.168.1.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.077/0.098/0.142 ms
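The extra attachment can be undone with docker network disconnect, which removes the interface that docker network connect added (a cleanup sketch reusing the container ID from above):

[root@localhost ~]# docker network disconnect br0 024a9207d378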