Docker's Data Management and Network Communication

Keywords: Linux Docker network vsftpd

Blog post outline:

  • 1. Docker data management
    1. Data Volume
    2. Data Volume Container
  • 2. Docker network communication
    1. Port mapping
    2. Container interconnection

1. Docker data management

In Docker, conveniently inspecting the data generated inside a container, or sharing data among multiple containers, involves container data management. Docker manages container data in two main ways: data volumes and data volume containers.

1. Data Volume

A data volume is a special directory for containers. It lives inside the container, and a directory on the host can be mounted onto it. Modifications to a data volume take effect immediately, and updates to its data do not affect the image, which makes it possible to migrate data between the host and the container. Using a data volume is similar to performing a mount operation on a directory under Linux. Note that when a host directory is mounted into the container, the container directory is backed by the same filesystem as the host directory: for example, if /dev/sdb1 is mounted on the host's /data directory, and /data is then mapped into the container as a data volume, the directory specified in the container is also backed by /dev/sdb1.

Mount the host directory as an example of a data volume:

Using the -v option, you can create a data volume (the directory is simply created when the container runs). Creating a data volume while mounting a host directory onto it enables data migration between the host and the container.

It should be noted that the path of the host directory must be absolute; if the path does not exist, Docker will create it automatically.

[root@localhost ~]# docker run -d -p 5000:5000 -v /data/registry/:/tmp/registry docker.io/registry
#This runs a private registry container; -p is the port mapping option, which is explained later.
#-v is the directory mapping: it maps the local /data/registry/ directory to the /tmp/registry directory in the container.
#The contents of the container's /tmp/registry directory are then the same as the host's /data/registry/.
[root@localhost ~]# df -hT /data/registry/           #First check which filesystem the local /data/registry/ is mounted on
Filesystem       Type            Size  Used Avail Use% Mounted on
node4:dis-stripe fuse.glusterfs   80G  130M   80G    1% /data/registry
[root@localhost ~]# docker exec -it a6bf726c612b /bin/sh #Enter the private registry container; it does not have /bin/bash, so /bin/sh is used.
/ # df -hT /tmp/registry/    #The directory is backed by the same filesystem as on the host, which is exactly what we expect.
Filesystem           Type            Size      Used Available Use% Mounted on
node4:dis-stripe     fuse.glusterfs
                                    80.0G    129.4M     79.8G   0% /tmp/registry
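Bind mounts like the one above are read-write by default. When the container should only read the host's data, Docker also supports appending a :ro suffix to the -v mapping. A minimal sketch under that assumption (the container name registry-ro is made up for this example):

```shell
#Mount the host directory read-only by appending :ro to the -v mapping
docker run -d --name registry-ro -v /data/registry/:/tmp/registry:ro docker.io/registry

#A write attempt from inside the container should now fail with "Read-only file system"
docker exec registry-ro touch /tmp/registry/test.txt
```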

2. Data Volume Container

If you need to share data between containers, the easiest way is to use a data volume container. A data volume container is an ordinary container that provides data volumes for other containers to mount. First create a container to serve as the data volume container; then, when creating other containers, use --volumes-from to mount the data volumes it provides.

Example of creating and using a data volume container:

[root@localhost ~]# docker run -itd --name datasrv -v /data1 -v /data2  docker.io/sameersbn/bind /bin/bash
#Create and run a container named datasrv, and create two data volumes, /data1 and /data2.
d9e578db8355da35637d2cf9b0a3406a647fe8e70b2df6172ab41818474aab08
[root@localhost ~]# docker exec -it datasrv /bin/bash     #Enter the created container
root@d9e578db8355:/# ls | grep data             #Check to see if there is a corresponding data volume
data1
data2
[root@localhost ~]# docker run -itd --volumes-from datasrv --name ftpsrv docker.io/fauria/vsftpd /bin/bash
#Run a container named ftpsrv, and use --volumes-from to mount the data volumes from the datasrv container onto the new ftpsrv container.
eb84fa6e85a51779b652e0058844987c5974cf2a66d1772bdc05bde30f8a254f
[root@localhost ~]# docker exec -it ftpsrv /bin/bash         #Enter the newly created container
[root@eb84fa6e85a5 /]# ls | grep data          #Check to see if the new container can see the data volume provided by datasrv
data1
data2
[root@eb84fa6e85a5 /]# echo " data volumes test" > /data1/test.txt       #Write files to data1 directory in ftpsrv container for testing
[root@eb84fa6e85a5 /]# exit          #Exit the container
exit
[root@localhost ~]# docker exec -it datasrv /bin/bash     #Enter the datasrv container that provides data volumes
root@d9e578db8355:/# cat /data1/test.txt            #You can see the file just created in the ftpsrv container, OK.
 data volumes test
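It is also worth noting that the data volume container itself does not need to keep running for its volumes to remain usable; the volumes live on independently of the container's state. A quick check, reusing the container names from the example above:

```shell
#Stop the data volume container; the volumes it provides stay mounted in ftpsrv
docker stop datasrv

#The file written earlier is still readable from ftpsrv
docker exec ftpsrv cat /data1/test.txt
```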

Note that what matters most in a production environment is the reliability and dynamic scalability of storage, and both must be taken into account when building data volumes. GlusterFS (GFS) stands out in this respect. The configuration above is only a simple demonstration; in production, a better approach is to mount the GFS filesystem locally on the host first, and then map the GFS-mounted directory into the container when creating the data volume container. That makes for a properly built data volume container.
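The idea can be sketched roughly as follows (the GlusterFS volume name node4:dis-stripe follows the earlier df output; adjust it to your own cluster, and note this assumes the glusterfs-fuse client is installed on the host):

```shell
#Mount the GlusterFS volume locally on the host
mount -t glusterfs node4:dis-stripe /data

#Create the data volume container, mapping the GFS-backed directory into it
docker run -itd --name datasrv -v /data:/data docker.io/sameersbn/bind /bin/bash

#Other containers then get the GFS-backed volume via --volumes-from
docker run -itd --volumes-from datasrv --name ftpsrv docker.io/fauria/vsftpd /bin/bash
```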

2. Docker network communication

1. Port mapping

Docker provides mechanisms for mapping container ports to the host and for interconnecting containers, in order to provide network services for containers.

When a container starts, services inside it cannot be reached from outside the container unless the corresponding ports are specified. Docker's port mapping mechanism exposes services in the container to the external network: essentially, it maps a host port to a container port, so that the external network can reach the container's service through the host port.

To achieve random port mapping, use the -P (uppercase) option when running the docker run command. Docker generally maps a random host port in the range 49000 to 49900 to an open network port in the container, but this is not absolute, and there are exceptions that fall outside this range. Use the -p (lowercase) option when running the docker run command to specify the port to be mapped (commonly used).

Port mapping example:

[root@localhost ~]# docker run -d -P docker.io/sameersbn/bind      #Randomly map ports
9b4b7c464900df3b766cbc9227b21a3cad7d2816452c180b08eac4f473f88835
[root@localhost ~]# docker run -itd -p 68:67 docker.io/networkboot/dhcpd /bin/bash
#Map port 67 in the container to port 68 on the host
6f9f8125bcb22335dcdb768bbf378634752b5766504e0138333a6ef5c57b7047
[root@localhost ~]# docker ps -a     #Check to see if it's okay.
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
6f9f8125bcb2        docker.io/networkboot/dhcpd   "/entrypoint.sh /b..."   2 seconds ago       Up 1 second         0.0.0.0:68->67/tcp                                                       keen_brattain
9b4b7c464900        docker.io/sameersbn/bind      "/sbin/entrypoint...."   4 minutes ago       Up 4 minutes        0.0.0.0:32768->53/udp, 0.0.0.0:32769->53/tcp, 0.0.0.0:32768->10000/tcp   coc_gates
#Now accessing port 68 on the host is equivalent to accessing port 67 of the first container; accessing port 32768 on the host is equivalent to accessing port 53/udp of the second container.
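The mappings of a single container can also be queried with the docker port subcommand; for the dhcpd container above (using the auto-generated name from the docker ps output) it would look roughly like this:

```shell
#Show all port mappings of one container
docker port keen_brattain
#67/tcp -> 0.0.0.0:68
```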

2. Container interconnection

Container interconnection establishes a dedicated network communication tunnel between containers, addressed by container name. Simply put, a tunnel is built between a source container and a receiving container, and the receiving container can see information specified by the source container.

When running the docker run command, the -- link option is used to realize the interconnected communication between containers in the following format:

--link name:alias    #Where name is the name of the container to link to, and alias is an alias for the link.

Container interconnection works by container name. The --name option gives a container a friendly name, which must be unique; if a container with the same name already exists, you must delete it with the docker rm command before the name can be reused.

Examples of container interconnection:

[root@localhost ~]# docker run -tid -P --name web1  docker.io/httpd /bin/bash    #Running container web1
c88f7340f0c12b9f5228ec38793e24a6900084e58ea4690e8a847da2cdfe0b
[root@localhost ~]# docker run -tid -P --name web2 --link web1:web1 docker.io/httpd /bin/bash
#Run the container web2 and associate the web1 container
c7debd7809257c6375412d54fe45893241d2973b7af1da75ba9f7eebcfd4d652
[root@localhost ~]# docker exec -it web2 /bin/bash     #Enter the web2 container
root@c7debd780925:/usr/local/apache2# cd
root@c7debd780925:~# ping web1        #ping test for web1
bash: ping: command not found        #The ping command is not available, so install it first.
root@c7debd780925:~# apt-get update    #Update the package lists
root@c7debd780925:~# apt install iputils-ping     #Install the ping command
root@c7debd780925:~# apt install net-tools      #This installs the ifconfig command; it is optional, just noted here.
root@c7debd780925:~# ping web1     #Then ping web1 again
PING web1 (172.17.0.2) 56(84) bytes of data.
64 bytes from web1 (172.17.0.2): icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from web1 (172.17.0.2): icmp_seq=2 ttl=64 time=0.114 ms
              ..............#Omit part of content
#ping succeeds, so the two containers are interconnected.
#If a new container, web3, is created now and should be interconnected with both web1 and web2, the commands are as follows:
[root@localhost ~]# docker run -dit -P --name web3 --link web1:web1 --link web2:web2 docker.io/httpd /bin/bash
#When running the container, link it to both web1 and web2.
#Here's how to get into web3
[root@localhost ~]# docker exec -it web3 /bin/bash
root@433d5be6232c:/usr/local/apache2# cd
#Here is the installation ping command
root@433d5be6232c:~# apt-get update
root@433d5be6232c:~# apt install iputils-ping
#The following are the ping tests for web1 and web2 respectively
root@433d5be6232c:~# ping web1
PING web1 (172.17.0.2) 56(84) bytes of data.
64 bytes from web1 (172.17.0.2): icmp_seq=1 ttl=64 time=0.102 ms
64 bytes from web1 (172.17.0.2): icmp_seq=2 ttl=64 time=0.112 ms
              ..............#Omit part of content
root@433d5be6232c:~# ping web2
PING web2 (172.17.0.3) 56(84) bytes of data.
64 bytes from web2 (172.17.0.3): icmp_seq=1 ttl=64 time=0.165 ms
64 bytes from web2 (172.17.0.3): icmp_seq=2 ttl=64 time=0.115 ms
              ..............#Omit part of content
#OK, no problem.
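Under the hood, on the default bridge network --link works by writing the linked container's name and IP address into the receiving container's /etc/hosts (and by injecting environment variables), which is why the names web1 and web2 resolved in the ping tests above. A quick way to see this from inside web3:

```shell
#Inside web3, --link has added /etc/hosts entries for the linked containers
docker exec web3 cat /etc/hosts | grep web
```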

———————— This is the end of the article. Thank you for reading.————————

Posted by dimkasmir on Mon, 09 Sep 2019 22:17:39 -0700