Basic concepts of Image, Container and Registry
Image: a Docker image is essentially a root file system. In addition to the programs, libraries, resources, and configuration files required by the container at runtime, it also includes configuration parameters prepared for runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its content does not change after it is built.
Container: the relationship between an image and a container is like that between classes and instances in object-oriented programming. An image is a static definition, and a container is a running instance of that image. Containers can be created, started, stopped, deleted, paused, and so on.
Container processes run in their own independent namespaces. A container can therefore have its own root file system, its own network configuration, its own process space, and even its own user ID space. Processes in the container run in an isolated environment and behave as if they were operating on a system independent of the host. This isolation makes containerized applications more secure than applications running directly on the host.
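This isolation is easy to observe (a quick sketch, assuming Docker is installed and can pull the public `alpine` image): listing processes inside a fresh container shows only the container's own process tree, not the host's.

```shell
# Run a throwaway Alpine container and list the processes it can see.
# Because the container has its own PID namespace, the output contains
# only the container's processes, not the hundreds running on the host.
docker run --rm alpine ps aux
```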
Registry server, repository: after an image is built, it can easily run on the current host. However, to use this image on other servers, we need a centralized service for storing and distributing images. Docker Registry is the server that manages such repositories.
A Docker Registry can contain multiple repositories. A repository holds images of different versions of the same software, and tags are used to identify each version. We can specify a particular version of a software image with the format `<repository name>:<tag>`.
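The `<repository name>:<tag>` format can be split with ordinary shell parameter expansion (a small sketch; the image name below is just the PyTorch image used later in these notes):

```shell
# Split an image reference of the form <repository>:<tag>.
image="pytorch/pytorch:1.9.0-cuda11.1-cudnn8-devel"
repo="${image%:*}"   # strip the ':<tag>' suffix -> repository name
tag="${image##*:}"   # strip everything up to the last ':' -> tag
echo "$repo"         # pytorch/pytorch
echo "$tag"          # 1.9.0-cuda11.1-cudnn8-devel
```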
step 1: install Docker and nvidia-docker2

```shell
# Install curl
$ sudo apt install curl

# Install Docker and enable the service
$ curl https://get.docker.com | sh && sudo systemctl --now enable docker

# Add the NVIDIA container toolkit repository
$ distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
    && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
    && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install nvidia-docker2 and restart the Docker daemon
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker

# Test by running a base CUDA container:
$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
Unable to find image 'nvidia/cuda:11.0-base' locally
```
step 2: create docker user group
```shell
$ sudo groupadd docker
$ sudo gpasswd -a $USER docker
$ newgrp docker

# View images
$ docker images
```
```shell
# Pull an image
$ docker pull pytorch/pytorch:1.9.0-cuda11.1-cudnn8-devel
1.9.0-cuda11.1-cudnn8-devel: Pulling from pytorch/pytorch

# Based on this image, build a new image from a Dockerfile
## step1 write a Dockerfile
## step2 build it
$ cd DockerFile
$ docker build -t test:v1 .
# This generates an image named test with the tag v1

# Create a new container named mmdetection
$ docker run -it --gpus all \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=$DISPLAY \
    -e QT_X11_NO_MITSHM=1 \
    -v /home/si/SIINNO/:/SIINNO/ \
    --shm-size='8g' \
    --name mmdetection test:v1

# Check whether it was created
$ docker ps -a

# Enter the container
$ docker attach mmdetection

# Expose the X server on the host
$ xhost +local:root

# Configure the container environment
$ docker run \
    -it \
    --gpus all \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=$DISPLAY \
    -e QT_X11_NO_MITSHM=1 \
    glvnd-x \
    bash

# Inside the container, a good way to test that the GPU is being used is
# to install and run the OpenGL benchmark application glmark2.
$ apt-get update \
    && apt-get install -y -qq glmark2
$ glmark2
```
Configure environment on container
The environment configuration required to run a network is generally based on docker/requirement.txt
It is best to download through the Aliyun mirror: add `-i https://mirrors.aliyun.com/pypi/simple/` after the pip command
During installation, pay attention to matching the corresponding CUDA/PyTorch versions
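Inside the container, the dependency installation typically looks like the following (a sketch; the requirements path comes from the notes above, while the pinned `torch` version is only an example taken from the image tag used earlier):

```shell
# Install Python dependencies through the Aliyun PyPI mirror.
# The -i flag overrides the default package index for this command only.
pip install -r docker/requirement.txt -i https://mirrors.aliyun.com/pypi/simple/

# When a specific CUDA/PyTorch pairing is required, pin it explicitly
# (the version number here is an example matching the image tag above):
pip install torch==1.9.0 -i https://mirrors.aliyun.com/pypi/simple/
```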
Export / import / delete container
```shell
# Export a container snapshot
$ docker export mmdetection > mmd.tar

# Import an image from a container snapshot file
$ cat mmd.tar | docker import - test/mmd:v2

# Delete the container
$ docker container rm mmdetection
```
```shell
# Start a container while it is stopped
$ docker start ContainerName
$ docker container start ContainerName
$ docker container stop ContainerName
$ docker container restart ContainerName

# Log in to a running container after having detached from it
$ docker attach ContainerName

# Exit the container
$ exit

# View containers that have been run
$ docker ps -a

# View images
$ docker images

# Push an image to Docker Hub
$ docker push imageName:Tag
```
Data volume, local file mount
A data volume is a special directory that can be used by one or more containers. It bypasses the Union File System (UnionFS) and provides many useful features:
Data volumes can be shared and reused between containers
Changes to the data volume take effect immediately
Updates to data volumes do not affect the image
Data volumes persist by default, even if the container is deleted
Create / Mount / delete
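The basic volume lifecycle can be sketched as follows (assuming Docker is installed; the volume name `my-vol`, container name `vol-test`, and the `ubuntu` base image are placeholders to adapt):

```shell
# Create a named data volume
docker volume create my-vol

# Inspect it (shows the mountpoint on the host)
docker volume inspect my-vol

# Mount the volume into a container at /data
docker run -it --name vol-test -v my-vol:/data ubuntu bash

# Remove the container, then the volume
# (the volume and its data persist until explicitly removed)
docker container rm vol-test
docker volume rm my-vol
```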