The relationship between shared memory (shm) and memory-mapped memory (mmap) under Linux

Keywords: Linux

The earliest version of this code came from examples found online that used shared memory and memory mapping together; we adapted them to our needs without ever considering the underlying differences and connections. Recently, while testing the data-sharing libraries for the Cognitive Framework, which involve shared memory, a colleague asked about these things. I had seen them before, but had forgotten the relationship between System V, POSIX, and XSI shared memory, so I took another look and thought it through carefully.


First, we need to clarify when shared memory (shm) is used. Generally speaking, shared memory is required when multiple processes need to share the same memory area, or access one region across processes; that is, when the same piece of memory is shared by multiple processes.

Shared memory, as its name implies, is a reserved area of memory that a group of processes can access. It is the fastest and simplest of the three System V IPC mechanisms. Once shared memory is set up, a process uses it like any other memory: each process only needs a pointer into the shared region to read its contents, and operations performed by one process on shared memory are immediately visible to the others. To put it plainly, it is like requesting a block of memory to which every participating process holds a pointer.


Memory mapping generally refers to mapping a section of a file on a file system into memory in order to speed up file operations. A user-space process can then read and write the mapped addresses directly instead of going through read()/write() system calls, which reduces the copy overhead between kernel space and user space when reading and writing files. This is generally used within a single process.

Shared memory lets multiple processes share the same region so that they can all access one area; the purpose of mmap memory mapping is to speed up access. The two do not conflict, so they can be used together.

Use of mmap

  1. Memory mapping: there are two types, file memory mapping and anonymous mapping. Use the mmap system call to map a file into memory; usually you open() a file and pass the resulting file descriptor into mmap().
  2. Shared memory: the documentation mainly tells you that shared memory is the fastest way to communicate between processes, and then lists:
  • POSIX interface for shared memory, used via shm_open(), mmap()
  • System V interface, used via shmget(), shmat(), ...

For the POSIX shared-memory interface, the bottom layer is also a call to mmap; the only difference is that shared memory opens its file with shm_open(), while a plain memory mapping calls open(). So what is so amazing about shm_open? Take a look at the source code in glibc 2.29:

/* Open shared memory object.  (abridged from glibc 2.29) */ 
int 
shm_open (const char *name, int oflag, mode_t mode) 
{ 
  SHM_GET_NAME (EINVAL, -1, "");   /* macro that builds shm_name */ 
 
  int state; 
  pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &state); 
  int fd = open (shm_name, oflag, mode); 
  pthread_setcancelstate (state, NULL); 
  return fd; 
} 

The core of the code is open, so shm_open is just a safer wrapper around open.

Why does shared memory have to be placed under /dev/shm/?

In the code above, the parameter passed into shm_open is called name, but by the time open is called it has become shm_name, and no definition of shm_name can be found anywhere else in the function. Is the code written by the glibc maintainers really that magical? Look carefully at the macro above:


The macro declares shm_name and constructs shm_dir, whose content is "/dev/shm/", then concatenates it with name:

__mempcpy (__mempcpy (__mempcpy (shm_name, shm_dir, shm_dirlen), 
                      prefix, sizeof prefix - 1), 
           name, namelen) 

and shm_name is generated.

This only explains how, not why. Why does glibc put shared memory under this particular mount point in its code?

/dev/shm/ uses a special file system, tmpfs, which is virtual rather than a real partition on disk, and is said to be fast, good, awesome, and so on. Checking /dev/shm with the df command confirms that its type is indeed tmpfs. Files produced by ordinary file memory mapping, on the other hand, look completely unremarkable.

So what is the difference between shared memory and file memory mapping? Shared memory uses a special file system while file memory mapping does not? Yet the flags they explicitly pass to mmap are the same... In other words, both memory mapping and shared memory use mmap, but they are invoked differently and involve different files and file systems. My guess was that tmpfs is built entirely in kernel space, not user space, which is why multiple processes can share it.

/dev/shm or tmpfs

The memory file system tmpfs is mounted at /dev/shm on the operating system. Files in this directory live in memory and disappear after a power cycle. How do you open files under /dev/shm? shm_open can open them, just as open opens normal files. Because the file system lives in memory, its reads and writes never touch the disk, which is why it is fast.

Tmpfs is a file system which keeps all files in virtual memory. 
Everything in tmpfs is temporary in the sense that no files will 
be created on your hard drive. If you unmount a tmpfs instance, 
everything stored therein is lost. 

This paragraph says that tmpfs is a file system whose files all live in virtual memory, with no files ever created on the physical disk; if a tmpfs instance is unmounted, everything stored in it is lost. This is the same thing the articles online say, and to a novice it is just as foggy. What does "in virtual memory" mean? Virtual memory is not a tangible thing; at most it is a data structure in the kernel. No files on disk? Then where are the files? In physical memory? That does not sound quite right either. Does physical memory act directly as a hard disk?

Since tmpfs lives completely in the page cache and on swap, all tmpfs
pages currently in memory will show up as cached. It will not show up
as shared or something like that. Further on you can check the actual 
RAM+swap use of a tmpfs instance with df(1) and du(1). 

This means that the tmpfs file system lives entirely in the page cache and in swap: tmpfs pages currently in memory show up as cached, not as shared or anything like that. You can check the actual physical memory + swap a tmpfs instance uses with the df and du commands.

Find a test machine and look at it with the free -g command:

[xxxxx ~]# free -g 
       total            used  free  shared  buff/cache  available 
Mem:     124               0    33       0          90         82 
Swap:      0  17575006175232  17179869183 

shared 0, buff/cache 90G

Then delete a few files under /dev/shm/ and look again:

[xxxxx ~]# rm /dev/shm/News_Share_Memory_V10 
[xxxxx ~]# free -g 
       total            used  free  shared  buff/cache  available 
Mem:     124               0    36       0          87         84 
Swap:      0  17575006175232  17179869183 
[xxxxx ~]# rm /dev/shm/Video_Share_Memory_V10 
[xxxxx ~]# free -g 
       total            used  free  shared  buff/cache  available 
Mem:     124               0    39       0          84         88 
Swap:      0  17575006175232  17179869183 

Sure enough, the sizes match: each deleted file frees about 3 G from buff/cache, which confirms what the documentation says. Instances in the tmpfs file system show up in the cache column, not in shared, even though we call this "shared memory".

There is always a kernel internal mount which you will not see at 
all. This is used for shared anonymous mappings and SYSV shared 
memory. 
This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not 
set, the user visible part of tmpfs is not built. But the internal 
mechanisms are always present. 

This means that there will be a mount inside the kernel that you cannot see, and that hidden mount will be used when you use anonymous shared memory mapping or System V shared memory. Moreover, even if the compilation option CONFIG_TMPFS is not set in the kernel and tmpfs is not compiled, this internal hidden mechanism will still work.

This means that both shared memory and the tmpfs file system are implemented using one mechanism provided by the kernel, and the implementation of shared memory does not depend on tmpfs itself. So what is this mechanism and how is it implemented? I will leave that question for a future post.

glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for 
POSIX shared memory (shm_open, shm_unlink). Adding the following 
line to /etc/fstab should take care of this: 
tmpfs /dev/shm tmpfs defaults 0 0 
Remember to create the directory that you intend to mount tmpfs on 
if necessary. 

This is even more straightforward: glibc's implementation of POSIX shared memory expects tmpfs to be mounted at /dev/shm, so when using POSIX shared memory, remember to add this mount to /etc/fstab.

What is the difference between shared memory and file memory mapping?

First of all, the interfaces and usage of shared memory and file memory mapping are different. glibc's implementation of POSIX shared memory places shared-memory files under the /dev/shm/ mount by default; if that mount does not exist, it needs to be mounted manually.

Then there is how shared memory and file memory mapping are implemented in the kernel: both use the kernel's page cache and swap mechanisms, with no difference at all.

Although both System V and POSIX shared memory are implemented through tmpfs, their limits are configured differently: /proc/sys/kernel/shmmax only affects System V shared memory, and the size of /dev/shm only affects POSIX shared memory. In fact, System V and POSIX shared memory each use their own separate tmpfs instance.

Posted by Bopo on Mon, 13 Sep 2021 10:31:08 -0700