xfs logdev perfectly solves the problem that ext4 data=writeback can also solve when cgroup iops is limited

Keywords: PostgreSQL Database Linux

https://github.com/digoal/blog/blob/master/201601/20160107_02.md
Author: digoal

Date: 2016-01-07

Labels: PostgreSQL, xfs, ext4, cgroup, iops, iohang, writeback, ordered, xfs logdev

Background

Under Linux, ext4 and xfs are journaling file systems: before metadata is written, its journal record must be written first.

(The journal is similar to a database REDO log and is used for crash recovery.)

Metadata includes inodes, directories, and indirect blocks of the file system. Creating files (including directories), changing file sizes, and changing file modification times all involve metadata writes.

In both ext4 and xfs, metadata journal writes are serial, much like a database redo log.

The blkio controller of cgroup can limit a process's read/write IOPS, throughput, and so on.

When we limit IOPS, we may run into interference problems precisely because "metadata journal operations are serial".

For example:

There is a block device; find its major and minor numbers:

#ll /dev/mapper/aliflash-lv0*  
lrwxrwxrwx 1 root root 7 Jan  7 11:12 /dev/mapper/aliflash-lv01 -> ../dm-0  
#ll /dev/dm-0  
brw-rw---- 1 root disk 253, 0 Jan  7 11:22 /dev/dm-0  

Create an xfs or ext4 file system on this block device and mount it at /data01.

Initialize two PostgreSQL database instances and place them in different directories under /data01.
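
A minimal sketch of this setup (the mkfs/mount options and the 1921 instance's data directory name are illustrative; only /data01, /data01/digoal/pg_root and ports 1921/1922 appear later in this article):

mkfs.xfs -f /dev/mapper/aliflash-lv01        # or: mkfs.ext4 /dev/mapper/aliflash-lv01  
mount /dev/mapper/aliflash-lv01 /data01  

# two independent clusters under the same file system (pg_root1921 is a hypothetical name)  
initdb -D /data01/digoal/pg_root1921 -U postgres  
initdb -D /data01/digoal/pg_root -U postgres  

pg_ctl start -D /data01/digoal/pg_root1921 -o "-p 1921"  
pg_ctl start -D /data01/digoal/pg_root -o "-p 1922"  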

Limit the write IOPS of one of the PostgreSQL instances to 100 on this block device (253:0).

ps -ewf|grep postgres  
digoal 24259     1  0 12:58 pts/4    00:00:00 /home/digoal/pgsql9.5/bin/postgres  -- listening on 1921  
digoal 24260 24259  0 12:58 ?        00:00:00 postgres: logger process              
digoal 24262 24259  0 12:58 ?        00:00:00 postgres: checkpointer process        
digoal 24263 24259  0 12:58 ?        00:00:00 postgres: writer process              
digoal 24264 24259  0 12:58 ?        00:00:00 postgres: wal writer process          
digoal 24265 24259  0 12:58 ?        00:00:00 postgres: autovacuum launcher process     
digoal 24266 24259  0 12:58 ?        00:00:00 postgres: stats collector process     
digoal 24293     1  0 12:58 pts/4    00:00:00 /home/digoal/pgsql9.5/bin/postgres -D /data01/digoal/pg_root  -- listening on 1922  
digoal 24294 24293  0 12:58 ?        00:00:00 postgres: logger process                                          
digoal 24296 24293  0 12:58 ?        00:00:20 postgres: checkpointer process                                    
digoal 24297 24293  0 12:58 ?        00:00:00 postgres: writer process                                          
digoal 24298 24293  0 12:58 ?        00:00:00 postgres: wal writer process                                      
digoal 24299 24293  0 12:58 ?        00:00:00 postgres: autovacuum launcher process                             
digoal 24300 24293  0 12:58 ?        00:00:00 postgres: stats collector process   

Restrict the IOPS of the 1921 instance:

cd /sys/fs/cgroup/blkio/  
mkdir cg1  
cd cg1  
echo "253:0 100" > blkio.throttle.write_iops_device  
echo 24259 > tasks  
echo 24260 > tasks  
echo 24262 > tasks  
echo 24263 > tasks  
echo 24264 > tasks  
echo 24265 > tasks  
echo 24266 > tasks 

Start a workload that modifies a lot of metadata; use create database.

(create database copies the template database's data files and fsyncs them, which produces a large number of metadata modifications and therefore metadata journal writes.)

vi test.sh  
#!/bin/bash  

for ((i=0;i<100;i++))  
do  
psql -h 127.0.0.1 -p 1921 -c "create database $i"  
done  

. ./test.sh  

Observing the block device IOPS, write IOPS is limited to 100:

iostat -x 1  
avg-cpu:  %user   %nice %system %iowait  %steal   %idle  
           0.00    0.00    0.03    3.12    0.00   96.84  
Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util  
dm-0              0.00     0.00    0.00  100.00     0.00  1600.00    16.00     0.00    0.00   0.00   0.00  

Now run a stress test against the 1922 instance; the performance is as follows:

pgbench -i -s 100 -h 127.0.0.1 -p 1922  

pgbench -M prepared -n -r -P 1 -c 96 -j 96 -T 100 -h 127.0.0.1 -p 1922  
progress: 1.0 s, 33.0 tps, lat 2.841 ms stddev 1.746  
progress: 2.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 3.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 4.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 5.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 6.0 s, 197.2 tps, lat 2884.437 ms stddev 2944.982  
progress: 7.0 s, 556.6 tps, lat 33.527 ms stddev 34.798  
progress: 8.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 9.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 10.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 11.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 12.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 13.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 14.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 15.0 s, 0.0 tps, lat -nan ms stddev -nan  

As you can see, the performance of 1922 is affected by 1921. In fact, the block device is capable of several hundred thousand IOPS.

Why?

Because metadata journal writes are serial: when the throttled 1921 instance's metadata journal operations are slow, the 1922 instance's metadata journal operations on the same file system have to wait behind them.

Even select 1; is affected, because every time a frontend establishes a connection, PostgreSQL creates a temporary relcache init file, global/pg_internal.init.<pid>.

Trace the postmaster process of the second database instance:

[root@digoal ~]# strace -T -f -p 24293 >./conn 2>&1  

Connect to the second database instance

postgres@digoal-> strace -T psql -h 127.0.0.1 -p 1922  
execve("/opt/pgsql/bin/psql", ["psql", "-h", "127.0.0.1", "-p", "1922"], [/* 34 vars */]) = 0 <0.009976>  
brk(0)                                  = 0x1747000 <0.000007>  


poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) // will be stuck here

At this point you can see a startup process in the system, forked by the postmaster. Note its PID; it corresponds to the entries in the conn trace file shown later.

[root@digoal postgresql-9.4.4]# ps -efw|grep start  
postgres 46147 24293  0 19:43 ?        00:00:00 postgres: postgres postgres 127.0.0.1(17947) startup  

An excerpt of the strace -T psql -h 127.0.0.1 -p 1922 output:
setsockopt(3, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 <0.000008>  
connect(3, {sa_family=AF_INET, sin_port=htons(1922), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000943>  
poll([{fd=3, events=POLLOUT|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLOUT}]) <0.000011>  
getsockopt(3, SOL_SOCKET, SO_ERROR, [0], [4]) = 0 <0.000124>  
getsockname(3, {sa_family=AF_INET, sin_port=htons(17947), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0 <0.000015>  
poll([{fd=3, events=POLLOUT|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLOUT}]) <0.000008>  
sendto(3, "\0\0\0\10\4\322\26/", 8, MSG_NOSIGNAL, NULL, 0) = 8 <0.000050>  
poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLIN}]) <0.000600>  
recvfrom(3, "N", 16384, 0, NULL, NULL)  = 1 <0.000010>  
poll([{fd=3, events=POLLOUT|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLOUT}]) <0.000007>  
sendto(3, "\0\0\0T\0\3\0\0user\0postgres\0database\0p"..., 84, MSG_NOSIGNAL, NULL, 0) = 84 <0.000020>  

The poll response time reached 67 seconds:

poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLIN}]) <67.436925>  
recvfrom(3, "R\0\0\0\10\0\0\0\0S\0\0\0\32application_name\0p"..., 16384, 0, NULL, NULL) = 322 <0.000017>  

Once the connection is finally established, check the trace of the postmaster process. You can see that the startup process 46147 spent 66 seconds in a write call, because this write triggered a metadata modification.

[root@digoal ~]# grep "pid 46147" conn|less  
[pid 46147] mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f0f1403d000 <0.000012>  
[pid 46147] unlink("global/pg_internal.init.46147") = -1 ENOENT (No such file or directory) <0.000059>  
[pid 46147] open("global/pg_internal.init.46147", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 14 <0.000068>  
[pid 46147] fstat(14, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0 <0.000013>  
[pid 46147] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f0f1403c000 <0.000020>  
[pid 46147] write(14, "f2W\0008\1\0\0\0\0\0\0\200\6\0\0\0\0\0\0U2\0\0\0\0\0\0\0\0\0\0"..., 4096 <unfinished ...>  
[pid 46147] <... write resumed> )       = 4096 <66.317440>  
[pid 46147] --- SIGALRM (Alarm clock) @ 0 (0) ---  

The corresponding code is:

write_relcache_init_file@src/backend/utils/cache/relcache.c

Now trace the functions in this C file with systemtap:

[root@digoal ~]# cat trc.stp   
global f_start[999999]  

probe process("/opt/pgsql/bin/postgres").function("*@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c").call {   
  f_start[execname(), pid(), tid(), cpu()] = gettimeofday_ms()  
}  

probe process("/opt/pgsql/bin/postgres").function("*@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c").return {   
  t=gettimeofday_ms()  
  a=execname()  
  b=cpu()  
  c=pid()  
  d=pp()  
  e=tid()  
  if (f_start[a,c,e,b] && t-f_start[a,c,e,b]>1) {  
#    printf("time:%d, execname:%s, pp:%s, par:%s\n", t - f_start[a,c,e,b], a, d, $$locals$$)  
    printf("time:%d, execname:%s, pp:%s\n", t - f_start[a,c,e,b], a, d)  
  }  

}  

Because the startup process is created dynamically, the only way to attach to it is to loop until it appears:

[root@digoal ~]# cat t.sh  
#!/bin/bash  

for ((i=0;i<1;i=0))  
do  
PID=`ps -ewf|grep start|grep -v grep|awk '{print $2}'`  
stap -vp 5 -DMAXSKIPPED=9999999 -DSTP_NO_OVERLOAD -DMAXTRYLOCK=100 ./trc.stp -x $PID  
done  

The trace output is as follows:

postgres@digoal-> strace -T psql -h 127.0.0.1 -p 1922
[root@digoal ~]# . ./t.sh  
Pass 1: parsed user script and 111 library script(s) using 209296virt/36828res/3172shr/34516data kb, in 180usr/20sys/199real ms.  
Pass 2: analyzed script: 102 probe(s), 7 function(s), 4 embed(s), 1 global(s) using 223800virt/51400res/4172shr/49020data kb, in 80usr/60sys/142real ms.  
Pass 3: translated to C into "/tmp/stapbw7MDq/stap_b17f8a3318ccf4b972f4b84491bbdc1e_54060_src.c" using 223800virt/51744res/4504shr/49020data kb, in 10usr/40sys/57real ms.  
Pass 4: compiled C into "stap_b17f8a3318ccf4b972f4b84491bbdc1e_54060.ko" in 1440usr/370sys/1640real ms.  
Pass 5: starting run.  
time:6134, execname:postgres, pp:process("/opt/pgsql9.4.4/bin/postgres").function("write_item@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c:4979").return  
time:3, execname:postgres, pp:process("/opt/pgsql9.4.4/bin/postgres").function("write_item@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c:4979").return  
time:6, execname:postgres, pp:process("/opt/pgsql9.4.4/bin/postgres").function("write_item@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c:4979").return  
...

How can the above problem be solved? How can IOPS limits isolate database instances so that they do not interfere with each other?

Solution 1

Different instances use different file systems.

For example:

mkfs.ext4 /dev/mapper/vgdata01-lv01  
mkfs.ext4 /dev/mapper/vgdata01-lv02  
mount /dev/mapper/vgdata01-lv01 /data01  
mount /dev/mapper/vgdata01-lv02 /data02 

The two database instances are placed in /data01 and /data02, respectively.

Limiting the IOPS of /dev/mapper/vgdata01-lv01 will not affect the other file system.
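
A sketch of the per-device throttle for this layout (the major:minor numbers are hypothetical; look them up with ll /dev/dm-* as at the beginning of this article):

# suppose lv01 -> 253:2 and lv02 -> 253:3 (hypothetical numbers)  
cd /sys/fs/cgroup/blkio  
mkdir cg1  
# throttle writes only on the device backing /data01; /data02 is left alone  
echo "253:2 100" > cg1/blkio.throttle.write_iops_device  
# add the PIDs of the throttled instance's processes, as was done for the 1921 instance above  
echo 24259 > cg1/tasks  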

The disadvantage of this method is that with many instances the storage has to be split into many small file systems, which makes flexible space management and sharing difficult.

Solution 2

This applies to ext4.

Normally (in the default ordered mode), before metadata can be committed to the journal, the data blocks it refers to must already be written out, so a metadata write may be forced to flush dirty data pages first.


If flushing those dirty data pages is slow, the metadata write is blocked. Since metadata journal writes are serial, this inevitably delays metadata journal modifications by other processes.

Mount the ext4 file system with data=writeback.
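
For example, a minimal mount sketch (device and mount point as used earlier in this article; the other options are optional):

# data=writeback: data pages may be written after their metadata is journaled  
mount -o data=writeback,noatime,nodiratime /dev/mapper/aliflash-lv01 /data01  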

The principle of this method is that metadata writes no longer wait for the corresponding data writes to complete. As a result, the metadata may be newer than the data (for example, a new inode may reference blocks that were never written, or blocks that belonged to an old, deleted file).

Because metadata writes do not wait for data writes, the serial journal operation is no longer blocked behind slow data flushes.

       data={journal|ordered|writeback}  
              Specifies the journalling mode for file data.  Metadata is always journaled.  To use modes other than ordered on the root filesystem, pass the mode to the kernel as boot parameter, e.g.  rootflags=data=journal.  

              journal  
                     All data is committed into the journal prior to being written into the main filesystem.  

              ordered  
                     This is the default mode.  All data is forced directly out to the main file system prior to its metadata being committed to the journal.  

              writeback  
                     Data  ordering  is not preserved - data may be written into the main filesystem after its metadata has been committed to the journal.  This is rumoured to be the highest-throughput option.  It  
                     guarantees internal filesystem integrity, however it can allow old data to appear in files after a crash and journal recovery.  

The disadvantage is that after a file system or operating system crash, metadata and data may be inconsistent, leaving stale or garbage blocks in files.

Solution 3

Use a separate block device for the journal. When limiting IOPS, do not throttle the journal block device (metadata journal IO is small and fast, so there is no need to limit it); throttle only the data block device.

In my testing, this method only works for the xfs file system.

ext4 also supports a separate journal device (the setup is shown below), but it does not solve the problem; you can try it yourself.

Create the logical volumes, one for data and one for the journal:

#pvcreate /dev/dfa  
#pvcreate /dev/dfb  
#pvcreate /dev/dfc  
#vgcreate aliflash /dev/dfa /dev/dfb /dev/dfc  
#lvcreate -i 3 -I 8 -L 1T -n lv01 aliflash  
#lvcreate -i 3 -I 8 -L 2G -n lv02 aliflash  

Create journal block device

#mkfs.ext4 -O journal_dev -b 4096 /dev/mapper/aliflash-lv02  
mke2fs 1.41.12 (17-May-2010)  
Discarding device blocks: done                              
Filesystem label=  
OS type: Linux  
Block size=4096 (log=2)  
Fragment size=4096 (log=2)  
Stride=2 blocks, Stripe width=6 blocks  
0 inodes, 525312 blocks  
0 blocks (0.00%) reserved for the super user  
First data block=0  
0 block group  
32768 blocks per group, 32768 fragments per group  
0 inodes per group  
Superblock backups stored on blocks:   

Zeroing journal device: done     

Create ext4 file system

#mkfs.ext4 -E stride=16,stripe-width=48 -J device=/dev/mapper/aliflash-lv02 /dev/mapper/aliflash-lv01  
mke2fs 1.41.12 (17-May-2010)  
Using journal device's blocksize: 4096  
Discarding device blocks: done                              
Filesystem label=  
OS type: Linux  
Block size=4096 (log=2)  
Fragment size=4096 (log=2)  
Stride=16 blocks, Stripe width=48 blocks  
67117056 inodes, 268437504 blocks  
13421875 blocks (5.00%) reserved for the super user  
First data block=0  
Maximum filesystem blocks=4294967296  
8193 block groups  
32768 blocks per group, 32768 fragments per group  
8192 inodes per group  
Superblock backups stored on blocks:   
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,   
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,   
        102400000, 214990848  

Writing inode tables: done                              
Adding journal to device /dev/mapper/aliflash-lv02: done  
Writing superblocks and filesystem accounting information: done  

This filesystem will be automatically checked every 31 mounts or  
180 days, whichever comes first.  Use tune2fs -c or -i to override.  

#ll /dev/mapper/aliflash-lv0*  
lrwxrwxrwx 1 root root 7 Jan  7 11:12 /dev/mapper/aliflash-lv01 -> ../dm-0  
lrwxrwxrwx 1 root root 7 Jan  7 11:12 /dev/mapper/aliflash-lv02 -> ../dm-1  
#ll /dev/dm-0  
brw-rw---- 1 root disk 253, 0 Jan  7 11:22 /dev/dm-0  
#ll /dev/dm-1  
brw-rw---- 1 root disk 253, 1 Jan  7 11:22 /dev/dm-1 

Mount file system

#mount -o nobarrier,noatime,nodiratime,discard,defaults,nodelalloc /dev/mapper/aliflash-lv01 /data01

Using the test method from the beginning of this article, only the IOPS of /dev/mapper/vgdata01-lv01 (the data device) is limited; testing shows that this does not solve the problem.

Using a separate log device with the xfs file system:

# mkfs.xfs -f -b size=4096 -l logdev=/dev/mapper/vgdata01-lv02,size=2136997888,sunit=16 -d agcount=9000,sunit=16,swidth=48 /dev/mapper/vgdata01-lv01   
# mount -t xfs -o nobarrier,nolargeio,logbsize=262144,noatime,nodiratime,swalloc,logdev=/dev/mapper/vgdata01-lv02 /dev/mapper/vgdata01-lv01 /data01  

Using the test method from the beginning of this article, only the IOPS of /dev/mapper/vgdata01-lv01 (the data device) is limited; testing shows that the problem is solved.
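
For clarity, a sketch of the throttle rule for this case (assuming the data LV maps to 253:0 and the log LV to 253:1, as in the earlier device listing; adjust to your own major:minor numbers):

cd /sys/fs/cgroup/blkio  
mkdir cg1  
# throttle only the data device; the xfs log device (253:1) is deliberately left unthrottled  
echo "253:0 100" > cg1/blkio.throttle.write_iops_device  
# put the throttled instance's postgres processes into the cgroup  
echo 24259 > cg1/tasks  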

With the IOPS limit in place, the throttled instance's IO can only reach the limit value, while the other instance is no longer affected, so the original requirement is met and the instances no longer interfere with each other.
