Linux operation and maintenance - disk storage - 2. RAID


Because a single disk is limited in data security, performance and capacity, RAID emerged. RAID (Redundant Array of Independent Disks) combines multiple independent disks in different ways into a disk group, obtaining higher data security, performance and capacity than a single disk.

1, Common RAID levels

RAID has several levels, RAID0 through RAID7. In addition, there are composite RAID modes such as RAID10, RAID01, RAID50 and RAID53.

The commonly used RAID modes are RAID0, RAID1, RAID5 and RAID10.

  1. RAID0

RAID0 is also known as data striping: data is spread across all the physical disks in the array. It requires two or more hard disks, has a low cost, and its performance and capacity grow with the number of disks. Of all the RAID levels, RAID0 is the fastest, but it provides no redundancy or error recovery capability: if one physical disk is damaged, all data is lost.

For a RAID array with fault tolerance, if a disk is damaged, simply replace it with a new disk and the array will automatically synchronize the data onto the new disk (if hot plugging is not supported, you need to power off before replacing it).

  2. RAID1

RAID1 is also known as data mirroring. An even number of hard disks (two or more) are divided into two groups, and each group holds one copy of the data. If a disk in one group is damaged, the other group ensures that data access is not interrupted. RAID1 reads about as fast as RAID0, but its write speed is lower.

  3. RAID5

RAID5 is a solution that balances data security, performance, capacity, cost and feasibility; for that reason, similar levels such as RAID2, RAID3, RAID4 and RAID6 are rarely used in practice. RAID5 needs three or more hard disks. Instead of directly backing up the stored data, it stores the data together with the corresponding parity information across the disks that make up the array. In short, when any one disk fails, the remaining N-1 disks can rebuild the data on the failed disk from the parity information. RAID5 can be seen as a compromise between RAID0 and RAID1: its read speed is close to RAID0, its fault tolerance is lower than RAID1 (RAID5 only allows one disk to fail), and because of the extra parity calculation its write speed is slower than RAID1.

  4. RAID10

RAID10, as the name suggests, is a combination of RAID0 and RAID1, so it needs at least 4 disks. However, doing RAID0 first and then RAID1 is not the same as doing RAID1 first and then RAID0.

RAID01 does RAID0 first and then mirrors the two RAID0 groups with RAID1. If one disk in a RAID0 group fails, that entire RAID0 group becomes unavailable and all I/O goes to the remaining RAID0 group;

RAID10 does RAID1 first and then stripes the RAID1 groups with RAID0. If one disk in a RAID1 group fails, that RAID1 group can still provide service, and a disk in the other RAID1 group may fail at the same time without losing data.

Therefore, we usually choose RAID10 instead of RAID01.

  5. Read and write performance at different RAID levels

Assuming four disks are used, RAID0, RAID1, RAID5 and RAID10 can all read from multiple disks in parallel under multithreading / multiple CPUs, so read performance is good for all of them; write IOPS decreases roughly in this order: RAID0 > RAID10 > RAID1 > RAID5.

2, Space calculation of RAID

Disks of uniform specification are usually chosen for a RAID array. If disks of different capacities and different read/write speeds are mixed, the array takes the smallest, slowest disk as the standard, and the larger, faster disks are downgraded to match it. For example, if a 100G disk and a 50G disk are combined as RAID0, the resulting space is 50G * 2 = 100G.

RAID space calculation formulas (N disks of equal size):

RAID0 space: Disk Size * N
RAID1 space: (Disk Size * N) / 2
RAID5 space: ((N-1)/N) * (Disk Size * N) = (N-1) * Disk Size
RAID10 space: (Disk Size * N/2) / 2 + (Disk Size * N/2) / 2 = (Disk Size * N) / 2

Suppose four disks are used and each disk is 100G:

RAID0 space: 100G * 4 = 400G
RAID1 space: (100G * 4)/2 = 200G
RAID5 space: (4-1) * 100G = 300G
RAID10 space: (100G * 4)/2 = 200G
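As a quick check, the same formulas can be evaluated with a few lines of shell arithmetic (a minimal sketch using the 4 x 100G example above):

# usable capacity per RAID level, in integer GB
DISK_SIZE=100   # size of each disk in GB
N=4             # number of disks in the array
echo "RAID0 : $(( DISK_SIZE * N ))G"
echo "RAID1 : $(( DISK_SIZE * N / 2 ))G"
echo "RAID5 : $(( DISK_SIZE * (N - 1) ))G"
echo "RAID10: $(( DISK_SIZE * N / 2 ))G"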

3, IOPS calculation of RAID

  1. IOPS of a single hard disk

The IOPS of a single disk can be calculated in detail from its mechanical characteristics, but in practice the value is fairly fixed for a given disk type and does not need to be recalculated each time; typical reference values can simply be looked up.

It can be seen that, at the same rotational speed, the single-disk IOPS of different models stays within a similar order of magnitude.

  2. For RAID0 or JBOD (just a bunch of disks) storage, the total IOPS of 10 disks of 175 IOPS each is simply 10 * 175 = 1750 IOPS. For the other RAID levels this is not the case, because RAID adds overhead on writes: a single write issued to the array becomes more than one write inside the array (the write penalty).

From this write overhead we get the formula:

user read IO + N * user write IO = total IOPS (where N is the write overhead, i.e. the write penalty, of the RAID level)

Suppose reads and writes each make up half (50%) of the user's I/O requests, and take the same example of 10 disks of 175 IOPS:

50% * total user IO + N * (50% * total user IO) = 175 IOPS * 10

Taking RAID1 as an example, N = 2, so the formula becomes:

1.5 * total user IO = 1750 IOPS
total user IO ≈ 1167 IOPS

This is the IOPS that 10 disks of 175 IOPS can provide under RAID1.

For the write overhead of each RAID level, refer to the description of the write penalty.
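The same calculation expressed as a short shell sketch (the write penalty of 2 assumes RAID1; the 50/50 split and the 10 x 175 IOPS disks follow the example above):

DISKS=10; DISK_IOPS=175             # 10 disks of 175 IOPS each
READ_PCT=50; WRITE_PCT=50           # user read/write split
PENALTY=2                           # write penalty of RAID1
RAW=$(( DISKS * DISK_IOPS ))        # raw IOPS of all disks: 1750
# user_io * (READ_PCT + PENALTY * WRITE_PCT) / 100 = RAW
USER_IO=$(( RAW * 100 / (READ_PCT + PENALTY * WRITE_PCT) ))
echo "usable user IOPS: $USER_IO"   # prints 1166 (about 1167 before truncation)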

  3. Applying the IOPS calculation of RAID in practice

In actual use we usually do not calculate the IOPS of an existing RAID array. We work in reverse: choose a disk specification and a RAID level, measure the system's read/write ratio and the IOPS the system needs to reach, and then work out how many disks the array needs in order to deliver that IOPS.

Suppose you choose 175 IOPS disks for RAID1, the system's read/write ratio is 60% : 40%, and the system needs to reach 2000 IOPS. How many disks of this specification need to be configured?

Turning the formula above into a general one:

read% * target IOPS + write penalty * (write% * target IOPS) = 175 * M
60% * 2000 + 2 * (40% * 2000) = 175 * M
M = 16

That is, to reach 2000 IOPS with RAID1, 16 disks of 175 IOPS each must be configured.
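The same reverse calculation as a shell sketch (assumptions as in the example: RAID1 write penalty 2, 60/40 read/write ratio, 175 IOPS disks, 2000 IOPS target):

TARGET=2000; DISK_IOPS=175
READ_PCT=60; WRITE_PCT=40
PENALTY=2                                                            # write penalty of RAID1
RAW_NEEDED=$(( TARGET * (READ_PCT + PENALTY * WRITE_PCT) / 100 ))    # 2800 raw IOPS
DISKS=$(( (RAW_NEEDED + DISK_IOPS - 1) / DISK_IOPS ))                # round up to whole disks
echo "disks needed: $DISKS"                                          # prints 16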

Some people may object that the system's read/write ratio and the IOPS it needs to reach are unknown. Without a preliminary test, they can only be estimated from experience.

4, Application of RAID in database storage

Taking a SQL Server database as an example, the different RAID levels suit the following scenarios:

RAID0: rarely used on its own, because it has no fault tolerance;
RAID1: operating system, SQL Server instance, log files;
RAID5: data files, backup files;
RAID10: suitable for everything, but because of the cost it is usually not used for all storage.

5, Create RAID using mdadm

Create RAID array

We can create a software RAID with the mdadm command. For example, the following command creates a RAID5 array from three disks plus one hot spare:

sudo mdadm --create /dev/md0 -a yes -l 5 -n 3 /dev/sdb /dev/sdc /dev/sdd -x 1 /dev/sde

where:

--create /dev/md0 creates a new RAID array named /dev/md0
-a yes automatically creates the corresponding array device file under /dev
-l 5 specifies RAID level 5
-n 3 specifies the number of active disks; here RAID5 is built from three disks: /dev/sdb, /dev/sdc and /dev/sdd
-x 1 specifies one hot-spare disk, /dev/sde

We will now find a device named md0 under /dev:

ls -l /dev/md0

brw-rw----. 1 root disk 9, 0 Sep 11 09:40 /dev/md0
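Other RAID levels are created the same way; only the -l (level) and -n (number of disks) options change. A minimal sketch, assuming four unused disks /dev/sdf through /dev/sdi (hypothetical device names, since /dev/sdb to /dev/sde are already part of /dev/md0 above):

sudo mdadm --create /dev/md1 -a yes -l 10 -n 4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi   # RAID10 from four disks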

Automatically enable RAID

After creating a RAID array, you can save its information to the /etc/mdadm.conf file so that the next time the operating system boots, the system reads this file and assembles the array automatically.

sudo sh -c 'mdadm -D --scan > /etc/mdadm.conf'
cat /etc/mdadm.conf

ARRAY /dev/md0 metadata=1.2 name=MiWiFi-R3-srv:0 UUID=ece6c656:c9999ff6:9d17c0ec:08a0e3af
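With this file in place, a stopped array can also be reassembled by hand; a quick sketch using mdadm's assemble mode:

sudo mdadm --assemble --scan                                # assemble every array listed in /etc/mdadm.conf
sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd   # or name the member disks explicitly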

Viewing RAID array information

After the RAID array is created, we can view the details of the new array through mdadm's --misc mode:

mdadm --misc --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Tue Sep 11 09:40:45 2018
        Raid Level : raid5
        Array Size : 16758784 (15.98 GiB 17.16 GB)
     Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Sep 11 10:03:35 2018
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : MiWiFi-R3-srv:0  (local to host MiWiFi-R3-srv)
              UUID : ece6c656:c9999ff6:9d17c0ec:08a0e3af
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      active sync   /dev/sdd

Alternatively, we can view brief RAID information through the /proc/mdstat file:

cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdd[3] sdc[1] sdb[0]
      16758784 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

Using RAID arrays

After creating a RAID array, we must not operate directly on the member disks, otherwise the newly created array will be damaged. Instead, we format and mount the array device /dev/md0:

set -x
exec 2>&1
mkfs.xfs -f /dev/md0
mount /dev/md0 /mnt
mount |grep md0

+ mkfs.xfs -f /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=261760 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=4188160, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
+ mount /dev/md0 /mnt
+ mount
+ grep md0
/dev/md0 on /mnt type xfs (rw,relatime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota)
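To mount the array automatically at boot, an fstab entry can be added. A minimal sketch matching the example above (the mount point /mnt and the xfs type follow the commands above; double-check the line before rebooting):

echo '/dev/md0  /mnt  xfs  defaults  0 0' | sudo tee -a /etc/fstab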

Close RAID

We can stop the RAID array through mdadm's --misc mode. This releases all of the array's resources.

You need to unmount the array before stopping it:

sudo umount /mnt

Then stop the array:

sudo mdadm --misc --stop /dev/md0

After the array is stopped, we can clear the RAID superblock information on each member disk with mdadm --misc --zero-superblock. Once cleared, the disks can be used normally again:

mdadm --misc --zero-superblock /dev/sdb
mdadm --misc --zero-superblock /dev/sdc
mdadm --misc --zero-superblock /dev/sdd
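To confirm that the RAID metadata is really gone, each disk can be examined again (a quick sketch; --examine prints the md superblock if one is still present):

sudo mdadm --misc --examine /dev/sdb    # should report that no md superblock is found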

Simulate RAID failure

We can put a disk into the failed state with mdadm --manage /dev/md0 --fail:

sudo mdadm /dev/md0 -f /dev/sdd 2>&1

mdadm: set /dev/sdd faulty in /dev/md0

Then let's check the RAID information

sudo mdadm --misc --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Tue Sep 11 10:32:21 2018
        Raid Level : raid5
        Array Size : 16758784 (15.98 GiB 17.16 GB)
     Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Sep 11 10:35:12 2018
             State : clean, degraded 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : MiWiFi-R3-srv:0  (local to host MiWiFi-R3-srv)
              UUID : c031d0c9:998a4e86:5cf90e71:52b229cd
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       -       0        0        2      removed

       3       8       48        -      faulty   /dev/sdd

You will find that the state of /dev/sdd has become faulty, but RAID5 tolerates one damaged disk without losing data.

Removing disks from a RAID array

sudo mdadm --manage /dev/md0 --remove /dev/sdd
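Failing and removing can also be combined into a single manage-mode command, equivalent to the two separate steps above (a sketch):

sudo mdadm --manage /dev/md0 --fail /dev/sdd --remove /dev/sdd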

Replace the disk with a new one

set -x
exec 2>&1
sudo mdadm --manage /dev/md0 --add /dev/sdd
sudo mdadm --misc --detail /dev/md0

+ sudo mdadm --manage /dev/md0 --add /dev/sdd
mdadm: added /dev/sdd
+ sudo mdadm --misc --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Sep 11 10:32:21 2018
        Raid Level : raid5
        Array Size : 16758784 (15.98 GiB 17.16 GB)
     Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Sep 11 10:40:41 2018
             State : clean, degraded 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : MiWiFi-R3-srv:0  (local to host MiWiFi-R3-srv)
              UUID : c031d0c9:998a4e86:5cf90e71:52b229cd
            Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       -       0        0        2      removed

       3       8       48        -      spare   /dev/sdd
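After being added, the new disk first appears as a spare and the array rebuilds onto it in the background; the progress can be watched until the state returns to clean (a sketch):

watch -n 5 cat /proc/mdstat             # shows the recovery progress while rebuilding
sudo mdadm --misc --detail /dev/md0     # the State line returns to "clean" once the rebuild finishes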
