There are two ways to implement RAID: software RAID and hardware RAID.
Software RAID performs worse because it consumes host resources: the CPU does the RAID work, and the RAID software must be loaded before any data on a software RAID volume can be read, which in turn means the operating system has to boot first. On the other hand, no physical RAID hardware is required, so the investment cost is essentially zero.
Hardware RAID performs better. A dedicated RAID controller, typically a PCI Express card, does the work physically, so it does not consume host resources. The controller carries NVRAM to cache reads and writes, and a backup battery preserves the cache contents through a power failure so they can still be used for RAID reconstruction. For large-scale use it is an expensive investment.
RAID has many levels; here we list only the ones most commonly used in the real world (a short mdadm sketch for creating each level follows the list):
- RAID0 = striping
- RAID1 = mirror
- RAID5 = single disk distributed parity
- RAID6 = dual disk distributed parity
- RAID10 = mirror + stripe (nested RAID)
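For reference, here is a minimal sketch of how each level is created with mdadm. The device names (/dev/sdb through /dev/sde) and disk counts are placeholders for illustration only; adjust them to your own hardware.

# RAID 0 (striping, 2+ disks)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# RAID 1 (mirror, 2+ disks)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# RAID 5 (single disk distributed parity, 3+ disks)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# RAID 6 (dual disk distributed parity, 4+ disks)
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
# RAID 10 (mirror + stripe, 4+ disks)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]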
RAID 1 is also called disk mirroring. The principle is to mirror the data of one disk onto a second disk: whenever data is written to one disk, an identical copy is written to the other. This gives the best possible reliability and recoverability without hurting performance. As long as at least one disk in each mirrored pair is usable, the system keeps working, even if half of the drives fail. When a disk fails, the system simply ignores it and reads and writes through the surviving mirror, so the redundancy is excellent.
This safety comes at a significant cost: disk utilization is only 50%, so four 80GB disks yield just 160GB of usable space. In addition, an array running with a failed disk is no longer redundant, so the damaged disk should be replaced promptly; if the remaining mirror disk also fails, the whole array is lost. After a new disk is installed, resynchronizing the mirror from the original data takes a long time. Access to the data is not interrupted during the rebuild, but overall performance drops. For these reasons, RAID 1 is typically used for critical, important data.
RAID 1 implements mirroring by duplicating every write, so the load on the disk controller is quite heavy, especially in write-intensive environments. To avoid a performance bottleneck, multiple disk controllers may be needed.
In short: RAID 1 offers 50% disk utilization and achieves mirroring by performing every write twice.
1. Preparations
After installing the system, attach two additional hard disks to the machine. Here I use a virtual machine for the experiment.
Test system: CentOS 8.1.1911
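mdadm is usually present on a default CentOS 8 install, but if the command is missing you can install it first (a quick check, assuming the standard BaseOS/AppStream repositories are reachable):

rpm -q mdadm || dnf install -y mdadm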
2. Create the RAID 1 array
View the existing disks, partitions, and logical volumes:
[root@study ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   20G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   19G  0 part
  ├─cl-root   253:0    0   17G  0 lvm  /
  └─cl-swap   253:1    0    2G  0 lvm  [SWAP]
sdb             8:16   0    1G  0 disk
sdc             8:32   0    1G  0 disk
sr0            11:0    1 1024M  0 rom
[root@study ~]#
We will use sdb and sdc to build the RAID 1 array.
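If sdb or sdc were ever part of an earlier array or carried a filesystem, it is safer to wipe any leftover metadata first. This optional precaution is not part of the original procedure and is destructive to whatever is on those disks; skip it if they are brand new.

# erase any old md superblock and filesystem signatures on the two members
mdadm --zero-superblock /dev/sdb /dev/sdc
wipefs -a /dev/sdb /dev/sdc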
[root@study ~]# mdadm -C /dev/md0 -l raid1 -n 2 /dev/sdb /dev/sdc
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array?
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@study ~]#
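Right after creation, md synchronizes the two members in the background. You can watch the initial resync through the kernel's md status file; this is purely informational and the array is usable while it runs:

cat /proc/mdstat            # shows the resync progress and an estimated finish time
watch -n 2 cat /proc/mdstat # refresh every 2 seconds until the resync completes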
View status
[root@study ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jun  9 07:24:46 2020
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jun  9 07:24:52 2020
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : study.server.com:0  (local to host study.server.com)
              UUID : 77aeebdc:4a82397f:0a1ab9ad:dadf083c
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
[root@study ~]#
Format the array and mount it for use:
[root@study ~]# mkfs.ext4 /dev/md0
mke2fs 1.44.6 (5-Mar-2019)
Creating filesystem with 261632 4k blocks and 65408 inodes
Filesystem UUID: 07d1f71a-95dd-4985-b018-e205de9b9772
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

[root@study ~]# mkdir /mnt/raid1
[root@study ~]# mount /dev/md0 /mnt/raid1/
[root@study ~]# echo "this is linux raid1" > /mnt/raid1/readme.txt
[root@study ~]# ll /mnt/raid1/readme.txt
-rw-r--r--. 1 root root 20 Jun  9 07:27 /mnt/raid1/readme.txt
[root@study ~]# cat /mnt/raid1/readme.txt
this is linux raid1
[root@study ~]#
Configure automatic mounting at boot:
[root@study ~]# umount /mnt/raid1
[root@study ~]# blkid /dev/md0
/dev/md0: UUID="07d1f71a-95dd-4985-b018-e205de9b9772" TYPE="ext4"
[root@study ~]# vim /etc/fstab
[root@study ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu May 28 03:40:23 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root                         /           xfs   defaults  0 0
UUID=9df293e1-6753-4be2-9733-dad673e61f4a   /boot       ext4  defaults  1 2
/dev/mapper/cl-swap                         swap        swap  defaults  0 0
UUID="07d1f71a-95dd-4985-b018-e205de9b9772" /mnt/raid1  ext4  defaults  0 0
[root@study ~]# mount -a
[root@study ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             885M     0  885M   0% /dev
tmpfs                903M     0  903M   0% /dev/shm
tmpfs                903M  9.4M  894M   2% /run
tmpfs                903M     0  903M   0% /sys/fs/cgroup
/dev/mapper/cl-root   17G  4.3G   13G  26% /
/dev/sda1            976M  143M  766M  16% /boot
tmpfs                181M  1.2M  180M   1% /run/user/42
tmpfs                181M  4.0K  181M   1% /run/user/0
/dev/md0             990M  2.6M  921M   1% /mnt/raid1
[root@study ~]#
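One extra step worth doing before a reboot: record the array in /etc/mdadm.conf so it is always assembled under the same name (/dev/md0). Without this, the array may reappear as /dev/md127 on some systems; the fstab entry above still works because it mounts by filesystem UUID, but a fixed device name is easier to manage. A minimal sketch, assuming the CentOS default config path:

mdadm --detail --scan >> /etc/mdadm.conf
cat /etc/mdadm.conf    # should now contain an ARRAY line with the array's UUID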
3. Simulate a RAID disk failure
Remove the disk sdc.
After removing it, run partprobe to refresh the partition information.
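If you cannot physically pull a disk (for example on some hypervisors), you can simulate the same failure entirely in software with mdadm's --fail and --remove options; the rest of the procedure is identical:

mdadm --manage /dev/md0 --fail /dev/sdc     # mark the member as faulty
mdadm --manage /dev/md0 --remove /dev/sdc   # detach it from the array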
[root@study ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jun  9 07:24:46 2020
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jun  9 07:30:15 2020
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : study.server.com:0  (local to host study.server.com)
              UUID : 77aeebdc:4a82397f:0a1ab9ad:dadf083c
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
[root@study ~]# partprobe /dev/md0
[root@study ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda             8:0    0   20G  0 disk
├─sda1          8:1    0    1G  0 part  /boot
└─sda2          8:2    0   19G  0 part
  ├─cl-root   253:0    0   17G  0 lvm   /
  └─cl-swap   253:1    0    2G  0 lvm   [SWAP]
sdb             8:16   0    1G  0 disk
└─md0           9:0    0 1022M  0 raid1 /mnt/raid1
sr0            11:0    1 1024M  0 rom
View the RAID 1 status and data
The data is still intact; the disk failure does not affect access to it.
[root@study ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jun  9 07:24:46 2020
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jun  9 07:35:17 2020
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : study.server.com:0  (local to host study.server.com)
              UUID : 77aeebdc:4a82397f:0a1ab9ad:dadf083c
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

       1       8       32        -      faulty   /dev/sdc
[root@study ~]# ll /mnt/raid1/
total 20
drwx------. 2 root root 16384 Jun  9 07:26 lost+found
-rw-r--r--. 1 root root    20 Jun  9 07:27 readme.txt
[root@study ~]# cat /mnt/raid1/readme.txt
this is linux raid1
[root@study ~]#
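The same degraded state can be seen at a glance in /proc/mdstat, where a missing mirror member shows up as an underscore in the member-status field (a healthy two-disk RAID 1 shows [UU], a degraded one [U_]):

cat /proc/mdstat    # the raid1 line should now end in [2/1] [U_]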
4. Repair RAID 1
Remove the failed disk, insert a new disk, and restart the server.
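On a virtual machine you can often skip the reboot: after attaching the replacement disk, ask the kernel to rescan the SCSI buses so the new device appears immediately. This is offered only as a possible alternative to restarting; the host adapter paths can differ between systems.

# trigger a rescan on every SCSI host adapter
for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done
lsblk    # the new disk should now be visible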
[root@study ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda             8:0    0   20G  0 disk
├─sda1          8:1    0    1G  0 part  /boot
└─sda2          8:2    0   19G  0 part
  ├─cl-root   253:0    0   17G  0 lvm   /
  └─cl-swap   253:1    0    2G  0 lvm   [SWAP]
sdb             8:16   0    1G  0 disk
└─md0           9:0    0 1022M  0 raid1 /mnt/raid1
sdc             8:32   0    1G  0 disk
sr0            11:0    1 1024M  0 rom
[root@study ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             886M     0  886M   0% /dev
tmpfs                904M     0  904M   0% /dev/shm
tmpfs                904M  9.4M  894M   2% /run
tmpfs                904M     0  904M   0% /sys/fs/cgroup
/dev/mapper/cl-root   17G  4.4G   13G  26% /
/dev/sda1            976M  143M  766M  16% /boot
tmpfs                181M  1.2M  180M   1% /run/user/42
tmpfs                181M  4.0K  181M   1% /run/user/0
/dev/md0             990M  2.6M  921M   1% /mnt/raid1
[root@study ~]# umount /mnt/raid1
[root@study ~]# mdadm --manage /dev/md0 --add /dev/sdc
mdadm: added /dev/sdc
[root@study ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda             8:0    0   20G  0 disk
├─sda1          8:1    0    1G  0 part  /boot
└─sda2          8:2    0   19G  0 part
  ├─cl-root   253:0    0   17G  0 lvm   /
  └─cl-swap   253:1    0    2G  0 lvm   [SWAP]
sdb             8:16   0    1G  0 disk
└─md0           9:0    0 1022M  0 raid1
sdc             8:32   0    1G  0 disk
└─md0           9:0    0 1022M  0 raid1
sr0            11:0    1 1024M  0 rom
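After --add, md immediately starts rebuilding the mirror onto the new disk. The rebuild runs in the background; you can watch it until the array returns to a clean state with a couple of read-only checks:

cat /proc/mdstat                                     # shows "recovery" with a progress percentage
mdadm --detail /dev/md0 | grep -Ei 'state|rebuild'   # stays degraded/recovering until the rebuild completes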
View data
[root@study ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jun  9 07:24:46 2020
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jun  9 21:11:45 2020
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : study.server.com:0  (local to host study.server.com)
              UUID : 77aeebdc:4a82397f:0a1ab9ad:dadf083c
            Events : 53

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       32        1      active sync   /dev/sdc
[root@study ~]# mount -a
[root@study ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             886M     0  886M   0% /dev
tmpfs                904M     0  904M   0% /dev/shm
tmpfs                904M  9.4M  894M   2% /run
tmpfs                904M     0  904M   0% /sys/fs/cgroup
/dev/mapper/cl-root   17G  4.4G   13G  26% /
/dev/sda1            976M  143M  766M  16% /boot
tmpfs                181M  1.2M  180M   1% /run/user/42
tmpfs                181M  4.0K  181M   1% /run/user/0
/dev/md0             990M  2.6M  921M   1% /mnt/raid1
[root@study ~]# ll /mnt/raid1/
total 20
drwx------. 2 root root 16384 Jun  9 07:26 lost+found
-rw-r--r--. 1 root root    20 Jun  9 07:27 readme.txt
[root@study ~]# cat /mnt/raid1/readme.txt
this is linux raid1
[root@study ~]#
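Finally, if you ever want to dismantle the test array and reuse the disks, the reverse procedure looks roughly like this (a cleanup sketch only; it destroys the data on md0):

umount /mnt/raid1
mdadm --stop /dev/md0                       # stop and release the array
mdadm --zero-superblock /dev/sdb /dev/sdc   # erase the md metadata on both members
# also delete the md0 line from /etc/fstab (and /etc/mdadm.conf if you added one)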
Study or work out: one of the two is always on the way.