ASM provides data redundancy by writing a mirror copy of each extent to a disk in a different failure group of the same disk group; such a disk is called a partner disk. In 11gR2, each ASM disk in a redundant (normal or high) disk group can have up to eight partner disks; in versions before 11gR2, a disk could have up to ten partners. Disks in an external redundancy disk group have neither failure groups nor partner relationships.
Translator's notes: 1) In fact, each disk in an external redundancy disk group is assigned its own failure group name, which can be seen in the v$asm_disk view, but it has no real meaning. 2) The number of partners can be adjusted with the hidden parameter _asm_partner_target_disk_part, although there is normally no reason to do so.
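As an illustration of the first note, a query along the following lines shows the failure group names ASM assigns in an external redundancy disk group. The group number 2 is only an assumption for this sketch; replace it with the actual group_number from v$asm_diskgroup.
SQL> SELECT disk_number, name, failgroup
FROM v$asm_disk
WHERE group_number = 2
ORDER BY disk_number;
Each disk is expected to appear with its own automatically generated failure group name, even though external redundancy never uses it.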
If a normal redundancy disk group has two disks, they are partners of each other. Every extent on disk 0 has a mirror copy on disk 1, and vice versa; in this case each disk has exactly one partner.
If a normal redundancy disk group has three disks and no failure groups are specified manually, each disk will have two partners. Disk 0 is partnered with disks 1 and 2, disk 1 with disks 0 and 2, and disk 2 with disks 0 and 1. When an extent is allocated on disk 0, its mirror copy is allocated on either disk 1 or disk 2, but not both. Note that each extent in a normal redundancy disk group has only two copies, not three. Similarly, an extent on disk 1 will have its mirror copy on disk 0 or disk 2, and an extent on disk 2 will have its mirror copy on disk 0 or disk 1. The whole allocation scheme is simple and clear.
Translator's note: Metadata extents are an exception; they are triple-mirrored in a normal redundancy disk group, provided the number of failure groups is at least three.
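A hedged way to observe this is to count the copies of each virtual extent in the x$kffxp fixed view. In this sketch, group number 1, the file numbers (1 as an ASM metadata file, 256 as the first client file) and the use of lxn_kffxp as the copy index are all assumptions to be adapted to the actual environment.
SQL> SELECT number_kffxp "File#", xnum_kffxp "Extent#", count(*) "Copies"
FROM x$kffxp
WHERE group_kffxp = 1
AND number_kffxp IN (1, 256)
AND lxn_kffxp IN (0, 1, 2)
GROUP BY number_kffxp, xnum_kffxp
ORDER BY 1, 2;
In a normal redundancy disk group with at least three failure groups, the metadata file would be expected to show three copies per extent and the client file two.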
A high redundancy disk group consisting of three disks is similar. The partner relationships between the disks are exactly the same as in the three-disk normal redundancy disk group described in the previous paragraph. The difference is at the mirroring level: every extent on disk 0 has mirror copies on both of its partner disks, disk 1 and disk 2. Likewise, every extent on disk 1 has mirror copies on disks 0 and 2, and every extent on disk 2 has mirror copies on disks 0 and 1.
If a normal redundancy disk group has many disks, each disk will have up to eight partners. That is, an extent on any given disk will have its mirror copy on one of that disk's eight partner disks. Keep in mind that an extent's mirror copy is always placed on a partner disk.
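This placement rule can be cross-checked by joining x$kffxp with x$kfdpartner. The sketch below again assumes group number 1, file number 256 and the lxn_kffxp copy numbering; it looks for extents whose mirror copy does not sit on a partner of the primary disk, and is expected to return no rows.
SQL> SELECT p.xnum_kffxp "Extent#", p.disk_kffxp "Primary disk", m.disk_kffxp "Mirror disk"
FROM x$kffxp p, x$kffxp m
WHERE p.group_kffxp = 1 AND m.group_kffxp = 1
AND p.number_kffxp = 256 AND m.number_kffxp = 256
AND p.xnum_kffxp = m.xnum_kffxp
AND p.lxn_kffxp = 0 AND m.lxn_kffxp = 1
AND NOT EXISTS (SELECT 1 FROM x$kfdpartner pt
                WHERE pt.grp = 1
                AND pt.disk = p.disk_kffxp
                AND pt.number_kfdpartner = m.disk_kffxp);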
By querying the x$kfdpartner view, you can find out more details about the disk partner relationships. Let's look at an example of a disk group with many disks:
SQL> SELECT count(disk_number)
FROM v$asm_disk
WHERE group_number = 1;
COUNT(DISK_NUMBER)
------------------
168
The query result shows that there are quite a few disks in this disk group. Next, let's see how many partners a single disk has:
SQL> SELECT disk "Disk", count(number_kfdpartner) "Number of partners"
FROM x$kfdpartner
WHERE grp=1
GROUP BY disk
ORDER BY 1;
Disk Number of partners
---------- ------------------
0 8
1 8
2 8
...
165 8
166 8
167 8
168 rows selected.
The query result shows that every single disk has exactly eight partner disks (which also tells us this environment runs 11gR2 or later).
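As a rough cross-check of that remark, the instance version and the disk group compatibility attributes can be queried; the views and columns used here are standard, but the exact values naturally depend on the environment.
SQL> SELECT version FROM v$instance;
SQL> SELECT g.name "Diskgroup", a.name "Attribute", a.value "Value"
FROM v$asm_diskgroup g, v$asm_attribute a
WHERE g.group_number = a.group_number
AND a.name LIKE 'compatible.%';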
Next, query the partner relationship information for each disk in each disk group:
SQL> set pages 1000
SQL> break on Group# on Disk#
SQL> SELECT d.group_number "Group#", d.disk_number "Disk#", p.number_kfdpartner "Partner disk#"
FROM x$kfdpartner p, v$asm_disk d
WHERE p.disk=d.disk_number and p.grp=d.group_number
ORDER BY 1, 2, 3;
Group# Disk# Partner disk#
---------- ---------- -------------
1 0 12
13
18
20
24
27
31
34
1 13
17
21
22
24
29
34
35
...
29 4
5
7
8
10
12
16
19
816 rows selected.
Partner relationships are determined automatically by ASM when a disk group is created, and they are updated every time a disk is added or dropped.
Disk partnership information is recorded in the Partnership and Status Table (PST, covered in the previous chapter) and in the disk directory, both of which are important ASM metadata structures. The following example creates a normal redundancy disk group with two failure groups and then uses kfed to examine the PST on the first disk:
SQL> CREATE DISKGROUP wxh normal REDUNDANCY
failgroup ocr1 disk
'/dev/qdata/vdc' ,
'/dev/qdata/vdd'
failgroup ocr2 disk
'/dev/qdata/vde' ,
'/dev/qdata/vdg'
attribute
'au_size'='1M',
'compatible.asm' = '11.2.0.4',
'compatible.rdbms' = '11.2.0.4';
Diskgroup created.
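Before reading the PST with kfed, it helps to know which device path corresponds to which ASM disk number. A mapping query along these lines can be used for the WXH disk group created above:
SQL> SELECT d.disk_number "Disk#", d.path "Path", d.failgroup "Failgroup"
FROM v$asm_disk d, v$asm_diskgroup g
WHERE d.group_number = g.group_number
AND g.name = 'WXH'
ORDER BY d.disk_number;
With four disks in two failure groups, /dev/qdata/vdc would typically come up as disk 0, which is the disk examined next.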
kfed read /dev/qdata/vdc aun=1 blkn=0 | more
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 17 ; 0x002: KFBTYP_PST_META
kfbh.datfmt: 2 ; 0x003: 0x02
kfbh.block.blk: 256 ; 0x004: blk=256
kfbh.block.obj: 2147483648 ; 0x008: disk=0
kfbh.check: 2503555118 ; 0x00c: 0x9539382e
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfdpHdrPairBv1.first.super.time.hi: 33036846 ; 0x000: HOUR=0xe DAYS=0x11 MNTH=0x6 YEAR=0x7e0
kfdpHdrPairBv1.first.super.time.lo: 485925888 ; 0x004: USEC=0x0 MSEC=0x1a9 SECS=0xf MINS=0x7
kfdpHdrPairBv1.first.super.last: 2 ; 0x008: 0x00000002
kfdpHdrPairBv1.first.super.next: 2 ; 0x00c: 0x00000002
kfdpHdrPairBv1.first.super.copyCnt: 2 ; 0x010: 0x02
kfdpHdrPairBv1.first.super.version: 1 ; 0x011: 0x01
kfdpHdrPairBv1.first.super.ub2spare: 0 ; 0x012: 0x0000
kfdpHdrPairBv1.first.super.incarn: 1 ; 0x014: 0x00000001
kfdpHdrPairBv1.first.super.copy[0]: 0 ; 0x018: 0x0000
kfdpHdrPairBv1.first.super.copy[1]: 2 ; 0x01a: 0x0002
kfdpHdrPairBv1.first.super.copy[2]: 0 ; 0x01c: 0x0000
kfdpHdrPairBv1.first.super.copy[3]: 0 ; 0x01e: 0x0000
kfdpHdrPairBv1.first.super.copy[4]: 0 ; 0x020: 0x0000
kfdpHdrPairBv1.first.super.dtaSz: 4 ; 0x022: 0x0004
The above output shows that copies of the PST are kept on disk 0 and disk 2 (the values of kfdpHdrPairBv1.first.super.copy[0] and kfdpHdrPairBv1.first.super.copy[1]; copyCnt = 2 confirms there are two copies).
kfed read /dev/qdata/vdc aun=1 blkn=3 | more
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 18 ; 0x002: KFBTYP_PST_DTA
kfbh.datfmt: 2 ; 0x003: 0x02
kfbh.block.blk: 259 ; 0x004: blk=259
kfbh.block.obj: 2147483648 ; 0x008: disk=0
kfbh.check: 2182251266 ; 0x00c: 0x82128302
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfdpDtaEv1[0].status: 127 ; 0x000: I=1 V=1 V=1 P=1 P=1 A=1 D=1
kfdpDtaEv1[0].fgNum: 1 ; 0x002: 0x0001
kfdpDtaEv1[0].addTs: 2174935503 ; 0x004: 0x81a2e1cf
kfdpDtaEv1[0].partner[0]: 49155 ; 0x008: P=1 P=1 PART=0x3
kfdpDtaEv1[0].partner[1]: 49154 ; 0x00a: P=1 P=1 PART=0x2
kfdpDtaEv1[0].partner[2]: 10000 ; 0x00c: P=0 P=0 PART=0x2710
kfdpDtaEv1[0].partner[3]: 0 ; 0x00e: P=0 P=0 PART=0x0
The above output is the PST entry for disk 0 (kfdpDtaEv1[0]): its failure group number is 1 (fgNum), and it has two partner disks, disk 3 and disk 2 (the entries with PART=0x3 and PART=0x2).
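The same partnership can be cross-checked from the SQL side with x$kfdpartner, just as earlier in this post; the subquery is simply one way to avoid hard-coding the group number of WXH.
SQL> SELECT disk "Disk", number_kfdpartner "Partner disk"
FROM x$kfdpartner
WHERE grp = (SELECT group_number FROM v$asm_diskgroup WHERE name = 'WXH')
ORDER BY disk, number_kfdpartner;
For disk 0 this is expected to list disks 2 and 3, matching the PART=0x2 and PART=0x3 entries in the kfed output.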
Original text: How many partners
Author: Bane Radulovic
Translator: Zhuang Peipei, a pre-sales database engineer at Waukee Science and Technology, mainly responsible for database platform architecture design, product validation and testing.
Revision: Wei Xinghua
Chief Editor: Qian Shuguang