
I created a RAID with:



sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb2 /dev/sdc2
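
A new mirror begins an initial resync in the background; mdadm's --wait option can be used to block until both arrays are fully in sync before going further (a minimal sketch):

sudo mdadm --wait /dev/md1 /dev/md2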


sudo mdadm --detail --scan returns:



ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb


I appended this output to /etc/mdadm/mdadm.conf, shown below:



# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 29 Oct 2012 16:06:12 -0500
# by mkconf $Id$
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
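
As an aside, the scan output can be appended in a single step instead of by hand, for example:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf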


cat /proc/mdstat returns:



Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[0] sdc2[1]
      208629632 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb1[0] sdc1[1]
      767868736 blocks super 1.2 [2/2] [UU]

unused devices: <none>


ls -la /dev | grep md returns:



brw-rw---- 1 root disk 9,   1 Oct 30 11:06 md1
brw-rw---- 1 root disk 9,   2 Oct 30 11:06 md2


So I think all is good and I reboot.






After the reboot, /dev/md1 has become /dev/md127 and /dev/md2 has become /dev/md126. Why?



sudo mdadm --detail --scan returns:



ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb


cat /proc/mdstat returns:



Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdc2[1] sdb2[0]
      208629632 blocks super 1.2 [2/2] [UU]

md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
      767868736 blocks super 1.2 [2/2] [UU]

unused devices: <none>


ls -la /dev | grep md returns:



drwxr-xr-x 2 root root      80 Oct 30 11:18 md
brw-rw---- 1 root disk 9, 126 Oct 30 11:18 md126
brw-rw---- 1 root disk 9, 127 Oct 30 11:18 md127





All is not lost; I stop the arrays and reassemble them under the intended names:



sudo mdadm --stop /dev/md126
sudo mdadm --stop /dev/md127
sudo mdadm --assemble --verbose /dev/md1 /dev/sdb1 /dev/sdc1
sudo mdadm --assemble --verbose /dev/md2 /dev/sdb2 /dev/sdc2


and verify everything:



sudo mdadm --detail --scan returns:



ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb


cat /proc/mdstat returns:



Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[0] sdc2[1]
      208629632 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb1[0] sdc1[1]
      767868736 blocks super 1.2 [2/2] [UU]

unused devices: <none>


ls -la /dev | grep md returns:



brw-rw---- 1 root disk 9,   1 Oct 30 11:26 md1
brw-rw---- 1 root disk 9,   2 Oct 30 11:26 md2
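
As an aside, the reassembly could also be keyed on the array UUIDs instead of naming the member partitions; a sketch using the UUIDs from the scan output above:

sudo mdadm --assemble /dev/md1 --uuid=aa1f85b0:a2391657:cfd38029:772c560e
sudo mdadm --assemble /dev/md2 --uuid=528e5385:e61eaa4c:1db2dba7:44b556fb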


So once again, I think all is good and I reboot.






Again, after the reboot, /dev/md1 is /dev/md127 and /dev/md2 is /dev/md126.



sudo mdadm --detail --scan returns:



ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb


cat /proc/mdstat returns:



Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdc2[1] sdb2[0]
      208629632 blocks super 1.2 [2/2] [UU]

md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
      767868736 blocks super 1.2 [2/2] [UU]

unused devices: <none>


ls -la /dev | grep md returns:



drwxr-xr-x 2 root root      80 Oct 30 11:42 md
brw-rw---- 1 root disk 9, 126 Oct 30 11:42 md126
brw-rw---- 1 root disk 9, 127 Oct 30 11:42 md127


What am I missing here?


Answers

I found the answer here: RAID starting at md127 instead of md0. In short, I trimmed my /etc/mdadm/mdadm.conf definitions from:



ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb


to:



ARRAY /dev/md1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb


and ran:



sudo update-initramfs -u


I am far from an expert on this, but my understanding is as follows.



The kernel assembles the arrays early in the boot process, before the normal assembly step runs, and that early assembly does not use mdadm.conf. Since the partitions had already been assembled by the kernel, the normal assembly step that reads mdadm.conf was skipped.



Running sudo update-initramfs -u rebuilds the initramfs, so the next boot starts from an up-to-date view of the system, including the updated mdadm.conf.



I am sure someone with better knowledge will correct me / elaborate on this.



Use the following command to update the initrd for every kernel installed on the system:



sudo update-initramfs -k all -u
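
To confirm that the rebuilt image actually carries the updated configuration, the initramfs contents can be listed; a sketch assuming Debian/Ubuntu's initramfs-tools (the exact path of the embedded conf may vary):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm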
