Asked: Wed, May 18, 2022, 10:30:10 / answers: 1 / hits: 5121

I'm not sure what else to check. Everything below looks sane to me, but the system hangs on boot. This is a home server with four disks crammed into a Dell OP620. Each pair of disks is assembled as RAID1: one pair for / and one for data. The failed array is /, hence the inability to boot.



The full error, which repeats indefinitely on the console, is:



incrementally starting raid arrays
mdadm: Create user root not found
mdadm: create group disk not found
incrementally started raid arrays


This system was running fine until the last restart. The array assembles fine from a Puppy Linux rescue USB:



mdadm --assemble --scan


fdisk shows the available disks:



# fdisk -l|grep GB
Disk /dev/sda: 320.1 GB, 320072933376 bytes
Disk /dev/sdb: 320.1 GB, 320072933376 bytes
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
Disk /dev/md127: 3000.5 GB, 3000457494528 bytes
Disk /dev/md126: 317.9 GB, 317938532352 bytes


Followed by blkid displaying UUIDs:



# blkid
/dev/md126: UUID="fc836940-3c99-4f64-8751-decc9629abc5" TYPE="ext4"
/dev/md0: UUID="2b00d6da-aa0e-4295-a1bb-822f4224815b" TYPE="swap"
/dev/loop0: TYPE="squashfs"
/dev/sda1: UUID="908ccc1f-cb70-4d3e-9d81-43b8e0f519ff" TYPE="ext4"
/dev/sdb1: UUID="3a052c52-593f-47d5-8606-cb818619c50b" TYPE="ext4"
/dev/sde1: LABEL="8GB_BLACK_P" UUID="1CE1-AF11" TYPE="vfat"


and I can mount the md126 device with:



mount /dev/md126 /mnt/tmp


My (previously working) fstab file is:



proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/md1 during installation
UUID=fc836940-3c99-4f64-8751-decc9629abc5 / ext4 errors=remount-ro 0 1
# swap was on /dev/md0 during installation
UUID=2b00d6da-aa0e-4295-a1bb-822f4224815b none swap sw 0 0

/dev/mapper/3TB_RAID--1--LVM-lvol0 /data ext4 nosuid,auto 0 0


Answers

I just had this problem too. I noticed that your md is numbered md126, which is usually a random number made up at boot time, not the number from mdadm.conf.


In /boot/grub/grub.cfg, various things refer to both /dev/md?? and UUID=.....


Both are needed. If the machine boots up with a random md??? number every time, the initrd will fail to find the RAID and go into an endless loop.
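For illustration only (the ARRAY UUID below is a placeholder, not from this system; the filesystem UUID is the root one from the fstab in the question), the two files refer to the same array roughly like this:

```text
# /etc/mdadm/mdadm.conf -- names the array by its *array* UUID
# (placeholder shown; the real one comes from `mdadm --detail --scan`)
ARRAY /dev/md1 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

# /boot/grub/grub.cfg -- the kernel line names the root *filesystem* UUID
linux /vmlinuz root=UUID=fc836940-3c99-4f64-8751-decc9629abc5 ro
```

If the md number the kernel actually assembles doesn't line up with what these files expect, you get exactly the boot loop described above.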


I had to change these numbers because I re-created my md device.


update-grub grabs the md? number from what's currently running in /proc/mdstat and puts it into /boot/grub/grub.cfg.


update-initramfs grabs the md? number from the /etc/mdadm/mdadm.conf file and puts it into /boot/initrd___. Both have to match.


When you boot via a rescue disk, /dev/md... is just whatever random number the rescue disk makes up, which is different from the md... number in /etc/mdadm/mdadm.conf.
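A quick way to spot the mismatch is to compare the md number named in /etc/mdadm/mdadm.conf against the one in /proc/mdstat. The sketch below runs on sample lines standing in for those two files (the ARRAY UUID is a placeholder); on the real server you would read the files themselves:

```shell
# Stand-ins for one line each of /etc/mdadm/mdadm.conf and /proc/mdstat;
# the UUID is a placeholder, not taken from this system.
conf_line='ARRAY /dev/md1 metadata=0.90 UUID=00000000:00000000:00000000:00000000'
mdstat_line='md126 : active raid1 sdb1[1] sda1[0]'

# Pull out the md number each side names.
conf_md=$(printf '%s\n' "$conf_line" | sed -n 's|^ARRAY /dev/\(md[0-9]*\).*|\1|p')
running_md=$(printf '%s\n' "$mdstat_line" | sed -n 's|^\(md[0-9]*\) .*|\1|p')

if [ "$conf_md" = "$running_md" ]; then
    echo "ok: both sides use $conf_md"
else
    echo "mismatch: mdadm.conf says $conf_md, kernel is running $running_md"
fi
```

On the server itself, feed it the real lines, e.g. `grep '^ARRAY' /etc/mdadm/mdadm.conf` and `grep '^md' /proc/mdstat`.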


What I did was run mdadm --stop /dev/md... on each array. Then ran:


mdadm --assemble --scan --config=/etc/mdadm/mdadm.conf --run
cat /proc/mdstat # To check that the numbers are correct.
update-grub

If you needed to change /etc/mdadm/mdadm.conf, also run update-initramfs -u.
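One pitfall when editing mdadm.conf (a side note, not from the original answer): the UUID in an ARRAY line is the array UUID that `mdadm --detail` prints, colon-separated, while blkid, fstab, and grub.cfg use the dash-separated filesystem UUID. Don't paste one where the other belongs. A trivial format check:

```shell
# The root filesystem UUID from the blkid/fstab output above --
# dash-separated, so it belongs in fstab/grub.cfg, not in an ARRAY line.
uuid='fc836940-3c99-4f64-8751-decc9629abc5'

case "$uuid" in
    *:*) kind='array UUID (mdadm.conf style)' ;;
    *-*) kind='filesystem UUID (fstab/blkid style)' ;;
    *)   kind='unknown' ;;
esac
echo "$kind"
```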


Looks like your fstab says / was on /dev/md1 during installation; that's the number that may be in /boot/grub/grub.cfg and /etc/mdadm/mdadm.conf.


[#23053] Friday, May 20, 2022
rinstracte
Location: France
Member since Fri, Jan 28, 2022