
I have a machine with UEFI BIOS. I want to install Ubuntu 20.04 desktop with LVM on top of RAID 1, so that the system keeps working even if one of the drives fails. I haven't found a HOWTO for that. The 20.04 desktop installer supports LVM but not RAID. The answer to this question describes the process for 18.04; however, 20.04 does not provide an alternate server installer. The answers to this question and this question describe RAID but neither LVM nor UEFI. Does anyone have a process that works for 20.04 with LVM on top of RAID 1 on a UEFI machine?



Answers

After some weeks of experimenting and with some help from this link, I have finally found a solution that works. The sequence below was performed with Ubuntu 20.04.2.0 LTS. I have also completed the procedure successfully with 21.04.0 inside a virtual machine. (However, please note that there is a reported problem with Ubuntu 21.04 and some older UEFI systems.)


In short



  1. Download and boot into Ubuntu Live for 20.04.

  2. Set up mdadm and lvm.

  3. Run the Ubuntu installer, but do not reboot.

  4. Add mdadm to target system.

  5. Clone EFI partition to second drive.

  6. Install second EFI partition into UEFI boot chain.

  7. Reboot


In detail


1. Download the installer and boot into Ubuntu Live


1.1 Download



  • Download the Ubuntu Desktop installer from https://ubuntu.com/download/desktop and put it onto a bootable media. (As of 2021-12-13, the iso was called ubuntu-20.04.3-desktop-amd64.iso.)


1.2 Boot Ubuntu Live



  • Boot from the media created in step 1.1.

  • Select Try Ubuntu.

  • Start a terminal by pressing Ctrl-Alt-T. The commands below should be entered in that terminal.


2. Set up mdadm and lvm


In the example below, the disk devices are called /dev/sda and /dev/sdb. If your disks are called something else, e.g., /dev/nvme0n1 and /dev/sdb, you should replace the disk names accordingly. You may use sudo lsblk to find the names of your disks.
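
For example, one way to list just the physical disks without their partitions (the exact output depends on your hardware):

lsblk -d -o NAME,SIZE,MODEL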


2.0 Install ssh server


If you do not want to type all the commands below, you may want to install an SSH server, log in via ssh from another machine, and cut-and-paste the commands.



  • Install


    sudo apt install openssh-server



  • Set a password to enable external login


    passwd



  • If you are testing this inside a virtual machine, you will probably want to forward a suitable port. Select Settings, Network, Advanced, Port forwarding, and the plus sign. Enter, e.g., 3022 as the Host Port and 22 as the Guest Port and press OK. Or from the command line of your host system (replace VMNAME with the name of your virtual machine):


    VBoxManage modifyvm VMNAME --natpf1 "ssh,tcp,,3022,,22"
    VBoxManage showvminfo VMNAME | grep 'Rule'



Now, you should be able to log onto your Ubuntu Live session from an outside computer using


ssh <hostname> -l ubuntu

or, if you are testing on a virtual machine on localhost,


ssh localhost -l ubuntu -p 3022

and the password you set above.


2.1 Create partitions on the physical disks



  • Zero the partition tables with


    sudo sgdisk -Z /dev/sda
    sudo sgdisk -Z /dev/sdb


  • Create two partitions on each drive; one for EFI and one for the RAID device.


    sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sda
    sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/sda
    sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sdb
    sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/sdb


  • Create a FAT32 filesystem on the EFI partition of the first drive. (It will be cloned to the second drive later.)


    sudo mkfs.fat -F 32 /dev/sda1



2.2 Install mdadm and create md device


Install mdadm


  sudo apt-get update
  sudo apt-get install mdadm

Create the md device. Ignore the warning about the metadata since the array will not be used as a boot device.


  sudo mdadm --create /dev/md0 --bitmap=internal --level=1 --raid-disks=2 /dev/sda2 /dev/sdb2

Check the status of the md device.


$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
      1047918528 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.0% (1001728/1047918528) finish=69.6min speed=250432K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

In this case, the device is syncing the disks, which is normal and may continue in the background during the process below.
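
If you want to keep an eye on the resync while you work through the remaining steps, one way is to poll the status periodically:

watch -n 5 cat /proc/mdstat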


2.3 Partition the md device


  sudo sgdisk -Z /dev/md0
  sudo sgdisk -n 1:0:0 -t 1:E6D6D379-F507-44C2-A23C-238F2A3DF928 -c 1:"Linux LVM" /dev/md0

This creates a single partition /dev/md0p1 on the /dev/md0 device. The UUID string identifies the partition as an LVM partition.
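
To double-check the result, you can print the new partition table; the single partition should show up with the Linux LVM type:

sudo sgdisk -p /dev/md0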


2.4 Create LVM devices



  • Create a physical volume on the md device


    sudo pvcreate /dev/md0p1


  • Create a volume group on the physical volume


    sudo vgcreate vg0 /dev/md0p1


  • Create logical volumes (partitions) on the new volume group. The sizes and names below are my choices. You may decide differently.


    sudo lvcreate -Z y -L 25GB --name root vg0
    sudo lvcreate -Z y -L 10GB --name tmp vg0
    sudo lvcreate -Z y -L 5GB --name var vg0
    sudo lvcreate -Z y -L 10GB --name varlib vg0
    sudo lvcreate -Z y -L 200GB --name home vg0



Now, the partitions are ready for the Ubuntu installer.
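
Before starting the installer, you may want to verify the LVM layout; the sizes reported should match what you chose above:

sudo pvs
sudo vgs
sudo lvs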


3. Run the installer



  • Double-click on the Install Ubuntu 20.04.2.0 LTS icon on the desktop of the new computer. (Do NOT start the installer via any ssh connection!)

  • Answer the language and keyboard questions.

  • On the Installation type page, select Something else. (This is the important part.) This will present you with a list of partitions called /dev/mapper/vg0-home, etc.

  • Double-click on each partition starting with /dev/mapper/vg0-. Select Use as: Ext4, check the Format the partition box, and choose the appropriate mount point (/ for vg0-root, /home for vg0-home, /tmp for vg0-tmp, /var for vg0-var, and /var/lib for vg0-varlib).

  • Select the first device /dev/sda for the boot loader.

  • Press Install Now and continue the installation.

  • When the installation is finished, select Continue Testing.


In a terminal, run lsblk. The output should be something like this:


$ lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
...
sda                  8:0    0  1000G  0 disk
├─sda1               8:1    0   512M  0 part
└─sda2               8:2    0 999.5G  0 part
  └─md0              9:0    0 999.4G  0 raid1
    └─md0p1        259:0    0 999.4G  0 part
      ├─vg0-root   253:0    0    25G  0 lvm   /target
      ├─vg0-tmp    253:1    0    10G  0 lvm
      ├─vg0-var    253:2    0     5G  0 lvm
      ├─vg0-varlib 253:3    0    10G  0 lvm
      └─vg0-home   253:4    0   200G  0 lvm
sdb                  8:16   0  1000G  0 disk
├─sdb1               8:17   0   512M  0 part
└─sdb2               8:18   0 999.5G  0 part
  └─md0              9:0    0 999.4G  0 raid1
    └─md0p1        259:0    0 999.4G  0 part
      ├─vg0-root   253:0    0    25G  0 lvm   /target
      ├─vg0-tmp    253:1    0    10G  0 lvm
      ├─vg0-var    253:2    0     5G  0 lvm
      ├─vg0-varlib 253:3    0    10G  0 lvm
      └─vg0-home   253:4    0   200G  0 lvm
...

As you can see, the installer left the installed system's root mounted at /target. However, the other partitions are not mounted. More importantly, mdadm is not yet part of the installed system.
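
You can confirm what is currently mounted under the target with, for example:

findmnt -R /target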


4. Add mdadm to the target system


4.1 chroot into the target system


First, we must mount the unmounted partitions:


sudo mount /dev/mapper/vg0-home /target/home
sudo mount /dev/mapper/vg0-tmp /target/tmp
sudo mount /dev/mapper/vg0-var /target/var
sudo mount /dev/mapper/vg0-varlib /target/var/lib

Next, bind some devices to prepare for chroot...


cd /target
sudo mount --bind /dev dev
sudo mount --bind /proc proc
sudo mount --bind /sys sys

...and chroot into the target system.


sudo chroot .

4.2 Update the target system


Now we are inside the target system. Install mdadm


apt install mdadm

If you get a DNS error, run


echo "nameserver 1.1.1.1" >> /etc/resolv.conf 

and repeat


apt install mdadm

You may ignore any warnings about pipe leaks.


Inspect the configuration file /etc/mdadm/mdadm.conf. It should contain a line near the end similar to


ARRAY /dev/md/0 metadata=1.2 UUID=7341825d:4fe47c6e:bc81bccc:3ff016b6 name=ubuntu:0

Remove the name=... part to have the line read like


ARRAY /dev/md/0 metadata=1.2 UUID=7341825d:4fe47c6e:bc81bccc:3ff016b6
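
If you prefer not to edit the file by hand, a one-liner like the following should achieve the same; this is a sketch, so verify the resulting line afterwards:

sed -i 's/ name=[^ ]*//' /etc/mdadm/mdadm.conf
grep ^ARRAY /etc/mdadm/mdadm.conf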

Update the module list the kernel should load at boot.


echo raid1 >> /etc/modules

Update the boot ramdisk


update-initramfs -u

Finally, exit from chroot


exit

5. Clone EFI partition


Now the installed target system is complete. Furthermore, the main partition is protected from a single disk failure via the RAID device. However, the EFI boot partition is not protected via RAID. Instead, we will clone it.


sudo dd if=/dev/sda1 of=/dev/sdb1 bs=4096

Run


$ sudo blkid /dev/sd[ab]1
/dev/sda1: UUID="108A-114D" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="ccc71b88-a8f5-47a1-9fcb-bfc960a07c16"
/dev/sdb1: UUID="108A-114D" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="fd070974-c089-40fb-8f83-ffafe551666b"

Note that the FAT UUIDs are identical but the GPT PARTUUIDs are different.
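
Optionally, you can check that the clone is byte-identical; cmp produces no output when the two partitions match:

sudo cmp /dev/sda1 /dev/sdb1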


6. Insert EFI partition of second disk into the boot chain


Finally, we need to insert the EFI partition on the second disk into the boot chain. For this we will use efibootmgr.


sudo apt install efibootmgr

Run


sudo efibootmgr -v

and study the output. There should be a line similar to


Boot0005* ubuntu HD(1,GPT,ccc71b88-a8f5-47a1-9fcb-bfc960a07c16,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)

Note the path after File. Run


sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'

to create a new boot entry on partition 1 of /dev/sdb with the same path as the ubuntu entry. Re-run


sudo efibootmgr -v

and verify that there is a second entry called ubuntu2 with the same path as ubuntu:


Boot0005* ubuntu  HD(1,GPT,ccc71b88-a8f5-47a1-9fcb-bfc960a07c16,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Boot0006* ubuntu2 HD(1,GPT,fd070974-c089-40fb-8f83-ffafe551666b,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)

Furthermore, note that the UUID string of each entry is identical to the corresponding PARTUUID string above.
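
Optionally, you can also put both entries into the firmware boot order so the second disk is tried automatically if the first one is missing. This is a sketch using the example entry numbers above (0005 and 0006); substitute your own Boot#### values and include any other entries you want to keep:

sudo efibootmgr -o 0005,0006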


7. Reboot


Now we are ready to reboot. Check if the sync process has finished.


$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
      1047918528 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk

unused devices: <none>

If the syncing is still in progress, it should be OK to reboot. However, I suggest waiting until the syncing is complete before rebooting.


After rebooting, the system should be ready to use! Furthermore, should either of the disks fail, the system will boot from the EFI partition on the healthy disk and start Ubuntu with the md0 device in degraded mode.
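
If you ever suspect that a disk has failed, one way to inspect the state of the array is shown below; the State line should read clean in normal operation and degraded after a disk failure:

sudo mdadm --detail /dev/md0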


8. Update EFI partition after grub-efi-amd64 update


When the package grub-efi-amd64 is updated, the files on the EFI partition (mounted at /boot/efi) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that grub-efi-amd64 is about to be updated, so you don't have to check after every update.
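
One way to see which version is installed and whether such an update is pending (a quick check; package names may vary slightly on your system):

dpkg -l grub-efi-amd64
apt list --upgradable 2>/dev/null | grep grub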


8.1 Find out clone source, quick way


If you haven't rebooted after the update, use


mount | grep boot

to find out what EFI partition is mounted. That partition, typically /dev/sdb1, should be used as the clone source.
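
Alternatively, findmnt prints the source device of the mounted EFI partition directly:

findmnt /boot/efi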


8.2 Find out clone source, paranoid way


Create mount points and mount both partitions:


sudo mkdir /tmp/sda1 /tmp/sdb1
sudo mount /dev/sda1 /tmp/sda1
sudo mount /dev/sdb1 /tmp/sdb1

Find timestamp of newest file in each tree


sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1

Compare timestamps


cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'

This should print /dev/sdb1 is newest (most likely) or /dev/sda1 is newest. That partition should be used as the clone source.


Unmount the partitions before cloning to avoid cache/partition inconsistencies.


sudo umount /tmp/sda1 /tmp/sdb1

8.3 Clone


If /dev/sdb1 was the clone source:


sudo dd if=/dev/sdb1 of=/dev/sda1

If /dev/sda1 was the clone source:


sudo dd if=/dev/sda1 of=/dev/sdb1

Done!


9. Virtual machine gotchas


If you want to try this out in a virtual machine first, there are some caveats: Apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from /dev/sda1 (use FS1: for /dev/sdb1):


FS0:
\EFI\ubuntu\grubx64.efi

The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.

