
I solved it

The solution is in my own answer below. This post only describes my original problem and what I've tried.

There may be some pointers in it for you though... or not.

End of "I solved it" note



First of all, I'm pretty new to Linux.
Here's the deal. My old computer's mainboard has failed me. That's no problem, I just buy a new one. However, I had been stupid enough to use Intel's RST, which was on board the old mainboard but not the new one.
Now the question is whether it is possible to recover the RST RAID without the Intel RST boot expansion. It doesn't look like the disks have automagically been assembled into one volume.
It seems to me that it is possible, but when it comes to RAID and disk/partition management, my knowledge pretty much stops at GParted.



So far I've found that blkid for both disks gives (and only gives):



/dev/sdb: TYPE="isw_raid_member"
/dev/sda: TYPE="isw_raid_member"


That looks all right.
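
A generic cross-check, in case it is useful to anyone: listing the block devices should at this point show only the two raw disks, with nothing assembled on top of them. Something like:

lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda /dev/sdb   # expect two plain disks marked isw_raid_member and no md children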



mdadm -E gives me:



mdadm -E /dev/sdb /dev/sda
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.0.00
Orig Family : 3ad31c33
Family : 3ad31c33
Generation : 000006b7
Attributes : All supported
UUID : f508b5ef:ce7013f7:fcfe0803:ba06d053
Checksum : 0798e757 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk00 Serial : 6VYCWHXL
State : active
Id : 00000000
Usable Size : 488391680 (232.88 GiB 250.06 GB)

[Volume0]:
UUID : 529ecb47:39f4bc8b:0f05dbe3:960195fd
RAID Level : 0
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Sector Size : 512
Array Size : 976783360 (465.77 GiB 500.11 GB)
Per Dev Size : 488391944 (232.88 GiB 250.06 GB)
Sector Offset : 0
Num Stripes : 1907780
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
RWH Policy : off

Disk01 Serial : W2A50R0P
State : active
Id : 00000004
Usable Size : 488391680 (232.88 GiB 250.06 GB)
mdadm: /dev/sda is not attached to Intel(R) RAID controller.
mdadm: /dev/sda is not attached to Intel(R) RAID controller.
/dev/sda:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.0.00
Orig Family : 3ad31c33
Family : 3ad31c33
Generation : 000006b7
Attributes : All supported
UUID : f508b5ef:ce7013f7:fcfe0803:ba06d053
Checksum : 0798e757 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk01 Serial : W2A50R0P
State : active
Id : 00000004
Usable Size : 488391680 (232.88 GiB 250.06 GB)

[Volume0]:
UUID : 529ecb47:39f4bc8b:0f05dbe3:960195fd
RAID Level : 0
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Sector Size : 512
Array Size : 976783360 (465.77 GiB 500.11 GB)
Per Dev Size : 488391944 (232.88 GiB 250.06 GB)
Sector Offset : 0
Num Stripes : 1907780
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
RWH Policy : off

Disk00 Serial : 6VYCWHXL
State : active
Id : 00000000
Usable Size : 488391680 (232.88 GiB 250.06 GB)


So is it possible to safely reassemble these two disks into a single volume?
e.g. mdadm --assemble



I'm in doubt about the inner workings of mdadm, so this is a good learning experience for me.
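
From reading the mdadm man page, my understanding (untested at this point) is that mdadm treats Intel IMSM/RST metadata as a "container" holding one or more volumes, so reassembly would be a two-step affair, roughly:

mdadm --assemble /dev/md0 /dev/sdb /dev/sda   # step 1: assemble the IMSM container from the member disks
mdadm -I /dev/md0                             # step 2: start the RAID volume(s) described inside the container

Whether that works without the Intel RST boot expansion is exactly what I'm unsure about.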



lsb_release -a



Distributor ID: Ubuntu
Description: Ubuntu 19.10
Release: 19.10
Codename: eoan


uname -a



Linux HPx64 5.3.0-51-generic #44-Ubuntu SMP Wed Apr 22 21:09:44 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux


Note that the machine is named HPx64 because I've reused the Ubuntu installation, and it is actually Xubuntu.



--- Update 2020-05-15 ---



Found out that setting the IMSM_NO_PLATFORM=1 environment variable has two effects (so far):
1) It removes the "mdadm: /dev/sdb is not attached to Intel(R) RAID controller." warning output from:



mdadm -E /dev/sdb



2) It removes the same "mdadm: /dev/sdb is not attached to Intel(R) RAID controller." output from:



mdadm --assemble /dev/md0 /dev/sdb /dev/sda
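
So in practice the assemble command gets the variable prefixed:

IMSM_NO_PLATFORM=1 mdadm --assemble /dev/md0 /dev/sdb /dev/sda   # same assemble, but without the "not attached" warnings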



The status now, after the assemble, is that an md0 device has been created in /dev:



cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[1](S) sda[0](S)
5488 blocks super external:imsm

unused devices: <none>


And



mdadm -E /dev/md0 
/dev/md0:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.0.00
Orig Family : 3ad31c33
Family : 3ad31c33
Generation : 000006b7
Attributes : All supported
UUID : f508b5ef:ce7013f7:fcfe0803:ba06d053
Checksum : 0798e757 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk00 Serial : 6VYCWHXL
State : active
Id : 00000000
Usable Size : 488391680 (232.88 GiB 250.06 GB)

[Volume0]:
UUID : 529ecb47:39f4bc8b:0f05dbe3:960195fd
RAID Level : 0
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Sector Size : 512
Array Size : 976783360 (465.77 GiB 500.11 GB)
Per Dev Size : 488391944 (232.88 GiB 250.06 GB)
Sector Offset : 0
Num Stripes : 1907780
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
RWH Policy : off

Disk01 Serial : W2A50R0P
State : active
Id : 00000004
Usable Size : 488391680 (232.88 GiB 250.06 GB)


And



mdadm --query --detail  /dev/md0

/dev/md0:
Version : imsm
Raid Level : container
Total Devices : 2

Working Devices : 2


UUID : f508b5ef:ce7013f7:fcfe0803:ba06d053
Member Arrays :

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       16        -        /dev/sdb


So it has gotten some of the way, but something is still wrong. It seems that the volume isn't exposed to the system, and examining md0 gives output similar to that of sdb.
Any ideas and thoughts are welcome.
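
In hindsight, the missing piece was that with IMSM the container device itself never becomes active; the actual RAID volume has to be started from the container as a separate md device. If I read the mdadm man page correctly, something like this should do it:

IMSM_NO_PLATFORM=1 mdadm -I /dev/md0   # incremental mode on a container starts the volume(s) it describes

See my answer below for the command that actually did the trick for me.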


Answers

!!! Success!!!



Found it. I was trying too hard. All I had to do was:



IMSM_NO_PLATFORM=1 mdadm --assemble --scan --verbose


And wuuupti dooooo, the RAID volume was (re)assembled as /dev/md126:



mdadm --query --detail  /dev/md126p1
/dev/md126p1:
Container : /dev/md/imsm0, member 0
Raid Level : raid0
Array Size : 488388608 (465.76 GiB 500.11 GB)
Raid Devices : 2
Total Devices : 2

State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 128K

Consistency Policy : none


UUID : 529ecb47:39f4bc8b:0f05dbe3:960195fd
    Number   Major   Minor   RaidDevice State
       1       8       16        0      active sync   /dev/sdb
       0       8        0        1      active sync   /dev/sda
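
One follow-up note, untested on my side: to make the assembly survive a reboot, the usual Debian/Ubuntu recipe seems to be to record the arrays in mdadm.conf and rebuild the initramfs, roughly:

sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf   # append ARRAY lines for the container and the volume
sudo update-initramfs -u                                          # rebuild the initramfs so the array is assembled at boot

Whether IMSM_NO_PLATFORM=1 is also needed for the boot-time assembly probably depends on the setup, so treat this as a sketch rather than a verified recipe.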
