
Hello, this is my first foray into Ubuntu, or any form of Linux for that matter, as well as my first attempt ever to recover a lost RAID, but I'm a fairly quick study. I have an 8TB WD ShareSpace, four 2TB drives set up in RAID 5, with two "failed" (maybe not) drives, and I'm desperate to recover my data. Aside from all the music, movies, games, etc., all my photos and home videos of my children growing up are on this thing. Quite apart from preferring to do things myself, I can't afford professional data recovery, and I don't have backup drives to copy my originals to (and can't afford them), so I'm stuck trying to do this myself with the original disks. Any help is appreciated.



Forgive me if I over-explain; I'm not really sure what's relevant and what isn't as I try to figure this out. I believe the controller in my NAS has failed and the data is actually still intact on the drives. I've pulled all four drives out of the NAS, put them in order in my computer, disconnected my normal HDDs, and am running Ubuntu from the Live CD in "Try Ubuntu" mode. I've been working from this guide: HOWTO: Sharespace Raid 5 Data Recovery, but I've run into a few snags along the way, and the entire forum is closed, so I can't ask questions there.



The first thing I did was set myself up as the root user and check that all my drives were in the right place and recognized, using fdisk -l:



Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sda1 1 417689 208844+ fd Linux raid autodetect
/dev/sda2 417690 2506139 1044225 fd Linux raid autodetect
/dev/sda3 2506140 2923829 208845 fd Linux raid autodetect
/dev/sda4 2923830 3907024064 1952050117+ fd Linux raid autodetect

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdb1 1 417689 208844+ fd Linux raid autodetect
/dev/sdb2 417690 2506139 1044225 fd Linux raid autodetect
/dev/sdb3 2506140 2923829 208845 fd Linux raid autodetect
/dev/sdb4 2923830 3907024064 1952050117+ fd Linux raid autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdc1 1 417689 208844+ fd Linux raid autodetect
/dev/sdc2 417690 2506139 1044225 fd Linux raid autodetect
/dev/sdc3 2506140 2923829 208845 fd Linux raid autodetect
/dev/sdc4 2923830 3907024064 1952050117+ fd Linux raid autodetect

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdd1 1 417689 208844+ fd Linux raid autodetect
/dev/sdd2 417690 2506139 1044225 fd Linux raid autodetect
/dev/sdd3 2506140 2923829 208845 fd Linux raid autodetect
/dev/sdd4 2923830 3907024064 1952050117+ fd Linux raid autodetect


Not really knowing what I was looking for, but not seeing anything that raised any red flags, the drives all looked pretty intact and healthy to me, so I proceeded to try assembling the RAID from the sd*4 partitions, which are supposed to be the ones holding the data.



I tried:



mdadm --assemble /dev/md0 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4


but I got an error about only 2 drives being available, which is not enough, so I began to scour the internet, learned there was a --force option, and used it:



mdadm --assemble --force /dev/md0 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4


and that seemed to work. YAY!... sort of...



vgscan

No volume groups found


Boo... So, back to scouring the internet, I found a post where someone had a similar problem and had to recreate their volume group and logical volume to get access to their data. Using the volume group and logical volume names from the guide I was following, I created them with the new commands I found:



vgcreate vg0 /dev/md0


and



lvcreate -L5.45T -n lv0 vg0


Both reported being created, and all seemed good in the world, until I went back to my guide and tried to mount it:



mkdir /mnt/raid
mount -t auto /dev/vg0/lv0 /mnt/raid
mount: you must specify the filesystem type


Apparently "auto" doesn't work like the guide said. Poking around the net, I found a couple of filesystem types, ext3 and ext4, so I tried them too:



mount -t ext3 /dev/vg0/lv0 /mnt/raid
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so


and



mount -t ext4 /dev/vg0/lv0 /mnt/raid
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so


As you can see, neither worked... So, after several more hours of searching, I've come to the conclusion that I really need to ask for help. If anyone has any suggestions or advice, or even better knows how to make this work, I'd really appreciate the insight. If I did something wrong, that would also be really good to know.
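
From more searching, it sounds like there are read-only commands that will report what is actually sitting on the volume (i.e. whether there is any ext3/ext4 signature there at all), and that dmesg should show the kernel's reason for refusing the mount. Something like this, if I've understood correctly (the device paths are the ones from above); I'm not sure yet how to interpret the results:

file -s /dev/vg0/lv0    # describes whatever signature is found at the start of the volume
blkid /dev/vg0/lv0      # prints TYPE/UUID if a known filesystem signature is detected
dmesg | tail            # the kernel's actual complaint from the failed mount attempts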



I figured this might also be useful:



mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Thu Dec 10 05:44:29 2009
Raid Level : raid5
Array Size : 5854981248 (5583.75 GiB 5995.50 GB)
Used Dev Size : 1951660416 (1861.25 GiB 1998.50 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Apr 4 08:12:03 2013
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : dd69553d:5c832cf7:9755c9c8:d208511e
Events : 0.3986045

Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
1 8 20 1 active sync /dev/sdb4
2 8 36 2 active sync /dev/sdc4
3 0 0 3 removed


as well as this:



cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda4[0] sdc4[2] sdb4[1]
5854981248 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

unused devices: <none>


This is what my fdisk -l looks like now after playing around with all this:



fdisk -l

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sda1 1 417689 208844+ fd Linux raid autodetect
/dev/sda2 417690 2506139 1044225 fd Linux raid autodetect
/dev/sda3 2506140 2923829 208845 fd Linux raid autodetect
/dev/sda4 2923830 3907024064 1952050117+ fd Linux raid autodetect

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdb1 1 417689 208844+ fd Linux raid autodetect
/dev/sdb2 417690 2506139 1044225 fd Linux raid autodetect
/dev/sdb3 2506140 2923829 208845 fd Linux raid autodetect
/dev/sdb4 2923830 3907024064 1952050117+ fd Linux raid autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdc1 1 417689 208844+ fd Linux raid autodetect
/dev/sdc2 417690 2506139 1044225 fd Linux raid autodetect
/dev/sdc3 2506140 2923829 208845 fd Linux raid autodetect
/dev/sdc4 2923830 3907024064 1952050117+ fd Linux raid autodetect

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdd1 1 417689 208844+ fd Linux raid autodetect
/dev/sdd2 417690 2506139 1044225 fd Linux raid autodetect
/dev/sdd3 2506140 2923829 208845 fd Linux raid autodetect
/dev/sdd4 2923830 3907024064 1952050117+ fd Linux raid autodetect

Disk /dev/md0: 5995.5 GB, 5995500797952 bytes
2 heads, 4 sectors/track, 1463745312 cylinders, total 11709962496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/mapper/vg0-lv0: 5992.3 GB, 5992339210240 bytes
255 heads, 63 sectors/track, 728527 cylinders, total 11703787520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg0-lv0 doesn't contain a valid partition table


Please keep in mind that I'm still new at this, so I'd really appreciate it if any advice came with some basic explanation of why you're making your suggestion; it will help me understand much more of what I'm doing. Thank you!


 Answers

From the info you have provided, I think you are likely correct that the vast majority of your data remains intact.
At this point you are dealing with a broken RAID array. Obviously not where you want to be, but not the end of the world either.



In my experience with ShareSpace units, usually one drive will drop out of the array long before the RAID actually crashes.
The Linux software RAID system detects the first drive failure and switches the array over to degraded mode.
This means the array continues to operate, but it's only using the three remaining good drives.
Things will appear to operate normally for a period of time, until a second drive drops out of the array.
Then the RAID crashes and you have a problem.
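
If you want to see which drive dropped out first and how far behind it is, the md superblock on each member partition records an Update Time and an Events counter. Reading them is harmless (it's a read-only query), and comparing the values across the four members usually makes the failure order obvious. A sketch, using the partition names from your fdisk output:

# compare the "Update Time" and "Events" lines across the four members;
# the member(s) with older values are the ones that dropped out of the array
mdadm --examine /dev/sda4
mdadm --examine /dev/sdb4
mdadm --examine /dev/sdc4
mdadm --examine /dev/sdd4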



These drives drop out for a reason; commonly it's bad sectors. Fortunately, recovery is very often possible.
But you need to tread carefully, as any missteps will lessen the chances of data recovery.
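
If you want to check whether bad sectors are the culprit, the drives' own SMART data can be read without writing anything to them. A rough sketch, assuming the smartmontools package can be installed in the live session (it usually isn't there by default), shown here for /dev/sdd but worth running on each drive:

apt-get install smartmontools
# -H gives the overall health verdict, -A lists the raw SMART attributes;
# watch Reallocated_Sector_Ct and Current_Pending_Sector in particular
smartctl -H /dev/sdd
smartctl -A /dev/sdd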



If you want to go it alone, I would advise you to take backup images before proceeding any further.
I know, easier said than done with four 2TB drives, but you would really only need to back up three of them: /dev/sda, /dev/sdb, and /dev/sdc.
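
For the imaging itself, GNU ddrescue is the usual choice, because it keeps a map file and skips over unreadable areas instead of retrying them endlessly. A sketch, assuming a spare disk with roughly 6TB free is mounted at /media/backup (that path is just an example; on Ubuntu the package is named gddrescue):

apt-get install gddrescue
# -n copies everything readable first and skips the slow retry/scrape phase;
# the .map files let you resume or refine each copy later
ddrescue -n /dev/sda /media/backup/sda.img /media/backup/sda.map
ddrescue -n /dev/sdb /media/backup/sdb.img /media/backup/sdb.map
ddrescue -n /dev/sdc /media/backup/sdc.img /media/backup/sdc.map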



Or, if you would like help, I'm an online data recovery consultant, and I perform remote NAS/RAID data recovery over the Internet for clients worldwide.
In the past I've performed many successful remote WD ShareSpace data recoveries.
I offer both remote data recovery and do-it-yourself data recovery assistance.
If you like, I would be happy to help you; contact me through my site.
My name is Stephen Haran and my site is http://www.FreeDataRecovery.us

