I have been running a test instance of a NAS using ZFS, as described in Restoring an Ubuntu Server using ZFS RAIDZ for data.
This week one of my disks died. That shouldn't be a problem, should it? (The benefit of RAID is resilience as well as performance.)
Except that my ZFS pool got corrupted:
andy@ubuntu:~$ sudo zpool status -v
  pool: tank
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid. There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from
        a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        UNAVAIL      0     0     0  insufficient replicas
          raidz1-0  UNAVAIL      0     0     0  insufficient replicas
            sdb     FAULTED      0     0     0  corrupted data
            sdc     FAULTED      0     0     0  corrupted data
            sdd     UNAVAIL      0     0     0
Fortunately this is a test instance, so I can easily start again. But what if this pool had contained important data? What would the right next steps be to recover the data and restore my NAS to working order? Or does ZFS automatically try all possible recovery approaches on its own, meaning the data is now toast?
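For what it's worth, my first instinct (a sketch based on the standard zpool tools, not something I've tested against this failure) would be along these lines. The pool name `tank` matches the output above; the replacement disk device `/dev/sde` is an assumption:

```shell
# SKETCH ONLY -- untested against this failure. Assumes pool 'tank',
# failed disk sdd, and a hypothetical replacement disk at /dev/sde.

# Device names like sdb/sdc can shift when a disk disappears, which can
# make labels appear missing or invalid. Re-importing using stable
# by-id device paths may bring the pool back:
sudo zpool export tank
sudo zpool import -d /dev/disk/by-id tank

# If the pool still won't import, try a rewind import to an earlier
# transaction group (this can discard the last few seconds of writes):
sudo zpool import -F tank

# Once the pool imports in a DEGRADED state, replace the dead disk
# and let RAIDZ resilver onto it:
sudo zpool replace tank sdd /dev/sde
sudo zpool status -v   # watch the resilver progress
```

Two of the three disks showing FAULTED with "corrupted data" for what was supposedly a single-disk failure makes me suspect the device-renaming issue rather than genuine triple corruption, which is why the by-id re-import comes first in the sketch.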