ZFS-HA pool faulted with metadata corruption

I set up ZFS-HA following the excellent description on GitHub (see here). After extensive testing, I rolled the setup out to production using 5 × 12 disks in RAIDZ3 (five 12-disk RAIDZ3 vdevs), connected to two nodes through SAS HBA controllers. This ran quite smoothly until last night, when one of the two storage pools suddenly faulted with "The pool metadata is corrupted." during a scrub run. At this point I can only speculate about the cause. Both pools were set up with SCSI fencing in pacemaker, and the disk reservations worked flawlessly in all failure scenarios I tested before going into production. The only major incidents recently were two complete power outages without UPS support (read: the power was just gone from one moment to the next). However, the true reason for the corruption might also be something completely different.
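
For context, the SCSI fencing is done with the fence_scsi agent in pacemaker, set up roughly along the lines of the sketch below. The resource name, host names, and device list here are placeholders, not my exact configuration:

# fence_scsi stonith resource with unfencing; the devices list covers the
# multipath members of the pool (abbreviated here, names are placeholders)
pcs stonith create fence-tank fence_scsi \
    pcmk_host_list="node1 node2" \
    devices="/dev/mapper/35000c5008472696f,/dev/mapper/35000c5008472765f" \
    meta provides=unfencing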

The situation now is that I cannot import the pool anymore (kindly see the output of zpool import at the end of this question). So far, all my attempts to rescue the pool have failed:

# zpool import -f tank
cannot import 'tank': one or more devices is currently unavailable
# zpool import -F tank
cannot import 'tank': one or more devices is currently unavailable

This puzzles me a bit, since the message does not actually say that the only remaining option is to destroy the pool (which would be the expected response for fatally corrupted metadata).
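
In addition to the plain -f/-F attempts above, the gentler variants still on my list are a dry-run rewind and a read-only import. This is only a sketch of what I intend to run, using the flags documented for zpool import in 0.7.x:

# dry run: report whether a rewind (-F) could make the pool importable,
# without actually changing anything on disk
zpool import -f -F -n tank

# read-only import, so nothing is written to the pool even if it succeeds
zpool import -f -o readonly=on tank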

# zpool clear -F tank
cannot open 'tank': no such pool

I also manually removed all SCSI reservations, e.g. for one of the devices:

# DEVICE=35000c5008472696f
# sg_persist --in --no-inquiry --read-reservation --device=/dev/mapper/$DEVICE
# sg_persist --in --no-inquiry --read-key --device=/dev/mapper/$DEVICE
# sg_persist --out --no-inquiry --register --param-sark=0x80d0001 --device=/dev/mapper/$DEVICE
# sg_persist --out --no-inquiry --clear --param-rk=0x80d0001 --device=/dev/mapper/$DEVICE
# sg_persist --in --no-inquiry --read-reservation --device=/dev/mapper/$DEVICE
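
A loop like the following (a sketch; the WWID list is abbreviated, in reality it would cover all 60 members) makes it easy to verify that no stale keys or reservations remain on any disk of the pool:

# check every multipath member for leftover registered keys and reservations
for DEVICE in 35000c5008472696f 35000c5008472765f 35000c500986607bf; do
    echo "=== ${DEVICE} ==="
    sg_persist --in --no-inquiry --read-key --device=/dev/mapper/${DEVICE}
    sg_persist --in --no-inquiry --read-reservation --device=/dev/mapper/${DEVICE}
done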

I further tried cutting A/C power to the disk shelves to clear any transient information that might remain in the disks.

I am, quite frankly, running short on options. The only thing left on my list is the -X option to zpool import, which I will only try once all other measures have failed.
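
If it does come to that, this is the rough plan (only a sketch; the partition suffix of the multipath devices may differ): first inspect the vdev labels of one member with zdb to see what state they are in, then attempt the extreme rewind combined with a read-only import so that nothing is rewritten even if it succeeds:

# dump the vdev labels (which also hold the uberblock ring) of one pool member;
# the -part1 suffix reflects how multipath partitions are usually named here
zdb -l /dev/mapper/35000c5008472696f-part1

# extreme rewind: -X (used together with -F) searches much further back for a
# usable txg; readonly=on keeps the import from writing anything to the pool
zpool import -f -F -X -o readonly=on tank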

So my question is: have you run into anything like this before and, more importantly, did you find a way to resolve it? I would be very grateful for any suggestions you might have.

=========

Pool layout/configuration:

   pool: tank
     id: 1858269358818362832
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
    see: http://zfsonlinux.org/msg/ZFS-8000-72
 config:

        tank                   FAULTED  corrupted data
          raidz3-0             FAULTED  corrupted data
            35000c5008472696f  ONLINE
            35000c5008472765f  ONLINE
            35000c500986607bf  ONLINE
            35000c5008472687f  ONLINE
            35000c500847272ef  ONLINE
            35000c50084727ce7  ONLINE
            35000c50084729723  ONLINE
            35000c500847298cf  ONLINE
            35000c50084728f6b  ONLINE
            35000c50084726753  ONLINE
            35000c50085dd15bb  ONLINE
            35000c50084726e87  ONLINE
          raidz3-1             FAULTED  corrupted data
            35000c50084a8a163  ONLINE
            35000c50084e80807  ONLINE
            35000c5008472940f  ONLINE
            35000c50084a8f373  ONLINE
            35000c500847266a3  ONLINE
            35000c50084726307  ONLINE
            35000c50084726897  ONLINE
            35000c5008472908f  ONLINE
            35000c50084727083  ONLINE
            35000c50084727c8b  ONLINE
            35000c500847284e3  ONLINE
            35000c5008472670b  ONLINE
          raidz3-2             FAULTED  corrupted data
            35000c50084a884eb  ONLINE
            35000c500847262bb  ONLINE
            35000c50084eb9f43  ONLINE
            35000c50085030a4b  ONLINE
            35000c50084eb238f  ONLINE
            35000c50084eb6873  ONLINE
            35000c50084728baf  ONLINE
            35000c50084eb4c83  ONLINE
            35000c50084727443  ONLINE
            35000c50084a8405b  ONLINE
            35000c5008472868f  ONLINE
            35000c50084727c6f  ONLINE
          raidz3-3             FAULTED  corrupted data
            35000c50084eaa467  ONLINE
            35000c50084e7d99b  ONLINE
            35000c50084eb55e3  ONLINE
            35000c500847271d7  ONLINE
            35000c50084726cef  ONLINE
            35000c50084726763  ONLINE
            35000c50084727713  ONLINE
            35000c50084728127  ONLINE
            35000c50084ed0457  ONLINE
            35000c50084e5eefb  ONLINE
            35000c50084ecae2f  ONLINE
            35000c50085522177  ONLINE
          raidz3-4             FAULTED  corrupted data
            35000c500855223c7  ONLINE
            35000c50085521a07  ONLINE
            35000c50085595dff  ONLINE
            35000c500855948a3  ONLINE
            35000c50084f98757  ONLINE
            35000c50084f981eb  ONLINE
            35000c50084f8b0d7  ONLINE
            35000c50084f8d7f7  ONLINE
            35000c5008539d9a7  ONLINE
            35000c5008552148b  ONLINE
            35000c50085521457  ONLINE
            35000c500855212b3  ONLINE

Edit:

Servers are 2x Dell PowerEdge R630, the controllers are Dell OEM versions of the Broadcom SAS HBA (should be similar to the SAS 9300-8e), and all 60 disks in this pool are Seagate ST6000NM0034. The enclosure is a Quanta MESOS M4600H.

Edit 2:

OS is CentOS 7

ZFS is zfs-0.7.3-1.el7_4.x86_64

