I had a ZFS pool -- a two-disk mirror -- running on a FreeBSD server. I now have only one of the disks from the mirror, and I am trying to recover files from it.
The ZFS data sits in a GPT partition on the disk.
When I try to import the pool, there is no sign that it exists at all. I have tried a number of approaches, shown below, but none of them finds the pool.
I have run zdb -lu on the partition, and it seems to find the labels just fine.
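For reference, the label dump further down came from an invocation along these lines (the exact device path is my assumption; da0p3 is the only freebsd-zfs partition in the gpart output below):

# zdb -lu /dev/da0p3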
# zpool import
# zpool import -D
# zpool status
no pools available
# zpool import -f ztmp
cannot import 'ztmp': no such pool available
# zpool import 16827460747202824739
cannot import '16827460747202824739': no such pool available
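One variant I have not yet tried is pointing the importer at the device nodes explicitly and asking for a read-only import; this is only a sketch of what I intend to attempt next:

# zpool import -d /dev
# zpool import -d /dev -f -o readonly=on ztmp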
Partition information:
# gpart list da0
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 65536 (64K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   rawuuid: d7a10230-8b0e-11e1-b750-f46d04227f12
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 65536
   offset: 17408
   type: freebsd-boot
   index: 1
   end: 161
   start: 34
2. Name: da0p2
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 82944
   Mode: r0w0e0
   rawuuid: d7aa40b7-8b0e-11e1-b750-f46d04227f12
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 17179869184
   offset: 82944
   type: freebsd-swap
   index: 2
   end: 33554593
   start: 162
3. Name: da0p3
   Mediasize: 1905891737600 (1.7T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 82944
   Mode: r0w0e0
   rawuuid: d7b6a47e-8b0e-11e1-b750-f46d04227f12
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1905891737600
   offset: 17179952128
   type: freebsd-zfs
   index: 3
   end: 3755999393
   start: 33554594
Consumers:
1. Name: da0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
ZFS label:
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'ztmp'
    state: 0
    txg: 0
    pool_guid: 16827460747202824739
    hostid: 740296715
    hostname: '#############'
    top_guid: 15350190479074972289
    guid: 3060075816835778669
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 15350190479074972289
        whole_disk: 0
        metaslab_array: 30
        metaslab_shift: 34
        ashift: 9
        asize: 1905887019008
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 3060075816835778669
            path: '/dev/gptid/d7b6a47e-8b0e-11e1-b750-f46d04227f12'
            phys_path: '/dev/gptid/d7b6a47e-8b0e-11e1-b750-f46d04227f12'
            whole_disk: 1
            DTL: 5511
            resilvering: 1
        children[1]:
            type: 'disk'
            id: 1
            guid: 3324029433529063540
            path: '/dev/gptid/396a2b11-cb16-11e1-83f4-f46d04227f12'
            phys_path: '/dev/gptid/396a2b11-cb16-11e1-83f4-f46d04227f12'
            whole_disk: 1
            DTL: 3543
            create_txg: 4
            resilvering: 1
    features_for_read:
    create_txg: 0
Uberblock[0]
    magic = 0000000000bab10c
    version = 5000
    txg = 0
    guid_sum = 1668268329223536005
    timestamp = 1361299185 UTC = Tue Feb 19 10:39:45 2013
(Other labels are exact copies)
There is a discussion of a similar-sounding problem in this old thread. I tried running Jeff Bonwick's labelfix tool (with updates from this post), but it did not seem to solve the problem.
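For what it is worth, the labelfix run amounted to pointing the rebuilt tool at the ZFS partition, roughly like this (build steps from the linked post omitted; the device path is again my assumption):

# ./labelfix /dev/da0p3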
Any ideas?