
Reimport a ZFS pool that has not been correctly exported


I recently bought a QNAP NAS and decided to try out a few NAS operating systems, among which TrueNAS. After creating a ZFS pool on TrueNAS and transferring a few files, I got dissatisfied with TrueNAS and decided to try out QNAP's own OS (QuTS). Both OSes are installed on the NAS. I went through QuTS' initialization, noticed that QuTS didn't automatically import the ZFS pool, and got dissatisfied with QuTS' interface. After that, I decided to reinstall TrueNAS alongside QuTS (QuTS had erased the disk on which TrueNAS was installed, which is a different disk from the ones in the ZFS pool). And this is the current state of affairs.

The ZFS pool in question consists of three disks of 5 TB each. TrueNAS shows the pool in its interface, but is not able to mount it. The pool is named main-pool. Below are the different commands I tried and their results:

# zpool import -a
no pools available to import
# zpool status -v
  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:08 with 0 errors on Sun Sep 22 03:45:10 2024
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors
# zpool status -v main-pool
cannot open 'main-pool': no such pool
# zdb -l /dev/sd{b,c,e}
failed to unpack label 0
failed to unpack label 1
------------------------------------
LABEL 2 (Bad label cksum)
------------------------------------
    version: 5000
    name: 'main-pool'
    state: 0
    txg: 4
    pool_guid: 8298165464761202083
    errata: 0
    hostid: 1555077055
    hostname: 'nas'
    top_guid: 4887568585043273647
    guid: 12714426885291094564
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 4887568585043273647
        nparity: 1
        metaslab_array: 128
        metaslab_shift: 34
        ashift: 12
        asize: 15002922123264
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 12714426885291094564
            path: '/dev/disk/by-partuuid/8b808c7a-cacd-46d1-b400-9f8c71d51b30'
            whole_disk: 0
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 822100609061977128
            path: '/dev/disk/by-partuuid/8965dbb6-9484-4da5-ba1b-d33303b19ae5'
            whole_disk: 0
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 1731788408784132969
            path: '/dev/disk/by-partuuid/fae9eb3e-e782-4115-bbc9-cde4dc72c408'
            whole_disk: 0
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 2 3
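Since the label reports whole_disk: 0 and partition paths under /dev/disk/by-partuuid, I wonder whether the readable labels live on partitions rather than on the whole-disk nodes I passed to zdb. This is only a sketch of the read-only scan I plan to run over every candidate node, not something I have run yet:

# Look for a readable main-pool label on every disk and partition node (read-only):
for dev in /dev/sd[a-e] /dev/sd[a-e][0-9]*; do
    if zdb -l "$dev" 2>/dev/null | grep -q "name: 'main-pool'"; then
        echo "ZFS label found on $dev"
    fi
done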

Oddly, zpool import does find the pool when I point it at each disk separately by device path (and that disk then shows as ONLINE):

# zpool import -d /dev/sdb
   pool: main-pool
     id: 8298165464761202083
  state: UNAVAIL
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        main-pool                                 UNAVAIL  insufficient replicas
          raidz1-0                                UNAVAIL  insufficient replicas
            sdb                                   ONLINE
            8965dbb6-9484-4da5-ba1b-d33303b19ae5  UNAVAIL
            fae9eb3e-e782-4115-bbc9-cde4dc72c408  UNAVAIL

But not all of them together:

# zpool import -d /dev/sdb -d /dev/sdc -d /dev/sde
   pool: main-pool
     id: 8298165464761202083
  state: UNAVAIL
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        main-pool   UNAVAIL  insufficient replicas
          raidz1-0  UNAVAIL  insufficient replicas
            sdb     UNAVAIL  invalid label
            sdc     UNAVAIL  invalid label
            sde     UNAVAIL  invalid label
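For completeness, zpool import can also be pointed at a whole directory of device links rather than at individual nodes, and the pool can be imported read-only so nothing gets written while experimenting. This is only a sketch of what I am considering trying next, not something I have run:

# Scan stable device links instead of /dev/sdX nodes; import read-only and
# force past the "last accessed by another system" check (untested here):
zpool import -d /dev/disk/by-partuuid -o readonly=on -f main-pool
zpool import -d /dev/disk/by-id -o readonly=on -f main-pool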

So… Is my ZFS pool lost?

If so, then how do people recover a ZFS pool when the system crashes before the pool has been correctly exported, for instance when the disk containing the operating system fails?
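From my reading of the zpool-import man page, the normal path after an unclean shutdown is a forced import, optionally in recovery mode. The following is only a sketch of that generic procedure; I have not run these against main-pool:

# Generic recovery path after an unclean shutdown (untested here):
zpool import -f main-pool        # force import despite the missing export
zpool import -fFn main-pool      # dry run: report what a rewind would discard
zpool import -fF main-pool       # recovery mode: discard the last few transactions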

Edit: some more information below.

Here is the output of blkid:

# blkid
/dev/mapper/nvme0n1p4: UUID="ce0e9fc8-60b8-4c80-9a3b-d06e45d06ba3" TYPE="swap"
/dev/nvme0n1p1: PARTUUID="4633d564-6e46-4db6-9579-b01358dd0264"
/dev/nvme0n1p2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="6D4A-8FC7" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="1be2d0e3-7bd4-4066-863f-4db859cefd35"
/dev/nvme0n1p3: LABEL="boot-pool" UUID="17872462125275668883" UUID_SUB="9643558866770636482" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="efb94924-9001-4825-86d3-1285ee7a0701"
/dev/nvme0n1p4: PARTUUID="c8537bb8-6d57-4246-9e13-667680e23bf6"
/dev/sda1: UUID="bc485df0-39b6-10b6-7c50-b6f458a74999" UUID_SUB="e9118230-67c8-ed89-4a74-ad184d8cdea6" LABEL="9" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="73fbca5c-3870-4ed3-a7c6-dd22e58a193d"
/dev/sda2: PARTLABEL="primary" PARTUUID="31dd8d57-ac07-4128-ae67-19bda9b7be17"
/dev/sda3: PARTLABEL="primary" PARTUUID="4a1ebaf5-ef92-40ff-872b-b4281bf6f88f"
/dev/sda4: UUID="8a5a3234-6059-95c0-06aa-b64147247264" UUID_SUB="3402db43-204c-2354-97ae-9a17d51a4ce1" LABEL="13" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="41405202-70c9-44dc-a2a1-de773b808279"
/dev/sda5: UUID="32e6353a-e9ac-afda-3e0f-36e8b3d075ca" UUID_SUB="bf2c22d4-581c-5c16-461f-bcf048e22062" LABEL="322" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="b68d9269-1024-468d-b021-821e7e68150a"
/dev/sdb1: UUID="bc485df0-39b6-10b6-7c50-b6f458a74999" UUID_SUB="15681faf-51c3-dfce-ad94-cc40fd5925c5" LABEL="9" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="4ddc315a-dd88-477e-bd2f-a145286d8596"
/dev/sdb2: PARTLABEL="primary" PARTUUID="e9222737-18b8-4a3c-84cf-fefd9a5a6acf"
/dev/sdb3: PARTLABEL="primary" PARTUUID="320b0e51-d289-44e0-b7d9-c7a7d8448e53"
/dev/sdb4: UUID="8a5a3234-6059-95c0-06aa-b64147247264" UUID_SUB="1375ce0f-1448-cbc0-36d3-bdef9382d4f7" LABEL="13" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="3426a0c2-9c13-4451-a109-106d6e1aa913"
/dev/sdb5: UUID="32e6353a-e9ac-afda-3e0f-36e8b3d075ca" UUID_SUB="d4fe128b-cd0a-ef96-df43-32562bd22186" LABEL="322" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="48cfc3db-425d-4ecc-9d6f-060c78d16f82"
/dev/sdc1: UUID="bc485df0-39b6-10b6-7c50-b6f458a74999" UUID_SUB="5aa0e2f6-d6f3-f902-dbcf-fa23cf312d43" LABEL="9" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="3eba061c-a2d0-4059-ac47-b1ec9f79eadc"
/dev/sdc2: PARTLABEL="primary" PARTUUID="b508be5e-5bc3-4278-99ae-d73d6f2ad28c"
/dev/sdc3: PARTLABEL="primary" PARTUUID="47b34f9c-1dff-4c04-a5b5-4403727ec19b"
/dev/sdc4: UUID="8a5a3234-6059-95c0-06aa-b64147247264" UUID_SUB="bd86e25c-5a7b-af5a-d904-4bd2de8e64d7" LABEL="13" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="73a12459-f90d-4929-b905-22774e366a6c"
/dev/sdc5: UUID="32e6353a-e9ac-afda-3e0f-36e8b3d075ca" UUID_SUB="94f8107f-8d17-6357-d788-3b242fb13c92" LABEL="322" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="50508432-d0c8-41de-b656-da97b6eb67e3"
/dev/sdd1: UUID="bc485df0-39b6-10b6-7c50-b6f458a74999" UUID_SUB="fdb00b87-7805-ab91-c815-d18c2270b6f3" LABEL="9" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="0356a6fb-5aee-4a30-a9c3-d248d38dd33c"
/dev/sdd2: PARTLABEL="primary" PARTUUID="a6596854-3bcf-4ac6-96d1-9558369bab1c"
/dev/sdd3: PARTLABEL="primary" PARTUUID="94cf24c3-5de8-4106-a71d-b2bbe16c9344"
/dev/sdd4: UUID="8a5a3234-6059-95c0-06aa-b64147247264" UUID_SUB="5834801a-a5cf-03eb-6e9e-24123cab09f4" LABEL="13" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="5135b9b2-ce67-442b-b7eb-711a9825b084"
/dev/sdd5: UUID="32e6353a-e9ac-afda-3e0f-36e8b3d075ca" UUID_SUB="f60ec16c-97c7-f84e-e02c-dbba79d0a943" LABEL="322" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="4b76e000-b45d-48ad-aaf7-934363b13aa5"
/dev/sde1: UUID="9333eb40-8071-460b-972f-a3192d483667" BLOCK_SIZE="1024" TYPE="ext2" PARTUUID="b66e61e9-01"
/dev/sde2: LABEL="QTS_BOOT_PART2" UUID="9cb067c5-0ede-4482-950d-cdf662995f85" BLOCK_SIZE="1024" TYPE="ext2" PARTUUID="b66e61e9-02"
/dev/sde3: LABEL="QTS_BOOT_PART3" UUID="a2e549dc-4a48-4ab4-bed8-9fcd14845f13" BLOCK_SIZE="1024" TYPE="ext2" PARTUUID="b66e61e9-03"
/dev/sde5: UUID="9015507c-d233-41e0-8480-0fe81ff64be2" BLOCK_SIZE="1024" TYPE="ext2" PARTUUID="b66e61e9-05"
/dev/sde6: UUID="bbeb321e-525b-4ffb-96a3-028432d3dea8" BLOCK_SIZE="1024" TYPE="ext2" PARTUUID="b66e61e9-06"
/dev/sde7: UUID="1e9f878e-af23-48b9-96e4-1fabe75c0c81" BLOCK_SIZE="1024" TYPE="ext2" PARTUUID="b66e61e9-07"
