ZFS import unable to find any pools on RockyLinux after Reboot

I had a single ZFS pool made up of 8 drives (RAIDZ2, i.e. 6 data + 2 parity). After the server was without power for ~5 days, the pool no longer shows up where I had mounted it. When I first access the server I have to run sudo /sbin/modprobe zfs before zfs list or zpool status will work.

$ zfs --version
zfs-2.0.7-1
zfs-kmod-2.0.7-1
$ uname -r
4.18.0-348.20.1.el8_5.x86_64
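One thing worth checking (a sketch, assuming the module comes from a kmod RPM rather than DKMS) is whether the zfs module on disk was actually built against the running kernel, since a mismatch after a kernel update could also explain why it never loads on its own:

# show which module file would be loaded and the kernel it was built for
$ modinfo zfs | grep -E '^(filename|version|vermagic)'
# if DKMS is in use instead, list the state of any DKMS-built modules
$ dkms status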

I tried the following commands to import the pool:

$ sudo zpool import -D -f tpool
cannot import 'tpool': no such pool available
$ sudo zpool import -a
no pools available to import
$ zfs list
no datasets available
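For reference, zpool import can also be pointed at a specific device directory with -d; since the zdb output below shows a mix of /dev/sdX1 and /dev/disk/by-id paths, something along these lines might behave differently if the cached device names have changed (I am not certain it applies here):

# scan a specific directory for pool member labels instead of the default search path
$ sudo zpool import -d /dev/disk/by-id
# if the pool shows up in that listing, import it by name
$ sudo zpool import -d /dev/disk/by-id tpool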

I can run zdb and I get:

$ zdb
tpool:
    version: 5000
    name: 'tpool'
    state: 0
    txg: 7165299
    pool_guid: 11415603756597526308
    errata: 0
    hostname: 'cms-Rocky'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 11415603756597526308
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 10941203445809909102
            nparity: 2
            metaslab_array: 138
            metaslab_shift: 34
            ashift: 12
            asize: 112004035510272
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 129
            children[0]:
                type: 'disk'
                id: 0
                guid: 4510750026254274869
                path: '/dev/sdd1'
                devid: 'ata-WDC_WD140EDGZ-11B1PA0_9LK5RGEG-part1'
                phys_path: 'pci-0000:02:00.0-sas-phy2-lun-0'
                whole_disk: 1
                DTL: 11590
                create_txg: 4
                expansion_time: 1713624189
                com.delphix:vdev_zap_leaf: 130
            children[1]:
                type: 'disk'
                id: 1
                guid: 11803937638201902428
                path: '/dev/sdb1'
                devid: 'ata-WDC_WD140EDGZ-11B2DA2_3WKJ6Z8K-part1'
                phys_path: 'pci-0000:02:00.0-sas-phy0-lun-0'
                whole_disk: 1
                DTL: 11589
                create_txg: 4
                expansion_time: 1713624215
                com.delphix:vdev_zap_leaf: 131
            children[2]:
                type: 'disk'
                id: 2
                guid: 3334214933689119148
                path: '/dev/sdc1'
                devid: 'ata-WDC_WD140EFGX-68B0GN0_9LJYYK5G-part1'
                phys_path: 'pci-0000:02:00.0-sas-phy1-lun-0'
                whole_disk: 1
                DTL: 11588
                create_txg: 4
                expansion_time: 1713624411
                com.delphix:vdev_zap_leaf: 132
            children[3]:
                type: 'disk'
                id: 3
                guid: 1676946692400057901
                path: '/dev/sda1'
                devid: 'ata-WDC_WD140EDGZ-11B1PA0_9LJT82UG-part1'
                phys_path: 'pci-0000:02:00.0-sas-phy3-lun-0'
                whole_disk: 1
                DTL: 11587
                create_txg: 4
                expansion_time: 1713624185
                com.delphix:vdev_zap_leaf: 133
            children[4]:
                type: 'disk'
                id: 4
                guid: 8846690516261376704
                path: '/dev/disk/by-id/ata-WDC_WD140EDGZ-11B1PA0_9MJ336JT-part1'
                devid: 'ata-WDC_WD140EDGZ-11B1PA0_9MJ336JT-part1'
                phys_path: 'pci-0000:02:00.0-sas-phy4-lun-0'
                whole_disk: 1
                DTL: 386
                create_txg: 4
                expansion_time: 1713624378
                com.delphix:vdev_zap_leaf: 384
            children[5]:
                type: 'disk'
                id: 5
                guid: 6800729939507461166
                path: '/dev/disk/by-id/ata-WDC_WD140EDGZ-11B1PA0_9LK5RP5G-part1'
                devid: 'ata-WDC_WD140EDGZ-11B1PA0_9LK5RP5G-part1'
                phys_path: 'pci-0000:02:00.0-sas-phy5-lun-0'
                whole_disk: 1
                DTL: 388
                create_txg: 4
                expansion_time: 1713623930
                com.delphix:vdev_zap_leaf: 385
            children[6]:
                type: 'disk'
                id: 6
                guid: 3896010615790154775
                path: '/dev/sdg1'
                devid: 'ata-WDC_WD140EDGZ-11B2DA2_2PG07PYJ-part1'
                phys_path: 'pci-0000:02:00.0-sas-phy6-lun-0'
                whole_disk: 1
                DTL: 11585
                create_txg: 4
                expansion_time: 1713624627
                com.delphix:vdev_zap_leaf: 136
            children[7]:
                type: 'disk'
                id: 7
                guid: 10254148652571546436
                path: '/dev/sdh1'
                devid: 'ata-WDC_WD140EDGZ-11B2DA2_2CJ292BJ-part1'
                phys_path: 'pci-0000:02:00.0-sas-phy7-lun-0'
                whole_disk: 1
                DTL: 11584
                create_txg: 4
                expansion_time: 1713624261
                com.delphix:vdev_zap_leaf: 137
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data

So I know the drives are still there and the pool configuration is intact (the Avago utility at boot also shows all drives present). However, lsblk only returns the main SSD the server runs on.
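To narrow down whether this is a ZFS problem or the kernel simply not seeing the drives at all, a few generic (non-ZFS) checks along these lines seem worth running:

# list every block device the kernel currently knows about
$ lsblk -o NAME,SIZE,MODEL,SERIAL
# stable device links; the pool members should appear here if the kernel sees them
$ ls -l /dev/disk/by-id/
# kernel messages about the HBA and attached disks (driver name may differ)
$ sudo dmesg | grep -iE 'sas|mpt|sd[a-h]'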

So I am unsure how to get my zpool back after the power outage. Any ideas on what I can run next, or how else I can troubleshoot this?

Also, why do I have to keep running sudo /sbin/modprobe zfs after every reboot?
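My working guess on the modprobe question: the module is normally loaded at boot by systemd-modules-load from a config file, so if no such file exists the module only loads when something runs modprobe manually. A minimal sketch, assuming the module itself is installed correctly and the unit names are the ones shipped by the OpenZFS 2.0 packages:

# tell systemd-modules-load to load zfs at every boot
$ echo zfs | sudo tee /etc/modules-load.d/zfs.conf
# make sure the import/mount services run at boot so the pool comes back automatically
$ sudo systemctl enable zfs-import-cache zfs-import.target zfs-mount zfs.target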

UPDATE

$ sudo blkid
[sudo] password for chrs987:
/dev/sda1: UUID="bfa9805a-ea03-4618-ad4b-dffaa4e24474" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="fa0465e1-01"
/dev/sda2: UUID="wox6DN-0oFz-vQNy-bxLe-4VG8-qUfB-mek3fc" TYPE="LVM2_member" PARTUUID="fa0465e1-02"
/dev/mapper/cl-root: UUID="b80d8c8d-2288-46b9-b0b5-376ac194aabd" BLOCK_SIZE="4096" TYPE="xfs"
/dev/mapper/cl-swap: UUID="6c348d6d-d7bd-4c78-8aee-b683f9264dc4" TYPE="swap"
/dev/mapper/cl-home: UUID="734c6bfc-afd5-4523-9818-43786aef06aa" BLOCK_SIZE="4096" TYPE="xfs"

$ sudo fdisk -l
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xfa0465e1

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048   2099199   2097152     1G 83 Linux
/dev/sda2       2099200 488396799 486297600 231.9G 8e Linux LVM

Disk /dev/mapper/cl-root: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/cl-swap: 19.7 GiB, 21151875072 bytes, 41312256 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/cl-home: 162.2 GiB, 174143307776 bytes, 340123648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

$ lsscsi
[0:0:0:0]    disk    ATA      CT250MX500SSD1   023   /dev/sda
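Since lsscsi only shows the boot SSD, the eight data drives apparently are not being presented to the kernel at all, which would explain why zpool import finds nothing. A speculative next step (assuming the HBA itself is healthy and just needs a rescan) would be to force a SCSI bus rescan without rebooting:

# ask every SCSI host adapter to rescan its bus ("- - -" = all channels/targets/LUNs)
$ for scanfile in /sys/class/scsi_host/host*/scan; do
>     echo '- - -' | sudo tee "$scanfile"
> done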
