I have a ZFS mirror pool in my Proxmox instance. I am trying to get it to mount to an LXC container read-only (preferably not giving full filesystem access) so I can have Homepage detect the used and free space properly for monitoring purposes.
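(For reference, the read-only mount I have in mind is a Proxmox container mount point, not full filesystem passthrough; the container ID and paths below are placeholders for illustration, not my actual config:)

```shell
# Hypothetical sketch: expose a dataset to an LXC container read-only.
# 102, /nvme/subvol-102-disk-0 and /mnt/nvme are placeholders;
# ro=1 makes the mount point read-only inside the container.
pct set 102 -mp0 /nvme/subvol-102-disk-0,mp=/mnt/nvme,ro=1
```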
Other Server Fault questions and answers have reported that `df -hT` should report the used space of ZFS filesystems properly (minus parity data, and possibly compression or dedup savings, but I'm not looking for 100% accuracy here). For me, though, `df -hT` reports the total space as available and the used space as 128K:
```
root@pve:~# df -hT /nvme
Filesystem     Type  Size  Used Avail Use% Mounted on
nvme           zfs   590G  128K  590G   1% /nvme
```
`/nvme` is the top-level ZFS dataset, is it not? Why does the space seem to be so oddly calculated across these datasets?
```
root@pve:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
nvme                     309G   590G   128K  /nvme
nvme/base-103-disk-0    43.4G   631G  2.19G  -
nvme/nvmecheck            96K   590G    96K  /mnt/nvmecheck
nvme/subvol-102-disk-0   554M  1.46G   554M  /nvme/subvol-102-disk-0
nvme/subvol-105-disk-0  1.71G  2.29G  1.71G  /nvme/subvol-105-disk-0
nvme/subvol-106-disk-0   625M  1.39G   625M  /nvme/subvol-106-disk-0
nvme/subvol-107-disk-0  3.37G  11.6G  3.37G  /nvme/subvol-107-disk-0
nvme/subvol-108-disk-0  1.18G  1.82G  1.18G  /nvme/subvol-108-disk-0
nvme/subvol-109-disk-0   533M  1.48G   533M  /nvme/subvol-109-disk-0
nvme/vm-100-disk-0      43.3G   624G  8.96G  -
nvme/vm-101-disk-0       132G   607G   115G  -
nvme/vm-104-disk-0      82.5G   661G  11.6G  -
```
FYI, `nvme/nvmecheck` is a dataset I made to see how child datasets would be calculated. Its REFER value is even stranger.
Can anyone tell me what is going on? Maybe it's Debian's ZFS implementation? Or something wrong with `df`?
`zpool status` warns me that my zpool is missing some features and that I can run `zpool upgrade`, but I'm unsure whether that is safe. My Proxmox is using the new boot tool, not the legacy one, so that should not be a concern.
ZFS's `USEDDS` property may be related, but I'm unsure:
```
root@pve:~# zfs list -o space nvme
NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
nvme   590G  309G        0B    128K             0B       309G
```
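If I read zfsprops(7) correctly, these columns should satisfy the identity USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD, which the rounded numbers above do appear to (the byte values below are illustrative, not my exact ones):

```shell
# used = usedbysnapshots + usedbydataset + usedbyrefreservation + usedbychildren
# Illustrative byte values matching the rounded output above:
usedsnap=0
usedds=131072              # 128K: the root dataset itself holds almost nothing
usedrefreserv=0
usedchild=331712626688     # ~309G: everything lives in the children
used=$((usedsnap + usedds + usedrefreserv + usedchild))
echo "used=$used"
```

So the 309G is entirely USEDCHILD, while the root dataset itself accounts for just 128K.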
As you can see below, ZFS has a few properties that this 128K may also derive from.
```
root@pve:~# zfs get all nvme
NAME  PROPERTY              VALUE                  SOURCE
nvme  type                  filesystem             -
nvme  creation              Sat Jul 22 20:53 2023  -
nvme  used                  309G                   -
nvme  available             590G                   -
nvme  referenced            128K                   -
nvme  compressratio         1.13x                  -
nvme  mounted               yes                    -
nvme  quota                 none                   default
nvme  reservation           none                   default
nvme  recordsize            128K                   default
nvme  mountpoint            /nvme                  default
nvme  sharenfs              off                    default
nvme  checksum              on                     default
nvme  compression           on                     local
nvme  atime                 off                    local
nvme  devices               on                     default
nvme  exec                  on                     default
nvme  setuid                on                     default
nvme  readonly              off                    default
nvme  zoned                 off                    default
nvme  snapdir               hidden                 default
nvme  aclmode               discard                default
nvme  aclinherit            restricted             default
nvme  createtxg             1                      -
nvme  canmount              on                     default
nvme  xattr                 on                     default
nvme  copies                1                      default
nvme  version               5                      -
nvme  utf8only              off                    -
nvme  normalization         none                   -
nvme  casesensitivity       sensitive              -
nvme  vscan                 off                    default
nvme  nbmand                off                    default
nvme  sharesmb              off                    default
nvme  refquota              none                   default
nvme  refreservation        none                   default
nvme  guid                  [redacted]             -
nvme  primarycache          all                    local
nvme  secondarycache        all                    default
nvme  usedbysnapshots       0B                     -
nvme  usedbydataset         128K                   -
nvme  usedbychildren        309G                   -
nvme  usedbyrefreservation  0B                     -
nvme  logbias               latency                default
nvme  objsetid              54                     -
nvme  dedup                 off                    default
nvme  mlslabel              none                   default
nvme  sync                  standard               default
nvme  dnodesize             legacy                 default
nvme  refcompressratio      1.00x                  -
nvme  written               128K                   -
nvme  logicalused           164G                   -
nvme  logicalreferenced     54.5K                  -
nvme  volmode               default                default
nvme  filesystem_limit      none                   default
nvme  snapshot_limit        none                   default
nvme  filesystem_count      none                   default
nvme  snapshot_count        none                   default
nvme  snapdev               hidden                 default
nvme  acltype               off                    default
nvme  context               none                   default
nvme  fscontext             none                   default
nvme  defcontext            none                   default
nvme  rootcontext           none                   default
nvme  relatime              on                     default
nvme  redundant_metadata    all                    default
nvme  overlay               on                     default
nvme  encryption            off                    default
nvme  keylocation           none                   default
nvme  keyformat             none                   default
nvme  pbkdf2iters           0                      default
nvme  special_small_blocks  0                      default
```
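For what it's worth, for the Homepage monitoring I may end up bypassing `df` entirely and computing the numbers from ZFS properties instead; a rough sketch, assuming `zfs get -Hp` output in its usual tab-separated, raw-bytes form (the sample values here are made up):

```shell
# Parse machine-readable `zfs get -Hp used,available nvme` output and derive
# totals. $out is a hardcoded sample; in practice you would capture it with:
#   out=$(zfs get -Hp used,available nvme)
out=$(printf 'nvme\tused\t331712757760\t-\nnvme\tavailable\t633507807232\t-\n')
used=$(printf '%s\n' "$out"  | awk -F'\t' '$2=="used"{print $3}')
avail=$(printf '%s\n' "$out" | awk -F'\t' '$2=="available"{print $3}')
total=$((used + avail))     # pool-wide total as ZFS sees it for this dataset
echo "used=$used avail=$avail total=$total"
```

That would at least sidestep whatever `df` is doing, though I'd still like to understand the discrepancy.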
My zpool properties:
```
root@pve:~# zpool get all nvme
NAME  PROPERTY                       VALUE       SOURCE
nvme  size                           928G        -
nvme  capacity                       15%         -
nvme  altroot                        -           default
nvme  health                         ONLINE      -
nvme  guid                           [redacted]  -
nvme  version                        -           default
nvme  bootfs                         -           default
nvme  delegation                     on          default
nvme  autoreplace                    off         default
nvme  cachefile                      -           default
nvme  failmode                       wait        default
nvme  listsnapshots                  off         default
nvme  autoexpand                     off         default
nvme  dedupratio                     1.00x       -
nvme  free                           782G        -
nvme  allocated                      146G        -
nvme  readonly                       off         -
nvme  ashift                         12          local
nvme  comment                        -           default
nvme  expandsize                     -           -
nvme  freeing                        0           -
nvme  fragmentation                  12%         -
nvme  leaked                         0           -
nvme  multihost                      off         default
nvme  checkpoint                     -           -
nvme  load_guid                      [redacted]  -
nvme  autotrim                       off         default
nvme  compatibility                  off         default
nvme  bcloneused                     0           -
nvme  bclonesaved                    0           -
nvme  bcloneratio                    1.00x       -
nvme  feature@async_destroy          enabled     local
nvme  feature@empty_bpobj            active      local
nvme  feature@lz4_compress           active      local
nvme  feature@multi_vdev_crash_dump  enabled     local
nvme  feature@spacemap_histogram     active      local
nvme  feature@enabled_txg            active      local
nvme  feature@hole_birth             active      local
nvme  feature@extensible_dataset     active      local
nvme  feature@embedded_data          active      local
nvme  feature@bookmarks              enabled     local
nvme  feature@filesystem_limits      enabled     local
nvme  feature@large_blocks           enabled     local
nvme  feature@large_dnode            enabled     local
nvme  feature@sha512                 enabled     local
nvme  feature@skein                  enabled     local
nvme  feature@edonr                  enabled     local
nvme  feature@userobj_accounting     active      local
nvme  feature@encryption             enabled     local
nvme  feature@project_quota          active      local
nvme  feature@device_removal         enabled     local
nvme  feature@obsolete_counts        enabled     local
nvme  feature@zpool_checkpoint       enabled     local
nvme  feature@spacemap_v2            active      local
nvme  feature@allocation_classes     enabled     local
nvme  feature@resilver_defer         enabled     local
nvme  feature@bookmark_v2            enabled     local
nvme  feature@redaction_bookmarks    enabled     local
nvme  feature@redacted_datasets      enabled     local
nvme  feature@bookmark_written       enabled     local
nvme  feature@log_spacemap           active      local
nvme  feature@livelist               enabled     local
nvme  feature@device_rebuild         enabled     local
nvme  feature@zstd_compress          enabled     local
nvme  feature@draid                  enabled     local
nvme  feature@zilsaxattr             disabled    local
nvme  feature@head_errlog            disabled    local
nvme  feature@blake3                 disabled    local
nvme  feature@block_cloning          disabled    local
nvme  feature@vdev_zaps_v2           disabled    local
```