I'm seeing some strange behaviour when using Docker together with ZFS.
I have one pool with 2x10TB HDDs as a mirror vdev and 2x250GB NVMe SSDs as a mirrored special vdev (to store metadata and datasets with small config files, etc.).
I decided to put my Docker root (containers/images) in a ZFS dataset using the zfs storage driver:
/etc/docker/daemon.json:
{ "data-root": "/docker-root", "storage-driver": "zfs" }
I also set the recordsize of the /docker-root dataset to 32K and special_small_blocks to 128K, to make sure all files stored in the dataset get allocated on the special vdev with the fast NVMe drives. (My understanding is that if you set recordsize equal to or smaller than special_small_blocks, all blocks of the dataset go to the special vdev.) The commands I used are shown below.
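For reference, the properties were set along these lines (the dataset name here is illustrative, the actual dataset is the one mounted at /docker-root):
zfs set recordsize=32K storage-pool-01/docker-root
zfs set special_small_blocks=128K storage-pool-01/docker-root
zfs get recordsize,special_small_blocks storage-pool-01/docker-root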
But the problem is that I still see quite a lot of writes to the HDD vdev when starting and stopping containers, as well as frequent writes while containers are running, which is not the case when I stop all containers. I also made sure that all the containers' mounted volumes live in a config dataset that is configured the same way, so its files should also end up on the special vdev.
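(I'm watching the per-vdev write activity with something like: zpool iostat -v storage-pool-01 5 — which breaks the write ops down by mirror-0 on the HDDs vs. the special mirror-1 on the NVMe drives.)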
The atime property is also off for datasets in the pool.
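(This can be verified recursively with: zfs get -r atime storage-pool-01)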
Why is this still happening even though I'm forcing all Docker-related files to be stored on the special vdev? Is this expected, or am I missing something?
Layout of the pool:
NAME                                              SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
storage-pool-01                                  9.32T  6.50T  2.82T       -         -    0%   69%  1.00x  ONLINE  -
  mirror-0                                       9.09T  6.41T  2.68T       -         -    0%  70.5%     -  ONLINE
    ata-ST10000NE000-xxxxxx_xxxxxxx1                 -      -      -       -         -     -     -      -  ONLINE
    ata-ST20000NM007D-xxxxxx_xxxxxxx2                -      -      -       -         -     -     -      -  ONLINE
special                                              -      -      -       -         -     -     -      -  -
  mirror-1                                        232G  91.5G   140G       -         -   29%  39.4%     -  ONLINE
    nvme-Samsung_SSD_980_PRO_250GB_xxxxxxxxxxxxxx1   -      -      -       -         -     -     -      -  ONLINE
    nvme-Samsung_SSD_980_PRO_250GB_xxxxxxxxxxxxxx2   -      -      -       -         -     -     -      -  ONLINE
Thanks in advance!