
zfs, possibly bad performance

I would like to confirm whether this behaviour is normal. I got a new server, and some SQL queries I am testing seem to be slightly slower than on the old machine. This is FreeBSD 14.1.

When I copy a bunch of large files between two folders on the same disk, I get the following output from zpool iostat:
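The samples below were taken at a short fixed interval, along the lines of:

zpool iostat zroot 1   # 1-second interval assumed; any short interval shows the same pattern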

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       90.6G   797G      8     61   272K  1.86M
zroot       90.6G   797G      0      0      0      0
zroot       90.6G   797G      0      0      0      0
zroot       90.6G   797G      0    125      0  1.12M
zroot       90.6G   797G      0      0      0      0
zroot       90.6G   797G      0      0      0      0
zroot       90.6G   797G      0      0      0      0
zroot       90.6G   797G      0    138      0  2.23M
zroot       90.6G   797G  2.05K      0   138M      0
zroot       90.6G   797G  1.66K      0   122M      0
zroot       90.6G   797G  1.80K  1.22K  98.4M  91.7M
zroot       91.0G   797G  3.75K  3.37K   189M   249M
zroot       91.0G   797G  3.19K  2.46K   142M   240M
zroot       91.2G   797G  2.53K    593   108M  49.9M
zroot       91.2G   797G  1.84K      0  86.9M      0
zroot       91.2G   797G  3.77K  2.81K   181M   268M
zroot       91.2G   797G  2.20K      0  98.5M      0
zroot       91.5G   797G  3.27K  3.02K   159M   295M
zroot       91.5G   797G  2.63K      0   144M      0
zroot       91.4G   797G  3.52K  3.46K   204M   326M
zroot       91.4G   797G  2.26K      0   128M      0
zroot       91.4G   797G  1.85K  1.29K   112M   117M
zroot       91.4G   797G  2.67K  2.75K   118M   244M
zroot       91.4G   797G  1.43K      0  51.6M      0
zroot       91.4G   797G  1.84K    873  74.8M  84.0M
zroot       91.3G   797G  2.99K  1.86K   161M   188M
zroot       91.3G   797G  1.40K      0  70.7M      0
zroot       91.3G   797G  1.81K  1.03K  79.6M  97.5M
zroot       91.6G   796G  2.84K  2.22K   137M   212M
zroot       91.6G   796G  1.50K      0  81.5M      0
zroot       91.6G   796G  1.42K      0  73.9M      0
zroot       91.5G   797G  2.94K  3.28K   159M   315M
zroot       91.5G   797G  2.36K      0   121M      0
zroot       91.5G   797G  3.80K  3.58K   173M   331M
zroot       91.5G   797G  2.84K      0   123M      0
zroot       91.3G   797G  2.74K  2.84K   175M   275M
zroot       91.3G   797G  2.35K      0   108M      0
zroot       91.3G   797G  3.86K  3.51K   170M   326M
zroot       91.3G   797G  2.23K      0  93.8M      0
zroot       91.5G   796G  3.52K  2.74K   171M   274M
zroot       91.5G   796G  1.93K      0   106M      0
zroot       91.5G   796G  1.90K      0   116M      0
zroot       91.5G   797G  2.85K  3.90K   188M   347M
zroot       91.5G   797G  1.50K      0  90.2M      0
zroot       91.5G   797G  1.97K      0   131M      0
zroot       91.5G   796G  3.26K  4.83K   215M   405M
zroot       91.5G   796G  2.53K      0   161M      0
zroot       91.6G   796G  3.15K  4.66K   220M   399M
zroot       91.6G   796G  2.65K      0   150M      0
zroot       91.4G   797G  3.66K  4.39K   202M   381M

All the zeroes in the read and write columns worry me. On the old server there are no zeroes and the copy speed is consistent; the new server peaks at 189M read, while the old one holds a steady ~80M. It looks as if reads and writes alternate on the new server, with reads saturating the disk at the expense of writes. Is there any ZFS setting (I am on defaults) that I could change to improve this? The disk is an NVMe SSD at OVH, in a single-drive zfs pool.
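For reference, the properties and tunables that seem most relevant can be inspected like this (standard OpenZFS commands; the vfs.zfs.* sysctl names are how OpenZFS exposes its tunables on FreeBSD, so treat the exact names as an assumption):

zfs get recordsize,compression,atime,primarycache zroot   # dataset properties that affect large sequential copies
sysctl vfs.zfs.txg.timeout                                # seconds between transaction-group flushes
sysctl vfs.zfs.dirty_data_max                             # ceiling on buffered dirty data before writes throttle

The pattern of read-only intervals followed by combined read/write bursts looks consistent with writes being batched into transaction groups, which is why those two sysctls are the ones I would check first.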

