
zpool mismatched replication level


I have accidentally added a mismatched raidz1 to an existing pool. Yes, I had to specify '-f' to do it, but I was expecting to need '-f' for a different reason and wasn't paying attention.

Anyhoo... how bad is it? I really just needed extra space in the pool and wanted that space to be redundant. The pool looks like this:

   NAME                       STATE      READ WRITE CKSUM
   pool_02c                   ONLINE        0     0     0
     raidz1-0                 ONLINE        0     0     0
       c0t5000C500B4AA5681d0  ONLINE        0     0     0
       c0t5000C500B4AA6A51d0  ONLINE        0     0     0
       c0t5000C500B4AABF20d0  ONLINE        0     0     0
       c0t5000C500B4AAA933d0  ONLINE        0     0     0
     raidz1-1                 ONLINE        0     0     0
       c0t5000C500B0889E5Bd0  ONLINE        0     0     0
       c0t5000C500B0BCFB13d0  ONLINE        0     0     0
       c0t5000C500B09F0C54d0  ONLINE        0     0     0
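
For reference, something like the following should show the per-vdev sizes and allocations, which makes the 4-disk vs 3-disk mismatch obvious:

zpool list -v pool_02c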

I read one other question about this kind of scenario and it stated that "both performance and space efficiency (ie: via un-optimal padding) will be affected", but that's a little vague and I'm hoping someone can give a bit more detail.

From a pool-usage standpoint, isn't data placed on the disks in vdev raidz1-0 redundant within that vdev, and data placed in raidz1-1 redundant within that vdev? And if that's the case, wouldn't performance be tied to the specific vdev the data lands on?

Where does padding come into play here, and how would it affect storage capacity? i.e., would it cause more space to be allocated, such that for every 1M I write, I use up 1.2M?
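
To make the question concrete, here is my back-of-envelope attempt, assuming ashift=12 (4 KiB sectors) and the default 128 KiB recordsize (32 data sectors per record), and assuming raidz1 allocations get rounded up to a multiple of parity + 1 = 2 sectors. Please correct me if the model is wrong:

# 3-wide raidz1-1: 32 data + ceil(32/2) = 16 parity = 48 sectors (even, no padding)
echo "scale=3; 48/32" | bc    # 1.500x raw per 1x data, same as the ideal 3/2
# 4-wide raidz1-0: 32 data + ceil(32/3) = 11 parity = 43 sectors, padded up to 44
echo "scale=3; 44/32" | bc    # 1.375x raw per 1x data, vs the ideal 4/3 ~= 1.333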

I'm not overly concerned about performance for this pool, but how does this configuration affect read/write speeds? I would expect each vdev to perform at the speed of its respective devices, so how does a replication mismatch between vdevs affect this?
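
If it helps, I can watch how reads and writes get spread across the two vdevs with something like:

# per-vdev ops and bandwidth, refreshed every 5 seconds
zpool iostat -v pool_02c 5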

As an FYI, this is on a Solaris 11.4 system. I tried to remove the vdev using:

zpool remove pool_02c raidz1-1

but I get the error:

 cannot remove device(s): not enough space to migrate data

Which seems odd since I literally just added it and haven't written anything to the pool.
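
A check along these lines should confirm that essentially nothing is allocated yet:

zpool list -o name,size,allocated,free pool_02c
zfs list -o space pool_02c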

I'm ok living with it since it seems to have given me the space I expected, but just want to better understand the devil I'll be living with.

