Hi all,

Thanks a lot for your suggestions. I have checked all of them, and neither the network itself nor any of the other checks indicated a problem.

Alas, I think I know what is going on… ehh… my current zpool has two vdevs that are not evenly sized, as shown by zpool iostat -v:

zpool iostat -v obelixData 5

                           capacity     operations    bandwidth
pool                    alloc   free   read  write   read  write
----------------------- -----  -----  -----  -----  -----  -----
obelixData              13,1T  5,84T     36    227   348K  21,5M
  c9t210000D023038FA8d0 6,25T  59,3G     21     98   269K  9,25M
  c9t210000D02305FF42d0 6,84T  5,78T     15    129  79,2K  12,3M
----------------------- -----  -----  -----  -----  -----  -----


So the small vdev is more than 99% full (only 59,3G free out of roughly 6,3T), which is very likely the root cause of this issue, especially since RAID setups tend to take a tremendous performance hit once they exceed about 90% space utilization.
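
In case it helps anyone else chasing something similar, a quick way to cross-check the per-vdev fill level is sketched below. This is only a sketch and assumes a ZFS release whose zpool list accepts -v for per-vdev output; on older releases the iostat view above is the only per-vdev breakdown you get.

zpool list -v obelixData    # per-vdev SIZE, ALLOC and FREE (newer releases also print a CAP column)

# back-of-the-envelope check from the iostat numbers above:
# the small vdev has 59,3G free out of about 6,3T total, i.e. well under 1% free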

