Currently the pool is about 20% full:

# zpool list pool01
NAME     SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
pool01  65.2T  15.4T  49.9T         -    23%  1.00x  ONLINE  -
#
The old data and new data will be used equally after adding the vdev. The FS holds tens of thousands of small images (~500KB) that are read, written, and added to depending on what customers are doing. It's pretty heavy on the file system: about 800 IOPS, going up to 1500 IOPS at times. Performance is important.

On Wed, Feb 20, 2013 at 3:48 PM, Tim Cook <t...@cook.ms> wrote:
>
> On Wed, Feb 20, 2013 at 5:46 PM, Bob Friesenhahn <
> bfrie...@simple.dallas.tx.us> wrote:
>
>> On Thu, 21 Feb 2013, Sašo Kiselkov wrote:
>>
>>> On 02/21/2013 12:27 AM, Peter Wood wrote:
>>>
>>>> Will adding another vdev hurt the performance?
>>>
>>> In general, the answer is: no. ZFS will try to balance writes to
>>> top-level vdevs in a fashion that assures even data distribution. If
>>> your data is equally likely to be hit in all places, then you will not
>>> incur any performance penalties. If, OTOH, newer data is more likely to
>>> be hit than old data, then yes, newer data will be served from fewer
>>> spindles. In that case it is possible to do a send/receive of the
>>> affected datasets into new locations and then rename them.
>>
>> You have this reversed. The older data is served from fewer spindles
>> than data written after the new vdev is added. Performance with the
>> newer data should be improved.
>>
>> Bob
>
> That depends entirely on how full the pool is when the new vdev is added,
> and how frequently the older data changes, snapshots, etc.
>
> --Tim
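For reference, the send/receive rebalance Sašo describes would look roughly like this. It's only a minimal sketch: the dataset name pool01/images and the snapshot names are hypothetical, and it assumes the writers can be paused briefly for the final incremental.

# zfs snapshot pool01/images@move1
# zfs send pool01/images@move1 | zfs receive pool01/images-new

(pause the writers, catch up with an incremental send, then swap the names)

# zfs snapshot pool01/images@move2
# zfs send -i @move1 pool01/images@move2 | zfs receive -F pool01/images-new
# zfs rename pool01/images pool01/images-old
# zfs rename pool01/images-new pool01/images

Because the receive rewrites every block, the new copy gets striped across all top-level vdevs, old and new, so reads of the existing images are spread over all spindles again.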