> Another (less satisfying) workaround is to increase the amount of free space
> in the pool, either by reducing usage or adding more storage. Observed
> behavior is that allocation is fast until usage crosses a threshold, then
> performance hits a wall.

We actually tried this solution. We were at 70% usage when performance hit a wall. We figured it was caused by the change in fit algorithm, so we added 16 2TB disks in mirrors (adding 16TB to an 18TB pool). It made almost no difference to our pool performance. It wasn't until we told the metaslab allocator to stop looking for such large chunks of free space that the problem went away.
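For anyone wanting to try the same thing: the post doesn't name the exact knob, but the tunable commonly adjusted for this on OpenSolaris/Illumos-era kernels is metaslab_min_alloc_size. This is a config fragment, not a tested recipe — it assumes that variable exists in your kernel's zfs module:

```shell
# Lower the size of the contiguous free region the allocator hunts for
# before giving up on a metaslab (default is 10MB; 0x1000 = 4KB).
# On a fragmented pool this stops the long searches for big chunks.
echo "metaslab_min_alloc_size/Z 1000" | mdb -kw

# To persist across reboots, the equivalent /etc/system line would be:
#   set zfs:metaslab_min_alloc_size = 0x1000
```

Lowering it trades some allocation contiguity for not walking every metaslab looking for space that no longer exists.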
> The original poster's pool is about 78% full. If possible, try freeing
> stuff until usage goes back under 75% or 70% and see if your performance
> returns.

Freeing stuff did fix the problem for us (temporarily), but only in an indirect way. When we freed up a bunch of space, the metaslab allocator was able to find large enough free blocks to write to without searching all over the place. That fixed the performance problem until those large free blocks were used up. Then, even though we were below the earlier usage threshold, the performance problem returned.

-Don

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss