> There is snapshot of metaslab layout, the last 51 metaslabs have 64G free
> space.

After we added all the disks to our system we had lots of free metaslabs, but that didn't seem to matter. I don't know whether the system was attempting to balance the writes across more of our devices, but whatever the reason, the free-space percentage didn't seem to matter. All that mattered was changing the value of the min_alloc tunable.
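For what it's worth, on a build that supports it the change would look roughly like this. This is only a sketch: I'm assuming the tunable in question is `metaslab_min_alloc_size`, and the 4K value is a placeholder for illustration, not a recommendation from this thread.

```shell
# Assumption: the tunable is metaslab_min_alloc_size (present in
# OpenSolaris builds around b148; NOT available in Solaris 10u8).

# Inspect the current value on a live kernel:
echo 'metaslab_min_alloc_size/K' | mdb -k

# Change it at runtime (mdb takes the value in hex; 1000 = 0x1000 = 4K):
echo 'metaslab_min_alloc_size/Z 1000' | mdb -kw

# To make it persist across reboots, add a line to /etc/system:
#   set zfs:metaslab_min_alloc_size = 0x1000
```

A runtime change via `mdb -kw` takes effect immediately but is lost on reboot; the `/etc/system` entry is the persistent form.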
You seem to have gotten a lot deeper into this analysis than I did, so I'm not sure I can really add anything. Since 10u8 doesn't support that tunable, I'm not really sure where to go from there. If you can take the pool offline, you might try connecting it to a b148 box and seeing whether that tunable makes a difference. Beyond that I don't really have any suggestions.

Your problem description, including the return of performance when freeing space, is _identical_ to the problem we had. After checking every single piece of hardware, replacing countless pieces, and removing COMSTAR and other pieces from the puzzle, the only change that helped was changing that tunable. I wish I could be of more help, but I have not had the time to dive into the ZFS code with any gusto.

-Don

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss