On May 13, 2011, at 11:25 AM, Donald Stahl <d...@blacksun.org> wrote:
> Running a zpool scrub on our production pool is showing a scrub rate
> of about 400K/s. (When this pool was first set up we saw rates in the
> MB/s range during a scrub.)

The scrub I/O has lower priority than other I/O. In later ZFS releases,
scrub I/O is also throttled. When the throttle kicks in, the scrub can
drop to 5-10 IOPS. This shouldn't be much of an issue: scrubs do not
need to be, and are not intended to be, run very often -- perhaps once
a quarter or so.
 -- richard

> Both zpool iostat and iostat -xn show lots of idle disk time, no
> above-average service times, and no abnormally high busy percentages.
>
> Load on the box is 0.59.
>
> 8 x 3 GHz, 32 GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
>
> Known hardware errors:
> - 1 of 8 SAS lanes is down, though we've seen the same poor
>   performance when using the backup where all 8 lanes work.
> - Target 44 occasionally throws an error (less than once a week).
>   When this happens the pool becomes unresponsive for a second, then
>   continues working normally.
>
> Read performance when reading off the file system (including cache,
> using dd with a 1 MB block size) is 1.6 GB/s. zpool iostat shows
> numerous reads of 500 MB/s during this test.
>
> I'm willing to consider that hardware could be the culprit here, but
> I would expect to see signs if that were the case. The lack of any
> slow service times and the lack of any effort at disk I/O both seem
> to point elsewhere.
>
> I will provide any additional information people might find helpful
> and will, if possible, test any suggestions.
>
> Thanks in advance,
> -Don
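
For reference: on illumos-based builds such as OI 147, the scrub
throttle mentioned above is governed by a handful of kernel tunables
(zfs_scrub_delay and zfs_scan_idle among them). A quick way to inspect
them on a live system, and cautiously relax the delay, is something
like the following sketch -- assuming mdb is available and you accept
that an unthrottled scrub will compete harder with foreground I/O:

    # Print the current values of the scrub throttle tunables:
    echo "zfs_scrub_delay/D" | mdb -k
    echo "zfs_scan_idle/D" | mdb -k

    # Temporarily set the inter-I/O scrub delay to 0 (unthrottled).
    # This change is live-only and reverts on reboot.
    echo "zfs_scrub_delay/W 0" | mdb -kw

If the scrub still crawls at 400K/s with the delay zeroed, the throttle
is not the whole story and the hardware path deserves another look.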
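
On the idle-disk observation: it helps to watch the pool and the
physical disks side by side while the scrub runs, to distinguish "ZFS
is issuing too little I/O" from "the disks are servicing it slowly".
Something along these lines, at 5-second intervals ("tank" is a
stand-in for the real pool name):

    # Per-vdev read/write ops and bandwidth:
    zpool iostat -v tank 5

    # Per-device service times (asvc_t) and %b, in another terminal:
    iostat -xn 5

    # Scrub progress and estimated time to completion:
    zpool status tank

If asvc_t stays low and %b sits near zero while the scrub rate is
400K/s, ZFS simply isn't issuing the I/O, which points at the throttle
rather than the disks.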
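
And on the dd sanity check: for anyone wanting to reproduce the
1.6 GB/s number, a cached sequential read looks roughly like the
sketch below (the file path is a placeholder; Solaris-style dd spells
a 1 MB block size as 1024k):

    # Sequential read of a large file through the ARC, 1 MB records:
    dd if=/tank/fs/bigfile of=/dev/null bs=1024k

    # To approximate an uncached read, use a file much larger than
    # the ARC, or export and re-import the pool to drop cached data.

Keep in mind that a cached dd mostly measures memory bandwidth, while
a scrub issues scattered reads across all 96 spindles, so the two
numbers are not directly comparable.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss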