On Apr 29, 2010, at 5:52 AM, Tomas Ögren wrote:

> On 29 April, 2010 - Tomas Ögren sent me these 5.8K bytes:
> 
>> On 29 April, 2010 - Roy Sigurd Karlsbakk sent me these 10K bytes:
>> 
>>> I got this hint from Richard Elling, but haven't had time to test it much. 
>>> Perhaps someone else could help? 
>>> 
>>> roy 
>>> 
>>>> Interesting. If you'd like to experiment, you can change the limit of the 
>>>> number of scrub I/Os queued to each vdev. The default is 10, but that 
>>>> is too close to the normal per-vdev I/O queue limit. You can see the 
>>>> current scrub limit via: 
>>>> 
>>>> # echo zfs_scrub_limit/D | mdb -k 
>>>> zfs_scrub_limit: 
>>>> zfs_scrub_limit:10 
>>>> 
>>>> you can change it with: 
>>>> # echo zfs_scrub_limit/W0t2 | mdb -kw 
>>>> zfs_scrub_limit:0xa = 0x2 
>>>> 
>>>> # echo zfs_scrub_limit/D | mdb -k 
>>>> zfs_scrub_limit: 
>>>> zfs_scrub_limit:2 
>>>> 
>>>> In theory, this should help your scenario, but I do not believe this has 
>>>> been exhaustively tested in the lab. Hopefully, it will help. 
>>>> -- richard 
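
Side note, since the question of making this stick is bound to come up: the
mdb poke above only changes the running kernel, and as Tomas notes below the
limit is only consulted when a vdev is created. If you want the lower value
to survive a reboot, the usual Solaris 10 route would be an /etc/system
entry along these lines -- untested by me on u8, so treat it as a sketch:

  set zfs:zfs_scrub_limit = 2

It takes effect at the next boot, when the pool is imported and the per-pool
scrub ceiling is recomputed.
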
>> 
>> If I'm reading the code right, it's only used when "creating" a new vdev
>> (import, zpool create, maybe at boot).. So I took an alternate route:
>> 
>> http://pastebin.com/hcYtQcJH
>> 
>> (spa_scrub_maxinflight used to be 0x46 (70 decimal) due to 7 devices *
>> zfs_scrub_limit(10) = 70..)
>> 
>> With these lower numbers, our pool is much more responsive over NFS..
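
For the archives, in case the paste expires: if I'm reading the same code,
spa_scrub_maxinflight is a field in the pool's spa_t rather than a global
symbol, so I assume the paste does roughly this -- find the spa, print the
field's address, then write it with the same syntax as above. The pool name,
addresses, and new value below are made up for illustration; the real
numbers are in the pastebin:

  # mdb -kw
  > ::spa
  ADDR                     STATE NAME
  ffffff01c8a4f000        ACTIVE tank
  > ffffff01c8a4f000::print -a spa_t spa_scrub_maxinflight
  ffffff01c8a4f5c8 spa_scrub_maxinflight = 0x46
  > ffffff01c8a4f5c8/Z 0t14

(/Z writes 64 bits; use /W instead if the field is a 32-bit int on your
build.)
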
> 
> But taking snapshots is quite bad.. A single recursive snapshot over
> ~800 filesystems took about 45 minutes, with NFS operations taking 5-10
> seconds.. Snapshots usually take 10-30 seconds..
> 
>> scrub: scrub in progress for 0h40m, 0.10% done, 697h29m to go
> 
> scrub: scrub in progress for 1h41m, 2.10% done, 78h35m to go
> 
> This is chugging along..
> 
> The server is a Fujitsu RX300 with a Quad Xeon 1.6GHz, 6G RAM, 8x400G
> SATA through a U320 SCSI<->SATA box (Infortrend A08U-G1410), running Sol10u8.

slow disks == poor performance

> Should have enough oomph, but when you combine snapshots with a
> scrub/resilver, sync performance gets abysmal.. Should probably try
> adding a separate ZIL device when u9 comes (which allows log device
> removal), so we can take it out again if it doesn't help.

A separate log will not help.  Try faster disks.
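
If you want numbers before spending money, watch the disks while the scrub
and the NFS load overlap, e.g.

  # iostat -xn 1

If asvc_t sits in the tens of milliseconds and %b is pegged near 100 on the
data disks, the spindles are simply out of IOPS, and no separate log device
will change that.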
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com 




