Indeed. Just recently I had very noticeable performance issues
on a ZFS pool that became 75% full. Performance basically fell off
a cliff; I didn't expect that until well above 80%. This was Solaris
10u8 or thereabouts. I deleted unneeded files to get back to 55%, and
performance is excellent again.
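
For anyone watching for this, a quick way to see how full a pool is
(the pool name "tank" below is just a placeholder):

  # the CAP column shows the percentage of pool space in use
  zpool list tank

Once CAP creeps into the 75-80% range, it's worth freeing space or
expanding the pool before allocation performance degrades.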

>-- Original Message --
>Date: Thu, 27 Oct 2011 14:33:25 -0700
>From: Erik Trimble <tr...@netdemons.com>
>To: zfs-discuss@opensolaris.org
>Subject: Re: [zfs-discuss] Poor relative performance of SAS over SATA drives
>
>
>It occurs to me that your filesystems may not be in the same state.
>
>That is, destroy both pools.  Recreate them, and run the tests. This 
>will eliminate any possibility of allocation issues.
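>
>For instance (pool and device names below are placeholders, and zpool
>destroy is irreversible, so double-check the names first):
>
>  # rebuild both pools so each test starts from a fresh on-disk layout
>  zpool destroy sas_pool
>  zpool destroy sata_pool
>  zpool create sas_pool  c0t0d0 c0t1d0
>  zpool create sata_pool c0t2d0 c0t3d0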
>
>-Erik
>
>On 10/27/2011 10:37 AM, weiliam.hong wrote:
>> Hi,
>>
>> Thanks for the replies. In the beginning, I only had the SAS drives 
>> installed when I observed the behavior; the SATA drives were added 
>> later for comparison and troubleshooting.
>>
>> The slow behavior is observed only after 10-15 minutes of running dd,
>> where the file size is about 15 GB; then the throughput drops suddenly
>> from 70 to 50 to 20 to <10 MB/s in a matter of seconds and never
>> recovers.
>>
>> This couldn't be right no matter how I look at it.
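>>
>> A minimal sketch of the kind of run that shows it (pool and file names
>> are placeholders):
>>
>>   # sequential write; watch per-second throughput in another terminal
>>   dd if=/dev/zero of=/sas_pool/testfile bs=1024k count=15000 &
>>   zpool iostat sas_pool 1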
>>
>> Regards,
>> WL
>>
>>
>>
>> On 10/27/2011 9:59 PM, Brian Wilson wrote:
>>> On 10/27/11 07:03 AM, Edward Ned Harvey wrote:
>>>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>>>> boun...@opensolaris.org] On Behalf Of weiliam.hong
>>>>>
>>>>> 3. All 4 drives are connected to a single HBA, so I assume the
>>>>> mpt_sas driver is used. Are SAS and SATA drives handled differently?
>>>> If they're all on the same HBA, they may all be on the same bus.  It
>>>> may be *because* you're mixing SATA and SAS disks on the same bus.
>>>> I'd suggest separating the tests rather than running them
>>>> concurrently, and seeing if there's any difference.
>>>>
>>>> Also, the HBA might have different defaults for SAS vs. SATA; look in
>>>> the HBA configuration to see whether write-back / write-through is set
>>>> the same for both...
>>>>
>>>> I don't know if the HBA gives you some way to enable/disable the 
>>>> on-disk
>>>> cache, but take a look and see.
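>>>>
>>>> On Solaris-derived systems you can usually inspect a drive's own write
>>>> cache with format in expert mode (the exact menus vary by release and
>>>> drive type, so treat this as a sketch):
>>>>
>>>>   format -e          # select the disk, then:
>>>>   format> cache
>>>>   cache> write_cache
>>>>   write_cache> display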
>>>>
>>>> Also, maybe the SAS disks are only doing SATA.  If the HBA is only
>>>> able to do SATA, then SAS disks will work, but they might not perform
>>>> as well as they would if they were connected to a real SAS HBA.
>>>>
>>>> And one final thing - If you're planning to run ZFS (as I suspect 
>>>> you are,
>>>> posting on this list running OI) ... It actually works *better* 
>>>> without any
>>>> HBA.  *Footnote
>>>>
>>>> *Footnote:  ZFS works the worst if you have the ZIL enabled, no log
>>>> device, and no HBA.  It's a significant improvement if you add a
>>>> battery-backed or nonvolatile HBA with write-back.  It's a significant
>>>> improvement again if you get rid of the HBA and add a log device.  And
>>>> it's a significant improvement yet again if you get rid of the HBA and
>>>> the log device and run with the ZIL disabled (if your workload is
>>>> compatible with a disabled ZIL).
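>>>>
>>>> Roughly, those last two configurations look like this (pool and device
>>>> names are placeholders; sync=disabled requires a reasonably recent ZFS
>>>> and means you can lose the last few seconds of acknowledged writes on
>>>> a crash):
>>>>
>>>>   # dedicated log (slog) device
>>>>   zpool add tank log c0t4d0
>>>>
>>>>   # or, disable the ZIL's synchronous semantics for the pool
>>>>   zfs set sync=disabled tank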
>>>>
>>>
>>> First, ditto everything Edward says above.  I'd add that your "dd" 
>>> test creates a lot of straight sequential I/O, not anything that's 
>>> likely to be random I/O.  I can't speak to why your SAS drives might
>>> not be performing any better than Edward could, but your SATA drives
>>> are probably screaming on straight sequential I/O, whereas on something
>>> more random I would bet they won't perform as well as they do in this
>>> test.  The tool I've seen used for that sort of testing is iozone; I'm
>>> sure there are others as well, and I can't attest to what's better or
>>> worse.
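>>>
>>> For example, something along these lines exercises both sequential and
>>> random patterns (paths are placeholders; check the iozone man page for
>>> the exact flags your build supports):
>>>
>>>   # -i 0/1/2 = write/rewrite, read/reread, random read/write
>>>   # use a file well beyond RAM size so the ARC can't cache it all
>>>   iozone -i 0 -i 1 -i 2 -r 128k -s 32g -f /sas_pool/iozone.tmp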
>>>
>>> cheers,
>>> Brian
>>>


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
