And, just to add one more point: since pretty much everything the host writes to the controller eventually has to make it out to the disk drives, the long-term average write rate cannot exceed the rate at which the backend disk subsystem can absorb the writes, regardless of the workload. (An exception is if the controller can combine some overlapping writes.) It's just like pouring water into a reservoir at twice the rate it is being drawn off: the reservoir will eventually overflow. At least in this case the controller can throttle the host and avoid an actual data overflow situation.
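A rough way to see it (a toy sketch in Python; the 400MB/s host rate and 200MB/s backend rate below are made-up illustration figures, not measurements from this array):

    # Toy model of a write-back cache in front of a slower backend.
    # Once the cache fills, the host is throttled to the backend rate,
    # so the long-run average can never exceed what the disks absorb.
    def long_run_rate(host_mb_s, backend_mb_s, cache_mb, seconds):
        cached = 0.0
        written = 0.0
        for _ in range(seconds):
            free = cache_mb - cached
            accepted = min(host_mb_s, backend_mb_s + free)
            cached = min(cache_mb, max(0.0, cached + accepted - backend_mb_s))
            written += accepted
        return written / seconds

    print(long_run_rate(400, 200, 4096, 600))   # ~207 MB/s, converging on 200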

Drew

Andrew Wilson wrote:
What kind of workload are you running? If you are doing these measurements 
with some sort of "write as fast as possible" microbenchmark, then once the 4 GB 
of NVRAM is full, you will be limited by backend performance (FC disks and their 
interconnect) rather than by the host / controller bus.

Since, best case, 4 Gbit FC can transfer 4 GBytes of data in about 10 seconds, 
you will fill the cache in about 20 seconds, even with the backend writing out 
data as fast as it can. Once the NVRAM is full, you will only see the backend 
(e.g. 2 Gbit) rate.
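A rough check of that arithmetic, assuming nominal line rates with 8b/10b encoding (so roughly line-rate/10 bytes of payload per second) and ignoring other protocol overhead:

    host_in  = 4000 / 10    # 4 Gbit/s host link  -> ~400 MB/s into the cache
    back_out = 2000 / 10    # 2 Gbit/s backend    -> ~200 MB/s out to the disks
    cache_mb = 4096         # 4 GB of NVRAM

    print(cache_mb / host_in)               # ~10 s to fill if the backend sat idle
    print(cache_mb / (host_in - back_out))  # ~20 s to fill while the backend drains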

The reason these controller buffers are useful with real applications is that 
they smooth out the bursts of writes such applications tend to generate, thus 
reducing the latency of those writes and improving performance. The controller 
then "catches up" during periods when few writes are being issued. But a typical 
microbenchmark that pumps out a steady stream of writes won't see this benefit.
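To make the smoothing effect concrete, here is a made-up bursty workload (not a trace from any real application): 400 MB arriving in a one-second burst every four seconds, against a backend that drains 200 MB/s.

    # Average demand is only 100 MB/s, well under the backend rate, so the
    # cache absorbs each burst at full host speed and empties before the next.
    cached, peak = 0.0, 0.0
    for t in range(60):
        burst = 400.0 if t % 4 == 0 else 0.0        # host writes arrive in bursts
        cached = max(0.0, cached + burst - 200.0)   # backend drains 200 MB/s
        peak = max(peak, cached)
    print(peak)   # peaks around 200 MB -- nowhere near a 4 GB cache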

Drew Wilson

Asif Iqbal wrote:

>On Nov 20, 2007 7:01 AM, Chad Mynhier <[EMAIL PROTECTED]> wrote:
>
>>On 11/20/07, Asif Iqbal <[EMAIL PROTECTED]> wrote:
>>
>>>On Nov 19, 2007 1:43 AM, Louwtjie Burger <[EMAIL PROTECTED]> wrote:
>>>
>>>>On Nov 17, 2007 9:40 PM, Asif Iqbal <[EMAIL PROTECTED]> wrote:
>>>>
>>>>>(Including storage-discuss)
>>>>>
>>>>>I have 6 6140s with 96 disks. Out of which 64 of them are Seagate
>>>>>ST3300007FC (300GB - 10000 RPM FC-AL)
>>>>
>>>>Those disks are 2Gb disks, so the tray will operate at 2Gb.
>>>
>>>That is still 256MB/s. I am getting about 194MB/s.
>>
>>2Gb fibre channel is going to max out at a data transmission rate
>
>But I am running 4Gb fibre channel links with 4GB of NVRAM on 6 trays of
>300GB FC 10K rpm (2Gb/s) disks.
>
>So I should get "a lot" more than ~200MB/s. Shouldn't I?
>
>>around 200MB/s rather than the 256MB/s that you'd expect.  Fibre
>>channel uses an 8-bit/10-bit encoding, so it transmits 8 bits of data
>>in 10 bits on the wire.  So while 256MB/s is being transmitted on the
>>connection itself, only 200MB/s of that is the data that you're
>>transmitting.
>>
>>Chad Mynhier
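For what it's worth, a quick sketch of the 8b/10b arithmetic Chad describes (a rough check, treating "2Gb" as 2048 Mbit/s, which is where the 256MB/s figure comes from):

    # 8b/10b encoding: every 8 data bits travel as 10 bits on the wire.
    line_rate_mbit = 2048                # "2Gb" FC taken as 2048 Mbit/s
    raw_mb_s  = line_rate_mbit / 8       # 256 MB/s crossing the link
    data_mb_s = line_rate_mbit / 10      # 204.8 MB/s of actual payload
    print(raw_mb_s, data_mb_s)           # 256.0 204.8 -- hence the ~200MB/s ceiling

That payload ceiling, minus a little FC/SCSI protocol overhead, is in the same ballpark as the ~194MB/s being measured above.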

  
