Kyle McDonald wrote:
> Richard Elling wrote:
>> Nick wrote:
>>> I have been tasked with putting together a storage solution for use in a 
>>> virtualization setup, serving NFS, CIFS, and iSCSI, over GigE. I've 
>>> inherited a few components to work with:
>>>
>>>     x86 dual core server , 512MB LSI-8888ELP RAID card
>>>     12 x 300GB 15Krpm SAS disks & array
>>>     2GB Flash to IDE "disk"/adaptor.
>>>
>>> The system will be serving virtual hard disks to a range of vmware systems 
>>> connected by GigE, running enterprise workloads that are impossible to 
>>> predict at this point.
>>>
>>>     Using the RAID card's capability for RAID6 sounds attractive?
>> Assuming the card works well with Solaris, this sounds like a
>> reasonable solution.
> Another solution might be to create several (12?) single-disk RAID0 
> LUNs and let ZFS do your redundancy across them. The HW RAID card will 
> still give each RAID0 LUN the advantages of the NVRAM cache, but with 
> ZFS (RAIDZ2?) doing the redundancy, ZFS will be able to recover 
> from more situations (at least as I understand it.)
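
(For concreteness, a sketch of that layout. The pool name "tank" and
the c1t*d0 device names are hypothetical -- they will depend on how the
LSI card presents its twelve single-disk LUNs to Solaris:

    # pool the 12 single-disk RAID0 LUNs as a double-parity raidz2
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

ZFS then owns the redundancy and can detect and repair checksum errors,
while each LUN still sits behind the card's NVRAM write cache.)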
>>>     Using the Flash RAM for the ZIL?
>> I'm not sure why you would want to do this.  Just carve off a
>> LUN or slice on the RAID card and use its NVRAM cache. 
>> A consumer class flash "disk" will be slower.
>>
> This is an interesting observation.
>
> Will a separate LUN or slice on the RAID card perform better than not 
> separating out the ZIL at all?

Yes.  You want to avoid contention at the LUN between the ZIL
iops and the regular data iops.  This is no different than
separating out redo logs for databases.
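
(A sketch of how that looks, with a hypothetical pool name "tank" and a
hypothetical device name for a small LUN carved off the RAID card:

    # dedicate a separate NVRAM-backed LUN to the ZIL
    zpool add tank log c2t0d0

With a dedicated log vdev, synchronous writes land on that LUN and stay
off the data LUNs, which is the separation described above.)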

> I'm trying to imagine how this works. How does the behavior of an internal 
> ZIL differ from the behavior of an external ZIL? Given that they'd be 
> sharing the same drives in this case, how will it help performance?
>

The ZIL log can be considered a write-only workload. The
only time you read the ZIL is after an unscheduled reboot.

> I'm thinking of a comparison of an internal ZIL on a RAIDZ(2?) of 12 
> single-drive RAID0 LUNs, vs. either A) 12 RAID0 LUNs made from 95%+ of 
> each disk plus a RAID (5,6,10?,Z?,Z2?, something else?) LUN made from 
> the remaining space on the 12 disks, or B) 11 single-drive RAID0 LUNs plus 
> a single-drive RAID0 LUN for the ZIL.
>
> I can see where B might be an improvement, but there's no redundancy for the 
> ZIL, and unless it's a smaller disk, it probably wastes space.
>
> A offers redundancy in the ZIL and many spindles to use, but I'd imagine 
> the heads would be thrashing between the ZIL portion of the disk and the 
> ZFS portion? Wouldn't that hurt performance?

The ZIL log should be a mostly sequential write workload which
will likely be coalesced at least once along the way.  It is also
latency sensitive, which is why the NVRAM cache is a good
thing.
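
(One way to check whether a separate log is actually absorbing the
synchronous writes is to watch per-vdev activity -- the pool name
"tank" here is hypothetical:

    # per-vdev I/O statistics, refreshed every 5 seconds
    zpool iostat -v tank 5

Under an fsync-heavy load the log vdev should show the bulk of the
write traffic, while the data vdevs take the coalesced txg writes.)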

Beyond those simple observations, it is not clear which of the
multitude of possible configurations will be best. Let us know
what you find.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
