Richard Elling wrote:
> Nick wrote:
>   
>> I have been tasked with putting together a storage solution for use in a 
>> virtualization setup, serving NFS, CIFS, and iSCSI, over GigE. I've 
>> inherited a few components to work with:
>>
>>      x86 dual core server , 512MB LSI-8888ELP RAID card
>>      12 x 300GB 15Krpm SAS disks & array
>>      2GB Flash to IDE "disk"/adaptor.
>>
>> The system will be serving virtual hard disks to a range of vmware systems 
>> connected by GigE, running enterprise workloads that are impossible to 
>> predict at this point.
>>
>>      Using the RAID card's capability for RAID6 sounds attractive?
>>   
>>     
>
> Assuming the card works well with Solaris, this sounds like a
> reasonable solution.
>   
Another solution might be to create several (12?) single-disk RAID0 
LUNs and let ZFS do your redundancy across them. The HW RAID card will 
still give each RAID0 LUN the advantage of the NVRAM cache, but with 
ZFS (RAIDZ2?) handling the redundancy, ZFS will be able to recover 
from more failure situations (at least as I understand it).
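For what it's worth, assuming the LSI card can export each disk as its own single-drive RAID0 volume, the ZFS side of that setup is just one raidz2 vdev across all twelve LUNs. A sketch (the c1tXd0 device names here are hypothetical; substitute whatever the LSI-8888ELP actually presents, per `format` or `cfgadm -al`):

```
# One raidz2 vdev across 12 single-disk RAID0 LUNs.
# ZFS handles redundancy; the card's NVRAM still caches each LUN.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                         c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
zpool status tank
```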
>   
>>      Using the Flash RAM for the ZIL?
>>   
>>     
>
> I'm not sure why you would want to do this.  Just carve off a
> LUN or slice on the RAID card and use its NVRAM cache. 
> A consumer class flash "disk" will be slower.
>
>   
This is an interesting observation.

Will a separate LUN or slice on the RAID card perform better than not 
separating out the ZIL at all?

I'm trying to imagine how this works. How does the behavior of an 
internal ZIL differ from that of an external ZIL? Given that they'd be 
sharing the same drives in this case, how would it help performance?

I'm thinking of comparing an internal ZIL on a RAIDZ(2?) of 12 
single-drive RAID0 LUNs against either A) 12 RAID0 LUNs made from 95%+ 
of each disk, plus a RAID (5? 6? 10? Z? Z2? something else?) LUN made 
from the remaining space on the 12 disks, or B) 11 single-drive RAID0 
LUNs plus a single-drive RAID0 LUN dedicated to the ZIL.

I can see where B might be an improvement, but it offers no redundancy 
for the ZIL, and unless it's a smaller disk, it probably wastes space.

A offers redundancy in the ZIL and many spindles to use, but I'd 
imagine the heads would be thrashing between the ZIL portion of each 
disk and the ZFS portion. Wouldn't that hurt performance?
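To make option B concrete, a dedicated ZIL is attached as a separate log vdev. A sketch with hypothetical device names (mirroring the log device would restore the redundancy B otherwise lacks):

```
# Option B sketch: raidz2 across 11 LUNs plus a dedicated log (ZIL)
# device on the 12th. Device names are hypothetical.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                         c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0
zpool add tank log c1t11d0
```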

  -Kyle
>>      Using zfs for general storage management?
>>
>>   
>>     
>
> cool.
>
>   
>> Has anyone built a similar system, what is the true path to success?
>>   
>>     
>
> Success is at the summit, but there are several paths up the mountain.
>
>   
>> What are the pitfalls?
>> What should I have on my reading list for starters?
>>   
>>     
>
> Start with the ZFS system admin guide on opensolaris.org.
> We try to keep the solarisinternals.com wikis up to date also.
>  -- richard
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   
