> However, SVM+UFS is more annoying to work with as far as LiveUpgrade is
> concerned.  We'd love to use a ZFS root, but that requires that the
> entire SSD be dedicated as an rpool leaving no space for ZIL.  Or does
> it?
> 
> It appears that we could do a:
> 
>   # zfs create -V 24G rpool/zil
> 
> On our rpool and then:
> 
>   # zpool add satapool log /dev/zvol/dsk/rpool/zil
> 
> (I realize 24G is probably far more than a ZIL device will ever need)
> 
> As rpool is mirrored, this would also take care of redundancy for the
> ZIL.
> 
> This lets us have a nifty ZFS rpool for simplified LiveUpgrades and a
> fast SSD-based ZIL for our SATA zpool as well...
> 
> What are the downsides to doing this?  Will there be a noticeable
> performance hit?
> 
> I know I've seen this discussed here before, but wasn't able to come up
> with the right search terms...

Well, after doing a little better on my searches, it sounds like -- at
least for cache/L2ARC devices backed by zvols -- some race conditions
can pop up, and this isn't necessarily the most robust or well-tested
configuration.

Doesn't sound like something I'd want to do in production.

Perhaps the better option is to set up multiple Solaris FDISK
partitions.  That way I could still install my rpool to the first
partition and use the second partition as the ZIL for the SATA zpool.

This obviously would only work on x86 systems.
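For what it's worth, a rough sketch of what that would look like
(device names like c0t0d0p1/c0t0d0p2 are hypothetical examples for the
SSD's FDISK partitions -- substitute your own):

```shell
# Carve the SSD into two Solaris FDISK partitions with fdisk(1M),
# e.g. partition 1 for the root pool, partition 2 for the log device.
# The installer would put rpool on the first partition (c0t0d0p1).

# Add the second FDISK partition as a log device for the SATA pool:
zpool add satapool log c0t0d0p2

# Confirm the log device is attached:
zpool status satapool
```

Note the log device added this way is unmirrored; losing it on older
ZFS versions (before log device removal support) can be painful, so a
mirrored pair of slog devices would be safer if a second SSD is
available.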

Would multiple FDISK partitions be the most robust way to implement
this?

Ray
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
