On Mon, Apr 13, 2009 at 3:27 PM, Miles Nordin <car...@ivy.net> wrote:

> >>>>> "nl" == Nicholas Lee <emptysa...@gmail.com> writes:
>
>     nl>    1. Is the cache only used for RAID modes and not in JBOD
>    nl> mode?
>
> well, there are different LSI cards and firmwares and drivers, but:
>
>  The X4150 SAS RAID controllers will use the on-board battery backed cache
>  even when disks are presented as individual LUNs.
>  -- "Aaron Blew" <aaronb...@gmail.com>
>     Wed, 3 Sep 2008 15:29:29 -0700
>
>  We're using an Infortrend SATA/SCSI disk array with individual LUNs, but
>  it still uses the disk cache.
>  -- Tomas Ögren <st...@acc.umu.se>
>    Thu, 4 Sep 2008 10:20:30 +0200
>
>    nl> 2. If it is used by the controller is it driver
>    nl> dependent?  Only works if the driver can handle the cache?
>
> driver is proprietary. :)  no way to know.
>
>    nl> 3. If the cache does work what happens if there is a power
>    nl> reset?
>
> Obviously it is supposed to handle this.  But, yeah, as you said,
> _when_ is the battery-backed cache flushed?  At boot during the BIOS
> probe?  What if you're using SPARC and don't do a BIOS probe?  by the
> driver?  When the ``card's firmware boots?''  How can you tell if the
> cache has got stuff in it or not?  What if you're doing maintenance
> like replacing disks---something not unlikely to coincide with unclean
> shutdowns.  Will this confuse it?
>

I didn't think about this scenario.  ZFS handles so much of what once would
have been done in hardware and by drivers.  While this is good, it leaves a
huge grey area in which it is hard for those of us on the front line to
decide what the best choices are.
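
(For what it's worth, the one tunable I've seen mentioned in this grey area
is zfs_nocacheflush, which tells ZFS to stop issuing cache-flush commands on
the assumption that the controller's cache really is nonvolatile.  It is a
system-wide, all-or-nothing switch, so it only seems safe when every device
in the pool sits behind a working battery- or flash-backed cache:

    * /etc/system: stop ZFS from issuing cache-flush commands; only safe
    * when every pool device has nonvolatile (battery/flash-backed) cache
    set zfs:zfs_nocacheflush = 1

A reboot is needed for /etc/system changes to take effect, and of course
this puts you right back to trusting the vendor's battery story.)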



> The driver and the ``firmware'' is all proprietary, so there's no way
> to look into the matter yourself other than exhaustive testing, and
> there's no vendor standing squarely behind the overall system like
> there is with an external array.
>
> but...it's so extremely cheap and fast that I think there's a huge


That's the big point: 10,000 USD for a 2U, 12-disk, 10 TB raw NAS versus
100,000 USD for the equivalent appliance.



> segment of market, the segment which cares about being extremely cheap
> and fast, that uses this stuff as a matter of course.  I guess these
> are the guys who were supposed to start using ZFS but for now I guess
> the hardware cache is still faster for ``hardware'' raid-on-a-card.
>




> I think the ideal device would have a fully open-source driver stack,
> and a light on the SSD slog, or battery+RAM, or supercap+RAM+CF, to
> indicate if it's empty or not.  If it's missing and not empty then the
> pool will always refuse to auto-import but always import if
> ``forced'', and if it's missing and empty then the pool will sometimes
> auto-import (ex., always if there was a clean shutdown and sometimes
> if there wasn't), and if forced to import when the light's out the
> pool will be fsync-consistent.  Currently we're short of the ideal
> even using the ZFS-style slog, but AIUI you can get closer if you make
> a backup of your empty slog right after you attach it and stash the
> .dd.gz file somewhere outside the pool---you can force the import of a
> pool with a dirty, missing slog by substituting an old empty slog with
> the right label on it.  However, still closed driver, still nothing
> with fancy lights on it. :)
>
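
For anyone who wants to try that empty-slog backup trick, I imagine it looks
roughly like this (device paths are placeholders, and I haven't tested it):

    # immediately after attaching the still-empty slog, save an image of it
    # somewhere outside the pool
    dd if=/dev/rdsk/c3t2d0s0 bs=1024k | gzip > /backup/tank-slog.dd.gz

    # if the slog later goes missing while dirty, write the saved (empty)
    # image onto a replacement device of the same size or larger and force
    # the import; anything that only ever lived in the lost slog is gone
    gzip -dc /backup/tank-slog.dd.gz | dd of=/dev/rdsk/c4t0d0s0 bs=1024k
    zpool import -f tank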


The only issue I have with slog-type devices at the moment is that they are
not removable, and thus not easily replaceable.  It seems that if you want a
production system using slogs you must mirror them; otherwise, if the slog
is corrupted, you can only revert to a backup.
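
For the record, setting up the mirrored slog itself is simple enough
(hypothetical device names):

    # add two devices as a mirrored log vdev to an existing pool
    zpool add tank log mirror c3t0d0 c3t1d0

    # the mirror then shows up under a separate "logs" section
    zpool status tank

It's the failure and replacement story around it that still worries me.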


>
>    nl> iRAM device seems like a hack,
>
> There's also the ACARD device:
>
> acard ANS-9010B                 $250
>  plus 8GB RAM                    $86
>  plus 16GB CF                    $44
>
> It's also got a battery but can dump/restore the RAM to a CF card.
> It's physically larger and neither cheaper nor faster than the Intel
> X25-E, but at least it doesn't have the fragmentation problems to worry
> about.
> I've not tested it myself.  Someone on the list tested it, but IIRC he
> did not use it as a slog, nor comment on how the CF dumping feature
> works (it sounds kind of sketchy.  ``buttons'' are involved, which to
> me sounds very bad).
>

I've seen these before, but dismissed them because they are 5.25" units,
which is tricky in rack systems that generally only cater for 3.5" bays.  I
wonder if it is possible to pull them apart and put them in a smaller case.


Has anyone done any specific testing with SSD devices and Solaris other than
the Fishworks stuff?  Which is better for what: SLC or MLC?

Nicholas
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
