> > That depends upon exactly what effect turning off the
> > ZFS cache-flush mechanism has.
>
> The only difference is that ZFS won't send a
> SYNCHRONIZE CACHE command at the end of a transaction
> group (or ZIL write). It doesn't change the actual
> read or write commands (which are always se
> Bill, you have a long-winded way of saying "I don't
> know". But thanks for elucidating the possibilities.
Hmmm - I didn't mean to be *quite* as noncommittal as that suggests: I was
trying to say (without intending to offend) "FOR GOD'S SAKE, MAN: TURN IT BACK
ON!", and explaining why (i.e.
> That depends upon exactly what effect turning off the
> ZFS cache-flush mechanism has.
The only difference is that ZFS won't send a SYNCHRONIZE CACHE command at the
end of a transaction group (or ZIL write). It doesn't change the actual read or
write commands (which are always sent as ordinary
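For reference, the flush behaviour being discussed here is controlled by the
zfs_nocacheflush tunable. A rough sketch of how it is normally toggled, assuming
a Solaris build that has the tunable; verify against your own release:

  # in /etc/system: disables the SYNCHRONIZE CACHE requests; delete the
  # line and reboot to turn flushing back on
  set zfs:zfs_nocacheflush = 1

  # or temporarily on a live system via mdb (write 0t0 to re-enable flushes)
  echo zfs_nocacheflush/W0t1 | mdb -kw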
Bill, you have a long-winded way of saying "I don't know". But thanks for
elucidating the possibilities.
> I think the point of dual battery-backed controllers is
> that data should never be lost. Am I wrong?
That depends upon exactly what effect turning off the ZFS cache-flush mechanism
has. If all data is still sent to the controllers as 'normal' disk writes and
they have no concept of, say,
> From Neil's comment in the blog entry that you
> referenced, that sounds *very* dicey (at least by
> comparison with the level of redundancy that you've
> built into the rest of your system) - even if you
> have rock-solid UPSs (which have still been known to
> fail). Allowing a disk to lie to h
> We are running Solaris 10u4; is the log option in there?
Someone more familiar with the specifics of the ZFS releases will have to
answer that.
>
> If this ZIL disk also goes dead, what is the failure
> mode and recovery option then?
The ZIL should at a minimum be mirrored. But since that
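For what it's worth, on builds that support separate log devices the ZIL can be
mirrored when the log is attached. A sketch, with placeholder device names:

  # attach a mirrored pair of dedicated log devices to an existing pool
  zpool add tank log mirror c4t0d0 c4t1d0

  # the log vdev then shows up in the pool status alongside the data vdevs
  zpool status tank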
> Sounds good so far: lots of small files in a largish
> system with presumably significant access parallelism
> makes RAID-Z a non-starter, but RAID-5 should be OK,
> especially if the workload is read-dominated. ZFS
> might aggregate small writes such that their
> performance would be good as w
> On Dec 1, 2007 7:15 AM, Vincent Fox
>
> Any reason why you are using a mirror of RAID-5 LUNs?
>
> I can understand that perhaps you want ZFS to be in control of
> rebuilding broken vdevs, if anything should go wrong ... but
> rebuilding RAID-5s seems a little over the top.
Because the
> Hi Bill,
...
> lots of small files in a largish system with presumably
> significant access parallelism makes RAID-Z a non-starter,
> Why does "lots of small files in a largish system with presumably
> significant access parallelism makes RAID-Z a non-starter"?
> thanks,
> max
Every ZFS
Hi Bill,
can you guess? wrote:
>> We will be using Cyrus to store mail on 2540 arrays.
>>
>> We have chosen to build 5-disk RAID-5 LUNs in 2
>> arrays which are both connected to same host, and
>> mirror and stripe the LUNs. So a ZFS RAID-10 set
>> composed of 4 LUNs. Multi-pathing also in use fo
[Zombie thread returns from the grave...]
> > Getting back to 'consumer' use for a moment, though,
> > given that something like 90% of consumers entrust
> > their PC data to the tender mercies of Windows, and a
> > large percentage of those neither back up their data,
> > nor use RAID to gu
> If it's just performance you're after for small
> writes, I wonder if you've considered putting the ZIL
> on an NVRAM card? It looks like this can give
> something like a 20x performance increase in some
> situations:
>
> http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
That's cer
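If anyone goes that route, one way to confirm that a separate log device is
actually absorbing the synchronous writes is per-vdev iostat. A sketch; the
pool name and interval are just examples:

  # per-vdev I/O statistics every 5 seconds; synchronous (ZIL) writes
  # should show up against the log device rather than the data vdevs
  zpool iostat -v tank 5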
> Any reason why you are using a mirror of RAID-5 LUNs?
Some people aren't willing to run the risk of a double failure - especially
when recovery from a single failure may take a long time. E.g., if you've
created a disaster-tolerant configuration that separates your two arrays and a
fire c
> We will be using Cyrus to store mail on 2540 arrays.
>
> We have chosen to build 5-disk RAID-5 LUNs in 2
> arrays which are both connected to same host, and
> mirror and stripe the LUNs. So a ZFS RAID-10 set
> composed of 4 LUNs. Multi-pathing also in use for
> redundancy.
Sounds good so far:
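As a sketch of the layout being described (device names are placeholders, one
RAID-5 LUN per array in each mirror, two mirrors striped):

  # mirror each RAID-5 LUN from array A with its counterpart from array B,
  # then stripe the two mirrors: a RAID-10 of four LUNs
  zpool create mailpool \
      mirror c6t0d0 c7t0d0 \
      mirror c6t1d0 c7t1d0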
I did some testing lately with ZFS; the environment is:
a 2-node Veritas Cluster 5.0 on Solaris 8/07 with recommended patches, two
machines (a V440 and a V480), and shared storage on a 6120 array through a switch.
Two LUNs from the array go into every ZFS pool. The problem is that after
installing an Oracle DB on one of the LUNs, zpool import / export
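For reference, the hand-off being exercised between the two nodes amounts to
something like the following (pool name is just an example):

  # on the node giving up the pool
  zpool export orapool

  # on the node taking it over; add -f only if the other node went down
  # without exporting cleanly
  zpool import orapool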