"With ZFS however the in-between cache is obsolete, as individual disk
caches can be used directly."
The statement needs to be qualified.
Storage cache, if protected, works great to reduce critical
op latency. ZFS when it writes to disk cache, will flush
data out before return to
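To make that ordering concrete, here is a rough user-level sketch (not
the actual vdev code path, and the helper is illustrative only): issue
the write, force the drive's volatile cache to stable storage, and only
then report the synchronous operation complete. On Solaris the flush is
the DKIOCFLUSHWRITECACHE ioctl.

    #include <sys/types.h>
    #include <sys/dkio.h>      /* DKIOCFLUSHWRITECACHE (Solaris) */
    #include <sys/ioctl.h>
    #include <unistd.h>

    /*
     * Sketch only: the pwrite() may land in the drive's volatile write
     * cache, so the cache is flushed before the write is acknowledged
     * as stable.  Error handling and retries omitted.
     */
    int
    sync_write(int devfd, const void *buf, size_t len, off_t off)
    {
            if (pwrite(devfd, buf, len, off) != (ssize_t)len)
                    return (-1);
            if (ioctl(devfd, DKIOCFLUSHWRITECACHE, NULL) != 0)
                    return (-1);
            return (0);      /* only now is the write reported stable */
    }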
> With ZFS however the in-between cache is obsolete, as individual disk
> caches can be used directly. I also openly question whether even the
> dedicated RAID HW is faster than the newest CPUs in modern servers.
Individual disk caches are typically in the 8-16 MB range; for 15 disks,
that gives only about 120-240 MB in aggregate.
> just measured quickly that a 1.2Ghz sparc can do [400-500]MB/sec
> of encoding (time spent in misnamed function
> vdev_raidz_reconstruct) for a 3 disk raid-z group.
Strange, that seems very low.
Ah, I see. The current code loops through each buffer, either copying or XORing
it into the parity.
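For readers following along, a minimal sketch of that copy-or-XOR loop
(illustrative names only, not the actual vdev_raidz code): the first
data column is copied into the parity buffer, and each remaining column
is XORed in one word at a time.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /*
     * Single-parity generation over ncols equally sized data columns,
     * each 'words' 64-bit words long.  Sketch only.
     */
    static void
    gen_parity(uint64_t *parity, uint64_t *const *cols, int ncols,
        size_t words)
    {
            memcpy(parity, cols[0], words * sizeof (uint64_t));
            for (int c = 1; c < ncols; c++) {
                    for (size_t w = 0; w < words; w++)
                            parity[w] ^= cols[c][w];
            }
    }

Written this way the parity buffer is walked once per data column, so
the work is largely memory-bandwidth bound, which may be part of why
the measured encoding rate looks low relative to the CPU's raw speed.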
> It would be interesting to have a zfs enabled HBA to offload the checksum
> and parity calculations. How much of zfs would such an HBA have to
> understand?
That's an interesting question.
For parity, it's actually pretty easy. One can envision an HBA which took
a group of related write commands and computed the parity block for them
itself.
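On the checksum side, ZFS's Fletcher-style checksums are just running
sums over the block, so the per-block computation such an HBA would need
is also small. A rough Fletcher-4-style sketch (illustrative function
name, not the actual ZFS code):

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Fletcher-4-style running-sum checksum over 32-bit words, in the
     * usual four-accumulator form.  Sketch only.
     */
    static void
    fletcher4(const void *buf, size_t size, uint64_t cksum[4])
    {
            const uint32_t *ip = buf;
            const uint32_t *end = ip + size / sizeof (uint32_t);
            uint64_t a = 0, b = 0, c = 0, d = 0;

            for (; ip < end; ip++) {
                    a += *ip;
                    b += a;
                    c += b;
                    d += c;
            }
            cksum[0] = a;
            cksum[1] = b;
            cksum[2] = c;
            cksum[3] = d;
    }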
UNIX admin wrote:
This is simply not true. ZFS would protect against the same type of
errors seen on an individual drive as it would on a pool made of HW
RAID LUN(s). It might be overkill to layer ZFS on top of a LUN that is
already protected in some way by the device's internal RAID code, but it
> There are also the speed enhancements provided by a HW raid array,
> and usually RAS too, compared to a native disk drive but the numbers
> on that are still coming in and being analyzed. (See previous
> threads.)
Speed enhancements? What is the baseline of comparison?
Hardware RAIDs can
Anton B. Rang wrote:
JBOD probably isn't dead, simply because motherboard manufacturers are unlikely to pay
the extra $10 it might cost to use a RAID-enabled chip rather than a plain chip (and
the cost is more if you add cache RAM); but basic RAID is at least cheap.
NVidia MCPs (later NForce
The better SATA RAID cards have hardware support. One site comparing
controllers is:
http://tweakers.net/reviews/557
Five of the eight controllers they looked at implemented RAID in hardware; one
of the others implemented only the XOR in hardware. Chips like the Adaptec
AIC-8210 implement m
> Roch - PAE wrote:
> The hard part is getting a set of simple requirements. As you go into
> more complex data center environments you get hit with older Solaris
> revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
> most of us seem to be playing with ZFS is on the lower end
Depends on the workload. (Did I miss that email?)
Peter Sundstrom wrote:
Hmm. Appears to be differing opinions.
Another way of putting my question is: can anyone guarantee that ZFS will
not perform worse than UFS on the array?
High speed performance is not really an issue, hence the reason the disks
are mirrored rather than striped. The client is more concerned with