>We're back into the old argument of "put it on a co-processor, then move
>it onto the CPU, then move it back onto a co-processor" cycle.
>Personally, with modern CPUs being so under-utilized these days, and all
>ZFS-bound data having to move through main memory in any case (whether
>hardware ...
James C. McPherson wrote:
Richard Elling wrote:
Frank Cusack wrote:
It would be interesting to have a zfs enabled HBA to offload the checksum
and parity calculations. How much of zfs would such an HBA have to
understand?
[warning: chum]
Disagree. HBAs are pretty wimpy. It is much less expensive and more
efficient to move that (flexible!) f...
Richard Elling wrote:
Frank Cusack wrote:
It would be interesting to have a zfs enabled HBA to offload the checksum
and parity calculations. How much of zfs would such an HBA have to
understand?
[warning: chum]
Disagree. HBAs are pretty wimpy. It is much less expensive and more
efficient to move that (flexible!) f...
Frank Cusack wrote:
It would be interesting to have a zfs enabled HBA to offload the checksum
and parity calculations. How much of zfs would such an HBA have to
understand?
[warning: chum]
Disagree. HBAs are pretty wimpy. It is much less expensive and more
efficient to move that (flexible!) f...
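
For context on what such an offload would actually compute: the checksum side
is just a streaming pass over the block. The sketch below is illustrative C
only, not ZFS source -- a Fletcher-style checksum in the spirit of ZFS's
fletcher4. Whether it runs on the host CPU or on a hypothetical ZFS-aware HBA,
every byte of the block still has to move through main memory, which is the
point being argued above.

/*
 * Illustrative only -- not ZFS source. A Fletcher-style checksum in the
 * spirit of ZFS's fletcher4: four running sums over the block's 32-bit
 * words. The arithmetic is trivial; the cost is streaming the data.
 */
#include <stddef.h>
#include <stdint.h>

void
fletcher4_sketch(const void *buf, size_t size, uint64_t cksum[4])
{
    const uint32_t *ip = buf;
    const uint32_t *end = ip + size / sizeof (uint32_t);
    uint64_t a = 0, b = 0, c = 0, d = 0;

    for (; ip < end; ip++) {
        a += *ip;           /* each word feeds four running sums */
        b += a;
        c += b;
        d += c;
    }
    cksum[0] = a;
    cksum[1] = b;
    cksum[2] = c;
    cksum[3] = d;
}

Either way, something has to touch every byte once; moving the adds onto an
HBA does not remove the memory traffic.
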
On September 12, 2006 11:35:54 AM -0700 UNIX admin <[EMAIL PROTECTED]>
wrote:
There are also the speed enhancement provided by a HW raid array, and
usually RAS too, compared to a native disk drive, but the numbers on
that are still coming in and being analyzed. (See previous threads.)
It would ...
Anton B. Rang writes:
> The bigger problem with system utilization for software RAID is the
> cache, not the CPU cycles proper. Simply preparing to write 1 MB of data
> will flush half of a 2 MB L2 cache. This hurts overall system performance
> far more than the few microseconds ...
On Sep 9, 2006, at 1:32 AM, Frank Cusack wrote:
On September 7, 2006 12:25:47 PM -0700 "Anton B. Rang"
<[EMAIL PROTECTED]> wrote:
The bigger problem with system utilization for software RAID is the
cache, not the CPU cycles proper. Simply preparing to write 1 MB of data
will flush half of a 2 MB L2 cache ...
On September 7, 2006 12:25:47 PM -0700 "Anton B. Rang" <[EMAIL PROTECTED]>
wrote:
The bigger problem with system utilization for software RAID is the
cache, not the CPU cycles proper. Simply preparing to write 1 MB of data
will flush half of a 2 MB L2 cache. This hurts overall system performance
far more than the few microseconds ...
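
To put rough numbers on that claim (the 64-byte line size and write-allocate
behaviour below are my assumptions, not Anton's): streaming 1 MB through
checksum or parity code touches 16,384 cache lines and displaces about half
of a 2 MB L2.

/* Back-of-the-envelope arithmetic for the 1 MB write vs. 2 MB L2 claim.
 * Assumes 64-byte cache lines and a write-allocate cache. */
#include <stdio.h>

int
main(void)
{
    const unsigned long write_bytes = 1UL << 20;    /* 1 MB of data      */
    const unsigned long l2_bytes    = 2UL << 20;    /* 2 MB L2 cache     */
    const unsigned long line_bytes  = 64;           /* assumed line size */

    printf("cache lines touched: %lu\n", write_bytes / line_bytes);
    printf("fraction of L2 displaced: %.0f%%\n",
        100.0 * (double)write_bytes / (double)l2_bytes);
    return (0);
}
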
On September 8, 2006 5:59:47 PM -0700 Richard Elling - PAE
<[EMAIL PROTECTED]> wrote:
Ed Gould wrote:
On Sep 8, 2006, at 11:35, Torrey McMahon wrote:
If I read between the lines here I think you're saying that the raid
functionality is in the chipset but the management can only be done by
software running on the outside. (Right?)
Ed Gould wrote:
On Sep 8, 2006, at 11:35, Torrey McMahon wrote:
If I read between the lines here I think you're saying that the raid
functionality is in the chipset but the management can only be done by
software running on the outside. (Right?)
No. All that's in the chipset is enough to read a RAID volume f...
> Dunno about eSATA jbods, but eSATA host ports have
> appeared on at least two HDTV-capable DVRs for storage
> expansion (looks like one model of the Scientific Atlanta
> cable box DVRs as well as on the shipping-any-day-now
> Tivo Series 3).
>
> It's strange that they didn't go with firewire
On Sep 8, 2006, at 14:22, Ed Gould wrote:
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All of
the ones I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All
were less than $150.
In other words, the days of having a JBOD-only solution are over
except for ...
On Sep 8, 2006, at 11:35, Torrey McMahon wrote:
If I read between the lines here I think you're saying that the raid
functionality is in the chipset but the management can only be done by
software running on the outside. (Right?)
No. All that's in the chipset is enough to read a RAID volume f...
Ed Gould wrote:
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All of
the ones I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All
were less than $150.
In other words, the days of having a JBOD-only solution are over
except for ...
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All of
the ones I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All
were less than $150.
In other words, the days of having a JBOD-only solution are over
except for ...
On Fri, 2006-09-08 at 09:33 -0700, Richard Elling - PAE wrote:
> There has been some recent discussion about eSATA JBODs in the press. I'm
> not sure they will gain much market share. iPods and flash drives have a
> much larger market share.
Dunno about eSATA jbods, but eSATA host ports have appeared on at least
two HDTV-capable DVRs for storage expansion ...
[EMAIL PROTECTED] wrote:
I don't quite see this in my crystal ball. Rather, I see all of the SAS/SATA
chipset vendors putting RAID in the chipset. Basically, you can't get a
"dumb" interface anymore, except for fibre channel :-). In other words, if
we were to design a system in a chassis with
On Fri, Sep 08, 2006 at 09:41:58AM +0100, Darren J Moffat wrote:
> [EMAIL PROTECTED] wrote:
> >Richard, when I talk about cheap JBOD I think about home users/small
> >servers/small companies. I guess you can sell 100 X4500s and at the same
> >time 1000 (or even more) cheap JBODs to the small companies which for
> >sure will not buy the big boxes ...
[EMAIL PROTECTED] wrote:
Richard, when I talk about cheap JBOD I think about home users/small
servers/small companies. I guess you can sell 100 X4500s and at the same
time 1000 (or even more) cheap JBODs to the small companies which for sure
will not buy the big boxes. Yes, I know, you earn more s...
Torrey McMahon writes:
> Nicolas Dorfsman wrote:
> >> The hard part is getting a set of simple requirements. As you go into
> >> more complex data center environments you get hit with older Solaris
> >> revs, other OSs, SOX compliance issues, etc. etc. etc. The world where ...
On Thu, Sep 07, 2006 at 12:14:20PM -0700, Richard Elling - PAE wrote:
> [EMAIL PROTECTED] wrote:
> >This is the case where I don't understand Sun's politics at all: Sun
> >doesn't offer a really cheap JBOD which can be bought just for ZFS. And
> >don't even tell me about 3310/3320 JBODs - they are horribly expensive :-(
[EMAIL PROTECTED] wrote:
This is the case where I don't understand Sun's politics at all: Sun
doesn't offer a really cheap JBOD which can be bought just for ZFS. And
don't even tell me about 3310/3320 JBODs - they are horribly expensive :-(
Yep, multipacks have been EOL for some time now -- killed by b...
On 9/7/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
Nicolas Dorfsman wrote:
>> The hard part is getting a set of simple requirements. As you go into
>> more complex data center environments you get hit with older Solaris
>> revs, other OSs, SOX compliance issues, etc. etc. etc. The world where ...
Richard Elling - PAE wrote:
Torrey McMahon wrote:
RAID calculations take CPU time but I haven't seen numbers on ZFS
usage. SVM is known for using a fair bit of CPU when performing R5
calculations and I'm sure other OSes have the same issue. EMC used to go
around saying that offloading RAID calculations to their storage arrays ...
Torrey McMahon wrote:
RAID calculations take CPU time but I haven't seen numbers on ZFS usage.
SVM is known for using a fair bit of CPU when performing R5 calculations
and I'm sure other OSes have the same issue. EMC used to go around saying
that offloading RAID calculations to their storage arrays ...
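
For a concrete sense of what those R5 calculations are (a sketch, not SVM's
or ZFS's code): software RAID-5 parity is a byte-wise XOR across the data
columns of a stripe, so the ALU work is trivial and the real cost is walking
every byte of the stripe through the CPU and its caches, as discussed earlier
in the thread.

/* Illustrative XOR parity for one RAID-5 stripe -- a sketch, not SVM code.
 * data[] holds ncols pointers to the stripe's data columns. */
#include <stddef.h>
#include <stdint.h>

void
raid5_parity(uint8_t *parity, uint8_t *const data[], int ncols,
    size_t stripe_bytes)
{
    for (size_t i = 0; i < stripe_bytes; i++) {
        uint8_t p = 0;

        for (int col = 0; col < ncols; col++)
            p ^= data[col][i];      /* XOR across all data columns */
        parity[i] = p;              /* parity byte for this offset */
    }
}
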
Nicolas Dorfsman wrote:
The hard part is getting a set of simple requirements. As you go into
more complex data center environments you get hit with older Solaris
revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
most of us seem to be playing with ZFS is on the lower end of ...
Roch - PAE wrote:
Thinking some more about this. If your requirements do mandate some form
of mirroring, then it truly seems that ZFS should take charge of that, if
only because of the self-healing characteristics. So I feel the storage
array's job is to export low-latency LUNs to ZFS.
T...
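
A conceptual sketch of the self-healing Roch mentions (not ZFS code; block_t,
read_copy(), checksum_ok() and rewrite_copy() are hypothetical names used only
for illustration): because ZFS stores a checksum for every block, a mirrored
read can detect a bad copy and rewrite it from a good one.

/* Conceptual only -- not ZFS source. The type and helpers below are
 * hypothetical, just to show the shape of a self-healing mirrored read. */
typedef struct block {
    int ncopies;                    /* e.g. 2 for a two-way mirror */
    /* block pointers, expected checksum, etc. would live here */
} block_t;

int  read_copy(const block_t *blk, int side, void *buf);          /* hypothetical */
int  checksum_ok(const block_t *blk, const void *buf);            /* hypothetical */
void rewrite_copy(const block_t *blk, int side, const void *buf); /* hypothetical */

int
self_healing_read(const block_t *blk, void *buf)
{
    for (int side = 0; side < blk->ncopies; side++) {
        if (read_copy(blk, side, buf) != 0)
            continue;               /* I/O error: try the next copy */
        if (checksum_ok(blk, buf)) {
            /* Good data: repair the other copies. (Simplified -- real
             * code would rewrite only copies known to be bad.) */
            for (int other = 0; other < blk->ncopies; other++)
                if (other != side)
                    rewrite_copy(blk, other, buf);
            return (0);
        }
    }
    return (-1);                    /* no copy matched its checksum */
}

That detect-and-repair loop is what is unavailable when the mirroring is
hidden inside the array, which is why the suggestion is to have the array
export plain low-latency LUNs and let ZFS own the redundancy.
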
Wee Yeh Tan writes:
> On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
> > This is simply not true. ZFS would protect against the same type of
> > errors seen on an individual drive as it would on a pool made of HW raid
> > LUN(s). It might be overkill to layer ZFS on top of a LUN that is
> > already protected in some way by the ...
Jonathan Edwards wrote:
Here are 10 options I can think of to summarize combinations of zfs with
hw redundancy:

 #   ZFS   ARRAY HW   CAPACITY   COMMENTS
--   ---   --------   --------   --------
 1   R0    R1         N/2        hw mirror - no zfs healing (XXX)
 2   R0    R5         ...
Wee Yeh Tan wrote:
Perhaps the question should be how one could mix them to get the best
of both worlds instead of going to either extreme.
In the specific case of a 3320 I think Jonathan's chart has a lot of
good info that can be put to use.
In the general case, well, I hate to say this
On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
This is simply not true. ZFS would protect against the same type of
errors seen on an individual drive as it would on a pool made of HW raid
LUN(s). It might be overkill to layer ZFS on top of a LUN that is
already protected in some way by the
UNIX admin wrote:
My question is how efficient will ZFS be, given that it will be layered
on top of the hardware RAID and write cache?
ZFS delivers best performance when used standalone, directly on entire disks.
By using ZFS on top of a HW RAID, you make your data susceptible to HW error ...
On Mon, Sep 04, 2006 at 01:59:53AM -0700, UNIX admin wrote:
> > My question is how efficient will ZFS be, given that
> > it will be layered on top of the hardware RAID and
> > write cache?
>
> ZFS delivers best performance when used standalone, directly on entire disks.
> By using ZFS on top of a HW RAID, you make your data susceptible to HW
> error ...