On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote:
> ppf> What I wanted to point out is Al's example: he wrote about damaged
> ppf> data. Data were damaged by firmware, _not_ the disk surface! In such
> ppf> a case ZFS doesn't help. ZFS can detect (and repair) errors on disk
> ppf> surface
On Wed, Jun 28, 2006 at 09:30:25AM -0400, Jeff Victor wrote:
> [EMAIL PROTECTED] wrote:
> >On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote:
> >
> >What I wanted to point out is Al's example: he wrote about damaged
> >data. Data were damaged by firmware, _not_ the disk surface!
On Jun 28, 2006, at 18:25, Erik Trimble wrote:
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote:
Which is better -
zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3)
On 6/28/06, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote:
> But Joe makes a good point about RAID-Z and iSCSI.
>
> It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much
> to do that: parity computation on write, checksum verification
Philip Brown wrote:
raid5 IS useful in zfs+hwraid boxes, for "Mean Time To Recover" purposes.
Or, and people haven't really mentioned this yet, if you're using R5 for
the raid set and carving LUNs out of it to multiple hosts.
On Thu, Jun 29, 2006 at 09:25:21AM +1000, Nathan Kroenert wrote:
> On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote:
> > But Joe makes a good point about RAID-Z and iSCSI.
> >
> > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much
> > to do that: parity computation on write,
Dale Ghent wrote:
On Jun 28, 2006, at 4:27 PM, Jim Connors wrote:
For an embedded application, I'm looking at creating a minimal
Solaris 10 U2 image which would include ZFS functionality. In
quickly taking a look at the opensolaris.org site under pkgdefs, I
see three packages that appear
Erik Trimble wrote:
Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm
assuming that the proper way to get benefits from both ZFS and HW RAID
is the following:
(1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror
hwStripe1 hwStripe2"
(2) ZFS RAIDZ of HW mirrors
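[As a rough sketch of what those two layouts might look like at the command
line -- the hwStripeN / hwMirrorN names below are just placeholders for
whatever LUNs the array presents, not real device names:]
  # (1) ZFS mirror across two hardware stripes
  zpool create tank mirror hwStripe1 hwStripe2
  # (2) ZFS RAID-Z across three (or more) hardware mirrors
  zpool create tank raidz hwMirror1 hwMirror2 hwMirror3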
Roch wrote:
Philip Brown writes:
> but there may not be filesystem space for double the data.
> Sounds like there is a need for a zfs-defragment-file utility perhaps?
>
> Or if you want to be politically cagey about naming choice, perhaps,
>
> zfs-seq-read-optimize-file ? :-)
>
P
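[A minimal sketch of the copy-based workaround that the "double the data"
remark alludes to, assuming there is enough free space for a temporary
copy; the file names are purely illustrative:]
  # rewrite the file so ZFS lays the new copy out sequentially,
  # then replace the original (needs free space for the extra copy)
  cp -p /tank/data/bigfile /tank/data/bigfile.tmp && \
      mv /tank/data/bigfile.tmp /tank/data/bigfile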
[hit send too soon...]
Richard Elling wrote:
Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N / 2) -1
speed << N / 2 -1
minimum # disks to lose
On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote:
> But Joe makes a good point about RAID-Z and iSCSI.
>
> It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much
> to do that: parity computation on write, checksum verification on read
> and, if the checksum verification fails, c
On Jun 28, 2006, at 17:25, Erik Trimble wrote:
On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote:
On Jun 28, 2006, at 12:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N / 2) -1
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote:
> > Which is better -
> > zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
>
> The latter. With a mirror of RAID-5 arrays, you get:
>
> (1) Self-healing data.
>
> (2) Tolerance of whole-array failure.
>
> (3) Tolerance of *
On Wed, 2006-06-28 at 22:13 +0100, Peter Tribble wrote:
> On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
> > Given a reasonable number of hot-spares, I simply can't see the (very)
> > marginal increase in safety given by using HW RAID5 as outweighing the
> > considerable speed hit using RAID5
Robert Milkowski wrote On 06/28/06 15:52,:
Hello Neil,
Wednesday, June 21, 2006, 8:15:54 PM, you wrote:
NP> Robert Milkowski wrote On 06/21/06 11:09,:
Hello Neil,
Why is this option available then? (Yes, that's a loaded question.)
NP> I wouldn't call it an option, but an internal debug
Hello Erik,
Wednesday, June 28, 2006, 6:32:38 PM, you wrote:
ET> Robert -
ET> I would definitely like to see the difference between read on HW RAID5
ET> vs read on RAIDZ. Naturally, one of the big concerns I would have is
ET> how much RAM is needed to avoid any cache starvation on the ZFS
ET
Hello Peter,
Wednesday, June 28, 2006, 11:24:32 PM, you wrote:
PT> Robert,
>> PT> You really need some level of redundancy if you're using HW raid.
>> PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
>> PT> that. Seems to me that the simplest way to go is to use zfs to mirror
>
> Which is better -
> zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3) Tolerance of *at least* three disk failures.
(4) More IOPs than raidz of hardware mirror
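[In zpool terms, the recommended layout is just a ZFS mirror over two
array-provided RAID-5 LUNs -- a sketch only, the LUN names below are
placeholders:]
  # ZFS mirror of two hardware RAID-5 LUNs
  zpool create tank mirror hwRaid5Lun1 hwRaid5Lun2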
Hello Neil,
Wednesday, June 21, 2006, 8:15:54 PM, you wrote:
NP> Robert Milkowski wrote On 06/21/06 11:09,:
>> Hello Neil,
Why is this option available then? (Yes, that's a loaded question.)
>>
>> NP> I wouldn't call it an option, but an internal debugging switch that I
>> NP> originally ad
Hey Robert,
Well, not yet. Right now our top two priorities are improving
performance in multiple areas of zfs (soon there will be a performance
page tracking progress on the zfs community page), and also getting zfs
boot done. Hence, we're not currently working on heaps of brand new
features
On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote:
>
> On Jun 28, 2006, at 12:32, Erik Trimble wrote:
>
> > The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
> >
> >
> > ZFS mirror/ RAID5: capacity = (N / 2) -1
> >
> > speed
Robert,
> PT> You really need some level of redundancy if you're using HW raid.
> PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
> PT> that. Seems to me that the simplest way to go is to use zfs to mirror
> PT> HW raid5, preferably with the HW raid5 LUNs being completely
> PT>
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
> The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
>
> ZFS mirror/ RAID5: capacity = (N / 2) -1
> speed << N / 2 -1
> minimum # disks to lose before
On Jun 28, 2006, at 4:27 PM, Jim Connors wrote:
For an embedded application, I'm looking at creating a minimal
Solaris 10 U2 image which would include ZFS functionality. In
quickly taking a look at the opensolaris.org site under pkgdefs, I
see three packages that appear to be related to ZFS
For an embedded application, I'm looking at creating a minimal Solaris
10 U2 image which would include ZFS functionality. In quickly taking a
look at the opensolaris.org site under pkgdefs, I see three packages
that appear to be related to ZFS: SUNWzfskr, SUNWzfsr, and SUNWzfsu. Is
it naive t
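[One way such a minimal image might be assembled, as a sketch only: the
target root path is hypothetical, the packages are assumed to be spooled
in /var/spool/pkg, and any core dependencies are assumed to already be
present in the image:]
  # install just the three ZFS packages into the alternate root
  pkgadd -R /path/to/miniroot -d /var/spool/pkg SUNWzfskr SUNWzfsr SUNWzfsu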
Rich Teer wrote:
Hi all,
ISTR reading somewhere that ZFS likes a generous supply of RAM.
For an X4200 with a pair of 73 GB disks (for now), what would
constitute a "generous" amount of RAM? The server currently
has a total of 2 GB of RAM, and my gut is telling me that that
isn't enough...
TIA,
>
> Dennis,
>
> You are absolutely correct that the doc needs a step to verify
> that the backup occurred.
>
> I'll work on getting this step added to the admin guide ASAP.
>
Hey, I'm sorry that I triggered more work for you.
Never meant to do that. I was just a little lost as to how to get a
g
Hi all,
ISTR reading somewhere that ZFS likes a generous supply of RAM.
For an X4200 with a pair of 73 GB disks (for now), what would
constitute a "generous" amount of RAM? The server currently
has a total of 2 GB of RAM, and my gut is telling me that that
isn't enough...
TIA,
--
Rich Teer, SCN
Dennis,
You are absolutely correct that the doc needs a step to verify
that the backup occurred.
I'll work on getting this step added to the admin guide ASAP.
Thanks for the feedback...
Cindy
Dennis Clarke wrote:
Am I missing something here? [1]
Dennis
[1] I am fully prepared for RTFM and
On Wed, Jun 28, 2006 at 11:15:34AM +0200, Robert Milkowski wrote:
> DV> If ZFS is providing better data integrity than the current storage
> DV> arrays, that sounds like to me an opportunity for the next generation
> DV> of intelligent arrays to become better.
>
RM> Actually they can't.
RM> If yo
Doug,
Very nice setup! As you mention, more notes would be very helpful, but
very neat stuff!
Thanks,
Tabriz
Doug Scott wrote:
I have posted a blog http://solaristhings.blogspot.com/ on how I have
configured a zfs root partition on my laptop. It is a slightly modified version
of Tabriz's
Mark Shellenbaum wrote:
Can you give us an example of a 'file' the ssh-agent wishes to open and
what the permissions are on the file and also what privileges the
ssh-agent has, and what the expected results are.
The whole point is that ssh-agent should NEVER be opening any files that
the user
On Jun 28, 2006, at 12:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N / 2) -1
speed << N / 2 -1
minimum # disks to lose before loss of data:
Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N / 2) -1
speed << N / 2 -1
minimum # disks to lose before loss
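[A quick worked example of that capacity formula, assuming N = 12 disks:
two 6-disk hardware RAID-5 LUNs give 6 - 1 = 5 disks of usable space each,
and ZFS mirroring the two LUNs still leaves 5, i.e. (12 / 2) - 1, versus
11 disks' worth for a single 12-disk RAID-Z.]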
On Wed, Jun 21, 2006 at 04:34:59PM -0600, Mark Shellenbaum wrote:
> Can you give us an example of a 'file' the ssh-agent wishes to open and
> what the permissions are on the file and also what privileges the
> ssh-agent has, and what the expected results are.
ssh-agent(1) should need to open no f
Robert Milkowski wrote:
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT> You really need some level of redundancy if you're using HW raid.
PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT> that. Seems to
Hello Noel,
Wednesday, June 28, 2006, 5:59:18 AM, you wrote:
ND> a zpool remove/shrink type function is on our list of features we want
ND> to add.
ND> We have RFE
ND> 4852783 reduce pool capacity
ND> open to track this.
Is there someone actually working on this right now?
--
Best regards,
R
Depends on your definition of firmware. In higher-end arrays the data
is checksummed when it comes in and a hash is written when it gets to
disk. Of course this is nowhere near end-to-end, but it is better than
nothing.
... and code is code. Easier to debug is a context-sensitive term.
>Depends on your definition of firmware. In higher-end arrays the data is
>checksummed when it comes in and a hash is written when it gets to disk.
>Of course this is nowhere near end-to-end, but it is better than nothing.
The checksum is often stored with the data (so if the data is not writ
On Wed, 2006-06-28 at 09:05, [EMAIL PROTECTED] wrote:
> > But the point is that ZFS should also detect such errors and take
> > proper action. Other filesystems can't.
>
> Does it mean that ZFS can detect errors in ZFS's code itself? ;-)
In many cases, yes.
As a hypothetical: Consider a bug i
Jeremy Teo wrote:
Hello,
What I wanted to point out is Al's example: he wrote about damaged
data. Data were damaged by firmware, _not_ the disk surface! In such a
case ZFS doesn't help. ZFS can detect (and repair) errors on disk
surface, bad cables, etc. But it cannot detect and repair errors in
[EMAIL PROTECTED] wrote:
On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote:
What I wanted to point out is Al's example: he wrote about damaged data.
Data were damaged by firmware, _not_ the disk surface! In such a case
ZFS doesn't help. ZFS can detect (and repair) errors on disk surface
Hello przemolicc,
Wednesday, June 28, 2006, 3:05:42 PM, you wrote:
ppf> On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote:
>> Hello przemolicc,
>>
>> Wednesday, June 28, 2006, 10:57:17 AM, you wrote:
>>
>> ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
>> >> Case
Robert Milkowski wrote:
Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV> If ZFS is providing better data integrity than the current storage
DV> arrays, that sounds like to me an opportunity for the next generation
DV> of intelligent arrays to become better.
Actually they can
Hello,
What I wanted to point out is Al's example: he wrote about damaged data.
Data were damaged by firmware, _not_ the disk surface! In such a case ZFS
doesn't help. ZFS can detect (and repair) errors on disk surface, bad
cables, etc. But it cannot detect and repair errors in its (ZFS) code.
I
On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote:
> Hello przemolicc,
>
> Wednesday, June 28, 2006, 10:57:17 AM, you wrote:
>
> ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
> >> Case in point, there was a gentleman who posted on the Yahoo Groups solx86
> >> list
Hello przemolicc,
Wednesday, June 28, 2006, 10:57:17 AM, you wrote:
ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
>> Case in point, there was a gentleman who posted on the Yahoo Groups solx86
>> list and described how faulty firmware on a Hitachi HDS system damaged a
>> bunch of
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT> You really need some level of redundancy if you're using HW raid.
PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT> that. Seems to me that the simplest way
Hello Erik,
Tuesday, June 27, 2006, 6:50:52 PM, you wrote:
ET> Personally, I can't think of a good reason to use ZFS with HW RAID5;
ET> case (3) above seems to me to provide better performance with roughly
ET> the same amount of redundancy (not quite true, but close).
I can see a reason.
In o
Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV> If ZFS is providing better data integrity than the current storage
DV> arrays, that sounds like to me an opportunity for the next generation
DV> of intelligent arrays to become better.
Actually they can't.
If you want end-to-end
On Wed, 28 Jun 2006 [EMAIL PROTECTED] wrote:
> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
> > Case in point, there was a gentleman who posted on the Yahoo Groups solx86
> > list and described how faulty firmware on a Hitachi HDS system damaged a
> > bunch of data. The HDS system mo
eric kustarz wrote:
What's needed after that is a way (such as a script) to 'zfs send' all
the snapshots to the appropriate place.
And very importantly you need a way to preserve all of the options set
on the ZFS data set, otherwise IMO zfs send is no better than using an
archiver that uses POS
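[One way such a script might capture the dataset properties alongside the
stream -- a sketch only, with made-up dataset and file names, and no claim
that this covers every property you might care about:]
  # snapshot, send the stream, and record the dataset's properties
  zfs snapshot tank/home@backup1
  zfs send tank/home@backup1 > /backup/tank_home.zfs
  zfs get -H -o name,property,value,source all tank/home > /backup/tank_home.props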
On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
> Case in point, there was a gentleman who posted on the Yahoo Groups solx86
> list and described how faulty firmware on a Hitachi HDS system damaged a
> bunch of data. The HDS system moves disk blocks around, between one disk
> and another