On Tue, Sep 12, 2006 at 03:56:00PM -0700, Matthew Ahrens wrote:
> Matthew Ahrens wrote:
[...]
> Given the overwhelming criticism of this feature, I'm going to shelve it for
> now.
I'd really like to see this feature. You say ZFS should change our view
of filesystems; I say follow through on that.
In ZFS
David Dyer-Bennet wrote:
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
I've read the proposal, and followed the discussion so far.
Just a "me too" mail:
On 13 Sep 2006, at 08:30, Richard Elling wrote:
Is this use of "slightly" based upon disk failure modes? That is, when
disks fail do they tend to get isolated areas of badness compared to
complete loss? I would suggest that complete loss should include
someone tripping over
Torrey McMahon wrote:
Richard Elling - PAE wrote:
Non-recoverable reads may not represent permanent failures. In the case
of a RAID array, the data should be reconstructed and a rewrite + verify
attempted with the possibility of sparing the sector. ZFS can
reconstruct the data and relocate the block.
Richard Elling - PAE wrote:
Non-recoverable reads may not represent permanent failures. In the case
of a RAID array, the data should be reconstructed and a rewrite + verify
attempted with the possibility of sparing the sector. ZFS can
reconstruct the data and relocate the block.
True but i
reply below...
Torrey McMahon wrote:
Richard Elling - PAE wrote:
This question was asked many times in this thread. IMHO, it is the
single biggest reason we should implement ditto blocks for data.
We did a study of disk failures in an enterprise RAID array a few
years ago. One failure mode
Richard Elling - PAE wrote:
This question was asked many times in this thread. IMHO, it is the
single biggest reason we should implement ditto blocks for data.
We did a study of disk failures in an enterprise RAID array a few
years ago. One failure mode stands head and shoulders above the
others
On 9/19/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
[pardon the digression]
David Dyer-Bennet wrote:
> On 9/18/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
>
>> Interestingly, the operation may succeed and yet we will get an error
>> which recommends replacing the drive. For example,
[pardon the digression]
David Dyer-Bennet wrote:
On 9/18/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
Interestingly, the operation may succeed and yet we will get an error
which recommends replacing the drive. For example, if the failure
prediction threshold is exceeded. You might also want to replace the
On 9/18/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
Interestingly, the operation may succeed and yet we will get an error
which recommends replacing the drive. For example, if the failure
prediction threshold is exceeded. You might also want to replace the
drive when there are no spare
more below...
David Dyer-Bennet wrote:
On 9/18/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
[apologies for being away from my data last week]
David Dyer-Bennet wrote:
> The more I look at it the more I think that a second copy on the same
> disk doesn't protect against very much real-world risk.
On 9/18/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
[apologies for being away from my data last week]
David Dyer-Bennet wrote:
> The more I look at it the more I think that a second copy on the same
> disk doesn't protect against very much real-world risk. Am I wrong
> here? Are partial(small) disk corruptions more common than I think?
[apologies for being away from my data last week]
David Dyer-Bennet wrote:
The more I look at it the more I think that a second copy on the same
disk doesn't protect against very much real-world risk. Am I wrong
here? Are partial(small) disk corruptions more common than I think?
I don't have
Neil A. Wilson wrote:
> This is unfortunate. As a laptop user with only a single drive, I was
> looking forward to it since I've been bitten in the past by data loss
> caused by a bad area on the disk. I don't care about the space
> consumption because
Matthew Ahrens wrote:
> Out of curiosity, what would you guys think about addressing this same
> problem by having the option to store some filesystems unreplicated on
> a mirrored (or raid-z) pool? This would have the same issues of
> unexpected space
Bill Sommerfeld wrote:
One question for Matt: when ditto blocks are used with raidz1, how well
does this handle the case where you encounter one or more single-sector
read errors on other drive(s) while reconstructing a failed drive?
for a concrete example
A0 B0 C0 D0 P0
A1 B1 C1
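Bill's scenario can be sketched numerically. The toy model below is hypothetical (not ZFS's actual resilver code): it treats a raidz1 stripe as four data blocks plus one XOR parity block. Single parity can rebuild exactly one missing column, so a failed drive plus a bad sector on another drive in the same stripe is unrecoverable from parity alone, while a ditto copy written to a different stripe on a different drive still survives.

```python
from functools import reduce

def xor(blocks):
    # XOR equal-length byte blocks together.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Hypothetical raidz1-style stripe: four data blocks plus one XOR parity.
data = [bytes([i]) * 4 for i in range(4)]   # A0, B0, C0, D0
parity = xor(data)                          # P0

def reconstruct(stripe, missing):
    # Single parity can rebuild exactly one missing column.
    if len(missing) != 1:
        return None                         # double failure in one stripe
    survivors = [blk for i, blk in enumerate(stripe) if i not in missing]
    return xor(survivors)

stripe = data + [parity]

# One failed drive: parity reconstructs the lost block.
assert reconstruct(stripe, {2}) == data[2]

# Failed drive *plus* a bad sector in the same stripe: parity alone fails...
assert reconstruct(stripe, {0, 2}) is None

# ...but a ditto copy of the block, stored in a different stripe,
# remains independently readable.
ditto_copy = bytes(data[0])
assert ditto_copy == data[0]
```

This is the intuition behind combining ditto blocks with raidz1: the second copy only has to dodge the *second* error, not the whole-drive failure.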
Bart Smaalders wrote:
Torrey McMahon wrote:
eric kustarz wrote:
I want per pool, per dataset, and per file - where all are done by
the filesystem (ZFS), not the application. I was talking about a
further enhancement to "copies" than what Matt is currently
proposing - per file "copies", but
Torrey McMahon wrote:
eric kustarz wrote:
I want per pool, per dataset, and per file - where all are done by the
filesystem (ZFS), not the application. I was talking about a further
enhancement to "copies" than what Matt is currently proposing - per
file "copies", but it's more work (one thi
eric kustarz wrote:
I want per pool, per dataset, and per file - where all are done by the
filesystem (ZFS), not the application. I was talking about a further
enhancement to "copies" than what Matt is currently proposing - per
file "copies", but it's more work (one thing being we don't have
On Wed, 2006-09-13 at 02:30, Richard Elling wrote:
> The field data I have says that complete disk failures are the exception.
> I hate to leave this as a teaser, I'll expand my comments later.
That matches my anecdotal experience with laptop drives; maybe I'm just
lucky, or maybe I'm just paying
Darren J Moffat wrote:
eric kustarz wrote:
So it seems to me that having this feature per-file is really useful.
Per-file with a POSIX filesystem is often not that useful. That is
because many applications (since you mentioned a presentation:
StarOffice, I know, does this) don't update the
On Tue, 12 Sep 2006, Matthew Ahrens wrote:
> Torrey McMahon wrote:
> > Matthew Ahrens wrote:
> >> The problem that this feature attempts to address is when you have
> >> some data that is more important (and thus needs a higher level of
> >> redundancy) than other data. Of course in some situations
On 9/13/06, Mike Gerdts <[EMAIL PROTECTED]> wrote:
The only part of the proposal I don't like is space accounting.
Double or triple charging for data will only confuse those apps and
users that check for free space or block usage.
Why exactly isn't reporting the free space divided by the "copies"
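The two accounting views being debated can be made concrete with a little arithmetic (illustrative numbers only, not ZFS's implementation):

```python
# Two ways the thread proposes to account for space under copies=N.

GB = 1024 ** 3
pool_free = 100 * GB
copies = 2
file_size = 1 * GB

# Option 1 (the proposal): charge the dataset for every physical copy.
charged = file_size * copies            # a 1 GB file shows as 2 GB used
free_after = pool_free - charged

# Option 2 (the question above): charge the logical size, but report
# free space divided by the "copies" setting.
reported_free = pool_free // copies     # 50 GB visible to the user
free_after_alt = reported_free - file_size

assert charged == 2 * GB
assert free_after == 98 * GB
assert free_after_alt == 49 * GB
```

Option 2 only works cleanly if every dataset in the pool shares one copies value; with per-dataset settings the divisor is ambiguous, which is presumably why the proposal charges for each copy instead.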
On 9/13/06, Richard Elling <[EMAIL PROTECTED]> wrote:
>> * Mirroring offers slightly better redundancy, because one disk from
>>each mirror can fail without data loss.
>
> Is this use of "slightly" based upon disk failure modes? That is, when
> disks fail do they tend to get isolated areas of badness compared to
eric kustarz wrote:
So it seems to me that having this feature per-file is really useful.
Per-file with a POSIX filesystem is often not that useful. That is
because many applications (since you mentioned a presentation:
StarOffice, I know, does this) don't update the file in place. Instead th
Torrey McMahon wrote:
eric kustarz wrote:
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address
David Dyer-Bennet wrote:
On 9/12/06, eric kustarz <[EMAIL PROTECTED]> wrote:
So it seems to me that having this feature per-file is really useful.
Say i have a presentation to give in Pleasanton, and the presentation
lives on my single-disk laptop - I want all the meta-data and the actual
presentation to be replicated.
Torrey McMahon wrote:
Matthew Ahrens wrote:
The problem that this feature attempts to address is when you have
some data that is more important (and thus needs a higher level of
redundancy) than other data. Of course in some situations you can use
multiple pools, but that is antithetical to ZFS
On Tue, Sep 12, 2006 at 03:56:00PM -0700, Matthew Ahrens wrote:
> The problem that this feature attempts to address is when you have some
> data that is more important (and thus needs a higher level of
> redundancy) than other data. Of course in some situations you can use
> multiple pools, but
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have
some data that is more important
[dang, this thread started on the one week this quarter that I don't have
any spare time... please accept this one comment, more later...]
Mike Gerdts wrote:
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
B. DESCRIPTION
A new property will be added, 'copies', which specifies how many copies
David Dyer-Bennet wrote:
The more I look at it the more I think that a second copy on the same
disk doesn't protect against very much real-world risk. Am I wrong
here? Are partial(small) disk corruptions more common than I think?
I don't have a good statistical view of disk failures.
I don't
On 9/12/06, eric kustarz <[EMAIL PROTECTED]> wrote:
So it seems to me that having this feature per-file is really useful.
Say i have a presentation to give in Pleasanton, and the presentation
lives on my single-disk laptop - I want all the meta-data and the actual
presentation to be replicated.
eric kustarz wrote:
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have
s
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have
some data that is more
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have some
data that is more important
On 9/12/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Matthew Ahrens wrote:
> Here is a proposal for a new 'copies' property which would allow
> different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have some
data that is more important (and thus needs
Darren said:
> Right, that is a very important issue. Would a
> ZFS "scrub" framework do copy on write?
> As you point out if it doesn't then we still need
> to do something about the old clear text blocks
> because strings(1) over the raw disk will show them.
>
> I see the desire to have a knob
On Tue, Sep 12, 2006 at 05:17:16PM +0100, Darren J Moffat wrote:
> I see the desire to have a knob that says "make this encrypted now" but
> I personally believe that it is actually better if you can make this
> choice at the time you create the ZFS data set.
Including when creating the dataset
On Tue, Sep 12, 2006 at 10:36:30AM +0100, Darren J Moffat wrote:
> Mike Gerdts wrote:
> >Is there anything in the works to compress (or encrypt) existing data
> >after the fact? For example, a special option to scrub that causes
> >the data to be re-written with the new properties could potentially do this.
Neil A. Wilson wrote:
Darren J Moffat wrote:
While encryption of existing data is not in scope for the first ZFS
crypto phase I am being careful in the design to ensure that it can be
done later if such a ZFS "framework" becomes available.
The biggest problem I see with this is one of observability, if not all
Darren J Moffat wrote:
While encryption of existing data is not in scope for the first ZFS
crypto phase I am being careful in the design to ensure that it can be
done later if such a ZFS "framework" becomes available.
The biggest problem I see with this is one of observability, if not all
of
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
I've read the proposal, and followed the discussion so far. I have to
say that I don
This proposal would benefit greatly from a "problem statement." As it stands, it
feels like a solution looking for a problem.
The Introduction mentions a different problem and solution, but then pretends that
there is value to this solution. The Description section mentions some benefits
of 'copies'
The multiple copies need to be thought out carefully for interactions
with ZFS crypto. I'm not sure what the impact is yet; it would
help to know at what layer in the ZIO pipeline this is done - e.g. today,
before or after compression.
--
Darren J Moffat
Dick Davies wrote:
On 12/09/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> The only real use I'd see would be for redundant copies
> on a single disk, but then why wouldn't I just add a disk?
Some systems have physical space for only a single drive - think most
laptops!
On 12/09/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> The only real use I'd see would be for redundant copies
> on a single disk, but then why wouldn't I just add a disk?
Some systems have physical space for only a single drive - think most
laptops!
True - I'm a laptop
Dick Davies wrote:
The only real use I'd see would be for redundant copies
on a single disk, but then why wouldn't I just add a disk?
Some systems have physical space for only a single drive - think most
laptops!
--
Darren J Moffat
On 12/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
Flexibility is always nice, but this seems to greatly complicate things,
both techni
Mike Gerdts wrote:
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
B. DESCRIPTION
A new property will be added, 'copies', which specifies how many copies
of the given filesystem will be stored. Its value must be 1, 2, or 3.
Like other properties (eg. checksum, compression), it only affects newly-written data.
James Dickens wrote:
though I think this is a cool feature, I think it needs more work. I
think there should be an option to make extra copies expendable. So the
extra copies are a request: if the space is available make them; if
not, complete the write and log the event.
Are you asking for the e
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
James Dickens wrote:
> On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
>> B. DESCRIPTION
>>
>> A new property will be added, 'copies', which specifies how many copies
>> of the given filesystem will be stored. Its value must be 1, 2, or 3.
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
> would the user be held acountable for the space used by the extra
> copies?
Doh! Sorry I forgot to address that. I'll amend the proposal and
manpage to include this information...
Yes, the space used by the extra copies will be accounted
Mike Gerdts wrote:
Is there anything in the works to compress (or encrypt) existing data
after the fact? For example, a special option to scrub that causes
the data to be re-written with the new properties could potentially do
this.
This is a long-term goal of ours, but with snapshots, this is
James Dickens wrote:
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
B. DESCRIPTION
A new property will be added, 'copies', which specifies how many copies
of the given filesystem will be stored. Its value must be 1, 2, or 3.
Like other properties (eg. checksum, compression), it only affects newly-written data.
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
B. DESCRIPTION
A new property will be added, 'copies', which specifies how many copies
of the given filesystem will be stored. Its value must be 1, 2, or 3.
Like other properties (eg. checksum, compression), it only affects
newly-written data.
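Assuming the property follows the existing `zfs set`/`zfs get` convention (the syntax below is a sketch of the proposal, not a shipped interface), usage would look like:

```shell
# Proposed 'copies' property: valid values are 1, 2, or 3.
zfs create tank/important
zfs set copies=2 tank/important      # blocks written from now on are stored twice
zfs get copies tank/important

# Like checksum and compression, this affects only newly-written data;
# blocks already on disk keep the number of copies in force when written.
```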
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
--matt
A. INTRODUCTION
ZFS stores multiple copies of all metadata. This is accomplished