Joerg Schilling wrote:
> What they failed to mention is that you need to access the whole disk
> frequently enough to give SMART the ability to work.
I thought modern disks could be instructed to do "offline scanning",
using any idle time available.
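For what it's worth, smartmontools can kick off that kind of scan by
hand; something like the following (the device path is just an example):

  # start the drive's SMART "immediate offline" data-collection scan
  smartctl -t offline /dev/rdsk/c0t0d0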
It turns out that even rather poor prediction accuracy is good enough to make a
big difference (10x) in the failure probability of a RAID system.
See Gordon Hughes & Joseph Murray, "Reliability and Security of RAID Storage
Systems and D2D Archives Using SATA Disk Drives", ACM Transactions on Storage.
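A toy calculation (my numbers, not the paper's) suggests why even
mediocre prediction pays off. Suppose each disk has a 3% annual failure
rate and a rebuild takes 10 hours: for a 2-way mirror, the chance the
survivor dies during any one rebuild is about 0.03 * (10/8760) ~= 3.4e-5.
Every failure that SMART flags early enough to copy the drive off
proactively never opens that emergency window at all, so if most
failures are predicted, the double-failure (data loss) probability drops
roughly in proportion, and an order of magnitude is within easy reach.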
I'd usually agree with that, but - if we have an opportunity to make
users love ZFS even more, why not at least investigate it.
A perfect example might be exactly what I did on one occasion, where I
copied a bunch of photos off a CF card. I then reformatted the CF card,
and cleaned up the
Hello eric,
Tuesday, February 20, 2007, 11:29:41 PM, you wrote:
ek> If you were able to send over your complete pool, destroy the
ek> existing one and re-create a new one using recv, then that should
ek> help with fragmentation. That said, that's a very poor man's
ek> defragger. The defragmentation should happen automatically or at
ek> least while the pool
I, for one, would love to have similar functionality that we had in good
old netware, where we could 'salvage' deleted files.
The concept was that when the files were deleted, they were not actually
removed, nor were the all-important references to the files to allow
undeleting them.
In t
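A crude sketch of the salvage idea in shell (the .salvage location and
both function names are mine, purely for illustration; this is not how
Netware or ZFS implements it):

  # "delete" by parking files in a salvage area instead of unlinking them
  salvage_rm() {
      mkdir -p /tank/.salvage            # hypothetical per-pool location
      mv -- "$@" /tank/.salvage/
  }
  # "salvage" a file by moving it back into the current directory
  salvage_undelete() {
      mv -- "/tank/.salvage/$1" .
  }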
On Feb 20, 2007, at 15:05, Krister Johansen wrote:
what's the minimum allocation size for a file in zfs? I get 1024B by
my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/
znode allocation) since we never pack file data in the inode/znode.
Is this a problem? Only if you're trying to pack a lot of small,
byte-sized files.
Hello eric,
Tuesday, February 20, 2007, 5:55:47 PM, you wrote:
ek> On Feb 15, 2007, at 6:08 AM, Robert Milkowski wrote:
>> Hello eric,
>>
>> Wednesday, February 14, 2007, 5:04:01 PM, you wrote:
>>
>> ek> I'm wondering if we can just lower the amount of space we're trying
>> ek> to alloc as the pool becomes more fragmented
>
> If you run a 'zpool scrub preplica-1', then the persistent error log
> will be cleaned up. In the future, we'll have a background scrubber
> to make your life easier.
>
> eric
Eric,
Great news! Are there any details about how this will be implemented
yet? I am most curious as to how
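For anyone following along at home, the manual version eric describes is
just (pool name taken from the earlier post):

  # scrub the whole pool; cleared errors drop out of the persistent
  # error log once the scrub completes
  zpool scrub preplica-1
  # check progress and whether the error list has been cleaned up
  zpool status preplica-1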
Roch
what's the minimum allocation size for a file in zfs? I get 1024B by
my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/
znode allocation) since we never pack file data in the inode/znode.
Is this a problem? Only if you're trying to pack a lot of small,
byte-sized files.
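This is easy to eyeball on a live pool. If the 512B-block plus
512B-dnode arithmetic above holds, a one-byte file should charge roughly
1K (the pool path is hypothetical, and compression or different metadata
sizes would change the numbers):

  $ echo x > /tank/tiny
  $ du -k /tank/tiny
  1       /tank/tiny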
On Feb 18, 2007, at 9:19 PM, Davin Milun wrote:
I have one that looks like this:
pool: preplica-1
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
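As an aside, adding -v makes zpool status name the damaged files
outright, which helps with the "restore the file in question" step (pool
and path below are illustrative):

  $ zpool status -v preplica-1
  ...
  errors: Permanent errors have been detected in the following files:
          /preplica-1/path/to/damaged/file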
>Got it, my assumption is undelete would only act on deleted files.
>Truncating or changing a file's data are not delete operations (unlink).
>You are starting to talk about versioning at that point -- in which case
>this issue becomes way more complicated. Applications may do multiple
>writes to
> > >
> > > There's a fundamental problem with an undelete facility.
> > >
> > >$ echo > FILE
> > >$ undelete FILE
> > >cannot undelete FILE: file exists
> >
> >
> > Why the assumption that an undelete command would be brain dead -- this
> > IS Unix. =) Seems like a low bar issue,
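Agreed. A hypothetical non-brain-dead undelete could simply sidestep the
collision, e.g. by restoring under a fresh name; the command and its
output below are imaginary:

  $ echo > FILE
  $ undelete FILE
  FILE exists; previous version restored as FILE.undeleted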
On Feb 15, 2007, at 6:08 AM, Robert Milkowski wrote:
Hello eric,
Wednesday, February 14, 2007, 5:04:01 PM, you wrote:
ek> I'm wondering if we can just lower the amount of space we're trying
ek> to alloc as the pool becomes more fragmented - we'll lose a little
ek> I/O performance, but i
On Tue, Feb 20, 2007 at 10:14:24AM -0600, [EMAIL PROTECTED] wrote:
>
> [EMAIL PROTECTED] wrote on 02/20/2007 08:10:59 AM:
>
> > On Tue, Feb 20, 2007 at 02:07:41PM +0100, Robert Milkowski wrote:
> > > Hello Jeremy,
> > >
> > > Monday, February 19, 2007, 1:58:18 PM, you wrote:
> > >
> > > >> Someth
[EMAIL PROTECTED] wrote on 02/20/2007 08:10:59 AM:
> On Tue, Feb 20, 2007 at 02:07:41PM +0100, Robert Milkowski wrote:
> > Hello Jeremy,
> >
> > Monday, February 19, 2007, 1:58:18 PM, you wrote:
> >
> > >> Something similar was proposed here before and IIRC someone even has a
> > >> working implementation. I don't know what happened to it.
Uwe,
It was also unclear to me that legacy mounts were causing your
troubles. The ZFS Admin Guide describes ZFS mounts and legacy
mounts, here:
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qs6?a=view
Richard, I think we need some more basic troubleshooting info, such
as this mount failure. I
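For the archives, the difference comes down to the mountpoint property
(dataset and paths below are just examples):

  # native ZFS management: ZFS mounts the dataset itself
  zfs set mountpoint=/export/home tank/home
  # legacy management: ZFS steps aside; you mount via mount(1M)/vfstab
  zfs set mountpoint=legacy tank/home
  mount -F zfs tank/home /export/home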
On Tue, Feb 20, 2007 at 02:07:41PM +0100, Robert Milkowski wrote:
> Hello Jeremy,
>
> Monday, February 19, 2007, 1:58:18 PM, you wrote:
>
> >> Something similar was proposed here before and IIRC someone even has a
> >> working implementation. I don't know what happened to it.
>
> JT> That would be me.
Hello Jeremy,
Monday, February 19, 2007, 1:58:18 PM, you wrote:
>> Something similar was proposed here before and IIRC someone even has a
>> working implementation. I don't know what happened to it.
JT> That would be me. AFAIK, no one really wanted it. The problem that it
JT> solves can be solved
>
> As I understand the issue, a readdirplus is
> 2X slower when data is already cached in the client
> than when it is not.
Yes, that's the issue. It's not always 2X slower, but ALWAYS SLOWER.
Two more of my runs on NFS/ZFS show:
1. real 3:14.185
   user   2.249
   sys   33.083
2.
Hello Nicholas,
Tuesday, February 20, 2007, 12:55:05 AM, you wrote:
>
On 2/19/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
5. there's no simple answer to this question as it greatly depends on workload and data.
One thing you should keep in mind - Solaris *has* to boot in a 6
Sorry to insist, but I am not aware of a small-file problem
with ZFS (which doesn't mean there isn't one, nor that we
agree on the definition of 'problem'). So if anyone has data
on this topic, I'm interested.
Also note, ZFS does a lot more than VxFS.
-r
Claude Teissedre writes:
> Hello Roch,
Richard Elling <[EMAIL PROTECTED]> wrote:
> >
> > Link to the paper is http://labs.google.com/papers/disk_failures.pdf
>
> As for the spares debate, that is easy: use spares :-)
What they failed to mention is that you need to access the whole disk
frequently enough to give SMART the ability to work.