On Oct 18, 2011, at 10:35, Brian Wilson wrote:
> Where ZFS doesn't have an fsck command - and that really used to bug me - it
> does now have a -F option on zpool import. To me it's the same functionality
> for my environment - the ability to try to roll back to a 'hopefully' good
> state and ...
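For anyone who hasn't used it, the recovery import being described looks
roughly like this; the pool name 'tank' is only an example:
# zpool import -nF tank    (dry run: report whether discarding the last few
                            transactions would make the pool importable)
# zpool import -F tank     (roll back to the last good transaction group and import)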
On Oct 18, 2011, at 20:35, Edward Ned Harvey wrote:
> In fact, I saw actual work started on this task about a month ago. So it's
> not just planned, it's really in the works. Now we're talking open source
> timelines here, which means, "you'll get it when it's ready," and nobody
> knows when ...
On Oct 18, 2011, at 20:26, Edward Ned Harvey wrote:
> Yes, but when scrub encounters uncorrectable errors, it doesn't attempt to
> correct them. Fsck will do things like recover lost files into the
> lost+found directory, and stuff like that...
You say "recover lost files" like you know that the ...
> From: Fajar A. Nugraha [mailto:w...@fajar.net]
> Sent: Tuesday, October 18, 2011 7:46 PM
>
> > * In btrfs, there is no equivalent or alternative to "zfs send | zfs
> > receive"
>
> Planned. No actual working implementation yet.
In fact, I saw actual work started on this task about a month ago. So it's
not just planned, it's really in the works. ...
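For context, the pipeline btrfs was still missing looks like this on the ZFS
side; the dataset and snapshot names below are made up:
# zfs snapshot tank/data@today
# zfs send tank/data@today | zfs receive backup/data
# zfs send -i @yesterday tank/data@today | zfs receive backup/data
(the last form sends only the increment since an earlier snapshot)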
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> On Wed, 19 Oct 2011, Peter Jeremy wrote:
> >> Doesn't a scrub do more than what 'fsck' does?
> >
> > It does different things. I'm not sure about "more".
>
> Zfs scrub validates user data while 'fsck' does not. I consider that as
> being definitely "more".
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tim Cook
>
> I had and have redundant storage, it has *NEVER* automatically fixed
> it. You're the first person I've heard that has had it automatically fix
> it.
That's probably just because ...
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
> I have done a "poor man's" rebalance by copying data after adding
> devices. I know this is not a substitute for a real online rebalance,
> but it gets the job done (if you can ...)
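A sketch of that poor man's rebalance, with made-up names (pool 'tank',
filesystem 'tank/data', new disk c3t0d0). Since receive rewrites every block,
the allocator spreads the copy across all vdevs, including the new one:
# zpool add tank c3t0d0
# zfs snapshot tank/data@rebal
# zfs send tank/data@rebal | zfs receive tank/data.new
# zfs rename tank/data tank/data.old
# zfs rename tank/data.new tank/data
# zfs destroy -r tank/data.old    (only after verifying the copy)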
On Tue, Oct 18, 2011 at 7:18 PM, Edward Ned Harvey
wrote:
> I recently put my first btrfs system into production. Here are the
> similarities/differences I noticed between btrfs and zfs:
>
> Differences:
> * Obviously, one is meant for linux and the other solaris (etc)
> * In btrfs, there is only raid1. ...
On Tue, Oct 18, 2011 at 8:38 PM, Gregory Shaw wrote:
> I came to the conclusion that btrfs isn't ready for prime time. I'll
> re-evaluate as development continues and the missing portions are provided.
For someone with an @oracle.com email address, you could probably arrive
at that conclusion fast ...
On Wed, 19 Oct 2011, Peter Jeremy wrote:
Doesn't a scrub do more than what 'fsck' does?
It does different things. I'm not sure about "more".
Zfs scrub validates user data while 'fsck' does not. I consider that
as being definitely "more".
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us
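For reference, kicking off that validation and reading the result is just
(pool name is an example):
# zpool scrub tank
# zpool status -v tank    (the scan line shows progress and repairs; the
                           per-device CKSUM column counts detected errors)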
On 2011-Oct-18 23:18:02 +1100, Edward Ned Harvey
wrote:
>I recently put my first btrfs system into production. Here are the
>similarities/differences I noticed between btrfs and zfs:
Thanks for that.
>* zfs has storage tiering. (cache & log devices, such as SSD's to
>accelerate performance) ...
On 10/19/11 09:31 AM, Tim Cook wrote:
I had and have redundant storage, it has *NEVER* automatically fixed
it. You're the first person I've heard that has had it automatically
fix it.
I'm another; I have had many cases of ZFS fixing corrupted data on a
number of different pool configurations ...
On Tue, Oct 18, 2011 at 10:31 PM, Tim Cook wrote:
>
>
> I had and have redundant storage, it has *NEVER* automatically fixed it.
> You're the first person I've heard that has had it automatically fix it.
Well, here comes another person - I have ZFS automatically fixing
corrupted data on a number ...
On Tue, Oct 18, 2011 at 4:31 PM, Tim Cook wrote:
> I had and have redundant storage, it has *NEVER* automatically fixed it.
> You're the first person I've heard that has had it automatically fix it.
I have had ZFS automatically repair corrupted raw data when one
component of the redundancy ...
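Those automatic repairs do leave visible traces even though no manual step is
involved; again, the pool name is only an example:
# zpool status -v tank    (non-zero CKSUM counts are blocks ZFS detected as
                           bad and rewrote from redundancy)
# zpool clear tank        (reset the error counters once you've noted them)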
On Tue, Oct 18, 2011 at 3:27 PM, Peter Tribble wrote:
> On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook wrote:
> >
> >
> > On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble
> > wrote:
> >>
> >> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
> >> >
> >> > Every scrub I've ever done that has found an error required manual fixing. ...
On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook wrote:
>
>
> On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble
> wrote:
>>
>> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
>> >
>> > Every scrub I've ever done that has found an error required manual
>> > fixing.
>> > Every pool I've ever created has been raid-z or raid-z2, so the silent healing ...
On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble wrote:
> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
> >
> > Every scrub I've ever done that has found an error required manual
> fixing.
> > Every pool I've ever created has been raid-z or raid-z2, so the silent
> > healing, while a great story, has never actually happened in practice ...
On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
>
> Every scrub I've ever done that has found an error required manual fixing.
> Every pool I've ever created has been raid-z or raid-z2, so the silent
> healing, while a great story, has never actually happened in practice in any
> environment I've ...
On Tue, Oct 18, 2011 at 2:41 PM, Kees Nuyt wrote:
> On Tue, 18 Oct 2011 12:05:29 -0500, Tim Cook wrote:
>
> >> Doesn't a scrub do more than what
> >> 'fsck' does?
> >>
> > Not really. fsck will work on an offline filesystem to correct errors and
> > bring it back online. Scrub won't even work until the filesystem is
> > already imported and online. ...
On Tue, 18 Oct 2011 12:05:29 -0500, Tim Cook wrote:
>> Doesn't a scrub do more than what
>> 'fsck' does?
>>
> Not really. fsck will work on an offline filesystem to correct errors and
> bring it back online. Scrub won't even work until the filesystem is already
> imported and online. If it's ...
On 10/19/11 01:18 AM, Edward Ned Harvey wrote:
I recently put my first btrfs system into production. Here are the
similarities/differences I noticed between btrfs and zfs:
Differences:
* Obviously, one is meant for linux and the other solaris (etc)
* In btrfs, there is only raid1. ...
On 10/19/11 03:12 AM, Paul Kraus wrote:
On Tue, Oct 18, 2011 at 9:13 AM, Darren J Moffat
wrote:
On 10/18/11 14:04, Jim Klimov wrote:
2011-10-18 16:26, Darren J Moffat wrote:
ZFS does slightly bias new vdevs for new writes so that we will get
to a more even spread. It doesn't go and move already written blocks ...
On Tue, Oct 18, 2011 at 12:53 PM, Cindy Swearingen
wrote:
> Your 1-3 is very sensible advice
Unfortunately, I don't think I have ever seen the recommendations
I made stated quite so plainly.
>and I must ask about this
> statement:
>>I have yet to have any data loss with ZFS.
>
> Maybe this goes without saying, but I think you are using ZFS redundancy.
In message <4e9db04b.80...@oracle.com>, Cindy Swearingen writes:
>This is CR 7102272.
Anyone out there have Western Digital's competing 3TB Passport
drive handy to duplicate this bug?
John
groenv...@acm.org
On Tue, Oct 18, 2011 at 11:46 AM, Mark Sandrock wrote:
>
> On Oct 18, 2011, at 11:09 AM, Nico Williams wrote:
>
> > On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote:
> >> I just wanted to add something on fsck on ZFS - because for me that used to
> >> make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime environments. ...
On 10/18/11 11:46 AM, Mark Sandrock wrote:
On Oct 18, 2011, at 11:09 AM, Nico Williams wrote:
On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote:
I just wanted to add something on fsck on ZFS - because for me that used to
make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime environments.
This is CR 7102272.
cs
On 10/18/11 10:50, John D Groenveld wrote:
In message <4e9da8b1.7020...@oracle.com>, Cindy Swearingen writes:
1. If you re-create the pool on the whole disk, like this:
# zpool create foo c1t0d0
Then, resend the prtvtoc output for c1t0d0s0.
# zpool create snafu c1t0d0
Hi Paul,
Your 1-3 is very sensible advice and I must ask about this
statement:
>I have yet to have any data loss with ZFS.
Maybe this goes without saying, but I think you are using
ZFS redundancy.
Thanks,
Cindy
On 10/18/11 08:52, Paul Kraus wrote:
On Tue, Oct 18, 2011 at 9:38 AM, Gregory Shaw wrote: ...
In message <4e9da8b1.7020...@oracle.com>, Cindy Swearingen writes:
>1. If you re-create the pool on the whole disk, like this:
>
># zpool create foo c1t0d0
>
>Then, resend the prtvtoc output for c1t0d0s0.
# zpool create snafu c1t0d0
# zpool status snafu
pool: snafu
state: ONLINE
scan: none requested
On Oct 18, 2011, at 11:09 AM, Nico Williams wrote:
> On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote:
>> I just wanted to add something on fsck on ZFS - because for me that used to
>> make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime environments.
>> Where ZFS doesn't have an fsck command - and that really used to bug me -
>> it does now have a -F option on zpool import. ...
Yeah, okay, duh. I should have known that large sector size
support is only available for a non-root ZFS file system.
A couple more things if you're still interested:
1. If you re-create the pool on the whole disk, like this:
# zpool create foo c1t0d0
Then, resend the prtvtoc output for c1t0d0s0.
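That is, the sequence being requested, reusing the device name from the
thread:
# zpool create foo c1t0d0
# prtvtoc /dev/rdsk/c1t0d0s0    (print the partition map of the relabeled disk)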
On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote:
> I just wanted to add something on fsck on ZFS - because for me that used to
> make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime environments.
> Where ZFS doesn't have an fsck command - and that really used to bug me - it
> does now have a -F option on zpool import. ...
In message <4e9d98b1.8040...@oracle.com>, Cindy Swearingen writes:
>I'm going to file a CR to get this issue reviewed by the USB team
>first, but if you could humor me with another test:
>
>Can you run newfs to create a UFS file system on this device
>and mount it?
# uname -srvp
SunOS 5.11 151.0.1
Hi John,
I'm going to file a CR to get this issue reviewed by the USB team
first, but if you could humor me with another test:
Can you run newfs to create a UFS file system on this device
and mount it?
Thanks,
Cindy
On 10/18/11 08:18, John D Groenveld wrote:
In message <201110150202.p9f22w2n000...@elvis.arl.psu.edu>, ...
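The test being asked for amounts to the following, with the slice name assumed
from the thread (newfs takes the raw device, mount the block device):
# newfs /dev/rdsk/c1t0d0s0
# mount /dev/dsk/c1t0d0s0 /mnt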
On Tue, Oct 18, 2011 at 9:38 AM, Gregory Shaw wrote:
> Another item that made me nervous was my experience with ZFS. Even when
> called 'ready for production', a number of bugs were found that were pretty
> nasty.
> They've since been fixed (years ago), but there were some surprises there ...
On 10/18/11 07:18 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Harry Putnam
As a common slob who isn't very skilled, I like to see some commentary
> from some of the pros here as to any comparison of zfs against btrfs.
In message <201110150202.p9f22w2n000...@elvis.arl.psu.edu>, John D Groenveld
writes:
>I'm baffled why zpool import is unable to find the pool on the
>drive, but the drive is definitely functional.
Per Richard Elling, it looks like ZFS is unable to find
the requisite labels for importing.
John
groenv...@acm.org
On Tue, 18 Oct 2011, Gregory Shaw wrote:
I'm seriously thinking about converting the Linux system in question
into a FreeBSD system so that I can use ZFS.
FreeBSD is a wonderfully stable, coherent, and well-documented system
which has stood the test of time and has an excellent development ...
Gregory Shaw writes:
> I looked into btrfs some time ago for the same reasons. I had a Linux
> system that I wanted to do more intelligent things with storage.
Great details, thanks.
On Tue, Oct 18, 2011 at 9:13 AM, Darren J Moffat
wrote:
> On 10/18/11 14:04, Jim Klimov wrote:
>>
> >> 2011-10-18 16:26, Darren J Moffat wrote:
>>>
> >>> ZFS does slightly bias new vdevs for new writes so that we will get
> >>> to a more even spread. It doesn't go and move already written blocks
> >>> ...
Edward Ned Harvey
writes:
> I recently put my first btrfs system into production. Here are the
> similarities/differences I noticed between btrfs and zfs:
Great input, thanks for the details.
I looked into btrfs some time ago for the same reasons. I had a Linux system
that I wanted to do more intelligent things with storage.
However, I reverted to Ext3/4 and MD because of the portions of btrfs that
haven't been completed. It seems that btrfs development is very slow, which
doesn't ...
On 10/18/11 14:04, Jim Klimov wrote:
2011-10-18 16:26, Darren J Moffat wrote:
On 10/18/11 13:18, Edward Ned Harvey wrote:
* btrfs is able to balance. (after adding new blank devices, rebalance, so
the data & workload are distributed across all the devices.) zfs is not
able to do this yet.
ZFS does slightly bias new vdevs for new writes ...
2011-10-18 16:26, Darren J Moffat wrote:
On 10/18/11 13:18, Edward Ned Harvey wrote:
* btrfs is able to balance. (after adding new blank devices, rebalance, so
the data & workload are distributed across all the devices.) zfs is not
able to do this yet.
ZFS does slightly bias new vdevs for new writes so that we will get to a
more even spread. ...
On 10/18/11 13:18, Edward Ned Harvey wrote:
* btrfs is able to balance. (after adding new blank devices, rebalance, so
the data & workload are distributed across all the devices.) zfs is not
able to do this yet.
ZFS does slightly bias new vdevs for new writes so that we will get to
a more even spread. ...
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Harry Putnam
>
> FreeNAS and FreeBSD.
>
> Maybe you can give a little synopsis of those too. I mean when it
> comes to utilizing zfs; is it much the same as if running it on
> solaris?
For ...
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Harry Putnam
>
> As a common slob who isn't very skilled, I like to see some commentary
> from some of the pros here as to any comparison of zfs against btrfs.
I recently put my first btrfs system into production. ...