admirable about locking up the OS
if there is enough redundancy to continue without that particular
chunk of metal.
--
Peter Bortas
> recommendation is to change the
> kernelbase. It worked for me. See:
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-March/046710.html
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-March/046715.html
Thanks Marc!
--
Peter Bortas
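For reference: as far as I understand those threads, the change boils
down to lowering the kernel's base address on 32-bit Solaris x86 so the
kernel, and with it the ZFS ARC, gets more virtual address space. A
minimal sketch, assuming an x86 box where kernelbase is settable via
eeprom; the value is only an example, so check it against the threads
linked above before using it:

    # Lower kernelbase to give the kernel more virtual address space
    # (example value; verify before using)
    eeprom kernelbase=0x80000000

    # Reboot so the new base address takes effect
    init 6

A lower kernelbase leaves less address space for 32-bit user processes,
so very large user programs may need the default restored.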
On Thu, Aug 7, 2008 at 5:32 AM, Peter Bortas <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 6, 2008 at 7:31 PM, Bryan Allen <[EMAIL PROTECTED]> wrote:
>>
>> Good afternoon,
>>
>> I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The
>>
to the
non-RAID bios version running on a single P4 CPU. Any known problems
with that configuration?
TIA,
--
Peter Bortas
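A quick way to sanity-check a box like that before trusting it with the
pool is to confirm whether the kernel is actually running 64-bit (a lone
P4 may well be on the 32-bit kernel, which brings back the kernelbase
issue above) and to watch how much memory the ARC is taking. A rough
sketch using stock Solaris tools; verify the exact kstat names locally:

    # 32-bit or 64-bit kernel?
    isainfo -kv

    # Current size of the ZFS ARC, in bytes
    kstat -p zfs:0:arcstats:size

    # Overall kernel vs. user memory breakdown
    echo ::memstat | mdb -k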
On 6/30/07, Peter Bortas <[EMAIL PROTECTED]> wrote:
> I'm currently doing a complete scrub, but according to zpool status's
> latest estimate it will be 63h before I know how that went...
The scrub has now completed with 0 errors, and there are no longer
any corruption errors reported.
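For anyone following along, the 63h figure comes straight out of zpool
status; a quick sketch, with "tank" standing in for the real pool name:

    # Start (or restart) a scrub
    zpool scrub tank

    # Progress, estimated time remaining, and any errors found so far
    zpool status -v tank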
On 6/30/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Peter Bortas wrote:
> According to the zdb dump, object 0 seems to be the DMU node on each
> file system. My understanding of this part of ZFS is very shallow, but
> why does it allow the filesystems to be mounted rw with dama
> ystems? Or are there redundant DMU nodes it's
> now using, and in that case, why doesn't it automatically fix the
> damaged ones?
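A sketch of the sort of zdb poking referred to above, with "tank/fs"
standing in for the real pool/filesystem names and assuming it is
acceptable to point zdb at the live pool:

    # List the objects in a filesystem; object 0 is the DMU meta-dnode
    zdb -dd tank/fs

    # Dump full details for one object (object 0 here)
    zdb -dddd tank/fs 0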
and/or it could be on
a server(s) that is less stable than the mailhost.
--
Peter Bortas
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss