On Sun, May 24, 2009 at 6:11 PM, Anil Gulecha wrote:
> One example is StormOS, an XFCE-based distro being built on NCP2.
> According to the latest blog entry... a release is imminent. Perhaps
> you'll have a better desktop experience with this. (www.stormos.org)
So. Tried it just now. In short: I'd s
I must admit that this question originates in the context of Sun's
Storage 7210 product, which imposes additional restrictions on the
kind of knobs I can turn.
But here's the question: suppose I have an installation where ZFS
is the storage for user home directories. Since I need quotas, each
direc
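(A minimal sketch of the per-user-filesystem pattern being described here,
assuming a hypothetical pool rpool and users alice and bob; quota is the
standard ZFS property for capping each filesystem:)

# zfs create rpool/home
# zfs create -o quota=10G rpool/home/alice
# zfs create -o quota=10G rpool/home/bob
# zfs get quota rpool/home/alice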
Frank Middleton writes:
> Exactly. My whole point. And without ECC there's no way of knowing.
> But if the data is damaged /after/ checksum but /before/ write, then
> you have a real problem...
we can't do much to protect ourselves from damage to the data itself
(an extra copy in RAM will help l
Maybe what you're saying is true wrt. NexentaCore 2.0. But hey, think
about open-source principles and the development process. We do hope that
NexentaCore will become an official Debian distribution some day! We are
evolving and driven completely by the community here. Anyone can
participate and fix the
Frank brings up some interesting ideas, some of which might
need some additional thought...
Frank Middleton wrote:
On 05/23/09 10:21, Richard Elling wrote:
This forum is littered with claims of "zfs checksums are broken" where
the root cause turned out to be faulty hardware or firmware in the
data path.
On Tue, 26 May 2009, Frank Middleton wrote:
Just asking if an option for machines with no ECC and their inevitable
memory errors is a reasonable thing to suggest in an RFE.
Machines lacking ECC do not suffer from "inevitable memory errors".
Memory errors are not like death and taxes.
Exactly.
Bob Friesenhahn wrote:
On Tue, 26 May 2009, Frank Middleton wrote:
Just asking if an option for machines with no ECC and their inevitable
memory errors is a reasonable thing to suggest in an RFE.
Machines lacking ECC do not suffer from "inevitable memory errors".
Memory errors are not like death and taxes.
On 26-May-09, at 10:21 AM, Frank Middleton wrote:
On 05/26/09 03:23, casper@sun.com wrote:
And where exactly do you get the second good copy of the data?
From the first. And if it is already bad, as noted previously, this
is no worse than the UFS/ext3 case. If you want total freedom from
this class of errors, use ECC.
On 25-May-09, at 11:16 PM, Frank Middleton wrote:
On 05/22/09 21:08, Toby Thain wrote:
Yes, the important thing is to *detect* them; no system can run
reliably with bad memory, and that includes any system with ZFS. Doing
nutty things like calculating the checksum twice does not buy anything
of value here.
On Tue, 26 May 2009, Frank Middleton wrote:
1) could be fixed in the documentation - "ZFS should be used with caution
on machines with no ECC since random bit flips can cause unrecoverable
checksum failures on mirrored drives". Or "ZFS isn't supported on
machines with memory that has no ECC".
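(A scrub is the usual way to surface checksum failures like these; a minimal
sketch with a hypothetical pool name tank. The CKSUM column of the status
output counts checksum errors per device, and -v lists any affected files:)

# zpool scrub tank
# zpool status -v tank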
On 05/26/09 03:23, casper@sun.com wrote:
And where exactly do you get the second good copy of the data?
From the first. And if it is already bad, as noted previously, this
is no worse than the UFS/ext3 case. If you want total freedom from
this class of errors, use ECC.
If you copy the c
On 05/23/09 10:21, Richard Elling wrote:
This forum is littered with claims of "zfs checksums are broken" where
the root cause turned out to be faulty hardware or firmware in the data
path.
I think that before you speculate on a redesign, we should get to
the root cause.
The hardware
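(A minimal sketch of how to look for faults in the data path on Solaris,
using only generic commands; nothing here is specific to the report above.
fmdump -eV dumps the raw FMA error reports, and iostat -En shows per-device
error counters:)

# fmdump -eV | more
# iostat -En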
On Tue, 26 May 2009 10:19:06 +0200
Willi Burmeister wrote:
> Hi,
>
> I'm trying to get Solaris 10U6 on an old V240 with two new Seagate disks
> using zfs as the root filesystem, but failed with this status:
>
> --
> # zpool status
So you recommend I also do speed tests on larger volumes? The test data I
had on the b114 server was only 90GB. Previous tests included 500G ufs
on zvol etc. It is just that it will take 4 days to send it to the b114
server to start with ;) (From Sol10 servers).
Lund
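(One way to separate disk-read speed from network and receive speed when
timing the larger sends; dataset, snapshot, and host names below are
hypothetical:)

# time zfs send zpool1/data@snap > /dev/null
# time zfs send zpool1/data@snap | ssh b114host zfs receive -F zpool1/recv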
Dirk Wriedt wrote:
Jorgen,
what is the size of the sending zfs?
I thought replication speed depends on the size of the sending fs too,
not only on the size of the snapshot being sent.
Regards
Dirk
--On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman wrote:
Sorry, yes. It is straight:
# time zfs send zpool1/l
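(To compare the size of the sending filesystem against the snapshot actually
being sent, zfs list shows both; dataset and snapshot names hypothetical:)

# zfs list -o name,used,referenced zpool1/data
# zfs list -o name,used,referenced zpool1/data@snap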
Hi,
I'm trying to get Solaris 10U6 on an old V240 with two new Seagate disks
using zfs as the root filesystem, but failed with this status:
--
# zpool status
pool: rpool
state: DEGRADED
status: One or more devices could not be
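(If a root-pool device really has failed, the usual follow-up on SPARC boxes
like the V240 is to replace the device and reinstall the boot block; the
device name below is hypothetical:)

# zpool status -v rpool
# zpool replace rpool c1t1d0s0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0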
>On 05/22/09 21:08, Toby Thain wrote:
>> Yes, the important thing is to *detect* them; no system can run reliably
>> with bad memory, and that includes any system with ZFS. Doing nutty
>> things like calculating the checksum twice does not buy anything of
>> value here.
>
>All memory is "bad" if i