There is an explicit check in ZFS for the checksum, as you deduced. I suspect
that by disabling this check you could recover much, if not all, of your data.
You could probably do this with mdb by 'simply' writing a NOP over the branch
in dmu_recv_stream.
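A rough sketch of what that might look like on x86, strictly on a scratch box
(the +0x1a4 offset is purely hypothetical; you have to find the real branch by
disassembling the function first, and NOP out every byte of that jump
instruction, not just one):

  # find the conditional branch taken on the checksum mismatch
  echo 'dmu_recv_stream::dis' | mdb -k

  # hypothetical: overwrite a 2-byte short jump at +0x1a4 with NOPs
  # (0x90 is the single-byte x86 NOP; the /v format writes one byte)
  echo 'dmu_recv_stream+0x1a4/v 90' | mdb -kw
  echo 'dmu_recv_stream+0x1a5/v 90' | mdb -kw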
It appears that 'zfs send' was designed
Jonathan Wheeler wrote:
> Thanks for the information, I'm learning quite a lot from all this.
>
> It seems to me that zfs send *should* be doing some kind of verification,
> since some work has clearly been put into zfs so that zfs filesystems can be dumped
> into files/pipes. It's a great feature to have,
Looking at what you wrote, you claim that hot plug events on ports 6 and 7
generally work, but other ports are not immediately discovered. Since
there is no special code for ports 6 & 7 and no one else has reported this
sort of behavior, it would make me think that you have a hardware issue.
Possi
paul wrote:
> Bob wrote:
>
>> ... Given the many hardware safeguards against single (and several) bit
>> errors,
>> the most common data error will be large. For example, the disk drive may
>> return data from the wrong sector.
>>
>
> - actually data integrity check bits as may exist with
On Wed, 13 Aug 2008, paul wrote:
> Short of extremely noisy hardware and/or a literal hard failure, most
> errors will most likely be expressed as 1 bit out of some
> very large number N of bits.
This claim ignores the fact that most computers today are still based
on synchronously clocked pa
> "jw" == Jonathan Wheeler <[EMAIL PROTECTED]> writes:
jw> A common example used all over the place is zfs send | ssh
jw> $host. In these examples is ssh guaranteeing the data delivery
jw> somehow?
it is really all just apologetics. It sounds like a zfs bug to me.
The only alte
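ssh does give you a reliable transport, but nothing end to end checks the file
that finally lands on disk. One hedged workaround (bash/ksh93 process
substitution; the dataset, host and paths below are made up) is to capture a
digest of the stream as it goes past and compare it on the far side:

  zfs send tank/home@today \
    | tee >(digest -a sha256 > /tmp/home.zfs.sha256) \
    | ssh backuphost 'cat > /backup/home.zfs'

  # compare the two digests before trusting the dump
  ssh backuphost 'digest -a sha256 /backup/home.zfs'
  cat /tmp/home.zfs.sha256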
Bob wrote:
> ... Given the many hardware safeguards against single (and several) bit
> errors,
> the most common data error will be large. For example, the disk drive may
> return data from the wrong sector.
- actually data integrity check bits as may exist within memory systems and/or
communi
Mario,
>> Latest BeleniX OpenSolaris uses the Caiman installer so it may be
>> worth installing it just to see what it is like. I installed it under
>> VirtualBox yesterday. Installing using "whole disk" did not work with
>> VirtualBox but the suggested default partitioning did work.
>>
On Wed, 13 Aug 2008, paul wrote:
> Given that the checksum algorithms utilized in zfs are already fairly CPU
> intensive, I can't help but wonder whether it has been verified that a
> majority of checksum inconsistency failures are single-bit, and whether it
> may be advantageous to utilize some comput
Thanks. This is useful to know. I see that 6282725 went into B63.
Regards,
Moinak.
Actually the SRDF copy has to be imported on the standby(R2) host if the
primary host/storage has to be offlined for some reason; but thanks for the
note.
Regards,
Moinak.
On Aug 13, 2008, at 5:58 AM, Moinak Ghosh wrote:
> I have to help setup a configuration where a ZPOOL on MPXIO on
> OpenSolaris is being used with Symmetrix devices with replication
> being handled via Symmetrix Remote Data Facility (SRDF).
> So I am curious whether anyone has used this confi
Given that the checksum algorithms utilized in zfs are already fairly CPU
intensive, I can't help but wonder whether it has been verified that a
majority of checksum inconsistency failures are single-bit, and whether it
may be advantageous to utilize some computationally simpler hybrid form of a
checksum/ha
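For anyone who wants to measure the CPU-cost side of this, the checksum
algorithm is already a per-dataset property, so the lighter fletcher variants
and sha256 can be compared on the same box (the dataset name is just an
example):

  # lighter-weight checksums
  zfs set checksum=fletcher2 tank/data
  zfs set checksum=fletcher4 tank/data

  # cryptographic checksum, noticeably more CPU per block
  zfs set checksum=sha256 tank/data

  # see what is currently in effect
  zfs get checksum tank/data

Note that changing the property only affects newly written blocks.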
I see that a driver patch has now been released for marvell88sx
hardware. I expect that this is the patch that Thumper owners have
been anxiously waiting for. The patch ID is 138053-02.
Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/u
Robert Milkowski wrote:
> Wednesday, August 13, 2008, 1:58:34 PM, you wrote:
>
> MG> I have to help setup a configuration where a ZPOOL on MPXIO on
> MG> OpenSolaris is being used with Symmetrix devices with replication
> MG> being handled via Symmetrix Remote Data Facility (SRDF).
> MG> So I
Hello Moinak,
Wednesday, August 13, 2008, 1:58:34 PM, you wrote:
MG> I have to help setup a configuration where a ZPOOL on MPXIO on
MG> OpenSolaris is being used with Symmetrix devices with replication
MG> being handled via Symmetrix Remote Data Facility (SRDF).
MG> So I am curious whether anyone
I used BCwipe to zero the drives. How do you "boot Knoppix again and zero out
the start and end sectors manually (erasing all GPT data)"?
thanks
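In case it helps, here is a hedged sketch of the manual version from a Knoppix
shell, assuming the disk shows up as /dev/sda. GPT keeps its primary copy in
the first 34 sectors and a backup copy in the last 33, so zeroing both ends of
the disk is enough:

  # disk size in 512-byte sectors
  SECTORS=$(blockdev --getsz /dev/sda)

  # wipe the primary GPT (protective MBR, header, partition entries)
  dd if=/dev/zero of=/dev/sda bs=512 count=34

  # wipe the backup GPT at the end of the disk
  dd if=/dev/zero of=/dev/sda bs=512 seek=$((SECTORS - 33)) count=33

Double-check the device name first; dd to the wrong disk is unrecoverable.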
Oh, Jeff's write script gives around 60MB/s IIRC.
Did you ever figure this out?
I have the same hardware: Intel DG33TL motherboard with Intel gigabit nic and
ICH9R but with Hitachi 1TB drives.
I'm getting 2MB/s write speeds.
I've tried the zeroing out trick. No luck.
Network is fine. Disks are fine; they write at around 50MB/s when formatted w
> Latest BeleniX OpenSolaris uses the Caiman installer so it may be
> worth installing it just to see what it is like. I installed it under
> VirtualBox yesterday. Installing using "whole disk" did not work with
> VirtualBox but the suggested default partitioning did work.
OpenSolaris 2008.05
On Tue, 12 Aug 2008, Lori Alt wrote:
>>
> There are no plans to add zfs root support to the existing
> install GUI. GUI install support for zfs root will be
> provided by the new Caiman installer.
Latest BeleniX OpenSolaris uses the Caiman installer so it may be
worth installing it just to see w
Miles Nordin <[EMAIL PROTECTED]>
> "cs" == Cromar Scott <[EMAIL PROTECTED]> writes:
cs> We opened a call with Sun support. We were told that the
cs> corruption issue was due to a race condition within ZFS. We
cs> were also told that the issue was known and was scheduled for
c
We have done several benchmarks on Thumpers. Config 1 is definitely better on
most of the loads. Some RAID1 configs perform better on certain loads.
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352
Thanks for the information, I'm learning quite a lot from all this.
It seems to me that zfs send *should* be doing some kind of verification, since
some work has clearly been put into zfs so that zfs filesystems can be dumped into
files/pipes. It's a great feature to have, and I can't believe that this wa
> "cs" == Cromar Scott <[EMAIL PROTECTED]> writes:
cs> We opened a call with Sun support. We were told that the
cs> corruption issue was due to a race condition within ZFS. We
cs> were also told that the issue was known and was scheduled for
cs> a fix in S10U6.
nice. Is the
> "jw" == Jonathan Wheeler <[EMAIL PROTECTED]> writes:
> "mp" == Mattias Pantzare <[EMAIL PROTECTED]> writes:
jw> Miles: zfs receive -nv works ok
one might argue 'zfs receive' should validate checksums with the -n
option, so you can check if a just-written dump is clean before
countin
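For reference, the dry run being discussed is just (pool name and path made
up):

  # -n: don't actually receive anything, -v: print what the stream describes
  zfs receive -nv tank/restore < /backup/justhome.zfs

and, as Jonathan reports above, it currently succeeds even on a stream whose
real receive later dies with a checksum error, which is exactly the gap being
pointed out.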
Mattias Pantzare wrote:
> 2008/8/13 Jonathan Wheeler <[EMAIL PROTECTED]>:
>> So far we've established that in this case:
>> *Version mismatches aren't causing the problem.
>> *Receiving across the network isn't the issue (because I have the exact same
>> issue restoring the stream directly on
>> m
2008/8/13 Jonathan Wheeler <[EMAIL PROTECTED]>:
> So far we've established that in this case:
> *Version mismatches aren't causing the problem.
> *Receiving across the network isn't the issue (because I have the exact same
> issue restoring the stream directly on
> my file server).
> *All that's l
Miles Nordin <[EMAIL PROTECTED]>
> "cs" == Cromar Scott <[EMAIL PROTECTED]> writes:
cs> It appears that the metadata on that pool became corrupted
cs> when the processor failed. The exact mechanism is a bit of a
cs> mystery,
[...]
cs> We were told that the probability of me
Hi Mattias & Miles.
To test the version mismatch theory, I set up a snv_91 VM (using VirtualBox) on
my snv_95 desktop and tried the zfs receive again. Unfortunately the symptoms
are exactly the same: around the 20GB mark, the justhome.zfs stream still
bombs out with the checksum error.
I didn
I have to help setup a configuration where a ZPOOL on MPXIO on OpenSolaris is
being used with Symmetrix devices with replication being handled via Symmetrix
Remote Data Facility (SRDF).
So I am curious whether anyone has used this configuration and has any
feedback/suggestions.
Will there be an