Someone posted about CERN having a bad network card which injected faulty bits
into the data stream, and ZFS detected it because of its end-to-end checksums. Does
anyone have more information on this?
Hi,
> Someone posted about CERN having a bad network card which injected faulty
> bits into the data stream.
I've regularly heard people mention data corruption in the network layer here -
and at other physics sites like FNAL and SLAC - but I don't have details of any
specific incident. We do see our
On Wed, Aug 11, 2010 at 12:57 AM, Peter Taps wrote:
> Hi Eric,
>
> Thank you for your help. At least one part is clear now.
>
> I am still confused about how the system is still functional after one disk
> fails.
>
> Consider my earlier example of a 3-disk zpool configured for raidz1. To
> keep i
On Wed, Aug 11, 2010 at 4:05 AM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:
> Someone posted about CERN having a bad network card which injected faulty
> bits into the data stream, and ZFS detected it because of its end-to-end
> checksums. Does anyone have more information on this?
On Tue, 10 Aug 2010, seth keith wrote:
# zpool status
  pool: brick
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool fro
Erik Trimble wrote:
> On 8/10/2010 9:57 PM, Peter Taps wrote:
> > Hi Eric,
> >
> > Thank you for your help. At least one part is clear
> > now.
> >
> > I am still confused about how the system is still
> > functional after one disk fails.
> >
> > Consider my earlier example of a 3-disk zpool
> > configure
On Tue, August 10, 2010 23:13, Ian Collins wrote:
> On 08/11/10 03:45 PM, David Dyer-Bennet wrote:
>> cannot receive incremental stream: most recent snapshot of
>> bup-wrack/fsfs/zp1/ddb does not
>> match incremental source
> That last error occurs if the snapshot exists but has changed; it has
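If the destination has genuinely diverged and its local changes can be
discarded, one common way around that error (a sketch only; the source dataset
and snapshot names below are placeholders) is to let the receive roll the
destination back first with -F:

# zfs send -i tank/fs@earlier tank/fs@later | zfs receive -F bup-wrack/fsfs/zp1/ddb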
On Tue, August 10, 2010 16:41, Dave Pacheco wrote:
> David Dyer-Bennet wrote:
>> If that turns out to be the problem, that'll be annoying to work around
>> (I'm making snapshots every two hours and deleting them after a couple
>> of
>> weeks). Locks between admin scripts rarely end well, in my e
Just wanted to quickly post a bit of closure to this thread...
Most of the "import taking too long" threads I've found on the list tend to
fade out without any definitive answer as to what went wrong. I needed
something a bit more concrete to make me happy.
After zfs send'ing everything to a fre
Hello,
In OpenSolaris b111 with autoreplace=on and a pool without spares,
ZFS is not kicking off a resilver after a faulty disk is replaced and
shows up with the same device name, even after waiting several
minutes. The solution is to do a manual `zpool replace`, which returns
the following:
# zpoo
On Tue, Aug 10 at 21:57, Peter Taps wrote:
Hi Eric,
Thank you for your help. At least one part is clear now.
I am still confused about how the system is still functional after one disk
fails.
The data for any given sector striped across all drives can be thought
of as:
A+B+C = P
where A..C
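To make the recovery step concrete, here is a minimal sketch, assuming the "+"
above is the XOR used for single parity (illustrative shell arithmetic only,
not actual ZFS code): any one missing column can be rebuilt from the surviving
columns plus the parity.

A=0x5a; B=0x3c; C=0xf0
P=$(( A ^ B ^ C ))                                # the parity value stored with the stripe
printf 'parity P    = 0x%02x\n' "$P"              # prints 0x96
printf 'recovered B = 0x%02x\n' $(( A ^ C ^ P ))  # prints 0x3c: B rebuilt from A, C and P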
Thank you all for your help. It appears my understanding of parity was rather
limited. I kept thinking of memory parity, where the extra bit ensures that the
number of set bits across all 9 is always even.
In the case of ZFS, that type of checking is handled by the checksum instead.
I found a strange issue.
Let's say I have a ZFS filesystem export/test1, which is shared over NFSv3.
Then I run:
zfs create export/test1/test2
chown myuser /export/test1/test2
ls -l /export/test1/test2 (it shows that myuser is the owner).
But if I do ls -l on the NFS client where /export/test1 is mounted, I
On Wed, 11 Aug 2010, Seth Keith wrote:
When I do a zdb -l /dev/rdsk/ I get the same output for all my
drives in the pool, but I don't think it looks right:
# zdb -l /dev/rdsk/c4d0
What about /dev/rdsk/c4d0s0?
___
zfs-discuss mailing list
zfs-discu
Hi Giovanni,
The spare behavior and the autoreplace property behavior are separate
but they should work pretty well in recent builds.
You should not need to perform a zpool replace operation if the
autoreplace property is set. If autoreplace is set and a replacement
disk is inserted into the sam
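For reference, the property itself can be checked and toggled like this (the
pool name is just an example):

# zpool get autoreplace tank
NAME  PROPERTY     VALUE    SOURCE
tank  autoreplace  on       local
# zpool set autoreplace=on tank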
Peter wrote:
> One question though. Marty mentioned that raidz
> parity is limited to 3. But in my experiment, it
> seems I can get parity to any level.
>
> You create a raidz zpool as:
>
> # zpool create mypool raidzx disk1 disk2
>
> Here, x in raidzx is a numeric value indicating the
> d
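For what it's worth, the accepted raidz vdev types stop at triple parity, so
something along these lines (disk names are placeholders) covers the whole
range:

# zpool create tank1 raidz1 c1t0d0 c1t1d0 c1t2d0
# zpool create tank2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
# zpool create tank3 raidz3 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0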
This is for newbies like myself: I was using 'zdb -l' wrong. Just using the
drive name from 'zpool status' or format, which is like c6d1, didn't work. I
needed to add s0 to the end:
zdb -l /dev/dsk/c6d1s0
gives me a good-looking label (I think). The pool_guid values are the same
for all
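A small follow-up sketch for anyone checking the same thing (the device names
below are examples): it prints the pool_guid from each label so they can be
compared side by side.

for d in c4d0s0 c5d0s0 c6d0s0 c6d1s0; do
        echo "== $d =="
        zdb -l /dev/dsk/$d | grep -w pool_guid | sort -u
done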
I know that performance has been discussed often here, but I
have just gone through some testing in preparation for deploying a
large configuration (120 drives is a large configuration for me) and I
wanted to share my results, as well as to
see if anyone sees anyth
On Wed, 11 Aug 2010, seth keith wrote:
        NAME        STATE     READ WRITE CKSUM
        brick       DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c13d0   ONLINE       0     0     0
            c4d0
I am looking for references of folks using ZFS with either NFS
or iSCSI as the backing store for VMware (4.x) virtual machines.
We asked the local VMware folks and they had not
even heard of ZFS. Part of what we are looking for is a recommendation
for NFS or iSCSI, and all
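Not an answer to the NFS-vs-iSCSI question, but for anyone setting up a test,
the two backings look roughly like this on the ZFS side (the pool/dataset
names and the zvol size are made up):

For an NFS datastore:
# zfs create tank/vmware
# zfs set sharenfs=on tank/vmware

For an iSCSI datastore via COMSTAR (expose a zvol as a LUN):
# zfs create -V 500g tank/vmlun
# sbdadm create-lu /dev/zvol/rdsk/tank/vmlun
# stmfadm add-view <lu-guid-printed-by-create-lu>
# itadm create-target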
On Wed, Aug 11, 2010 at 10:36 AM, David Dyer-Bennet wrote:
>
> On Tue, August 10, 2010 16:41, Dave Pacheco wrote:
>> David Dyer-Bennet wrote:
>
>>> If that turns out to be the problem, that'll be annoying to work around
>>> (I'm making snapshots every two hours and deleting them after a couple
>>>
I'm stumbling over BugID 6961707 on build 134.
OpenSolaris Development snv_134 X86
Via the b134 Live CD, when I try to "zpool import -f -F -n rpool"
I get this helpful panic.
panic[cpu4]/thread=ff006cd06c60: zfs: allocating allocated
segment(offset=95698377728 size=16384)
ff006cd06580
p...@kraus-haus.org said:
> Based on these results, and our capacity needs, I am planning to go with 5
> disk raidz2 vdevs.
I did similar tests with a Thumper in 2008, with X4150/J4400 in 2009,
and more recently comparing X4170/J4400 and X4170/MD1200:
http://acc.ohsu.edu/~hakansom/thumper
I am running ZFS file system version 5 on Nexenta.
Peter
Thank you, Eric. Your explanation is easy to understand.
Regards,
Peter
Hi Paul,
I am using ESXi 4.0 with an NFS-on-ZFS datastore running on OSOL b134. It
previously ran on Solaris 10u7 with VMware Server 2.x. Disks are SATAs in a
JBOD over FC.
I'll try to summarize my experience here, although our system does not provide
services to end users and thus is not very st
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus
> Sent: Wednesday, August 11, 2010 3:53 PM
> To: ZFS Discussions
> Subject: [zfs-discuss] ZFS and VMware
>
>I am looking for references of folks
On Wed, Aug 11, 2010 at 4:06 PM, Cindy Swearingen
wrote:
> Hi Giovanni,
>
> The spare behavior and the autoreplace property behavior are separate
> but they should work pretty well in recent builds.
>
> You should not need to perform a zpool replace operation if the
> autoreplace property is set.
On Aug 11, 2010, at 04:05, Orvar Korvar wrote:
Someone posted about CERN having a bad network card which injected
faulty bits into the data stream, and ZFS detected it because of its
end-to-end checksums. Does anyone have more information on this?
CERN generally uses Linux AFAICT:
http:
Folks,
When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2,
etc. How do I get back this information for an existing pool? The status
command does not reveal this information:
# zpool status mypool
When this command is run, I can see the disks in use. However, I don't
On 12/08/10 09:21 AM, Peter Taps wrote:
Folks,
When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2,
etc. How do I get back this information for an existing pool? The status
command does not reveal this information:
# zpool status mypool
When this command is run, I
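For what it's worth, the config section of zpool status does name redundant
top-level vdevs by type (mirror, raidz1, raidz2, ...), so the answer is usually
visible there; a sketch with illustrative disk names:

# zpool status mypool
  pool: mypool
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0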
Hello,
I would like to back up my main zpool (originally called "data") inside an
equally originally named "backup" zpool, which will also hold other kinds of
backups.
Basically I'd like to end up with
backup/data
backup/data/dataset1
backup/data/dataset2
backup/otherthings/dataset1
backup/othe
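One way to get that layout (a sketch; the snapshot name is made up, and
backup/data must not already exist for the initial full receive) is a
recursive replication stream:

# zfs snapshot -r data@tobackup
# zfs send -R data@tobackup | zfs receive backup/data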
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
>I am looking for references of folks using ZFS with either NFS
> or iSCSI as the backing store for VMware (4.x) backing store for
I'll try to clearly separate what I know
On Wed, Aug 11, 2010 at 7:27 PM, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Paul Kraus
> >
> >I am looking for references of folks using ZFS with either NFS
> > or iSCSI as the backing store for VMwa
Greetings,
I am seeing some unexplained performance drop using the above CPUs,
on a fairly up-to-date build (late 145).
Basically, the system seems to be 98% idle, spending most of its time in this
stack:
unix`i86_mwait+0xd
unix`cpu_idle_mwait+0xf1
u
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tim Cook
> Sent: Wednesday, August 11, 2010 8:46 PM
> To: Edward Ned Harvey
> Cc: ZFS Discussions
> Subject: Re: [zfs-discuss] ZFS and VMware
>
>
>
> On Wed, Aug
>
>
>
> This is not entirely correct either. You're not forced to use VMFS.
>
It is entirely true. You absolutely cannot use ESX with a guest on a block
device without formatting the LUN with VMFS. You are *FORCED* to use VMFS.
You can format the LUN with VMFS, then put VM files inside the VMF
Actually, this brings up a related issue. Does anyone have experience
with running VirtualBox on iSCSI volumes vs NFS shares, both of which
would be backed by a ZFS server?
-Erik
On Wed, 2010-08-11 at 21:41 -0500, Tim Cook wrote:
>
>
>
> This is not entirely c
> -Original Message-
> From: Tim Cook [mailto:t...@cook.ms]
> Sent: Wednesday, August 11, 2010 10:42 PM
> To: Saxon, Will
> Cc: Edward Ned Harvey; ZFS Discussions
> Subject: Re: [zfs-discuss] ZFS and VMware
>
>
> I still think there are reasons why iSCSI would be
> better than NFS
>
>
>
> My understanding is that if you wanted to use MS Cluster Server, you'd need
> to use a LUN as an RDM for the quorum drive. VMDK files are locked when
> open, so they can't typically be shared. VMware's Fault Tolerance gets
> around this somehow, and I have a suspicion that their Lab Manager
I am having a similar issue at the moment. 3 GB RAM under ESXi, but dedup for
this zvol (1.2 TB) was turned off and only 300 GB was used. The pool does contain
other datasets with dedup turned on, but they are small enough that I'm not hitting
the memory limits (been there, tried that, never again withou
On 08/12/10 04:16, Steve Gonczi wrote:
Greetings,
I am seeing some unexplained performance drop using the above CPUs,
on a fairly up-to-date build (late 145).
Basically, the system seems to be 98% idle, spending most of its time in this
stack:
unix`i86_mwait+0xd
I have three zpools on a server and want to add a mirrored pair of ssd's for
the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools or
is it one ZIL SLOG device per zpool?
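For context, a log vdev belongs to exactly one pool, so sharing one SSD pair
across pools means giving each pool its own slice of the two SSDs. Attaching a
mirrored log looks like this (pool, device and slice names are placeholders):

# zpool add tank1 log mirror c7t0d0s0 c7t1d0s0
# zpool add tank2 log mirror c7t0d0s1 c7t1d0s1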
Hi,
I am looking for file system ownership information for a ZFS file system. I
would like to know the amount of space used and the number of files owned by
each user in the ZFS file system. I could get the per-user space using the 'zfs
userspace' command. However, I didn't find any switch to get the number of files
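A workaround sketch (the dataset, mount point and user name are examples):
per-user space comes from zfs userspace, and a per-user file count can be
gathered with find, although that is slow on large file systems.

# zfs userspace -o name,used export/home
# find /export/home -type f -user myuser | wc -l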
Hi James,
Appreciate your help.
Regards,
Peter