Re: [zfs-discuss] ZFS bug - CVE-2010-2392

2010-07-15 Thread Garrett D'Amore
On Thu, 2010-07-15 at 13:47 -0500, Dave Pooser wrote:
> Looks like the bug affects through snv_137. Patches are available from the usual location-- for OpenSolaris.
Got a CR number for this? (Or a link to where I can find out about the CVE number?)

[zfs-discuss] ZFS bug - CVE-2010-2392

2010-07-15 Thread Dave Pooser
Looks like the bug affects through snv_137. Patches are available from the usual location-- for OpenSolaris.
-- Dave Pooser, ACSA
Manager of Information Services
Alford Media  http://www.alfordmedia.com

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
-- From: Victor Latushkin  To: Gabriele Bulfon  Cc: zfs-discuss@opensolaris.org  Date: 28 June 2010, 16:14:12 CEST  Subject: Re: [zfs-discuss] ZFS bug - should I be worried about this?
On 28.06.10 16:16, Gabriele Bulfon wrote:
> Yes...they're still running...but being

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Garrett D'Amore
On Mon, 2010-06-28 at 05:16 -0700, Gabriele Bulfon wrote:
> Yes...they're still running...but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain
>
> Yes. Patches should be available.
> Or adoption may be lowering a lot...
I don't have ac

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Victor Latushkin
On 28.06.10 16:16, Gabriele Bulfon wrote:
> Yes...they're still running...but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain
Pool integrity is not affected by this issue.

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
Yes...they're still running...but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain
Yes. Patches should be available.
Or adoption may be lowering a lot...
-- This message posted from opensolaris.org

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Dick Hoogendijk
On 28-6-2010 12:13, Gabriele Bulfon wrote:
> *sweat* These systems are all running for years now...and I considered them safe... Have I been at risk all this time?!
They're still running, are they not? So, stop sweating. But you're right about the changed patching service from Oracle. It sucks

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
mmm...I double-checked some of the running systems. Most of them have the first patch (sparc-122640-05 and x86-122641-06), but not the second one (sparc-142900-09 and x86-142901-09)...
...I feel I'm right in the middle of the problem... How much am I risking?! These systems are all mirrored via
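A quick way to audit a fleet for the two patch pairs above is to grep the output of Solaris 10's `showrev -p` for each ID. A minimal sketch of that check follows; the `showrev_output` string is a canned stand-in for real `showrev -p` output (the patch IDs are the ones from the message, everything else is hypothetical):

```python
# Sketch: check simulated `showrev -p` output for the two x86 patch IDs
# named above. On a real system you would capture the command's output,
# e.g. with subprocess.run(["showrev", "-p"], ...).
showrev_output = """\
Patch: 122641-06 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWckr
Patch: 119255-70 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWswmt
"""

def missing_patches(output, wanted):
    """Return the subset of wanted patch IDs not present in the output."""
    installed = {line.split()[1] for line in output.splitlines()
                 if line.startswith("Patch: ")}
    return [p for p in wanted if p not in installed]

result = missing_patches(showrev_output, ["122641-06", "142901-09"])
print(result)  # ['142901-09'] -- the second (kernel) patch is missing
```

With real `showrev -p` output the same function would flag exactly the machines in the "first patch but not the second" state the poster describes.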

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
Yes, I did read it. And what worries me is patch availability...
-- This message posted from opensolaris.org
___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Ian Collins
On 06/28/10 08:15 PM, Gabriele Bulfon wrote:
> I found this today: http://blog.lastinfirstout.net/2010/06/sunoracle-finally-announces-zfs-data.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+LastInFirstOut+%28Last+In%2C+First+Out%29&utm_content=FriendFeed+Bot
> How can I be sure my

[zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
I found this today:
http://blog.lastinfirstout.net/2010/06/sunoracle-finally-announces-zfs-data.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+LastInFirstOut+%28Last+In%2C+First+Out%29&utm_content=FriendFeed+Bot
How can I be sure my Solaris 10 systems are fine? Is latest OpenSola

Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty
Of course I meant 2009.06 :-)
Trevor Pretty wrote:
> BTW Reading your bug. I assumed you meant?
> zfs set mountpoint=/home/pool tank
> ln -s /dev/null /home/pool
> I then tried on OpenSolaris 2008.11
> r...@norton:~# zfs set mountpoint=
> r...@norton:~# zfs set mountpoint=

Re: [zfs-discuss] zfs bug

2009-09-22 Thread Jeremy Kister
On 9/22/2009 11:17 PM, Trevor Pretty wrote:
> zfs set mountpoint=/home/pool tank
> ln -s /dev/null /home/pool
Ahha, I dumbed down the process too much (trying to make it simple to reproduce). The key is in the /Auto/pool snippet that I put in the CR, but switched to /dev/null in the reproduce

Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty
BTW Reading your bug. I assumed you meant?
zfs set mountpoint=/home/pool tank
ln -s /dev/null /home/pool
I then tried on OpenSolaris 2008.11
r...@norton:~# zfs set mountpoint=
r...@norton:~# zfs set mountpoint=/home/pool tank
r...@norton:~# zpool export tank
r...@norton:~# rm -r /home/p

Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty
Jeremy,
You sure? http://bugs.opensolaris.org/view_bug.do?bug_id=6883885
BTW: I only found this by hunting for one of my bugs (6428437) and changing the URL! I think the searching is broken - but using bugster has always been a black art, even when

[zfs-discuss] zfs bug

2009-09-22 Thread Jeremy Kister
I entered CR 6883885 at bugs.opensolaris.org. Someone closed it - not reproducible. Where do I find more information, like which planet's gravitational properties affect the ZFS source code??
-- Jeremy Kister http://jeremy.kister.net./

Re: [zfs-discuss] ZFS Bug: Value too large for defined data type

2008-01-06 Thread Jorgen Lundman
We had that with NetApps, and added this to /etc/system:
nfs:nfs_allow_preepoch_time=1
But that might be entirely unrelated. Lund
Sengor wrote:
> Hi,
> Not sure if it's the case here. However I've seen "Value too large for defined data type" errors on systems which had date (year) set
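The tunable above exists because files can legitimately carry pre-epoch (negative) timestamps, which some NFS stacks reject. A minimal sketch of what such a timestamp looks like to stat(), using an ordinary scratch file (nothing ZFS- or NFS-specific here; behavior assumes a filesystem that stores negative times, as common Linux filesystems do):

```python
import os
import tempfile

# Create a scratch file and date it one day before 1970-01-01 (a negative
# time_t), the kind of timestamp nfs_allow_preepoch_time is about.
fd, path = tempfile.mkstemp()
os.close(fd)
os.utime(path, (-86400, -86400))  # (atime, mtime)
mtime = int(os.stat(path).st_mtime)
print(mtime)
os.remove(path)
```

A client or protocol layer that models timestamps as unsigned (or otherwise cannot represent a negative seconds value) has to either reject such a file or mangle the time, which is why the knob to permit pre-epoch times exists.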

Re: [zfs-discuss] ZFS Bug: Value too large for defined data type

2008-01-06 Thread Sengor
Hi,
Not sure if it's the case here. However I've seen "Value too large for defined data type" errors on systems which had date (year) set incorrectly.
On 1/7/08, Arne Schwabe <[EMAIL PROTECTED]> wrote:
> Hi,
> I have a strange problem with a zfs filesystem.
> zfs scrub stuff reports no error

[zfs-discuss] ZFS Bug: Value too large for defined data type

2008-01-06 Thread Arne Schwabe
Hi, I have a strange problem with a zfs filesystem. zfs scrub stuff reports no errors.
[16:50]charon:...kaputt/Crossroads# pwd
/stuff/backups/kaputt/Crossroads
[16:51]charon:...kaputt/Crossroads# ls
01 - Introspection (Crossroads by Mind.In.A.Box).flac
[...]
[16:51]charon:...kaputt/Crossroads#
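For context on the error text itself: "Value too large for defined data type" is not a ZFS message but the standard strerror string for EOVERFLOW, which the kernel returns when a stat() result (file size, timestamp, inode number) does not fit the caller's data types - classically a 32-bit program without large-file support hitting a file over 2 GB. A one-liner shows the mapping (string shown is the usual glibc/Linux wording):

```python
import errno
import os

# EOVERFLOW is the errno behind "Value too large for defined data type".
msg = os.strerror(errno.EOVERFLOW)
print(errno.EOVERFLOW, msg)
```

So the error points at whichever program did the stat()/read(), not necessarily at the filesystem holding the data.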

Re: [zfs-discuss] zfs bug

2007-07-02 Thread Drew Perttula
For the record, I was able to get this same panic when I dd'd /dev/zero over a file that was part of my own test pool. Specifically, I had files of sizes 128M 192M 192M 256M in a simple pool with copies=2. I put a 109M file on the filesystem and ran 'dd count=1 bs=192M if=/dev/zero of=disk2'. Th
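The destructive step in that reproduction - zeroing one file that backs a vdev while leaving the rest of the file data alone - can be sketched on an ordinary scratch file, with no pool involved (sizes here are tiny stand-ins for the poster's 128M-256M files):

```python
import os
import tempfile

# Make a 4 KiB file of non-zero bytes, then overwrite just the first 1 KiB
# with zeros in place (the equivalent of dd with conv=notrunc), the way the
# poster clobbered one file-backed vdev with /dev/zero.
fd, path = tempfile.mkstemp()
os.write(fd, b"\xab" * 4096)
os.lseek(fd, 0, os.SEEK_SET)
os.write(fd, b"\x00" * 1024)
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 4096)
os.close(fd)
os.remove(path)

zeroed = data[:1024] == b"\x00" * 1024
untouched = data[1024:] == b"\xab" * 3072
print(zeroed, untouched)  # True True
```

With copies=2 the expectation is that ZFS reads the second copy of any damaged block; the panic reported above is what made this overwrite test interesting.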

Re: [zfs-discuss] zfs bug

2007-06-09 Thread Eric Schrock
You have created an unreplicated pool of the form:
    pool
      raidz  /export/sl1  /export/sl2  /export/sl3
      /export/sl4
I believe 'zpool add' will warn you about this, hence needing the '-f'. You then overwrite the entire cont

[zfs-discuss] zfs bug

2007-06-09 Thread Fyodor Ustinov
dd if=/dev/zero of=sl1 bs=512 count=256000
dd if=/dev/zero of=sl2 bs=512 count=256000
dd if=/dev/zero of=sl3 bs=512 count=256000
dd if=/dev/zero of=sl4 bs=512 count=256000
zpool create -m /export/test1 test1 raidz /export/sl1 /export/sl2 /export/sl3
zpool add -f test1 /export/sl4
dd if=/dev/zero of

Re: [zfs-discuss] ZFS bug

2006-07-04 Thread Eric Schrock
No, this is expected behavior due to the limitations of NFS. The problem is that .zfs/snapshot is technically a separate filesystem, but due to limitations in NFS (although mirror mounts might solve this), we have to present it as a single filesystem. This means that we have multiple filesystems
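The boundary Eric describes is visible to clients through stat(): every mounted filesystem reports its own device id (fsid over NFS), and a snapshot under .zfs/snapshot is its own filesystem even though it appears inside the parent's tree. A small illustration using ordinary local mounts (the paths are just examples of two separate filesystems on a typical Linux box, not ZFS):

```python
import os

# st_dev identifies which filesystem a path lives on; a boundary between
# two mounts shows up as a change in this value, which is exactly what an
# NFS client keys off when it refuses to cross into .zfs/snapshot.
devs = {path: os.stat(path).st_dev for path in ("/", "/tmp")}
for path, dev in devs.items():
    print(path, dev)
```

When the ids differ, a client that mounted only the parent filesystem knows it has wandered onto a different one, and without something like mirror mounts it cannot resolve file handles there.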

[zfs-discuss] ZFS bug

2006-07-03 Thread James Dickens
Hi, I found a bug; it's a bit hard to reproduce.
# zfs create pool2/t1
# touch /pool2/t1/file
# zfs snapshot pool2/[EMAIL PROTECTED]
# zfs clone pool2/[EMAIL PROTECTED] pool2/t2
# zfs share pool2/t2
On a second box, NFS-mount the filesystem; same error whether it's a Solaris Express box or Linux.
# mount e