I think this is user error: the man page explicitly says:

         -f       Forces import, even if the pool appears  to  be
                  potentially active.

and that's exactly what you did. If the behaviour were the same without the -f option, then I'd agree this was a bug.
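
For moving the pool between the two nodes, the intended sequence is to export it on the first node and then import it on the second; a rough sketch, using only the standard export/import commands and the pool name from your example:

         NODE2:../# zpool export swimmingpool   # cleanly release the pool on node 2
         NODE1:../# zpool import swimmingpool   # imports fine after a clean export

If you skip the export and try a plain "zpool import swimmingpool" (no -f) on the other node, it should refuse and warn that the pool appears to be active; -f exists precisely to override that check.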

HTH

Mathias F wrote:
Hi,

we are currently testing ZFS as a possible replacement for Veritas VM.
While testing, we ran into a serious problem which corrupted the whole filesystem.

First we created a standard RAID 10 pool (two mirrored pairs) with four disks.
[b]NODE2:../# zpool create -f swimmingpool mirror c0t3d0 c0t11d0 mirror c0t4d0 c0t12d0

NODE2:../# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
swimmingpool           33.5G     81K   33.5G     0%  ONLINE     -

NODE2:../# zpool status
pool: swimmingpool
state: ONLINE
scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        swimmingpool  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c0t11d0   ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c0t12d0   ONLINE       0     0     0
errors: No known data errors
[/b]

After that we created a new ZFS filesystem and copied a test file onto it.

[b]NODE2:../# zfs create swimmingpool/babe

NODE2:../# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
swimmingpool           108K  33.0G  25.5K  /swimmingpool
swimmingpool/babe     24.5K  33.0G  24.5K  /swimmingpool/babe

NODE2:../# cp /etc/hosts /swimmingpool/babe/
[/b]

Now we tested the behaviour when importing the pool on another system while it is
still imported on the first one.
The expected behaviour would be that the pool couldn't be imported, to prevent
corruption, but instead it is imported just fine!
We were now able to write simultaneously from both systems to the same filesystem:

[b]NODE1:../# zpool import -f swimmingpool
NODE1:../# man man > /swimmingpool/babe/man
NODE2:../# cat /dev/urandom > /swimmingpool/babe/testfile &
NODE1:../# cat /dev/urandom > /swimmingpool/babe/testfile2 &

NODE1:../# ls -l /swimmingpool/babe/
-r--r--r--   1 root     root           2194 Sep  8 14:31 hosts
-rw-r--r--   1 root     root          17531 Sep  8 14:52 man
-rw-r--r--   1 root     root     3830447920 Sep  8 16:20 testfile2

NODE2:../# ls -l /swimmingpool/babe/
-r--r--r--   1 root     root           2194 Sep  8 14:31 hosts
-rw-r--r--   1 root     root     3534355760 Sep  8 16:19 testfile
[/b]

Surely this can't be the intended behaviour.
Did we encounter a bug, or is this still under development?

--
Michael Schuster                  +49 89 46008-2974 / x62974
visit the online support center:  http://www.sun.com/osc/

Recursion, n.: see 'Recursion'