A question:
You're forcing the import of the pool on the other host. That disregards any safety checks, similar to a forced import of a Veritas disk group.
Does the same thing happen if you try to import the pool without the force option?
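If the pool hasn't been exported, a plain import should refuse and tell you to use -f. Roughly like this (a sketch from memory; the exact message text may vary by build):

NODE1:../# zpool import swimmingpool
cannot import 'swimmingpool': pool may be in use from another system
use '-f' to import anyway

The supported hand-off is to export on one host before importing on the other:

NODE2:../# zpool export swimmingpool
NODE1:../# zpool import swimmingpool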

On Sep 13, 2006, at 1:44 AM, Mathias F wrote:

Hi,

we are currently testing ZFS as a possible replacement for Veritas VM.
While testing, we encountered a serious problem that corrupted the whole filesystem.

First we created a standard RAID 10 pool with 4 disks (two mirrored pairs).
NODE2:../# zpool create -f swimmingpool mirror c0t3d0 c0t11d0 mirror c0t4d0 c0t12d0

NODE2:../# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
swimmingpool           33.5G     81K   33.5G     0%  ONLINE     -

NODE2:../# zpool status
pool: swimmingpool
state: ONLINE
scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        swimmingpool  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c0t11d0   ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c0t12d0   ONLINE       0     0     0

errors: No known data errors

After that we created a new ZFS filesystem and copied a test file onto it.

NODE2:../# zfs create swimmingpool/babe

NODE2:../# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
swimmingpool           108K  33.0G  25.5K  /swimmingpool
swimmingpool/babe     24.5K  33.0G  24.5K  /swimmingpool/babe

NODE2:../# cp /etc/hosts /swimmingpool/babe/

Next we tested the behaviour when importing the pool on a second system while it was still imported on the first one.
The expected behaviour would be that the pool couldn't be imported, to prevent corruption, but instead it imports just fine!
We were now able to write simultaneously from both systems to the same filesystem:

NODE1:../# zpool import -f swimmingpool
NODE1:../# man man > /swimmingpool/babe/man
NODE2:../# cat /dev/urandom > /swimmingpool/babe/testfile &
NODE1:../# cat /dev/urandom > /swimmingpool/babe/testfile2 &

NODE1:../# ls -l /swimmingpool/babe/
-r--r--r--   1 root     root           2194 Sep  8 14:31 hosts
-rw-r--r--   1 root     root          17531 Sep  8 14:52 man
-rw-r--r--   1 root     root     3830447920 Sep  8 16:20 testfile2

NODE2:../# ls -l /swimmingpool/babe/
-r--r--r--   1 root     root           2194 Sep  8 14:31 hosts
-rw-r--r--   1 root     root     3534355760 Sep  8 16:19 testfile

This can't possibly be the intended behaviour.
Did we encounter a bug, or is this still under development?
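For what it's worth, this is how we would check for the resulting damage afterwards (a sketch of the commands only; we haven't included the actual output here):

NODE2:../# zpool scrub swimmingpool
NODE2:../# zpool status -v swimmingpool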


-----
Gregory Shaw                            Programmer, SysAdmin
fmSoft, Inc.                            Network Planner
[EMAIL PROTECTED]                   And homebrewer...
Prayer belongs in schools like facts belong in organized religion. 
                                Superintendent Chalmers - "The Simpsons"



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
