Hi forum,
I'm currently playing around a little with ZFS on my workstation.
I created a standard mirrored pool over 2 disk-slices.
# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
[...]
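For reference, a two-slice mirror like that is created along these lines (the slice names below are just placeholders for whichever slices you actually used):

# zpool create mypool mirror c0t0d0s0 c0t1d0s0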
> a product which is *not* currently multi-host-aware to behave in the
> same safe manner as one which is.
That's the point we figured out while testing it ;)
I just wanted to have our thoughts reviewed by other ZFS users.
Our next steps, if the failover had succeeded, would have been to cr
> I think I get the whole picture, let me summarise:
>
> - you create a pool P and an FS on host A
> - Host A crashes
> - you import P on host B; this only works with -f, as "zpool import"
>   otherwise refuses to do so.
> - now P is imported on B
> - host A comes back up and re-accesses P, there
Without the -f option, the pool can't be imported while it is still marked as in use ("reserved") by the other host, even if that host is down.
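To illustrate, the sequence on the failover node looks roughly like this (pool name taken from our test below; the exact error wording may vary between releases):

NODE2:../# zpool import swimmingpool      <- refused: pool may be in use by another system
NODE2:../# zpool import -f swimmingpool   <- forced import, overrides the in-use marker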
As I said, we are testing ZFS as a [b]replacement for VxVM[/b], which we are using atm. As a result our tests have failed and we have to keep on using Veritas.
Thanks for all your answers.
Well, we are using the -f parameter to test failover functionality.
If one system with a mounted ZFS pool is down, we have to force the import on the failover system.
But when the failed system comes online again, it remounts the pool without errors, so it ends up mounted simultaneously on both nodes.
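So the failure sequence is roughly this (NODE1/NODE2 are just our two cluster nodes):

- NODE1 goes down while it has swimmingpool imported.
- NODE2:../# zpool import -f swimmingpool   <- forced import while NODE1 is dead
- NODE1 boots and re-imports the pool automatically from its cached configuration (/etc/zfs/zpool.cache).
- The pool is now active on both nodes at once, which is what corrupts it.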
Hi,
we are testing ZFS atm as a possible replacement for Veritas VM.
While testing, we encountered a serious problem, which corrupted the whole
filesystem.
First we created a standard RAID 10 pool with 4 disks.
[b]NODE2:../# zpool create -f swimmingpool mirror c0t3d0 c0t11d0 mirror c0t4d0 c0t12d0[/b]
NO
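For completeness, the resulting layout can be checked with zpool status; on our version it shows the two mirrors striped together, roughly like this:

NODE2:../# zpool status swimmingpool
  pool: swimmingpool
 state: ONLINE
config:
        NAME          STATE     READ WRITE CKSUM
        swimmingpool  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c0t11d0   ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c0t12d0   ONLINE       0     0     0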