> As others have pointed out you could use the fully supported alternate
> root support for this.
>
> The "zpool create -R" and "zpool import -R" commands allow you to
> specify an alternate root under which the pool's filesystems are mounted.
Yes. I tried that. It should work well.
In addition, I'm happy to note that '-R /' appears to be valid, allowing
all the filesystems to be mounted at their normal locations.
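A short sketch of what was tried (pool and path names are illustrative):

    # import under an explicit alternate root: filesystems mount under
    # /a, e.g. tank/home ends up at /a/tank/home
    zpool import -R /a tank

    # with '-R /' the altroot is "/", so everything lands at its normal
    # mountpoint while keeping alternate-root (temporary) semantics
    zpool import -R / tank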
Darren Dunham wrote:
> Exactly. What method could such a framework use to ask ZFS to import a
> pool *now*, but not also automatically at next boot? (How does the
> upcoming SC do it?)
I don't know how Sun Cluster does it and I don't know where the source is.
As others have pointed out you could use the fully supported alternate
root support for this.
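In other words, the alternate-root import is the "now, but not at next
boot" mechanism: a pool imported with an altroot is, as documented, not
added to /etc/zfs/zpool.cache, and only cached pools are auto-imported
at boot. Roughly (pool name illustrative):

    # import for this boot only; not recorded in /etc/zfs/zpool.cache
    zpool import -R / tank

    # after the next boot the pool stays exported until imported again;
    # a plain import is the persistent variant, when you do want it
    zpool import tank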
> > Again, the difference is that with UFS your filesystems won't auto
> > mount at boot. If you repeated with UFS, you wouldn't try to mount
> > until you decided you should own the disk.
>
> Normally on Solaris UFS filesystems are mounted via /etc/vfstab so yes
> they will probably automatically mount at boot.
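For reference, a UFS filesystem automounts at boot only if its vfstab
entry says so (device names here are illustrative):

    # /etc/vfstab: device  device-to-fsck  mountpoint  type  pass  boot  opts
    /dev/dsk/c0t0d0s6  /dev/rdsk/c0t0d0s6  /export  ufs  2  yes  -

Setting the mount-at-boot field to "no" gives the behaviour described
above, where nothing mounts until the cluster framework decides it
should own the disk.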
Frank Cusack wrote:
On September 13, 2006 7:07:40 PM -0700 Richard Elling
<[EMAIL PROTECTED]> wrote:
Dale Ghent wrote:
James C. McPherson wrote:
As I understand things, SunCluster 3.2 is expected to have support
for HA-ZFS
and until that version is released you will not be running in a
supported configuration.
Dale Ghent wrote:
James C. McPherson wrote:
As I understand things, SunCluster 3.2 is expected to have support
for HA-ZFS
and until that version is released you will not be running in a
supported
configuration and so any errors you encounter are *your fault
alone*.
Still, after reading Mathias's description, it seems that the former
node is doing an implicit forced import when it boots back up. This
seems wrong to me.
On September 13, 2006 4:33:31 PM -0700 Frank Cusack <[EMAIL PROTECTED]> wrote:
You'd typically have a dedicated link for heartbeat; what if that cable
gets yanked or that NIC port dies? The backup system could avoid mounting
the pool if zfs had its own heartbeat. What if the cluster software ...
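A toy illustration of the on-disk heartbeat idea being proposed here
(entirely hypothetical; the device path and thresholds are made up, and
nothing like this exists in ZFS itself):

    # owner: stamp a reserved sector every few seconds
    while true; do
        printf '%s' "$(date +%s)" | \
            dd of=/dev/rdsk/c1t0d0s7 bs=512 count=1 conv=sync 2>/dev/null
        sleep 5
    done

    # would-be importer: refuse takeover while the stamp is fresh
    last=$(dd if=/dev/rdsk/c1t0d0s7 bs=512 count=1 2>/dev/null | tr -cd '0-9')
    now=$(date +%s)
    if [ $((now - ${last:-0})) -lt 30 ]; then
        echo "heartbeat is fresh; pool looks active elsewhere" >&2
    fi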
On September 13, 2006 6:44:44 PM +0100 Darren J Moffat <[EMAIL PROTECTED]>
wrote:
Frank Cusack wrote:
Sounds cool! Better than depending on an out-of-band heartbeat.
I disagree, it sounds really really bad. If you want a high availability
cluster you really need
a faster interconnect than spinning rust ...
Frank Cusack wrote:
...[snip James McPherson's objections to PMC]
I understand the objection to mickey mouse configurations, but I don't
understand the objection to (what I consider) simply improving safety.
...
And why should failover be limited to SC? Why shouldn't VCS be able to
play?
> Still, after reading Mathias's description, it seems that the former
> node is doing an implicit forced import when it boots back up. This
> seems wrong to me.
>
> zpools should be imported only if the zpool itself says it's not already
> taken, which of course would be overridden by a manual forced import.
On Wed, Sep 13, 2006 at 06:37:25PM +0100, Darren J Moffat wrote:
> Dale Ghent wrote:
> >On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote:
> >
> >>Storing the hostid as a last-ditch check for administrative error is a
> >>reasonable RFE - just one that we haven't yet gotten around to.
> >>Claiming that it will solve the clustering problem oversimplifies the
> >>problem ...
On Sep 13, 2006, at 1:37 PM, Darren J Moffat wrote:
That might be acceptable in some environments but that is going to
cause disks to spin up. That will be very unacceptable in a
laptop and maybe even in some energy conscious data centres.
Introduce an option to 'zpool create'? Come to think of it ...
Frank Cusack wrote:
Sounds cool! Better than depending on an out-of-band heartbeat.
I disagree, it sounds really really bad. If you want a high availability
cluster you really need a faster interconnect than spinning rust which
is probably the slowest interface we have now!
--
Darren J Moffat
On September 13, 2006 1:28:47 PM -0400 Dale Ghent <[EMAIL PROTECTED]>
wrote:
On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote:
Storing the hostid as a last-ditch check for administrative error is a
reasonable RFE - just one that we haven't yet gotten around to.
Claiming that it will solve the clustering problem ...
On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote:
Storing the hostid as a last-ditch check for administrative error is a
reasonable RFE - just one that we haven't yet gotten around to.
Claiming that it will solve the clustering problem oversimplifies the
problem and will lead to people who think ...
On September 13, 2006 9:32:50 AM -0700 Eric Schrock <[EMAIL PROTECTED]>
wrote:
On Wed, Sep 13, 2006 at 09:14:36AM -0700, Frank Cusack wrote:
Why again shouldn't zfs have a hostid written into the pool, to prevent
import if the hostid doesn't match?
See:
6282725 hostname/hostid should be stored in the label
On Wed, Sep 13, 2006 at 09:14:36AM -0700, Frank Cusack wrote:
>
> Why again shouldn't zfs have a hostid written into the pool, to prevent
> import if the hostid doesn't match?
See:
6282725 hostname/hostid should be stored in the label
Keep in mind that this is not a complete clustering solution ...
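A sketch of what that last-ditch check might look like at import time
(pure illustration of RFE 6282725; the pool_hostid value is a stand-in
for a label field that did not exist in ZFS at the time):

    # hostid(1) prints this machine's hostid
    my_hostid=$(hostid)

    # pool_hostid would come from the on-disk label under the RFE;
    # here it is a hypothetical variable, not a real zpool/zdb field
    if [ -n "$pool_hostid" ] && [ "$pool_hostid" != "$my_hostid" ]; then
        echo "pool last active on host $pool_hostid; use -f to override" >&2
        exit 1
    fi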
On September 13, 2006 6:09:50 AM -0700 Mathias F
<[EMAIL PROTECTED]> wrote:
[...]
a product which is *not* currently multi-host-aware to behave in the
same safe manner as one which is.
That's the point we figured out while testing it ;)
I just wanted to have our thoughts reviewed by other ZFS users.
James C. McPherson wrote:
As I understand things, SunCluster 3.2 is expected to have support for
HA-ZFS
and until that version is released you will not be running in a supported
configuration and so any errors you encounter are *your fault alone*.
Still, after reading Mathias's description, it seems that the former
node is doing an implicit forced import when it boots back up. This
seems wrong to me.
Hi Mathias,
Mathias F wrote:
Without -f option, the ZFS can't be imported while "reserved" for the other
host, even if that host is down.
As I said, we are testing ZFS as a *replacement for VxVM*, which we are
using atm. So as a result our tests have failed and we have to keep on using
Veritas.
Mathias F wrote:
Without -f option, the ZFS can't be imported while "reserved" for the other
host, even if that host is down.
This is the correct behaviour. What do you want to cause? Data corruption?
As I said, we are testing ZFS as a *replacement for VxVM*, which we
are using atm. So as a result ...
Without -f option, the ZFS can't be imported while "reserved" for the other
host, even if that host is down.
As I said, we are testing ZFS as a *replacement for VxVM*, which we are
using atm. So as a result our tests have failed and we have to keep on using
Veritas.
Thanks for all your answers.
On Wed, Sep 13, 2006 at 12:28:23PM +0200, Michael Schuster wrote:
> Mathias F wrote:
> >Well, we are using the -f parameter to test failover functionality.
> >If one system with mounted ZFS is down, we have to use the force to mount
> >it on the failover system.
> >But when the failed system comes online again, it remounts the ZFS
> >without errors, so it is mounted simultaneously on both nodes.
Mathias F wrote:
Well, we are using the -f parameter to test failover functionality.
If one system with mounted ZFS is down, we have to use the force to mount it on
the failover system.
But when the failed system comes online again, it remounts the ZFS without
errors, so it is mounted simultaneously on both nodes.
Well, we are using the -f parameter to test failover functionality.
If one system with mounted ZFS is down, we have to use the force to mount it on
the failover system.
But when the failed system comes online again, it remounts the ZFS without
errors, so it is mounted simultaneously on both nodes.
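To make the failure mode concrete, the sequence being described is
roughly the following (node and pool names are illustrative):

    # nodeA is down; nodeB forces a takeover
    nodeB# zpool import -f tank

    # nodeA later boots and re-imports tank automatically from its stale
    # /etc/zfs/zpool.cache, so tank is now active on both nodes at once

    # a clean handoff, by contrast, exports the pool first while the
    # original owner is still up:
    nodeA# zpool export tank
    nodeB# zpool import tank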