- Original Message -
> Hi,
>
> I have a working 2 node HA setup running on CentOS 6.5 with a very simple
> Apache webserver with replicated index.html using DRBD 8.4. The setup is
> configured based on the "Clusters from Scratch" Edition 5 with Fedora 13.
>
> I now wish to replace Apache
Thanks Alexandre. Changing the cluster-recheck-interval worked for me :)
Regards
Arjun
On Mon, Nov 17, 2014 at 12:44 PM, Alexandre wrote:
>
> On 13 Nov 2014 at 12:09, "Arjun Pandey" wrote:
> >
> > Hi
> >
> > I am running a 2 node cluster with this config
> >
> > Master/Slave Set: foo-master
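For anyone finding this thread later: cluster-recheck-interval is an ordinary cluster property (the default is 15 minutes), so lowering it is a one-liner. A minimal sketch; the value below is only an example, not necessarily what Arjun used:

    # with pcs:
    pcs property set cluster-recheck-interval=1min
    # or, with crmsh:
    crm configure property cluster-recheck-interval=1min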
Hi there,
we are using exportfs for building datastores for VMware.
After migrating an NFS resource to another node (e.g. after successful fencing),
VMware doesn't see that datastore until I manually fire _exportfs -f_ on the
new cluster node.
I tried to modify the resource agent itself like:
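Purely as a sketch of the idea, not the author's patch and not the shipped ocf:heartbeat:exportfs code: the manual workaround boils down to running the flush on the node that has just taken over, which is what would have to happen at the end of the agent's start path.

    # run on the node the export has just moved to:
    # flush the kernel export table; rpc.mountd re-adds entries on the
    # next client request, so the VMware datastore becomes reachable
    # again without manual intervention
    exportfs -f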
Found it in tools/hawk_invoke.c:
setenv("PATH", SBINDIR":"BINDIR":/bin", 1);
If you use Solaris/Illumos, the GNU utilities are in /usr/gnu/bin, and if you
use OpenCSW there is also /opt/opencsw/bin.
With pkgsrc the path must contain “/opt/local/sbin:/opt/local/bin”. Pkgsrc
can be used
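To spell out what that means in practice (everything below is illustrative and not taken from the Hawk sources): with a prefix of /opt the hardcoded string above expands to roughly /opt/sbin:/opt/bin:/bin, so the GNU directories are only searched if they are baked into that string. A quick way to see which of the candidate directories actually exist on a given box:

    # list the GNU/userland directories that are present on this system
    for d in /usr/gnu/bin /opt/opencsw/bin /opt/local/sbin /opt/local/bin; do
      test -d "$d" && echo "$d"
    done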
"Grüninger, Andreas (LGL Extern)"
writes:
> I managed to get Hawk running on Solaris.
> I use /opt as the prefix, not /usr.
>
> The last problem seems to be the runtime environment of the ruby application.
>
> For testing purposes I started lighttpd in a shell in the foreground.
> In this shell
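As an aside, starting lighttpd in the foreground for this kind of testing is usually just (the config path is an assumption, not necessarily where Hawk installs it):

    # -D keeps lighttpd in the foreground, -f points at the config to test
    lighttpd -D -f /etc/hawk/lighttpd.conf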