On Mon, 21 Aug 2006, Ez-Aton wrote:

> RH Cluster is a bad joke.

linux-ha is also not so good (e.g. it cannot recover from loss of access
to external disks).

> I have used various HA solutions, including VCS, SunCluster, HACMP, and
> even MSCS, and without a doubt, RH Cluster sux. It lacks features, and
> its main defense mechanism against split-brain is "Shoot The Other Node
> In The Head" via its UPS, its Fibre link, or the like (they call it
> "fencing"). Instead of better logic (how do you detect split-brain? How
> do you prevent it?), they use brute force in a way I didn't like.

there is no theoretical solution to a real split-brain situation. most
clustering software uses some sort of SCSI reservation to prevent it -
but then, if the split is complete, there won't be proper access to the
disks anyway (and in a real HA system you have two sets of disks, so
there can be a split between them as well).
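
to make this concrete, here is a toy sketch in python (not any real
cluster's API, just an illustration): inside either half of a two-node
split, "my peer crashed" and "the link to my peer is down" produce
exactly the same local evidence, so neither half can safely grab the
shared disks on its own - you need an external arbiter (a quorum disk,
a third node, or fencing hardware), and even those only move the
problem around.

  # toy model of a two-node split, for illustration only.
  # the local node observes just one thing: how many heartbeats it
  # received from its peer in the last interval.

  def observe(peer_alive: bool, link_up: bool) -> int:
      """heartbeats seen by the local node in the last interval."""
      return 1 if (peer_alive and link_up) else 0

  # two very different real-world situations...
  crashed_peer = observe(peer_alive=False, link_up=True)
  cut_link     = observe(peer_alive=True,  link_up=False)

  # ...are indistinguishable from where the local node stands,
  # which is why it cannot decide on its own whether to take over.
  assert crashed_peer == cut_link == 0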

> In my simple tests (I used httpd as a resource) the cluster was unable
> to recover from a simple "pkill httpd" on the active node, and
> completely flunked them.
>
>
> I would recommend you check Linux-HA. It looks OK, seems adjustable to
> your needs, and would probably work better. It is a bit more
> complicated to set up (although not too complicated), but it can be
> controlled via simple scripts, which can probably do what you wanted it
> to do.

even though linux-ha is better, it is too problematic (and it uses the
same STONITH method during split-brain - and of course STONITH can't
work when there is a real communications problem between the two
servers).
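
(the "simple scripts" Ez mentions are, at least in linux-ha v1, just
init-style resource scripts that heartbeat calls with start, stop or
status and judges by their exit code. a minimal sketch in python -
the service name and path below are placeholders, and a real one would
usually be a plain shell script:)

  #!/usr/bin/env python
  # sketch of an init-style resource script of the kind heartbeat v1
  # expects: invoked with start/stop/status, reports via exit code.
  # "httpd" and the init script path are placeholders.
  import subprocess
  import sys

  SERVICE = "httpd"                     # placeholder service name
  INIT    = "/etc/init.d/" + SERVICE    # placeholder init script

  def status() -> int:
      # exit 0 if the daemon is running, non-zero otherwise
      return subprocess.call(["pidof", SERVICE])

  def main() -> int:
      action = sys.argv[1] if len(sys.argv) > 1 else "status"
      if action in ("start", "stop"):
          return subprocess.call([INIT, action])
      return status()

  if __name__ == "__main__":
      sys.exit(main())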

note that there are some commercial cluster packages for linux which
look far better, feature-wise, than redhat cluster or linux-HA.

> Ez.
>
>
> Ira Abramov wrote:
>
> > Quoting Vitaly Karasik, from the post of Sun, 20 Aug:
> >
> >>> so, is there a config error here, or should I dump the whole iSCSI
> >>>concept? is there a way to install a red-hat cluster of three
> >>>CENTOS3 machines with no common storage? I just need IP addresses
> >>>and processes moving around between the nodes, the application
> >>>vendor ONLY supports Red Hat 3 and its clustering, but won't supply
> >>>instructions or recommended procedures. arrrrggh!
> >>>
> >> As far as I remember, RHEL3 Cluster Manager cannot work without shared
> >> storage and doesn't support an iSCSI device as shared storage (at
> >> least, RH doesn't promise that this configuration will be stable)
> >>
> >
> > it works just fine. RHEL Cluster with two common raw devices for the
> > quorum; I didn't bother setting up GFS in the end, since it was not
> > important.
> >
> > I was very disappointed by the RH cluster manager though. all it does
> > is move a list of services around, with no dependencies between them.
> > that's quite a lot, but it's missing some needed features, like
> > defining a logical link or block - services A and B must migrate to
> > new nodes together, but not to one that already runs service C, for
> > instance. nope, I can only define to which nodes each service migrates
> > and that's it. For instance, my client wanted a very simple case where
> > three machines run two services: if any of the three machines fails,
> > the other two take over the services that need to run, but the two
> > services must not end up on the same node, and I cannot prevent that
> > using this tool. I'll have to make funny improvisations in the startup
> > files to get the service to "fail" for the cluster manager and force
> > it to migrate further to another node if this one is busy. this is an
> > ugly kludge, and the only "right" solution, per RHEL, is to have 4
> > rather than 3 machines, with each pair taking care of one service, and
> > that's it. ridiculous :-(
> >
> >
>
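
(for the record, the "ugly kludge" Ira describes usually ends up
looking something like the sketch below: a wrapper start script that
pretends to fail when the node already runs the other service, so the
cluster manager gives up on this node and tries the next one. the
service names and paths are invented, and it is every bit as fragile
as it sounds:)

  #!/usr/bin/env python
  # hypothetical wrapper for service A's startup: if service B is
  # already running on this node, exit non-zero so the cluster
  # manager treats the start as a failure and relocates service A.
  import subprocess
  import sys

  OTHER_SERVICE = "serviceB"              # invented name of the peer service
  REAL_INIT     = "/etc/init.d/serviceA"  # invented path to the real script

  def other_service_running() -> bool:
      return subprocess.call(["pidof", OTHER_SERVICE]) == 0

  def main() -> int:
      action = sys.argv[1] if len(sys.argv) > 1 else "status"
      if action == "start" and other_service_running():
          return 1   # fake a failure -> cluster manager tries another node
      return subprocess.call([REAL_INIT, action])

  if __name__ == "__main__":
      sys.exit(main())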

-- 
guy

"For world domination - press 1,
 or dial 0, and please hold, for the creator." -- nob o. dy

