Dne 23.11.2010 16:34, Andrew Beekhof napsal(a):
On Tue, Nov 23, 2010 at 1:19 PM, Vit Pelcak<vpel...@suse.cz>  wrote:
Hello.

I have prepared several scripts for automation of HA testing.

I'd like to ask for some help with porting them into CTS, or at least part
of them. As there is no documentation for CTS and I'm just learning Python,
I could use some help here and there while trying to figure out how to do
that.

If I understand it right, CIB.py contains configuration of resources and new
ones are added like:
    stonith_sbd_resource_template = """
<resources>
  <primitive class="stonith" id="sbd_stonith" type="external/sbd">
    <meta_attributes id="sbd_stonith-meta_attributes">
      <nvpair id="sbd_stonith-meta_attributes-target-role" name="target-role" value="Started"/>
    </meta_attributes>
    <operations>
      <op id="sbd_stonith-monitor-1" name="monitor" interval="120s" prereq="nothing" timeout="300s"/>
      <op id="sbd_stonith-monitor-2" name="start" prereq="nothing" timeout="120s"/>
      <op id="sbd_stonith-monitor-3" name="stop" timeout="120s"/>
    </operations>
    <instance_attributes id="sbd_stonith-instance_attributes">
      <nvpair id="sbd_stonith-instance_attributes-sbd_device" name="sbd_device" value="%s"/>
    </instance_attributes>
  </primitive>
</resources>"""

Could this be replaced just by running crm configure ...? I think that would be better (but I may be wrong, of course).
Yes, we do that actually.
The above XML is only for older versions of Pacemaker, when the shell didn't exist.
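For illustration, a crm-shell equivalent of the XML template above might look like the following. This is only a sketch: it mirrors the timeouts from the XML (the prereq attribute has no direct counterpart here and is omitted), and %s again stands for the sbd device:

```python
# Illustrative crm-shell form of the sbd_stonith resource; the
# operation values are copied from the XML template, and the device
# path is substituted the same way as before.
sbd_crm_template = (
    'primitive sbd_stonith stonith:external/sbd '
    'params sbd_device="%s" '
    'meta target-role="Started" '
    'op monitor interval="120s" timeout="300s" '
    'op start timeout="120s" '
    'op stop timeout="120s"'
)

# Example substitution with a hypothetical device path:
sbd_crm_command = sbd_crm_template % "/dev/sdb1"
```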

Ah. I see:

class HASI(CIB10):
    def add_resources(self):
        # DLM resource
        self._create('''primitive dlm ocf:pacemaker:controld op monitor interval=120s''')
        self._create('''clone dlm-clone dlm meta globally-unique=false interleave=true''')

        # O2CB resource
        self._create('''primitive o2cb ocf:ocfs2:o2cb op monitor interval=120s''')
        self._create('''clone o2cb-clone o2cb meta globally-unique=false interleave=true''')
        self._create('''colocation o2cb-with-dlm INFINITY: o2cb-clone dlm-clone''')
        self._create('''order start-o2cb-after-dlm mandatory: dlm-clone o2cb-clone''')

BTW, shouldn't the O2CB resource section (and perhaps others) also be updated with the recommended start/stop timeouts? The defaults are probably not optimal; at least I see the crm tool complaining about them when creating the resources.
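As an illustration of what such an update could look like, the o2cb primitive could carry explicit start/stop timeouts. The 90s values below are only an assumption for the sketch, not the agent's documented recommendation:

```python
# Hypothetical variant of the HASI o2cb primitive with explicit
# start/stop timeouts added; the 90s values are illustrative only.
o2cb_cmd = (
    'primitive o2cb ocf:ocfs2:o2cb '
    'op monitor interval=120s '
    'op start timeout=90s '
    'op stop timeout=90s'
)
# Inside the CIB class this string would be passed to self._create().
```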

Then I would call something like:
stonith_resource = self.stonith_sbd_resource_template % (p_value)

Is that right?
yep
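Spelled out, the substitution is plain Python %-formatting. The template below is a trimmed stand-in for stonith_sbd_resource_template, and /dev/sdb1 is just an example device path:

```python
# Trimmed stand-in for the real stonith_sbd_resource_template;
# only the nvpair that takes the device path is shown.
stonith_sbd_resource_template = """
<primitive class="stonith" id="sbd_stonith" type="external/sbd">
  <nvpair id="sbd_stonith-instance_attributes-sbd_device" name="sbd_device" value="%s"/>
</primitive>"""

p_value = "/dev/sdb1"  # example device path
stonith_resource = stonith_sbd_resource_template % (p_value)
```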

In the process of SBD creation, I need to call:
sbd -d $sbd_disk create

on one of the nodes, and

sbd -d $sbd_disk allocate $(hostname)

on every node. How can I do that?
CTS doesn't try to configure an entire cluster from scratch, there are
still some things that are specific to your test environment and this
looks like one of them (not all clusters have shared storage for
example)

OK. Then I'll handle this with my own scripts, which aren't good candidates for porting to CTS.

There is scope for creating more complex test scenarios though, see
the HAE section of the same file.

I didn't find it.

Did you mean this?:

class HASI(CIB10):
    def add_resources(self):
        # DLM resource
        self._create('''primitive dlm ocf:pacemaker:controld op monitor interval=120s''')
        self._create('''clone dlm-clone dlm meta globally-unique=false interleave=true''')

        # O2CB resource
        self._create('''primitive o2cb ocf:ocfs2:o2cb op monitor interval=120s''')
        self._create('''clone o2cb-clone o2cb meta globally-unique=false interleave=true''')
        self._create('''colocation o2cb-with-dlm INFINITY: o2cb-clone dlm-clone''')
        self._create('''order start-o2cb-after-dlm mandatory: dlm-clone o2cb-clone''')

In that scenario one can assume shared storage so adding sbd wouldn't
be a problem.

# SBD resource
crm configure primitive sbd_stonith stonith:external/sbd \
    meta target-role="Started" \
    op monitor interval="15" timeout="15" \
    params sbd_device="%s"

?

I assume CTS can handle formatting disks with the correct filesystem simply by running mkfs or any other command, just as bash would. But how can I make it do so on one specific machine while the other machines do something else?
Not sure. It wasn't really designed for this.

OK. Maybe some parts aren't a good idea to port to CTS.

Possibly you could just loop through the configured machines and treat
the first as special.
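That suggestion could be sketched roughly as follows. The rsh() helper here is hypothetical (CTS has its own remote-execution machinery, whose exact API isn't shown in this thread); the sbd commands are the ones quoted above:

```python
# Sketch of "loop through the configured machines and treat the first
# as special": run "sbd ... create" on the first node only, then
# "sbd ... allocate <node>" on every node. rsh(node, cmd) is a
# hypothetical helper that executes cmd on the given node.
def setup_sbd(nodes, sbd_disk, rsh):
    commands = []
    for i, node in enumerate(nodes):
        if i == 0:
            # First node is special: it initializes the sbd device.
            commands.append((node, "sbd -d %s create" % sbd_disk))
        # Every node allocates its own slot.
        commands.append((node, "sbd -d %s allocate %s" % (sbd_disk, node)))
    for node, cmd in commands:
        rsh(node, cmd)
    return commands
```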

I'll consider that; however, most probably I'll prepare the cluster for each specific scenario with my own scripts and then run CTS.


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker

