On 9 Sep 2014, at 12:29 pm, Patrick Hemmer <pacema...@feystorm.net> wrote:

> From: Andrew Beekhof <and...@beekhof.net>
> Sent: 2014-09-02 02:58:53 EDT
> To: The Pacemaker cluster resource manager <pacemaker@oss.clusterlabs.org>
> Subject: Re: [Pacemaker] pacemaker-remote container as a clone resource
> 
>> On 1 Sep 2014, at 1:32 pm, Patrick Hemmer <pacema...@feystorm.net> wrote:
>> 
>>> From: Andrew Beekhof <and...@beekhof.net>
>>> Sent: 2014-08-31 23:16:10 EDT
>>> To: The Pacemaker cluster resource manager <pacemaker@oss.clusterlabs.org>
>>> Subject: Re: [Pacemaker] pacemaker-remote container as a clone resource
>>> 
>>>> On 1 Sep 2014, at 12:41 pm, Patrick Hemmer <pacema...@feystorm.net> wrote:
>>>> 
>>>>> From: Andrew Beekhof <and...@beekhof.net>
>>>>> Sent: 2014-08-31 19:57:43 EDT
>>>>> To: The Pacemaker cluster resource manager <pacemaker@oss.clusterlabs.org>
>>>>> Subject: Re: [Pacemaker] pacemaker-remote container as a clone resource
>>>>> 
>>>>>> On 31 Aug 2014, at 6:09 pm, Patrick Hemmer <pacema...@feystorm.net> wrote:
>>>>>> 
>>>>>>> I'm interested in creating a resource that will control host containers 
>>>>>>> running pacemaker-remote. The catch is that I want this resource to be 
>>>>>>> a clone, so that if I want more containers, I simply increase the 
>>>>>>> `clone-max` property.
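
For reference, the sort of configuration being described might look roughly like
this in crm shell; the agent name, its parameter and the resource ids are all
made up here:

    primitive p_container ocf:custom:ec2-instance \
        params image_id="ami-12345678" \
        op monitor interval=30s timeout=60s
    clone cl_container p_container \
        meta clone-max=5

Scaling out would then just be a matter of bumping clone-max.
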
>>>>>> What kind of container?
>>>>>> 
>>>>>> 
>>>>> Well, I would hope the solution is generic enough to work with any
>>>>> container, but in my specific case it's EC2 instances, plus maybe Docker for
>>>>> development work.
>>>>> 
>>>>>> Most development was done with VirtualMachine, which needs a unique name
>>>>>> anyway (i.e. it can't be cloned).
>>>>>> 
>>>>>> 
>>>>> With EC2 (and Docker), you create an instance/container and the name &
>>>>> address are returned after creation.
>>>>> 
>>>>> 
>>>> That's going to be challenging then, since there is no way to know which
>>>> containers are allowed to be attempting a connection, or even which ones
>>>> match up to the implicit connection managers we have started.
>>>> 
>>>> 
>>> That was the reason for my thought about setting an attribute on the clone 
>>> child from within the resource agent. The resource agent would start the 
>>> container, the container management service would respond with the address, 
>>> and the resource agent would call `crm_attribute` on itself (the clone 
>>> child) to set the `remote-node` property to the address of the container 
>>> (like master/slave resource agents do for setting scores).
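
For concreteness, the call the agent would want to make at the end of "start" is
something along these lines (sketch only; since remote-node is a resource meta
attribute it would go through crm_resource rather than crm_attribute, and the
variable names are invented):

    # attach the freshly created container's address to this clone instance,
    # e.g. "ec2_instance:1" as exposed in $OCF_RESOURCE_INSTANCE
    crm_resource --resource "$OCF_RESOURCE_INSTANCE" --meta \
        --set-parameter remote-node --parameter-value "$container_addr"

i.e. the same pattern stateful agents use when they call crm_master to adjust
their own promotion score.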
>>> 
>> Except, by design, the clone child doesn't exist as an addressable entity in
>> the CIB.
>> 
>> What settings do you have for globally-unique=false and clone-node-max btw?
>> 
> globally-unique would be false, and clone-node-max would be fairly high,

Those two settings are in conflict. 
globally-unique=false requires clone-node-max=1 as by definition all instances 
are identical and it is not possible to distinguish between different copies of 
the clone.
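
To spell the distinction out in crm shell (resource names are just placeholders):

    # anonymous clone: instances are interchangeable, at most one per node
    clone cl_container p_container \
        meta globally-unique=false clone-max=100 clone-node-max=1

    # globally unique clone: instances are distinguishable and several
    # may be placed on the same node
    clone cl_container p_container \
        meta globally-unique=true clone-max=100 clone-node-max=100

So if the intent is to run many container instances from one clone definition,
possibly several per cluster node, globally-unique=true would seem to be the
combination that fits.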

> maybe 100 or so (we haven't written the code to fully implement this yet).
> 
> 
>> 
>> Even ignoring pacemaker-remote, I suspect you're going to have issues with 
>> reprobes (which can happen at any time).
>> This is because you need a way to match the clone child's name to the 
>> specific container it started.
>> 
> This is easy. Worst case scenario, you could simply write a file to the
> filesystem mapping the clone child's name to the container ID. But in the
> case of EC2, you can set arbitrary tags on the instance.

Not a bad idea.
So the container RA would need to be extended to a) write this on start and b)
check this on monitor.
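
A rough sketch of what (a) and (b) could look like for the EC2 case, using an
instance tag as the mapping. The aws CLI calls, the tag name and the image
parameter are illustrative only, and stop/metadata/error handling are omitted:

    # inside the OCF agent; $OCF_RESOURCE_INSTANCE is e.g. "ec2_instance:1"
    # and the OCF_* return codes come from ocf-shellfuncs
    ec2_start() {
        instance_id=$(aws ec2 run-instances --image-id "$OCF_RESKEY_image_id" \
            --query 'Instances[0].InstanceId' --output text) \
            || return "$OCF_ERR_GENERIC"
        # (a) record which clone instance owns this container
        aws ec2 create-tags --resources "$instance_id" \
            --tags "Key=PacemakerInstance,Value=$OCF_RESOURCE_INSTANCE"
        return "$OCF_SUCCESS"
    }

    ec2_monitor() {
        # (b) a (re)probe only claims the container carrying this instance's tag
        instance_id=$(aws ec2 describe-instances \
            --filters "Name=tag:PacemakerInstance,Values=$OCF_RESOURCE_INSTANCE" \
                      "Name=instance-state-name,Values=running" \
            --query 'Reservations[0].Instances[0].InstanceId' --output text)
        if [ -n "$instance_id" ] && [ "$instance_id" != "None" ]; then
            return "$OCF_SUCCESS"
        fi
        return "$OCF_NOT_RUNNING"
    }

That way a monitor can only ever report the container its own instance started,
rather than the first matching container it happens to find.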

>> 
>> Normally the resource name is in some way related to the container's name -
>> have you got some equivalent in the case of clones?
>> If not, the cluster will get horribly confused on a regular basis (because 
>> multiple monitor ops will find the same container and report themselves as 
>> active).
>> 
> Well, in the case of a clone, all the clone children share the parent name. If
> the resource is "ec2_instance", the clone children are simply "ec2_instance:0",
> "ec2_instance:1", etc. This is perfectly fine. When all the
> instances/containers are doing the same thing, they don't need friendly names.

Friendly no, unique yes.

> Now it might be nice

s/nice/critical,essential,fundamental,imperative/

> to identify which container/instance "ec2_instance:1" corresponds to, but
> there are any number of ways you could do this. The resource agent could set
> a tag on the container, it could write out a file, update a DB, whatever (it
> would be nice if you could use crm_attribute to set an attribute to the
> container's ID, though).
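
On that last point: since the clone child isn't addressable in the CIB, the
closest approximation today is probably a node attribute keyed off the instance
name, set by the agent on whichever node it is running on (sketch only, the
attribute name is invented):

    # record the backing container for this clone instance as a node attribute
    crm_attribute --node "$(crm_node -n)" \
        --name "container-id-${OCF_RESOURCE_INSTANCE}" \
        --update "$instance_id"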
>> 
>>> Though I don't follow what you mean by "which containers are allowed to be
>>> attempting a connection". The pacemaker-remote connection is initiated by
>>> the host, not the remote. So no container should be connecting,
>>> 
>> Right, I managed to confuse myself for a moment there.
>> 
>> 
>>> and the resource agent will instruct the host who it should be connecting 
>>> to.
>>> 
>>> 
>>>> I assume all of these containers are doing the same thing?
>>>> 
>>> Basically yes. They'll all be capable of running resources as instructed by 
>>> pacemaker.
>>> 
>>> 
>>>>>> Interesting concept though, I'm sure we can figure out some way to get 
>>>>>> it done.
>>>>>> 
>>>>>>> The problem is that the `remote-node` meta parameter is set on the 
>>>>>>> clone resource, and not the children. So I can't see a way to tell 
>>>>>>> pacemaker the address of all the clone children containers that get 
>>>>>>> created.
>>>>>>> The only way I can see something like this being possible is if the 
>>>>>>> resource agent set an attribute on the clone child when the child was 
>>>>>>> started.
>>>>>>> 
>>>>>>> Is there any way to accomplish this? Or will I have to create separate 
>>>>>>> resources for every single container? If this isn't possible, would 
>>>>>>> this be considered as a future feature?
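
For reference, "separate resources for every single container" is the pattern
remote-node supports today: one primitive per container, each carrying its own
name and address. An illustrative (VirtualDomain-style) example:

    primitive container1 ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/container1.xml" \
        meta remote-node=container1 remote-addr=192.168.122.101

which is exactly the per-child information that a single clone definition
currently has nowhere to put.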
>>>>>>> 
>>>>>>> Thanks
>>>>>>> 
>>>>>>> -Patrick
>>>>>>> 
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
