Very helpful indeed. Thank you Nicholas.

On Sunday, 26 April 2015, Nicholas Chammas <nicholas.cham...@gmail.com>
wrote:

> The Spark web UI offers a JSON interface with some of this information.
>
> http://stackoverflow.com/a/29659630/877069
>
> It's not an official API, so be warned that it may change unexpectedly
> between versions, but you might find it helpful.
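>
> For example, something along these lines (a rough Python sketch; the
> default web UI port 8080 and the JSON field names like "workers",
> "state" and "status" are just what the endpoint happened to return
> when I looked, not a stable contract):
>
>     import json
>     import urllib.request
>
>     # Assumed: a standalone master with its web UI on the default port 8080.
>     MASTER_UI = "http://localhost:8080/json"
>
>     with urllib.request.urlopen(MASTER_UI) as resp:
>         state = json.load(resp)
>
>     # Count the workers the master currently considers ALIVE.
>     alive = [w for w in state.get("workers", []) if w.get("state") == "ALIVE"]
>     print("master status:", state.get("status"))
>     print("alive workers:", len(alive))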
>
> Nick
>
> On Sun, Apr 26, 2015 at 9:46 AM michal.klo...@gmail.com <
> michal.klo...@gmail.com> wrote:
>
>> Not sure if there's a Spark-native way, but we've been using Consul for
>> this.
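>>
>> For what it's worth, a minimal sketch of the kind of check we run,
>> assuming the daemons are registered in Consul under hypothetical
>> service names "spark-master" / "spark-worker" and that a local agent
>> is listening on the default port 8500 (Python):
>>
>>     import json
>>     import urllib.request
>>
>>     CONSUL = "http://localhost:8500"
>>
>>     def passing_instances(service):
>>         # /v1/health/service/<name>?passing returns only the instances
>>         # whose health checks are currently green.
>>         url = "%s/v1/health/service/%s?passing" % (CONSUL, service)
>>         with urllib.request.urlopen(url) as resp:
>>             return json.load(resp)
>>
>>     # "spark-master" / "spark-worker" are hypothetical service names.
>>     print("masters:", len(passing_instances("spark-master")))
>>     print("workers:", len(passing_instances("spark-worker")))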
>>
>> M
>>
>>
>>
>> On Apr 26, 2015, at 5:17 AM, James King <jakwebin...@gmail.com> wrote:
>>
>> Thanks for the response.
>>
>> But no, this does not answer the question.
>>
>> The question was: Is there a way (via some API call) to query the number
>> and type of daemons currently running in the Spark cluster?
>>
>> Regards
>>
>>
>> On Sun, Apr 26, 2015 at 10:12 AM, ayan guha <guha.a...@gmail.com> wrote:
>>
>>> In my limited understanding, there must be a single "leader" master in
>>> the cluster. If there are multiple leaders, it will lead to an unstable
>>> cluster, as each master will keep scheduling independently. You should use
>>> ZooKeeper for HA, so that the standby masters can elect a new leader if
>>> the primary goes down.
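>>>
>>> As a rough illustration, something like this could tell you which
>>> master currently holds the leader role (a Python sketch; the /json
>>> endpoint and its "status" field with values like ALIVE/STANDBY are
>>> unofficial and assumed here, and the host names are made up):
>>>
>>>     import json
>>>     import urllib.request
>>>
>>>     # Hypothetical web UI addresses of the masters.
>>>     MASTERS = ["http://node1:8080", "http://node2:8080"]
>>>
>>>     for m in MASTERS:
>>>         try:
>>>             with urllib.request.urlopen(m + "/json", timeout=5) as resp:
>>>                 status = json.load(resp).get("status", "UNKNOWN")
>>>         except OSError:
>>>             status = "UNREACHABLE"
>>>         # Expect one ALIVE (the elected leader); the rest report STANDBY.
>>>         print(m, status)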
>>>
>>> Now, you can still have multiple masters running as leaders, but
>>> conceptually they should be thought of as different clusters.
>>>
>>> Regarding workers, they should follow their master.
>>>
>>> Not sure if this answers your question, as I am sure you have read the
>>> documentation thoroughly.
>>>
>>> Best
>>> Ayan
>>>
>>> On Sun, Apr 26, 2015 at 6:31 PM, James King <jakwebin...@gmail.com> wrote:
>>>
>>>> If I have 5 nodes and I wish to maintain 1 Master and 2 Workers on each
>>>> node, then in total I will have 5 Masters and 10 Workers.
>>>>
>>>> Now, to maintain that setup, I would like to query Spark via API calls
>>>> for the number of Masters and Workers that are currently available, and
>>>> then take appropriate action based on the information I get back, such as
>>>> restarting a dead Master or Worker.
>>>>
>>>> Is this possible? Does Spark provide such an API?
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Ayan Guha
>>>
>>
>>
