[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1207:
-----------------------------

    Description: 
There are a few components to this.

1) The Framework:  This is going to be responsible for starting up and managing 
the failover of brokers within the Mesos cluster.  It will have to take some 
Kafka-focused parameters for launching new replica brokers and for moving 
topics and partitions around based on what is happening in the grid over time.

2) The Scheduler: This is what asks for resources for Kafka brokers (new ones, 
replacement ones, commissioned ones) and handles other operations such as 
stopping tasks (decommissioning brokers).  I think this should also expose a 
user interface (or at least a REST API) for producers and consumers, so we can 
have producers and consumers run inside the Mesos cluster if folks want (just 
add the jar).  A sketch of the offer handling is included after this list.

3) The Executor: This is the task launcher.  It launches tasks and kills them 
off.  A sketch of the executor is included after this list.

4) Sharing data between Scheduler and Executor: I looked at a few 
implementations of this.  I like parts of the Storm implementation but think 
using the environment variables in 
ExecutorInfo.CommandInfo.Environment.Variables[] is the best shot (sketched 
below).  We can have a command line bin/kafka-mesos-scheduler-start.sh that 
builds the contrib project if not already built and takes 
conf/server.properties to start.
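
To make item 4 concrete, here is a minimal sketch (Scala, against the Mesos 
protobuf Java bindings) of building an ExecutorInfo whose 
CommandInfo.Environment carries per-broker settings.  The variable names, the 
executor start script, and the KafkaExecutorInfo helper are hypothetical 
placeholders, nothing in the codebase yet.

{code}
import org.apache.mesos.Protos._

// Hypothetical helper: build the ExecutorInfo for one broker, carrying its
// settings in CommandInfo.Environment so the executor can read them from its
// process environment.  Variable names and the start script are made up.
object KafkaExecutorInfo {
  def apply(brokerId: Int, zkConnect: String): ExecutorInfo = {
    val env = Environment.newBuilder()
      .addVariables(Environment.Variable.newBuilder()
        .setName("KAFKA_BROKER_ID").setValue(brokerId.toString))
      .addVariables(Environment.Variable.newBuilder()
        .setName("KAFKA_ZK_CONNECT").setValue(zkConnect))

    val command = CommandInfo.newBuilder()
      .setValue("bin/kafka-mesos-executor-start.sh")  // placeholder command
      .setEnvironment(env)

    ExecutorInfo.newBuilder()
      .setExecutorId(ExecutorID.newBuilder().setValue("kafka-broker-" + brokerId))
      .setCommand(command)
      .build()
  }
}
{code}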
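
For item 2, a rough sketch of the offer handling, assuming the 
org.apache.mesos.Scheduler Java interface; the class name, resource sizes, and 
the naive broker-count logic are illustrative only, not a final design.

{code}
import java.util.Collections
import scala.collection.JavaConverters._

import org.apache.mesos.{Scheduler, SchedulerDriver}
import org.apache.mesos.Protos._

// Hypothetical scheduler: launch one broker task per offer until the desired
// broker count is reached, decline everything else.  Resource sizes are
// placeholders.
class KafkaMesosScheduler(executor: ExecutorInfo, brokersWanted: Int) extends Scheduler {

  private var launched = 0

  override def resourceOffers(driver: SchedulerDriver, offers: java.util.List[Offer]): Unit = {
    for (offer <- offers.asScala) {
      if (launched < brokersWanted) {
        launched += 1
        val task = TaskInfo.newBuilder()
          .setName("kafka-broker-" + launched)
          .setTaskId(TaskID.newBuilder().setValue("kafka-broker-" + launched))
          .setSlaveId(offer.getSlaveId)
          .setExecutor(executor)
          .addResources(scalar("cpus", 1.0))
          .addResources(scalar("mem", 4096))
          .build()
        driver.launchTasks(offer.getId, Collections.singletonList(task))
      } else {
        driver.declineOffer(offer.getId)
      }
    }
  }

  // A real framework would track broker state here and ask for a replacement
  // offer when a broker task is lost.
  override def statusUpdate(driver: SchedulerDriver, status: TaskStatus): Unit =
    println(status.getTaskId.getValue + " is now " + status.getState)

  private def scalar(name: String, value: Double): Resource =
    Resource.newBuilder()
      .setName(name)
      .setType(Value.Type.SCALAR)
      .setScalar(Value.Scalar.newBuilder().setValue(value))
      .build()

  // Remaining callbacks left as no-ops for brevity.
  override def registered(driver: SchedulerDriver, id: FrameworkID, master: MasterInfo): Unit = ()
  override def reregistered(driver: SchedulerDriver, master: MasterInfo): Unit = ()
  override def disconnected(driver: SchedulerDriver): Unit = ()
  override def offerRescinded(driver: SchedulerDriver, offerId: OfferID): Unit = ()
  override def frameworkMessage(driver: SchedulerDriver, executorId: ExecutorID,
                                slaveId: SlaveID, data: Array[Byte]): Unit = ()
  override def slaveLost(driver: SchedulerDriver, slaveId: SlaveID): Unit = ()
  override def executorLost(driver: SchedulerDriver, executorId: ExecutorID,
                            slaveId: SlaveID, status: Int): Unit = ()
  override def error(driver: SchedulerDriver, message: String): Unit = ()
}
{code}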
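
And for item 3, a sketch of an executor that forks the stock broker start 
script per task and reports status back; the script path and class name are 
placeholders.

{code}
import org.apache.mesos.{Executor, ExecutorDriver}
import org.apache.mesos.Protos._

// Hypothetical executor: fork the stock broker start script and report status.
class KafkaMesosExecutor extends Executor {

  private var broker: Option[Process] = None

  override def launchTask(driver: ExecutorDriver, task: TaskInfo): Unit = {
    // Per-broker overrides (item 4) arrive through the process environment
    // set by the scheduler, e.g. sys.env.get("KAFKA_BROKER_ID").
    broker = Some(new ProcessBuilder("bin/kafka-server-start.sh",
        "config/server.properties").inheritIO().start())
    driver.sendStatusUpdate(TaskStatus.newBuilder()
      .setTaskId(task.getTaskId)
      .setState(TaskState.TASK_RUNNING)
      .build())
  }

  override def killTask(driver: ExecutorDriver, taskId: TaskID): Unit = {
    broker.foreach(_.destroy())
    driver.sendStatusUpdate(TaskStatus.newBuilder()
      .setTaskId(taskId)
      .setState(TaskState.TASK_KILLED)
      .build())
  }

  // Remaining callbacks left as no-ops for brevity.
  override def registered(driver: ExecutorDriver, executorInfo: ExecutorInfo,
                          frameworkInfo: FrameworkInfo, slaveInfo: SlaveInfo): Unit = ()
  override def reregistered(driver: ExecutorDriver, slaveInfo: SlaveInfo): Unit = ()
  override def disconnected(driver: ExecutorDriver): Unit = ()
  override def frameworkMessage(driver: ExecutorDriver, data: Array[Byte]): Unit = ()
  override def shutdown(driver: ExecutorDriver): Unit = ()
  override def error(driver: ExecutorDriver, message: String): Unit = ()
}
{code}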

The Framework and operating Scheduler would run on an administrative node.  
I am probably going to hook Apache Curator into it so it can do its own 
failover to another follower.  Running more than two should be sufficient as 
long as it can bring back its state (e.g. from ZK).  I think we can add this 
in later, once everything is working.
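
For the failover piece, something like Curator's LeaderLatch recipe should do; 
a minimal sketch, assuming a local ZooKeeper and a placeholder latch path 
(state recovery is only hinted at in a comment):

{code}
import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.framework.recipes.leader.LeaderLatch
import org.apache.curator.retry.ExponentialBackoffRetry

object SchedulerElection {
  def main(args: Array[String]): Unit = {
    // Connect string and latch path are placeholders.
    val client = CuratorFrameworkFactory.newClient("localhost:2181",
      new ExponentialBackoffRetry(1000, 3))
    client.start()

    val latch = new LeaderLatch(client, "/kafka-mesos/scheduler-leader")
    latch.start()

    // Blocks until this instance becomes the leader; standby schedulers wait
    // here and take over when the current leader's ZK session goes away.
    latch.await()

    // ... recover scheduler state from ZK and register the framework with the
    // Mesos master before accepting offers ...
  }
}
{code}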

  was:
What I was thinking for Utils.loadProps is to create some 
ResourceNegotiatedConfiguration class or something to sit alongside it so 
various frameworks can be supported.

For Mesos I am thinking of storing the properties in ZooKeeper.  When a new 
node joins, it registers a default config (originally set by a tool) and saves 
a copy unique to that broker in a znode.  This znode would be the input to the 
serverConfig for the KafkaServerStartable.
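
Roughly, starting a broker from its znode-backed config could look like this 
(Curator is used purely for illustration here, and the znode path is a 
placeholder):

{code}
import java.io.ByteArrayInputStream
import java.util.Properties

import kafka.server.{KafkaConfig, KafkaServerStartable}
import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry

object BrokerFromZk {
  def main(args: Array[String]): Unit = {
    val zk = CuratorFrameworkFactory.newClient("localhost:2181",
      new ExponentialBackoffRetry(1000, 3))
    zk.start()

    // Read this broker's server.properties out of its znode.
    val bytes = zk.getData.forPath("/kafka-mesos/brokers/1/server.properties")
    val props = new Properties()
    props.load(new ByteArrayInputStream(bytes))

    // Hand the per-broker config to the normal Kafka startup path.
    val server = new KafkaServerStartable(new KafkaConfig(props))
    server.startup()
    server.awaitShutdown()
  }
}
{code}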

I am going to create a kafka.tools.ResourceNegotiator.ApacheMesos tool too.  
That tool will take in a server.properties file to set the original default.  
Once it is loaded into ZooKeeper, another command can take a property and flag 
it with a function to run (like "use new broker id value") and such.

I would rather have some implementation in Kafka.scala and have the object 
KafkaMesos live in Kafka.scala too, but I wasn't sure what other thoughts 
folks might have?


> Launch Kafka from within Apache Mesos
> -------------------------------------
>
>                 Key: KAFKA-1207
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1207
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Joe Stein
>              Labels: mesos
>             Fix For: 0.8.1
>
>



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
