Re: Graceful termination of kafka broker after draining all the data consumed

2013-02-18 Thread 王国栋
Thanks Jun. I will check more about 0.8. On Tue, Feb 19, 2013 at 12:51 AM, Jun Rao wrote: ...

Re: Graceful termination of kafka broker after draining all the data consumed

2013-02-18 Thread Jun Rao
In 0.7, it's not very easy to decommission a broker with ZK-based producers. It's possible to do it with a VIP (but then you can't do custom partitioning). In 0.8 (probably 0.8.1), you can use a tool to move all partitions off a broker first and then decommission it. Thanks, Jun. On Sun, Feb 17, 2013 at ...
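
A sketch of the kind of partition move Jun refers to, using the 0.8.x reassignment tool (the script and option names below are assumptions that varied across 0.8.x releases, and the topic name, broker IDs, and file name are made up):

    # Move topic "logs", partition 0, onto brokers 1 and 2, i.e. off broker 0.
    cat > move.json <<'EOF'
    {"version":1,"partitions":[{"topic":"logs","partition":0,"replicas":[1,2]}]}
    EOF
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
        --reassignment-json-file move.json --execute

Once no partitions remain on the broker, it can be shut down and removed.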

Re: Graceful termination of kafka broker after draining all the data consumed

2013-02-17 Thread 王国栋
Hi Jun, if we use the high-level producer based on ZooKeeper, how can we decommission a broker without message loss? Since we want to partition the log by IP, we cannot use our customized partitioning strategy if all the brokers use the same VIP. Thanks. On Mon, Jan 7, 2013 at 12:52 AM, Jun Rao ...
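
For context, a customized partitioning strategy of this kind is usually a small class wired in through the producer's partitioner.class property. A minimal sketch against the 0.8 producer API (the class name is hypothetical, and the 0.7 interface differs: it is generic and has no VerifiableProperties constructor):

    import kafka.producer.Partitioner;
    import kafka.utils.VerifiableProperties;

    // Routes each message to a partition derived from its key, assumed here
    // to be the source IP as a String (e.g. "10.1.2.3").
    public class IpPartitioner implements Partitioner {
        public IpPartitioner(VerifiableProperties props) { }  // constructor required by the 0.8 producer

        public int partition(Object key, int numPartitions) {
            // Mask the sign bit so the result is always a valid partition index.
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }
    }

The producer is then pointed at this class via its partitioner.class property, with the IP passed as the message key.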

Re: Graceful termination of kafka broker after draining all the data consumed

2013-01-07 Thread Bae, Jae Hyeon
0.8 sounds really great! OK, I will try it after you release a stable build of 0.8. Thank you. Best, Jae. On Sun, Jan 6, 2013 at 10:36 AM, Neha Narkhede wrote: ...

Re: Graceful termination of kafka broker after draining all the data consumed

2013-01-06 Thread Neha Narkhede
In 0.8, we will provide a way for you to shut down the broker in a controlled fashion. That would include moving all the leaders away from the broker so that it does not take any more produce requests. Once that is done, you can shut down the broker normally. You don't have to wait until the ...
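
A sketch of what this controlled shutdown ends up looking like in 0.8.x (names assume a 0.8.1-era broker; check the property and tool against your exact release):

    # Either enable it in server.properties, so a normal shutdown first
    # migrates leadership away from the broker:
    controlled.shutdown.enable=true

    # ...or trigger the leader movement explicitly for broker id 0, then stop it:
    bin/kafka-run-class.sh kafka.admin.ShutdownBroker --zookeeper zk1:2181 --broker 0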

Re: Graceful termination of kafka broker after draining all the data consumed

2013-01-06 Thread Jun Rao
In 0.7, one way to do this is to use a VIP. All producers send data to the VIP. To decommission a broker, you first take the broker out of the VIP so that no new data is produced to it. Then you let the consumers drain the data (you can use ConsumerOffsetChecker to check whether all data has been consumed).
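
For reference, the drain check mentioned here is typically run along these lines (option names differ slightly between releases; the group name and ZooKeeper address are placeholders):

    bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
        --group my-consumer-group --zkconnect zk1:2181
    # The broker is fully drained once the reported lag is 0 for all of its partitions.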

Graceful termination of kafka broker after draining all the data consumed

2013-01-05 Thread Bae, Jae Hyeon
Hi, I want to terminate a Kafka broker gracefully. Before termination, it should stop receiving traffic from producers and wait until all of its data has been consumed. I don't think Kafka 0.7.x supports this feature. If I want to implement this feature myself, could you give me a brief ...