Yeah, but that gives them all the partitions and does not differentiate
between available vs. unavailable ones, right?
Thanks,
Mayuresh
On Thu, Mar 5, 2015 at 9:14 AM, Guozhang Wang wrote:
I think today people can get the available partitions by calling
partitionsFor() API, and iterate the partitions and filter those whose
leader is null, right?
On Wed, Mar 4, 2015 at 4:40 PM, Mayuresh Gharat wrote:
Cool. So this is a non-issue then. To make things better, we can expose an
availablePartitions() API through the Kafka producer. What do you think?
Thanks,
Mayuresh
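Just to illustrate the proposal above, a hypothetical availablePartitions() helper could be a thin wrapper over the existing partitionsFor(); the name and placement below are only this thread's suggestion, not an existing API:

import java.util.ArrayList;
import java.util.List;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.PartitionInfo;

// Hypothetical helper showing what an availablePartitions() API might return;
// today callers have to do this filtering themselves.
public final class ProducerPartitions {
    private ProducerPartitions() {}

    public static List<PartitionInfo> availablePartitions(Producer<?, ?> producer, String topic) {
        List<PartitionInfo> available = new ArrayList<>();
        for (PartitionInfo p : producer.partitionsFor(topic)) {
            if (p.leader() != null) { // partitions without a leader are currently unavailable
                available.add(p);
            }
        }
        return available;
    }
}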
On Tue, Mar 3, 2015 at 4:56 PM, Guozhang Wang wrote:
Hey Jun,
You are right. Previously I thought this only got exposed by your recent
patches, which add partitionWithAvailableLeaders, but it is actually the
opposite case.
Guozhang
On Tue, Mar 3, 2015 at 4:40 PM, Jun Rao wrote:
Guozhang,
Actually, we always return all partitions in the metadata response whether
the leaders are available or not.
Thanks,
Jun
On Sat, Feb 28, 2015 at 10:46 PM, Guozhang Wang wrote:
> Hi Honghai,
>
> 1. If a partition has no leader (i.e. all of its replicas are down) it will
> become offline, and hence the metadata response will not have this
> partition's info.
Filed as https://issues.apache.org/jira/browse/KAFKA-1998
Evan
On Mon, Mar 2, 2015 at 5:19 PM, Guozhang Wang wrote:
That is a valid point; today the returned metadata response already
contains partitions even with an error code, so we can expose that in the
Cluster / KafkaProducer class. Could you file a JIRA?
Guozhang
On Sun, Mar 1, 2015 at 7:11 PM, Evan Huus wrote:
Which I think is my point: based on my current understanding, there is
*no* way to find out the total number of partitions for a topic besides
hard-coding it or manually reading it from ZooKeeper. The Kafka metadata
API does not reliably expose that information.
Evan
On Sun, Mar 1, 2015 at 10:07
I see.
If you need to make sure messages are going to the same partition during
broker bouncing / failures, then you should not depend on the partitioner
to decide the partition id but explicitly set it before calling send().
For example, you can use the total number of partitions for the topic, n,
and compute the partition id yourself before sending.
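A rough sketch of that suggestion, assuming String keys; the modulo scheme below is only an illustration of choosing the partition yourself, not the producer's built-in partitioner:

import java.util.List;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

public class ExplicitPartitionExample {
    // Pick the partition id ourselves so it stays stable even while some
    // partitions are temporarily offline; send() then never consults the
    // default partitioner.
    static void sendWithFixedPartition(KafkaProducer<String, String> producer,
                                       String topic, String key, String value) {
        List<PartitionInfo> partitions = producer.partitionsFor(topic);
        int n = partitions.size();                    // total partitions, online or not
        int partition = Math.abs(key.hashCode() % n); // illustrative hash, not Kafka's
        producer.send(new ProducerRecord<>(topic, partition, key, value));
    }
}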
My concern is more with the partitioner that determines the partition of
the message. IIRC, it does something like "hash(key) mod #partitions" in
the normal case, which means if the # of partitions changes because some of
them are offline, then certain messages will be sent to the wrong (online)
partition.
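A tiny illustration of that concern (the key and partition counts are made up): the same key can land on different partitions depending on whether the hash is taken modulo all partitions or only the online ones:

public class PartitionerDriftExample {
    public static void main(String[] args) {
        String key = "user-42";            // hypothetical message key
        int hash = Math.abs(key.hashCode());

        int allPartitions = 6;             // topic is created with 6 partitions
        int onlinePartitions = 4;          // but only 4 currently have a leader

        // hash(key) mod #partitions can change when the divisor changes, so the
        // key's messages would no longer all go to the same partition.
        System.out.println("mod all partitions:    " + (hash % allPartitions));
        System.out.println("mod online partitions: " + (hash % onlinePartitions));
    }
}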
Evan,
In the java producer, partition id of the message is determined in the
send() call and then the data is appended to the corresponding batch buffer
(one buffer for each partition), i.e. the partition id will never change
once it is decided. If the partition becomes offline after this, the send
will eventually fail.
On Sun, Mar 1, 2015 at 1:46 AM, Guozhang Wang wrote:
> Hi Honghai,
>
> 1. If a partition has no leader (i.e. all of its replicas are down) it will
> become offline, and hence the metadata response will not have this
> partition's info.
>
If I am understanding this correctly, then this is a problem.
Hi Honghai,
1. If a partition has no leader (i.e. all of its replicas are down) it will
become offline, and hence the metadata response will not have this
partition's info.
2. Any of the brokers caches metadata and hence can handle the metadata
request. It's just that their caches are updated asynchronously.