Regarding acks=all:
-------------------------
Interesting point. I will check the acks and min.insync.replicas values.
If I understand the root cause you are suggesting correctly, given my RF=2 and 3 brokers in the cluster: with min.insync.replicas > 1 and acks=all, removing one broker means that a partition that had a replica on the removed broker can't be written to until that replica is back up on another broker?
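To check this on our side, I'll reproduce with a bare-bones producer. Below is a minimal sketch (broker address and topic name are placeholders, not our production code) of a librdkafka producer with acks=all whose delivery reports should surface NOT_ENOUGH_REPLICAS errors while a partition is below min.insync.replicas:

    /* Minimal sketch, assuming placeholder broker "broker1:9092" and
     * topic "my-topic". With acks=all, delivery reports should show
     * NOT_ENOUGH_REPLICAS while the ISR is below min.insync.replicas. */
    #include <stdio.h>
    #include <string.h>
    #include <librdkafka/rdkafka.h>

    static void dr_cb(rd_kafka_t *rk, const rd_kafka_message_t *msg, void *opaque) {
        (void)rk; (void)opaque;
        if (msg->err == RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS ||
            msg->err == RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS_AFTER_APPEND)
            fprintf(stderr, "partition %d below min.insync.replicas: %s\n",
                    msg->partition, rd_kafka_err2str(msg->err));
        else if (msg->err)
            fprintf(stderr, "delivery failed: %s\n", rd_kafka_err2str(msg->err));
    }

    int main(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();
        rd_kafka_conf_set(conf, "bootstrap.servers", "broker1:9092",
                          errstr, sizeof(errstr));
        /* topic-level property set on the global conf; applied to the
         * default topic conf. "acks" is an alias of request.required.acks */
        rd_kafka_conf_set(conf, "acks", "all", errstr, sizeof(errstr));
        rd_kafka_conf_set_dr_msg_cb(conf, dr_cb);

        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
        if (!rk) { fprintf(stderr, "%s\n", errstr); return 1; }

        const char *payload = "test";
        if (rd_kafka_producev(rk,
                              RD_KAFKA_V_TOPIC("my-topic"),
                              RD_KAFKA_V_VALUE((void *)payload, strlen(payload)),
                              RD_KAFKA_V_END))
            fprintf(stderr, "produce failed\n");

        rd_kafka_flush(rk, 10000);  /* serve delivery reports before exit */
        rd_kafka_destroy(rk);
        return 0;
    }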
Regarding number of partitions:
-----------------------------------------
The producer to this topic uses librdkafka with a partitioner_cb callback, which receives the number of partitions as partition_cnt. I am still trying to understand how the library obtains the partition_cnt value. I wonder if the behavior is similar to the Java library, where the default partitioner uses the number of available partitions as the number of current partitions... (see the partitioner sketch after the quoted thread below).

On 17/05/2020, 20:59, "Peter Bukowinski" <pmb...@gmail.com> wrote:

    If your producer is set to use acks=all, then it won’t be able to produce to the topic partitions that had replicas on the missing broker until the replacement broker has finished catching up and is included in the ISR.

    What method are you using that reports on the number of topic partitions? If some partitions go offline, the cluster still knows how many there are supposed to be, so I’m curious what is reporting 10 when there should be 15.

    --
    Peter

    > On May 17, 2020, at 10:36 AM, Victoria Zuberman <victoria.zuber...@imperva.com> wrote:
    >
    > Hi,
    >
    > Kafka cluster with 3 brokers, version 1.0.1.
    > Topic with 15 partitions, replication factor 2. All replicas in sync.
    > Bringing down one of the brokers (ungracefully), then adding a broker in version 1.0.1.
    >
    > During this process, should we expect either of the following to happen:
    >
    > 1. Some of the partitions become unavailable for the producer to write to
    > 2. Cluster reports the number of partitions of the topic as 10 and not 15
    >
    > It seems like both issues take place in our case, for about a minute.
    >
    > We are trying to understand whether this is expected behavior and, if not, what could be causing it.
    >
    > Thanks,
    > Victoria
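For reference, here is a minimal sketch of how a partitioner_cb fits together (placeholder names, not our production callback). As far as I can tell, partition_cnt comes from the topic metadata the client last fetched from the cluster, and rd_kafka_topic_partition_available() (valid only inside the partitioner callback) reports whether a partition currently has an available leader:

    /* Sketch only; names and hashing scheme are illustrative, not our
     * real callback. partition_cnt is the partition count from the
     * topic metadata librdkafka holds for this topic. */
    #include <stdint.h>
    #include <librdkafka/rdkafka.h>

    static int32_t my_partitioner(const rd_kafka_topic_t *rkt,
                                  const void *keydata, size_t keylen,
                                  int32_t partition_cnt,
                                  void *rkt_opaque, void *msg_opaque) {
        (void)rkt_opaque; (void)msg_opaque;

        /* Simple FNV-1a key hash, then probe forward for a partition
         * that currently has an available leader. */
        uint32_t hash = 2166136261u;
        const unsigned char *p = keydata;
        for (size_t i = 0; i < keylen; i++)
            hash = (hash ^ p[i]) * 16777619u;

        for (int32_t i = 0; i < partition_cnt; i++) {
            int32_t candidate = (int32_t)((hash + (uint32_t)i) % (uint32_t)partition_cnt);
            /* only callable from within a partitioner callback */
            if (rd_kafka_topic_partition_available(rkt, candidate))
                return candidate;
        }
        return RD_KAFKA_PARTITION_UA;  /* no partition available right now */
    }

    /* Registration on the topic configuration:
     *   rd_kafka_topic_conf_t *tconf = rd_kafka_topic_conf_new();
     *   rd_kafka_topic_conf_set_partitioner_cb(tconf, my_partitioner);
     */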