Hi all,
We previously had replica.lag.max.messages set to 4000 and used a sync
producer to send data to Kafka, one message at a time. With this, we didn't
see many unclean leader elections.
Recently, we switched to the sync producer with batched messages. After that,
we see unclean leader elections more often. I am still looking into it.
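(For context, a minimal sketch of what "sync producer with batched messages" can look like with the 0.8.x producer's Java API; the broker list, topic name, batch size, and acks value below are made up for illustration, not taken from this thread.)

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SyncBatchExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "sync");       // synchronous send
        props.put("request.required.acks", "1");  // ack from the leader only (example value)

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));

        // Instead of one send() per message, accumulate a batch and send the list in one call.
        List<KeyedMessage<String, String>> batch = new ArrayList<KeyedMessage<String, String>>();
        for (int i = 0; i < 100; i++) {
            batch.add(new KeyedMessage<String, String>("topicX", "key-" + i, "value-" + i));
        }
        producer.send(batch);   // messages are grouped by leader broker and sent synchronously
        producer.close();
    }
}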
On Tue, Feb 24, 2015 at 12:03 PM, Jun Rao wrote:
> Ah, yes. You are right. That's a more obvious bug. Will fix that in
> KAFKA-1984.
>
> Thanks,
>
> Jun
>
> On Tue, Feb 24, 2015 at 8:37 AM, Xiaoyu Wang wrote:
>
> > Hi Jun,
>
> There is an issue when more than one thread is producing data to the same
> producer instance. This is being tracked in KAFKA-1984. How many producing
> threads do you have in your test?
>
> Thanks,
>
> Jun
>
> On Tue, Feb 24, 2015 at 7:56 AM, Xiaoyu Wang wrote:
>
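(As background on the multi-threaded question: KafkaProducer is meant to be shared across threads, so a test along these lines exercises the code path being discussed. The broker address, topic name, and thread count are illustrative only, not taken from this thread.)

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MultiThreadedSendTest {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // hypothetical broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // One producer instance shared by several producing threads.
        final KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            pool.submit(new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000; i++) {
                        // null key -> round-robin partitioning, the code path under discussion
                        producer.send(new ProducerRecord<String, String>("topicX", null, "msg-" + i));
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        producer.close();
    }
}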
> Do you see this issue with just a single thread producing messages? The
> current logic seems to work correctly in that case.
>
> Thanks,
>
> Jun
>
> On Fri, Feb 20, 2015 at 12:45 PM, Xiaoyu Wang
> wrote:
>
> > Found the problem - it is a bug in the Partitioner of kafka.clients
Is it possible to attend the meetup remotely? I am on the east coast and
really want to be able to attend this.
On Mon, Feb 23, 2015 at 3:02 PM, Allen Wang
wrote:
> We (Steven Wu and Allen Wang) can talk about Kafka use cases and operations
> in Netflix. Specifically, we can talk about how we scale and
if (partitions.get(partition).leader() != null) {
    return partitions.get(partition).partition();
}
}
On Fri, Feb 20, 2015 at 2:35 PM, Xiaoyu Wang wrote:
> Update:
>
> I am using kafka.clients 0.8.2-beta. Below are the test steps:
>
>    1. Set up a local Kafka cluster with 2 brokers, 0 and 1.
>    2. Create topic X with rep
// no partitions are available, give a non-available partition
return Utils.abs(counter.getAndIncrement()) % numPartitions;
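(For readers following along, a simplified sketch of the round-robin selection logic quoted above, paraphrased from the 0.8.2-era Partitioner; this is neither the exact source nor the eventual KAFKA-1984 fix, and the class and method names are illustrative.)

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.common.PartitionInfo;

public class RoundRobinSketch {
    // Null-key path: round-robin over partitions, skipping partitions whose
    // leader is currently unknown/offline.
    static int choosePartition(List<PartitionInfo> partitions, AtomicInteger counter) {
        int numPartitions = partitions.size();
        for (int i = 0; i < numPartitions; i++) {
            // mask the sign bit (like Kafka's Utils.abs) so the index is never negative
            int partition = (counter.getAndIncrement() & 0x7fffffff) % numPartitions;
            if (partitions.get(partition).leader() != null)
                return partitions.get(partition).partition();  // partition with a live leader
        }
        // no partition currently has a leader: fall back to an unavailable one
        return (counter.getAndIncrement() & 0x7fffffff) % numPartitions;
    }
}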
On Fri, Feb 20, 2015 at 1:48 PM, Xiaoyu Wang wrote:
> Hello,
>
> I am experimenting with sending data to kafka using KafkaProducer and found
> that when a partition is completely offline, e.g. a topic with replication
Hello,
I am experimenting with sending data to kafka using KafkaProducer and found
that when a partition is completely offline, e.g. a topic with replication
factor = 1 and some broker is down, KafkaProducer seems to hang forever; it
does not even exit with the timeout setting. Can you take a look?
I ch
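(A minimal way to observe this kind of behavior with the new producer, assuming the 0.8.2 Java client API; the broker address, topic name, and timeout values are illustrative, and exact config names differ across 0.8.2 builds.)

import java.util.Properties;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OfflinePartitionProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // hypothetical broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Bound how long send() may block waiting for metadata (0.8.2-era setting).
        props.put("metadata.fetch.timeout.ms", "10000");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        try {
            // Topic with replication factor 1 whose single replica's broker is down.
            Future<RecordMetadata> f =
                producer.send(new ProducerRecord<String, String>("topicX", "k", "v"));
            // Bound the wait on the client side as well, instead of blocking forever on get().
            RecordMetadata md = f.get(30, TimeUnit.SECONDS);
            System.out.println("written to partition " + md.partition() + " offset " + md.offset());
        } finally {
            producer.close();
        }
    }
}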
Hi All,
Just want to double check with you regarding producers using required.acks
= -1.
- The producer is guaranteed to receive a response within a certain time,
because the satisfied-request check only considers in-sync replicas. If some
replica gets stuck, it will be removed from the in-sync replica set and
@Sa,
the required.acks setting is a producer-side configuration. Setting it to -1
means requiring acks from all in-sync replicas.
On Fri, Jan 2, 2015 at 1:51 PM, Sa Li wrote:
> Thanks a lot, Tim, this is the config of brokers
>
> --
> broker.id=1
> port=9092
> host.name=10.100.70.128
> num.network.threads=4
> num
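(For reference, the producer-side knobs behind the required.acks discussion look roughly like this for the 0.8.x old producer; the values are examples, not recommendations from this thread.)

# Old (Scala) producer, 0.8.x -- producer.properties (illustrative values)
producer.type=sync
request.required.acks=-1     # wait for all replicas currently in the ISR to ack
request.timeout.ms=10000     # how long the broker waits for those acks before failing the request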
Hello,
I am looking at 0.8.1.1, the kafka.producer.async.DefaultEventHandler
file. Below is the dispatchSerializedData function. It looks like we catch
the exception outside the loop and merely log an error message, and then
return failedProduceRequests.
In case one broker is having problems, mess
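(The concern is about where the exception is caught. A simplified Java sketch of the two patterns; the real dispatchSerializedData is Scala, and the names below are illustrative.)

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DispatchSketch {
    // Pattern as described above: one try/catch around the whole per-broker loop,
    // so a failure talking to one broker aborts the sends to the remaining brokers too,
    // and those messages never make it into the failed list for retry.
    static List<String> dispatchAllOrNothing(Map<String, List<String>> messagesPerBroker) {
        List<String> failed = new ArrayList<String>();
        try {
            for (Map.Entry<String, List<String>> e : messagesPerBroker.entrySet()) {
                sendToBroker(e.getKey(), e.getValue());   // may throw
            }
        } catch (RuntimeException ex) {
            System.err.println("send failed: " + ex.getMessage());  // only logged
        }
        return failed;
    }

    // Alternative: catch per broker, so one bad broker does not stop the others.
    static List<String> dispatchPerBroker(Map<String, List<String>> messagesPerBroker) {
        List<String> failed = new ArrayList<String>();
        for (Map.Entry<String, List<String>> e : messagesPerBroker.entrySet()) {
            try {
                sendToBroker(e.getKey(), e.getValue());
            } catch (RuntimeException ex) {
                failed.addAll(e.getValue());   // remember what to retry
            }
        }
        return failed;
    }

    private static void sendToBroker(String broker, List<String> messages) {
        // stand-in for the real produce request
    }
}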
on the producer.
>
> Thanks,
>
> Jun
>
> On Wed, Dec 17, 2014 at 10:34 AM, Xiaoyu Wang
> wrote:
> >
> > I have tested using "async" producer with "required.ack=-1" and got
> really
> > good performance.
> >
> > We have not us
ch?
On Wed, Dec 17, 2014 at 1:16 PM, Xiaoyu Wang wrote:
>
> Thanks Jun.
>
> We have tested our producer with different required.ack configs. Even
> with required.ack=1, the producer is more than 10 times slower than with
> required.ack=0. Does this match what you saw in your testing?
>
> http://kafka.apache.org/documentation.html#monitoring).
>
> Thanks,
>
> Jun
>
> On Sun, Dec 14, 2014 at 7:20 AM, Xiaoyu Wang wrote:
> >
> > Hello,
> >
> > If I understand it correctly, when the number of messages a replica is
> > behind from the lead
Hello,
If I understand it correctly, when the number of messages a replica is
behind the leader is < replica.lag.max.messages, the replica is considered
in sync with the leader and is eligible for leader election.
This means we can lose at most replica.lag.max.messages messages during
leader election.
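(For reference, the broker/topic settings that bound this kind of loss look roughly like this in 0.8.2-era configs; values are illustrative, and some of these settings only exist from 0.8.2 on.)

# Broker side
replica.lag.max.messages=4000        # how far a follower may lag and still count as in sync
replica.lag.time.max.ms=10000
unclean.leader.election.enable=false # 0.8.2+: never elect an out-of-sync replica as leader

# Topic or broker side, 0.8.2+
min.insync.replicas=2                # with producer acks=-1, fail writes when the ISR shrinks below this

# Producer side
request.required.acks=-1             # wait for all in-sync replicas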
>
> On Mon, Dec 8, 2014 at 11:29 AM, Gwen Shapira
> wrote:
>
> > I think that A will not be able to become a follower until B becomes a
> > leader.
> >
> > On Sun, Dec 7, 2014 at 11:07 AM, Xiaoyu Wang
> wrote:
> > > On preferred replica election, contr
> B will return a "not leader for partition" error as soon as the leader is
> re-elected, and I imagine the producer will correct itself.
>
> -Thunder
>
>
> -----Original Message-----
> From: Xiaoyu Wang [xw...@rocketfuel.com]
> Received: Saturday, 06 Dec 2014, 6:49PM
as error code 6.
>
> I don't see anything special on the producer side to handle this
> specifically (although I'd expect a forced metadata refresh and then a
> re-send).
>
> Gwen
>
> On Sat, Dec 6, 2014 at 6:46 PM, Xiaoyu Wang wrote:
> > Hello,
> >
Hello,
I am looking at the producer code and found that the producer updates its
broker/partition info under two conditions:
1. it has reached the topicMetadataRefreshInterval
2. a message send failed, before the retry
So, assume we have brokers A and B; B is the current leader and A is the
preferred leader.
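(The two refresh triggers above correspond roughly to these old-producer settings; the values shown are illustrative defaults.)

# Old (Scala) producer metadata refresh / retry knobs (illustrative values)
topic.metadata.refresh.interval.ms=600000   # periodic refresh (the topicMetadataRefreshInterval above)
message.send.max.retries=3                  # on failure, refresh metadata and retry
retry.backoff.ms=100                        # wait before the retry so new leader metadata can propagate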
Hi all,
As I remember, 0.7.0 requires manually creating partitions for existing
topics when we add new brokers. Does 0.7.1 automatically create partitions
on newly added brokers? It seems to be doing that; I just want to confirm.
Also, if that is true, is there a way to prevent an existing topic from
being added to the new brokers?