Hi Jun,
What about https://issues.apache.org/jira/browse/KAFKA-3100?
Thanks,
Allen
On Fri, Feb 5, 2016 at 1:19 PM, Ismael Juma wrote:
> Hi Becket,
>
> On Fri, Feb 5, 2016 at 9:15 PM, Becket Qin wrote:
>
> > I am taking KAFKA-3177 off the list because the correct fix might involve
> > some re
From looking at the design document, it seems the quota is implemented purely
on the server side, so it should work with 0.8.X clients. But I would like to
get confirmation.
Thanks,
Allen
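A minimal sketch of the broker-side settings in question, assuming the 0.9-style static quota properties (the property names come from that release's broker configuration; the byte-rate values are illustrative only). The broker enforces these by delaying responses to clients that exceed their rate, which is consistent with quotas applying to 0.8.X clients as well:

# server.properties (illustrative values)
# default produce quota per client-id, in bytes/sec
quota.producer.default=10485760
# default fetch quota per client-id, in bytes/sec
quota.consumer.default=20971520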
We have two applications that consume all messages from one Kafka cluster.
We found that the MessagesPerSec metric started to diverge after some time.
One application's rate matches the MessagesInPerSec metric from the Kafka broker,
while the other's is lower than the broker metric and appears to have some
mess
you try out the 0.8.2 broker version and see if this is still
> easily reproducible, i.e. starting a bunch of producers to send data for a
> while, and terminate them?
>
> Guozhang
>
> On Tue, Mar 10, 2015 at 1:00 PM, Allen Wang
> wrote:
>
> > Hello,
> >
>
Hello,
We are using Kafka 0.8.1.1 on the broker and 0.8.2 producer on the client.
After running for a few days, we have found that there are way too many
open file descriptors on the broker side. When we compared the connections
on the client side, we found that some connections were already gone on the
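One hypothetical way to make that comparison concrete (the PID and port placeholders are made up for the sketch; lsof and netstat availability is assumed):

# on the broker host: total file descriptors held by the broker process
lsof -p <broker-pid> | wc -l
# only the TCP connections the broker still considers established
lsof -a -p <broker-pid> -iTCP -sTCP:ESTABLISHED
# on the client host: its view of connections to the broker
netstat -an | grep <broker-port> | grep ESTABLISHED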
We (Steven Wu and Allen Wang) can talk about Kafka use cases and operations
in Netflix. Specifically, we can talk about how we scale and operate Kafka
clusters in AWS and how we migrate our data pipeline to Kafka.
Thanks,
Allen
On Mon, Feb 23, 2015 at 12:15 PM, Ed Yakabosky <
eyak
before?
>
> Guozhang
>
> On Fri, Jan 23, 2015 at 3:56 PM, Allen Wang
> wrote:
>
> > Hello,
> >
> > We tried the ReassignPartitionsCommand to move partitions to new brokers.
> > The execution initially showed message "Successfully started reassignment
>
Hello,
We tried the ReassignPartitionsCommand to move partitions to new brokers.
The execution initially showed the message "Successfully started reassignment
of partitions ...", but when I tried to verify using the --verify option, it
reported that some reassignments had failed:
ERROR: Assigned replicas (0,
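For context, a sketch of the execute/verify workflow as it is typically run with the stock wrapper script (the topic name, broker ids, and file name are made up for the example):

# reassign.json
{"version":1,"partitions":[
  {"topic":"mytopic","partition":0,"replicas":[3,4]},
  {"topic":"mytopic","partition":1,"replicas":[4,5]}
]}

# start the reassignment
bin/kafka-reassign-partitions.sh --zookeeper $ZOOKEEPER \
  --reassignment-json-file reassign.json --execute

# re-run with --verify to check whether each partition completed or failed
bin/kafka-reassign-partitions.sh --zookeeper $ZOOKEEPER \
  --reassignment-json-file reassign.json --verify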
buffers need to be allocated anyway.
>
> What is the requests per sec that you see without failed requests?
>
> Finally, is this easily reproducible?
>
> Joel
>
>
> On Wed, Jan 21, 2015 at 11:03:40AM -0800, Allen Wang wrote:
> > After a closer look at other metrics and b
> It is odd that restarting your consumers appears to have resolved your
> issues. What config overrides did you use for your consumers? E.g.,
> did you override the max wait time?
>
> How many consumers/producers are we talking about here?
>
> Thanks,
>
> Joel
>
> On
older leader epoch 18 for partition [mapcommandaudit,4],
current leader epoch is 18
On Thu, Jan 15, 2015 at 11:55 AM, Allen Wang wrote:
> We are using the Scala producer. On the producer side, we have seen a lot of
> error messages in the producer during the time of the incoming message drop:
>
efresh metadata and discover the
> new leader. Are you using the Java producer? Do you see any errors in
> the producer logs?
>
> On Wed, Jan 14, 2015 at 06:36:27PM -0800, Allen Wang wrote:
> > Hello,
> >
> > We did a manual leadership rebalance (using
> > Prefer
Hello,
We did a manual leadership rebalance (using
PreferredReplicaLeaderElectionCommand) under heavy load and found that
there was a significant drop in incoming messages to the broker cluster for
more than an hour. Looking at the broker log, we found a lot of errors like
this:
2015-01-15 00:00:03,33
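For reference, a sketch of how the command is usually invoked with the stock wrapper script (the topic, partitions, and file name are made up; without --path-to-json-file the tool moves leadership for all partitions at once):

# partitions.json
{"partitions":[{"topic":"mytopic","partition":0},{"topic":"mytopic","partition":1}]}

bin/kafka-preferred-replica-election.sh --zookeeper $ZOOKEEPER \
  --path-to-json-file partitions.json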
Brokers may have temporary problems catching up with the leaders, so I
would not worry about it if it happens only once in a while and goes away.
Occasionally we have seen under-replicated topics for a long time, which
might be caused by a ZooKeeper session problem, as indicated by log
messages like:
[in
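A quick way to spot lingering under-replicated partitions, assuming a broker version whose topic tool supports the reporting flag (the wrapper script name is the stock one and the ZooKeeper address is a placeholder):

# list partitions whose ISR is currently smaller than the replica set
bin/kafka-topics.sh --zookeeper $ZOOKEEPER --describe --under-replicated-partitions

The broker also exposes this as the ReplicaManager UnderReplicatedPartitions JMX gauge, which is handy for alerting.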
sumer fetcher's requests count in
> BytesOutPerSec.
>
> Guozhang
>
>
> On Fri, Nov 21, 2014 at 11:13 AM, Allen Wang
> wrote:
>
> > We observed that for a topic, BytesIn is greater than BytesOut. We are
> > under the impression that BytesOut should incl
We observed that for a topic, BytesIn is greater than BytesOut. We are
under the impression that BytesOut should include replication. The topic
has two replicas for each partition and all replicas are in sync. In that case
BytesOut should be at least the same as BytesIn, since it always needs to
replicate to a
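As an illustrative check of that expectation (numbers made up): if BytesInPerSec for the topic were 10 MB/s with replication factor 2, the single follower fetch alone would account for roughly 10 MB/s of outgoing bytes, so under the assumption above BytesOutPerSec should already be at least 10 MB/s before any consumer traffic is counted.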
the tool to support that. It's probably
> more intuitive to have TopicCommand just take the replica-assignment (for
> the new partitions) when altering a topic. Could you file a jira?
>
> Thanks,
>
> Jun
>
> On Fri, Nov 7, 2014 at 4:17 PM, Allen Wang
> wrote:
>
I am trying to figure out how to add partitions and assign replicas using
one admin command. I tried kafka.admin.TopicCommand to increase the
partition number from 9 to 12 with the following options:
/apps/kafka/bin/kafka-run-class.sh kafka.admin.TopicCommand --zookeeper
${ZOOKEEPER} --alter --to
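For reference, a sketch of the full form of the command being attempted (the topic name and broker ids are made up; whether --replica-assignment here should describe only the new partitions or all of them is exactly the ambiguity addressed in Jun's reply quoted above):

/apps/kafka/bin/kafka-run-class.sh kafka.admin.TopicCommand --zookeeper ${ZOOKEEPER} \
  --alter --topic mytopic --partitions 12 \
  --replica-assignment 9:10,10:11,11:9
# each comma-separated entry is one partition's assignment; the colon-separated
# ids are its replicas (three entries shown, intended for the three new partitions)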
thing?
>
> Thanks,
> Neha
>
> On Thu, Nov 6, 2014 at 12:02 PM, Allen Wang
> wrote:
>
> > After digging more into the stack trace got from flight recorder (which
> is
> > attached), it seems that Kafka (0.8.1.1) can optimize the usage of Crc32.
> > The st
= analyzeAndValidateMessageSet(messages)
The second time is from line 252 in the same function:
validMessages = validMessages.assignOffsets(offset, appendInfo.codec)
If one of the Crc32 invocations can be eliminated, we are looking at saving
at least 7% of CPU usage.
Thanks,
Allen
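A standalone illustration of the cost pattern described above, using java.util.zip.CRC32 in place of kafka.utils.Crc32 (the class choice, payload size, and rough timing are assumptions of the sketch, not Kafka code): checksum cost is linear in the message bytes, so validating and then recomputing walks every byte twice.

import java.util.zip.CRC32;

public class CrcCostSketch {
    // Compute a CRC32 over the payload; cost is proportional to payload length.
    static long crcOf(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] payload = new byte[1 << 20]; // 1 MB of arbitrary "message" bytes

        long start = System.nanoTime();
        long first = crcOf(payload);   // first pass: validating the incoming message set
        long second = crcOf(payload);  // second pass: recomputing after offsets are assigned
        long micros = (System.nanoTime() - start) / 1000;

        System.out.println("crc=" + first + ", recomputed=" + second
                + ", two passes over 1 MB took ~" + micros + " us");
    }
}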
On Wed, Nov 5, 2014 at 6:32 PM, Allen
Hi,
Using flight recorder, we have observed high CPU usage of CRC32
(kafka.utils.Crc32.update()) on the Kafka broker. It uses as much as 25% of the CPU
on an instance. Tracking down the stack trace, we found that this method is invoked by
ReplicaFetcherThread.
Is there any tuning we can do to reduce this?
Also on the top
n your consumer and
> retry the test?
>
> On Wed, Oct 29, 2014 at 10:34 AM, Allen Wang wrote:
>
> > After executing PreferredReplicaLeaderElectionCommand on broker instance,
> > we observed one of the consumers cannot find the leadership and stopped
> > consuming. The fol
After executing PreferredReplicaLeaderElectionCommand on a broker instance,
we observed that one of the consumers could not find the leader and stopped
consuming. The following exception is all over the log file, and it appears
that the consumer cannot recover from it:
2014-10-29 00:53:30,492 WARN
suroro