Thanks!
I'm also trying to understand how replicas will catch up once the leader
goes down. Say, we have 3 brokers with IDs 1, 2, 3. The leader is broker 1.
Followers are 2 and 3. Consider the following scenario assuming that all
messages fall into the same partition:
1. Producer sends message A
Hi Yury,
If I understand correctly, the case you're describing is equivalent to a
leader re-election (in terms of data consistency). In that case, messages
can be lost depending on your "acks" setting:
https://kafka.apache.org/documentation.html
see: request.required.acks:
E.g. "only messages th
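To make the acks trade-off concrete, here is a minimal sketch of the relevant 0.8-era producer settings (the broker hostnames are made up; the comments summarize the documented semantics of each value):

```java
import java.util.Properties;

public class AcksDemo {
    // Builds a producer config for a given request.required.acks value.
    public static Properties producerProps(String acks) {
        Properties props = new Properties();
        props.put("metadata.broker.list",
                "broker1:9092,broker2:9092,broker3:9092"); // hypothetical hosts
        // "0"  -> fire-and-forget: messages are lost on any broker failure
        // "1"  -> leader ack only: lost if the leader dies before followers copy
        // "-1" -> ack only after all in-sync replicas have the message
        props.put("request.required.acks", acks);
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        return props;
    }
}
```

With acks=-1 and a sufficient replication factor, acknowledged messages should survive the leader-failure scenario described above.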
Daniel,
We have the same question. We noticed that the compression tests we ran
using the built-in performance tester were not realistic. I think the
on-disk compression was 200:1 (yes, that is two hundred to one). I had
planned to try to edit the producer performance tester source and do the foll
Hi Michal,
Thanks for the excellent links. They really help. Now it looks like with
request.required.acks=1 (let alone 0), messages can be lost in the case I
described. Aphyr's article seemingly describes a trickier case
than mine.
I'm still not sure about Kafka's behavior in the case of request.r
If broker 1 is down in step 4, the producer will get a broken socket error
immediately. If broker 1 is up in step 4 and just the leader is moved
(e.g., due to preferred leader balancing), the producer will get an error
after the timeout specified in the producer request.
Thanks,
Jun
On Mon, Jun
Yes, this is a problem and will indeed affect the producer performance when
compression is turned on. Perhaps we should fill in the values with some
randomized bytes. Could you file a jira for this?
Thanks,
Jun
On Sun, Jun 29, 2014 at 11:24 PM, Daniel Compton
wrote:
> Hi folks
>
> I was doing
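Jun's suggestion can be sketched like this (a stand-in payload generator, not the actual perf-tester patch):

```java
import java.util.Random;

public class PayloadGen {
    // Constant-filled messages compress absurdly well (hence the 200:1 figure
    // reported above); seeded random bytes are essentially incompressible,
    // which gives a worst-case but honest compression benchmark.
    public static byte[] randomPayload(int size, long seed) {
        byte[] payload = new byte[size];
        new Random(seed).nextBytes(payload);
        return payload;
    }
}
```

Real payloads usually sit between the two extremes, so sampling bytes from representative production messages would be even more realistic.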
You can sort of do that by setting the fetch size to be the message size
plus overhead and setting the buffered chunk size to be 1.
Thanks,
Jun
On Sat, Jun 28, 2014 at 4:34 PM, Jorge Marizan
wrote:
> Hi guys,
>
> I was wondering if is there a way for a consumer to fetch just a message
> at
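Jun's workaround, sketched as 0.8 high-level consumer settings (the ZooKeeper host, group id, and sizes are made up; tune the fetch size to your largest expected message plus overhead):

```java
import java.util.Properties;

public class OneAtATime {
    public static Properties consumerProps(int maxMessageBytes) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // hypothetical host
        props.put("group.id", "one-at-a-time");     // hypothetical group
        // Fetch only enough bytes for a single message plus protocol overhead...
        props.put("fetch.message.max.bytes", Integer.toString(maxMessageBytes));
        // ...and keep at most one buffered chunk in the consumer's internal queue.
        props.put("queued.max.message.chunks", "1");
        return props;
    }
}
```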
log.retention.minutes is only available in 0.8.1.*.
Thanks,
Jun
On Fri, Jun 27, 2014 at 2:13 PM, Virendra Pratap Singh <
vpsi...@yahoo-inc.com.invalid> wrote:
> Running a mixed 2-broker cluster. Mixed as in broker1 is
> running 0.8.0 and broker2 is running 0.8.1.1 (from the apache release
The log.retention.minutes property was only introduced in 0.8.1:
https://issues.apache.org/jira/browse/KAFKA-918
If you are running 0.8.0 then it will not be recognized.
Guozhang
On Fri, Jun 27, 2014 at 2:13 PM, Virendra Pratap Singh <
vpsi...@yahoo-inc.com.invalid> wrote:
> Running a mixed 2 brok
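For reference, the setting in question as it would appear in server.properties on 0.8.1 (the value here is arbitrary):

```
# Only recognized on 0.8.1+ (KAFKA-918); 0.8.0 does not recognize it.
log.retention.minutes=1440
```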
Which version of Kafka are you using?
Thanks,
Jun
On Fri, Jun 27, 2014 at 11:57 AM, England, Michael wrote:
> Neha,
>
> In state-change.log I see lots of logging from when I last started up
> kafka, and nothing after that. I do see a bunch of errors of the form:
> [2014-06-25 13:21:37,124] ER
Hi Daniel,
If you do not expect the consumer to stop with a timeout exception when no
more data is coming, then you should try/catch the exception. On the other
hand, throwing the timeout exception does not necessarily stop the
background fetcher threads; if you really want to shut down the consume
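The shutdown pattern being described can be sketched with a BlockingQueue standing in for the consumer stream (the real high-level consumer throws ConsumerTimeoutException rather than returning null; the comments mark where the real calls would go):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimeoutLoop {
    // Drains messages until a poll times out, then stops cleanly.
    public static int drain(BlockingQueue<String> stream, long timeoutMs)
            throws InterruptedException {
        int consumed = 0;
        while (true) {
            String msg = stream.poll(timeoutMs, TimeUnit.MILLISECONDS);
            if (msg == null) {
                // With the real client you would catch ConsumerTimeoutException
                // here and then call connector.shutdown() so the background
                // fetcher threads actually stop.
                break;
            }
            consumed++;
        }
        return consumed;
    }
}
```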
Hi Bert
What you are describing could be done partially with the console producer. It
will read from a file and send each line to the Kafka broker. You could make a
really big file or alter that code to repeat a certain number of times. The
source is pretty readable; I think that might be an ea
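The "really big file" approach can be sketched like this (file name and repeat count are up to you; one line per message, which is the format the console producer reads from stdin):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class RepeatFile {
    // Writes `line` to `out` `times` times, one message per line.
    public static Path write(Path out, String line, int times) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < times; i++) {
            sb.append(line).append('\n');
        }
        return Files.write(out, sb.toString().getBytes(StandardCharsets.UTF_8));
    }
}
```

Then pipe the file into the console producer, e.g. `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < big-file.txt` (broker address and topic name are hypothetical).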
Ah, gotcha.
Thanks,
Sonali
-----Original Message-----
From: Steve Morin [mailto:steve.mo...@gmail.com]
Sent: Friday, June 27, 2014 4:45 PM
To: users@kafka.apache.org
Cc:
Subject: Re: kafka producer pulling from custom restAPI
The answer is no, it doesn't work that way. You would have to write
That did the trick! This is utterly awesome, thanks!
On lun 30 jun 2014 12:12:41 AST, Jun Rao wrote:
You can sort of do that by setting the fetch size to be the message size
plus overhead and setting the buffered chunk size to be 1.
Thanks,
Jun
On Sat, Jun 28, 2014 at 4:34 PM, Jorge Mariz