Having the same question: what happened to the 0.8.2 release, and when is it
supposed to happen?
Thanks.
On Tue, Sep 30, 2014 at 12:49 PM, Jonathan Weeks
wrote:
> I was one asking for 0.8.1.2 a few weeks back, when 0.8.2 was at least 6-8
> weeks out.
>
> If we truly believe that 0.8.2 will go “golden” a
But it looks like some clients don't implement it?
o all partitions? No one would be able to join?
On Tue, Oct 1, 2013 at 6:33 AM, Neha Narkhede wrote:
> There are 2 types of consumer clients in Kafka - ZookeeperConsumerConnector
> and SimpleConsumer. Only the former has the rebalancing logic.
>
> Thanks,
> Neha
> On Oct 1, 2013 6:30
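For context, a minimal high-level consumer in the 0.8 Scala API, i.e. the
ZookeeperConsumerConnector path that has the rebalancing logic, looks roughly
like this sketch; the zookeeper address, group id and topic name are placeholders:

import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig}

val props = new Properties()
props.put("zookeeper.connect", "localhost:2181")  // placeholder ZK address
props.put("group.id", "my-group")                 // consumers sharing this group id rebalance partitions among themselves
val connector = Consumer.create(new ConsumerConfig(props))
val streams = connector.createMessageStreams(Map("my-topic" -> 1))  // one stream for the topic

A SimpleConsumer, by contrast, is handed an explicit broker, partition and
offset and does none of this coordination.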
simple consumers can independently
> consume from the same partition(s).
>
> Guozhang
>
>
> On Tue, Oct 1, 2013 at 8:11 AM, Kane Kane wrote:
>
> > Yeah, I noticed that, I'm curious how balancing happens if SimpleConsumer
> > is used. I.e. i can provide a partition to read
onality. It ensures that a partition is only consumed by
> one high level consumer at any given time. Is there any particular reason
> why you want to use SimpleConsumer instead?
>
> Thanks,
> Neha
>
>
> On Tue, Oct 1, 2013 at 8:42 AM, Kane Kane wrote:
>
> > So
SimpleConsumer API, but I see now that everything should be implemented on the
client side.
Thanks.
On Tue, Oct 1, 2013 at 8:52 AM, Guozhang Wang wrote:
> I do not understand your question, what are you trying to implement?
>
>
> On Tue, Oct 1, 2013 at 8:42 AM, Kane Kane wrote:
>
> >
>
> https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ConsumerAPI and
> this is being planned for the 0.9 release. Once this is complete, the
> non-java clients can easily leverage the group management feature.
>
> Thanks,
> Neha
>
>
> On Tue, Oct
isplay/KAFKA/Client+Rewrite#ClientRewrite-ConsumerAPI and
> this is being planned for the 0.9 release. Once this is complete, the
> non-java clients can easily leverage the group management feature.
>
> Thanks,
> Neha
>
>
> On Tue, Oct 1, 2013 at 8:56 AM, Kane Kane wrote:
>
>
mplementing it in kafka-python
>
> Cheers
> -David
>
>
> On 10/1/13 11:56 AM, Kane Kane wrote:
>
>> The reason I was asking is that this library seems to have support only
>> for
>> SimpleConsumer
>> https://github.com/mumrah/kafka-python/ <https:
it
> probably won't be ready for a bit.
>
> Cheers,
>
> Keith.
>
>
> On Tue, Oct 1, 2013 at 1:39 PM, Kane Kane wrote:
>
> > Thanks for the reply, David; your library is great, and indeed the rebalancing
> > is currently somewhat quirky and complicated. And I guess it
I'm also curious to know what the limiting factor of Kafka write
throughput is.
I've never seen reports higher than 100 MB/sec, while disks can obviously provide
much more. In my own test with a single broker, single partition, and single
replica:
bin/kafka-producer-perf-test.sh --topics perf --threads 10 --br
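For reference, a fuller invocation of that perf test, with a placeholder broker
address and counts, would look something like:

bin/kafka-producer-perf-test.sh --topics perf --threads 10 --broker-list localhost:9092 --messages 1000000 --message-size 1000

though the exact option names can differ between 0.8 releases.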
After an unclean shutdown, Kafka reports this error on startup:
[2013-10-14 16:44:24,898] FATAL Fatal error during KafkaServerStable
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.IllegalArgumentException: requirement failed: Corrupt index
found, index file (/disk1/kafka-log
index files, that will
> make the Kafka server rebuild the index on startup.
>
> Thanks,
> Neha
>
>
> On Mon, Oct 14, 2013 at 3:05 PM, Kane Kane wrote:
>
> > After unclean shutdown kafka reports this error on startup:
> > [2013-10-14 16:44:24,898] FATAL Fatal
Thanks!
On Mon, Oct 14, 2013 at 5:29 PM, Neha Narkhede wrote:
> Just deleting the index files should fix this issue.
>
>
> On Mon, Oct 14, 2013 at 5:23 PM, Kane Kane wrote:
>
> > Thanks a lot Neha, so I have to delete only the index files, not the log files
> > themselves? Wh
up.
> >
> >Thanks,
> >Neha
> >
> >
> >On Mon, Oct 14, 2013 at 3:05 PM, Kane Kane wrote:
> >
> >> After unclean shutdown kafka reports this error on startup:
> >> [2013-10-14 16:44:24,898] FATAL Fatal error during KafkaServerStable
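A sketch of the fix being discussed, assuming the broker is stopped first and
/disk1/kafka-logs is the (placeholder) log directory:

find /disk1/kafka-logs -name '*.index' -delete

On the next startup the broker rebuilds the indexes from the log segments, so
only the .index files should be removed, not the .log files.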
I couldn't reproduce it yet; I'm rolling out a fresh install and trying
to do so.
On Mon, Oct 14, 2013 at 8:40 PM, Jun Rao wrote:
> Is this issue reproducible?
>
> Thanks,
>
> Jun
>
>
> On Mon, Oct 14, 2013 at 8:30 PM, Kane Kane wrote:
>
> > Yes, m
I have 3 brokers and a topic with a replication factor of 3.
Somehow all partitions ended up being on the same broker.
I created the topic with all 3 brokers alive, and they haven't died since then.
Even when I try to reassign it:
bin/kafka-reassign-partitions.sh --zookeeper 10.80.42.147:2181 --broker-list
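The manual reassignment JSON that the tool takes would look roughly like this,
with placeholder topic name and broker ids (exact option names differ between
0.8 releases):

{"partitions": [{"topic": "mytopic", "partition": 0, "replicas": [1,2,3]}]}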
tus-check-json-file option of the reassign partitions command to
> determine whether partition reassignment has completed or not.
>
> Joel
>
>
> On Tue, Oct 15, 2013 at 3:46 PM, Kane Kane wrote:
> > I have 3 brokers and a topic with replication factor of 3.
> > Somehow al
is partition reassignment which
> completes when all the reassigned replicas are in sync with the
> original replica(s). You can check the status of the command using the
> option I mentioned earlier.
>
> On Tue, Oct 15, 2013 at 7:02 PM, Kane Kane wrote:
> > I thought if i h
ionTool
>
> On Wed, Oct 16, 2013 at 12:15 AM, Kane Kane wrote:
> > Oh I see, what is the better way to initiate the leader change? As I said,
> > somehow all my partitions have the same leader for some reason. I have 3
> > brokers and all partitions have their leader on a single on
Thanks for the advice!
On Wed, Oct 16, 2013 at 7:57 AM, Jun Rao wrote:
> Make sure that there are no under-replicated partitions (use the
> --under-replicated option in the list topic command) before you run that
> tool.
>
> Thanks,
>
> Jun
>
>
> On Wed, Oct 16, 2013
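A sketch of the sequence being suggested, with a placeholder zookeeper address
(the exact script and option names vary slightly by 0.8 version):

bin/kafka-list-topic.sh --zookeeper localhost:2181 --under-replicated-partitions
bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181

The first command should come back empty before the preferred replica election
is run.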
Hello, as I understand it, send is not atomic, i.e. I have something like this
in my code:
import scala.collection.mutable.ArrayBuffer
import kafka.producer.KeyedMessage

val requests = new ArrayBuffer[KeyedMessage[AnyRef, AnyRef]]
for (message <- messages) {
  requests += new KeyedMessage(topic, null, message, message)
}
producer.send(requests: _*)  // send the accumulated batch in one call
That means ba
t fails the consumers will just
> re-issue the request starting with the previous offsets again.
>
> Guozhang
>
>
> On Wed, Oct 16, 2013 at 8:56 AM, Kane Kane wrote:
>
> > Hello, as I understand send is not atomic, i.e. i have something like
> this
> >
I see this MBean:
"kafka.server":name="AllTopicsMessagesInPerSec",type="BrokerTopicMetrics"
Does it return a number per broker or per cluster? If it's per broker, how do I
get the global value per cluster, and vice versa?
Thanks.
>>publishing to and consumption from the partition will halt
and will not resume until the faulty leader node recovers
Can you confirm that's the case? I think they won't wait until the leader
has recovered and will instead try to elect a new leader from the existing non-ISR replicas?
And in case they do wait, and the faul
If I set request.required.acks to -1, and set a relatively short
request.timeout.ms, and the timeout happens before all replicas acknowledge the
write - would the message be written to the leader or dropped?
> the
> > messages will be dropped.
> >
> > Guozhang
> >
> >
> > On Thu, Oct 24, 2013 at 6:50 PM, Kane Kane
> wrote:
> >
> > > If i set request.required.acks to -1, and set relatively short
> > > request.timeout.ms and ti
, Oct 25, 2013 at 7:41 AM, Neha Narkhede wrote:
> The producer acknowledgement is independent of the leader follower
> replication. So if the message is written to the leader and the followers
> are healthy, the message will get committed.
>
> Thanks,
> Neha
> On Oct 24, 201
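For reference, these are ordinary 0.8 producer properties; a sketch with an
illustrative timeout value:

request.required.acks=-1
request.timeout.ms=10000

so the acknowledgement behaviour discussed above is configured entirely on the
producer side.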
I have a cluster of 3 Kafka brokers. With the following script I send some
data to Kafka and in the middle do a controlled shutdown of 1 broker. All
3 brokers are in the ISR before I start sending. When I shut down the broker I get
a couple of exceptions and I expect the data shouldn't be written. Say, I send
the message may still be committed.
>
> Did you shut down the leader broker of the partition or a follower broker?
>
> Guozhang
>
> On Fri, Oct 25, 2013 at 8:45 AM, Kane Kane wrote:
>
> > I have cluster of 3 kafka brokers. With the following script I send some
>
Or, to rephrase it more generally, is there a way to know exactly whether a
message was committed or not?
On Fri, Oct 25, 2013 at 10:43 AM, Kane Kane wrote:
> Hello Guozhang,
>
> My partitions are split almost evenly between brokers, so, yes - the broker
> that I shut down is the leader for
o sent back to the producer, it means the
> message may or may not have been committed.
>
> Guozhang
>
>
> On Fri, Oct 25, 2013 at 8:05 AM, Kane Kane wrote:
>
> > Hello Neha,
> > Can you explain please what this means:
> > request.timeout.ms - The amount of time
ics of the broker.
> On 25 Oct 2013 23:23, "Kane Kane" wrote:
>
> > Or, to rephrase it more generally, is there a way to know exactly if
> > message was committed or no?
> >
> >
> > On Fri, Oct 25, 2013 at 10:43 AM, Kane Kane
> wrote:
> >
once.
On Fri, Oct 25, 2013 at 11:26 AM, Kane Kane wrote:
> Hello Aniket,
>
> Thanks for the answer, this totally makes sense, and implementing that
> layer on the consumer side
> to check for dups sounds like a good solution to this issue.
>
> Can we get a confirmation from kafka
s.
> > The best way right now to deal with duplicate msgs is to build the
> > processing engine (the layer where your consumer sits) to deal with the at-least-once
> > semantics of the broker.
> > On 25 Oct 2013 23:23, "Kane Kane" wrote:
> >
> > > Or, to r
> wrote:
>
> > Kane and Aniket,
> > I am interested in knowing what pattern/solution people
> usually
> > use to implement exactly-once as well.
> > -Steve
> >
> >
> > On Fri, Oct 25, 2013 at 11:39 AM, Kane Kane
> wrote:
> >
> >
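A common shape for such an at-least-once layer is consumer-side dedup. A rough
Scala sketch, in which messages, keyOf and process are placeholders and the
seen set would normally be persisted rather than kept in memory:

val seen = scala.collection.mutable.HashSet[String]()
for (msg <- messages) {
  val k = keyOf(msg)        // e.g. an application-level id, or topic/partition/offset
  if (!seen.contains(k)) {  // skip anything already processed
    process(msg)
    seen += k
  }
}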
actually sent, not the ones that are duplicates.
>
> Guozhang
>
>
> On Fri, Oct 25, 2013 at 5:00 PM, Kane Kane wrote:
>
> > There are a lot of exceptions, I will try to pick an example of each:
> > ERROR async.DefaultEventHandler - Failed to send requests for topics
>
I think there was a plan to make the Kafka producer and consumer pure
Java, so the Scala version wouldn't matter. And I think that's mostly why
people want a certain Scala version: not because of Kafka itself, but because they
just need the producer/consumer libraries.
On Tue, Oct 29, 2013 at 8:32 PM, Aniket Bhat
I've got the following exception running a produce/consume loop for several
hours.
That was just a single exception, but during that time both producers and
consumers slowed down a lot. After that it looks like everything works fine,
though I have
a suspicion some messages were lost.
Can anyone explain what
When a machine with Kafka dies, most often the broker cannot start, with
errors about index files being corrupt. After I delete them manually the broker
usually boots up successfully. Shouldn't Kafka try to delete/rebuild
broken index files itself?
Also this exception looks a bit weird:
java.lang.I
ike a bug. Can you file a JIRA?
>
> Thanks,
> Neha
>
>
> On Fri, Nov 1, 2013 at 1:51 AM, Kane Kane wrote:
>
> > When machine with kafka dies, most often broker cannot start itself with
> > errors about index files being corrupt. After i delete them manually
>
t
possible to be related?
Thanks.
On Fri, Nov 1, 2013 at 3:50 AM, Neha Narkhede wrote:
> I think you are hitting https://issues.apache.org/jira/browse/KAFKA-824.
> Was the consumer being shutdown at that time?
>
>
> On Fri, Nov 1, 2013 at 1:28 AM, Kane Kane wrote:
>
> >
Hello Jun Rao, no, it's not the head; I compiled it a couple of weeks
ago. Should I try with the latest?
On Fri, Nov 1, 2013 at 7:58 AM, Jun Rao wrote:
> Are you using the latest code in the 0.8 branch?
>
> Thanks,
>
> Jun
>
>
> On Fri, Nov 1, 2013 at 7:36 AM,
013 7:59 AM, "Jun Rao" wrote:
>
> > Are you using the latest code in the 0.8 branch?
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Fri, Nov 1, 2013 at 7:36 AM, Kane Kane wrote:
> >
> > > Neha, yes when i kill it with -9, su
Hello Neha, the rest of the log goes to my consumer code, are you interested
in it? It's a slightly modified version of ConsoleConsumer.
Thanks.
On Fri, Nov 1, 2013 at 9:24 AM, Neha Narkhede wrote:
> Could you send the entire consumer log?
>
>
> On Fri, Nov 1, 2013 at 7:45 AM,
Guozhang Wang wrote:
>
> > Currently the index files will only be deleted on startup if there are
> any
> > .swap files indicating the server crashed while opening the log segments.
> We
> > should probably change this logic.
> >
> > Guozhang
> >
> >
>
I think
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.8.8")
should be updated to at least 0.9.0 to compile successfully. I've had
an issue with assembly-package-dependency.
> On Fri, Nov 1, 2013 at 4:14 PM, Kane Kane wrote:
>
>> I think
>> addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.8.8")
>>
>> Should be updated to 0.9.0 at least to successfully compile. I've had
>> an issue with assembly-package-dependency.
>>
I had only 1 topic with 45 partitions replicated across 3 brokers.
After several hours of uploading some data to Kafka, 1 broker died with
the following exception.
I guess I can fix it by raising the limit for open files, but I wonder how it
happened under the described circumstances.
[2013-11-02 00:19:14,86
j file of the consumer around the time
> of the error in question.
>
> Thanks,
> Neha
>
>
> On Fri, Nov 1, 2013 at 11:35 AM, Kane Kane wrote:
>
>> Hello Neha, rest of the log goes to my consumer code, are you interested
>> in? It's a little bit modified ver
Thanks, Jun.
On Sat, Nov 2, 2013 at 8:31 PM, Jun Rao wrote:
> The # of required open file handles is the # of client socket connections + the # of log
> segment and index files.
>
> Thanks,
>
> Jun
>
>
> On Fri, Nov 1, 2013 at 10:28 PM, Kane Kane wrote:
>
>> I had only 1
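As a rough worked example of that formula: 45 partitions, each with a .log and
an .index file per segment and a few segments apiece, plus a few hundred client
sockets, can already approach the common default limit of 1024 descriptors.
Raising the limit for the broker user, e.g.

ulimit -n 65536

is the usual fix.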
n repo and I need to set
>> scalaVersion 2.10.2. Thoughts?
>>
>> Bob
>>
>> On 11/1/13, 9:36 PM, "Kane Kane" wrote:
>>
>> >Yes, I've had the problem, which was resolved by updating sbt-assembly. Will
>> >open a ticket and provide a patch.
>>
aching a patch to
>>https://issues.apache.org/jira/browse/KAFKA-1116 to enable cross
>>compilation to scala 2.10.2?
>>
>>Thanks,
>>
>>Jun
>>
>>
>>On Mon, Nov 4, 2013 at 9:53 PM, Kane Kane wrote:
>>
>>> I'm usin
Hello,
What would happen if the disk is full? Does it make sense to have an
additional variable to set the maximum size for all logs combined?
Thanks.
37 PM, Neha Narkhede wrote:
> You are probably looking for log.retention.bytes. Refer to
> http://kafka.apache.org/documentation.html#brokerconfigs
>
>
> On Tue, Nov 5, 2013 at 3:10 PM, Kane Kane wrote:
>
>> Hello,
>>
>> What would happen if disk is full? Does
hink it will be useful if we can put an overall limit on total log
size, so the disk doesn't get full.
Also, what is the recovery strategy in this case? Is it possible to
recover from this state, or do I have to delete all the data?
Thanks.
On Tue, Nov 5, 2013 at 9:11 PM, Kane Kane wrote:
> I've ch
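One detail worth noting: log.retention.bytes applies per partition (per log),
not to all logs combined, so a server.properties sketch like

log.retention.bytes=1073741824
log.retention.hours=168

caps each partition at roughly 1GB rather than bounding the whole disk; an
overall limit has to be approximated by sizing this against the partition count.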
How is it possible to have an async consumer?
On Sun, Nov 10, 2013 at 11:06 AM, Marc Labbe wrote:
> Hi David,
>
> check for mahendra's fork of kafka-python, he has implemented gevent
> support in a branch (https://github.com/mahendra/kafka-python/tree/gevent) but
> it hasn't made it to the main repo
The new client rewrite proposal includes an async Producer, but not an async
Consumer; I think there is a reason. You cannot send a new consume
request before the previous one is finished?
On Sun, Nov 10, 2013 at 11:42 AM, Marc Labbe wrote:
> Kane, you can probably achieve async consumer using the client direct
Is it possible to update from 0.8 on the fly (rolling upgrade)?
On Wed, Mar 12, 2014 at 2:17 PM, Michael G. Noll
wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Many thanks to everyone involved in the release!
>
> Please let me share two comments:
>
> One, there's a typo on the Downl
Is there a recommended cap on concurrent producer threads?
We plan to have around 4000 connections across the cluster writing to
Kafka; I assume there shouldn't be any performance implications
related to that?
Thanks.
node cluster at all times.
>>You
>>might have to bump up the limit for the number of open file handles per
>>broker though.
>>
>>Thanks,
>>Neha
>>
>>
>>On Tue, Mar 25, 2014 at 3:41 PM, Kane Kane
>>wrote:
>>
>>> Is there
Hello all,
After updating to the latest 0.8.1.1, jmxtrans fails with a connection timeout error.
I can connect with jconsole, but it is very unstable, reconnects every
few seconds, and sometimes fails with
May 01, 2014 2:03:13 PM ClientCommunicatorAdmin Checker-run
WARNING: Failed to check connection: jav
Hello,
According to this:
https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
0.9 is supposed to be released around this time. Are there any updated
release plans? I am particularly interested in approximately when 0.9 will be
released.
Thanks.
I've tried to run the mirrormaker tool in async mode and I get
WARN Produce request with correlation id 263 failed due to
[benchmark1,30]: kafka.common.MessageSizeTooLargeException
(kafka.producer.async.DefaultEventHandler)
I don't get the error in sync mode. My message.max.bytes is the default
(100). As
Yes, messages were compressed with gzip and I've enabled the same
compression in mirrormaker producer.
On Sat, Jun 7, 2014 at 12:56 PM, Guozhang Wang wrote:
> Kane,
>
> Did you use any compression method?
>
> Guozhang
>
>
> On Fri, Jun 6, 2014 at 2:15 PM, Kane Kane
nt to the broker, and the broker will reject the
> request if this single message's size is larger than 1MB. So you need to
> either change your max request size on broker or reduce your producer batch
> size.
>
> Guozhang
>
>
> On Sat, Jun 7, 2014 at 2:09 PM, Kane Ka
at 3:49 PM, Guozhang Wang wrote:
> In the old producer yes; in the new producer (available in 0.8.1.1) the
> batch size is in bytes instead of # of messages, which gives you better
> control.
>
> Guozhang
>
>
> On Sat, Jun 7, 2014 at 2:48 PM, Kane Kane wrote:
>
>>
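A sketch of the knobs involved, with the usual 0.8 names and defaults as
illustrative values:

# broker: largest single message (or compressed batch) it will accept
message.max.bytes=1000000
# old producer, async mode: messages per batch; with compression the whole batch is wrapped into one message
batch.num.messages=200

which is how an async mirror maker with gzip can exceed the broker limit even
when the individual messages are small.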
Last time I checked, the producer sticks to a partition for 10 minutes.
On Mon, Jun 9, 2014 at 4:13 PM, Prakash Gowri Shankor
wrote:
> Hi,
>
> This is with 0.8.1.1 and I ran the command line console consumer.
> I have one broker, one producer and several consumers. I have one topic,
> many partit
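The 10 minutes line up with the old producer's topic.metadata.refresh.interval.ms,
which defaults to 600000 ms; a producer sending keyless messages picks a
partition and sticks with it until the next metadata refresh, so shortening
that interval, e.g.

topic.metadata.refresh.interval.ms=60000

spreads the load across partitions sooner. Treat the exact behaviour as
version-dependent.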
Hello Neha, can you explain your statement:
>>Bringing one node down in a cluster will go smoothly only if your
replication factor is 1 and you enabled controlled shutdown on the brokers.
Can you elaborate on your notion of "smooth"? I thought if you have
replication factor=3 in this case, you shoul
reach a quorum. It is less likely but still risky to some
> extent.
>
>
> On Tue, Jun 24, 2014 at 2:44 AM, Hemath Kumar
> wrote:
>
>> Yes kane i have the replication factor configured as 3
>>
>>
>> On Tue, Jun 24, 2014 at 2:42 AM, Kane Kane wrote:
>>
Sorry, I meant 5 nodes in the previous question.
On Tue, Jun 24, 2014 at 12:36 PM, Kane Kane wrote:
> Hello Neha,
>
>>>ZK cluster of 3 nodes will tolerate the loss of 1 node, but if there is a
> subsequent leader election for any reason, there is a chance that the
> cluster do
For example, with four machines ZooKeeper can only handle the
> failure of a single machine; if two machines fail, the remaining two
> machines do not constitute a majority. However, with five machines
> ZooKeeper can handle the failure of two machines."
>
> Hope that helps.
>
>
> remaining nodes to be up to reach quorum (of course it will be still
> possible, but it *might* fail). In case of a 5-node cluster having 1 node
> down is not that risky, because you still have 4 nodes and you need only 3
> of them to reach quorum.
>
> M.
>
> Kind regards,
Hello Jun, are the new producer, consumer, and offset management in
trunk already? Can we start developing libraries with 0.8.2 support
against trunk?
Thanks!
On Tue, Jul 8, 2014 at 9:32 PM, Jun Rao wrote:
> Yes, 0.8.2 is compatible with 0.8.0 and 0.8.1 in terms of wire protocols
> and the upgrade
Hi Jay, I don't have a lot of experience patching Kafka, but I would
really like to help, starting with some minor tasks.
Thanks.
On Wed, Jul 16, 2014 at 3:09 PM, Jay Kreps wrote:
> Hey All,
>
> A number of people have been submitting really nice patches recently.
>
> If you are interested in con
Hello Guozhang,
Is storing offsets in a Kafka topic already in the master branch?
We would like to use that feature; when do you plan to release 0.8.2?
Can we use the master branch in the meantime (i.e. is it stable enough)?
Thanks.
On Fri, Aug 8, 2014 at 1:38 PM, Guozhang Wang wrote:
> Hi Roman,
>
> Current K