Hello everyone,
After doing some searching on the mailing list for best practices on
integrating Avro with Kafka, there appear to be at least 3 options for
integrating the Avro schema: 1) embedding the entire schema within the
message, 2) embedding a unique identifier for the schema in the message
Hi, Everyone,
Joe Stein has kindly agreed to drive this release. We'd like to
get KAFKA-937 committed to 0.8 (should happen today) and then call a vote.
Thanks,
Jun
Hi,
I have a Kafka 0.8 cluster with two nodes connected to three ZKs, with the
same configuration but the brokerId (one is 0 and the other 1). I created
three topics A, B and C with 4 partitions and a replication factor of 1. My
idea was to have 2 partitions per topic in each broker. However, when
If the leaders exist in both brokers, the producer should be able to
connect to both of them, assuming you don't provide any key when sending
the data. Could you try restarting the producer? If there have been broker
failures, it may take topic.metadata.refresh.interval.ms for the producer
to pick up the new leader.
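For context, a minimal sketch of the relevant 0.8 producer settings (the broker host names here are made up; the refresh interval shown is the default of 10 minutes):

```properties
# Illustrative 0.8 producer properties -- host names are placeholders
metadata.broker.list=broker0:9092,broker1:9092
# How long the producer waits before refreshing topic metadata;
# after a leader change, it can take up to this long for the
# producer to discover the new leader.
topic.metadata.refresh.interval.ms=600000
```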
At LinkedIn, we are using option 2.
Thanks,
Jun
On Wed, Jun 12, 2013 at 7:14 AM, Shone Sadler wrote:
> Hello everyone,
>
> After doing some searching on the mailing list for best practices on
> integrating Avro with Kafka there appears to be at least 3 options for
> integrating the Avro Schema
Actually, currently our schema id is the md5 of the schema itself. Not
fully sure how this compares with an explicit version field in the schema.
Thanks,
Jun
On Wed, Jun 12, 2013 at 8:29 AM, Jun Rao wrote:
> At LinkedIn, we are using option 2.
>
> Thanks,
>
> Jun
>
>
> On Wed, Jun 12, 2013 at
Hi Jun,
Thanks for your prompt answer. The producer yields those errors in the
beginning, so I think the topic metadata refresh has nothing to do with it.
The problem is that one of the brokers isn't the leader for any partition
assigned to it, and because topics were created with a replication factor of 1,
Jun,
I like the idea of an explicit version field, if the schema can be derived
from the topic name itself. The storage (say 1-4 bytes) would require less
overhead than a 128-bit MD5, at the added cost of managing the version number.
Is it correct to assume that your applications are using two schemas th
For one of our key Kafka-based applications, we ensure that all messages in the
stream have a common binary format, which includes (among other things) a
version identifier and a schema identifier. The version refers to the format
itself, and the schema refers to the "payload," which is the data
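As a sketch of what such a common binary format might look like (the field widths and layout here are assumptions for illustration, not the actual format described above):

```python
import struct

def frame(version, schema_id, payload):
    # Prefix the payload with a 1-byte format version and a
    # 4-byte schema identifier, big-endian (assumed widths).
    return struct.pack(">BI", version, schema_id) + payload

def unframe(message):
    # Split a framed message back into (version, schema_id, payload).
    version, schema_id = struct.unpack(">BI", message[:5])
    return version, schema_id, message[5:]

msg = frame(1, 42, b"avro-encoded-bytes")
```

The version byte lets the wrapper format itself evolve independently of the schemas it carries.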
For IntelliJ I've always used the gen-idea sbt plugin:
https://github.com/mpeltonen/sbt-idea
-Dragos
On 6/11/13 10:41 PM, "Jason Rosenberg" wrote:
>Try the one under core/targets?
>
>
>On Tue, Jun 11, 2013 at 3:34 PM, Florin Trofin wrote:
>
>> I downloaded the latest 0.8 snapshot and I want t
Thanks Dragos, I've used that plugin before; it works on a developer's
machine when you build and debug the project, but I also need this to work
with my automated build system.
That's why I need Maven to work.
I've made a bit more progress:
> cd kafka
> ./sbt make-pom
> cd core
Hello -- we're using 0.72. We're looking at the source, but want to be sure. :-)
We create a single ConsumerConnector, call createMessageStreams, and
hand the streams off to individual threads. If one of those threads
calls next() on a stream, gets some messages, and then *blocks* in
some subseque
I tried running the Java examples from the README, but when I run ./sbt I get
the following error.
Java HotSpot(TM) 64-Bit Server VM warning: Insufficient space for shared memory
file:
/tmp/hsperfdata_root/1312
Try using the -Djava.io.tmpdir= option to select an alternate temp location.
Could you pl
Joe,
KAFKA-937 is now committed to 0.8. Could you start the release process for
0.8.0 beta?
Thanks,
Jun
On Wed, Jun 12, 2013 at 7:54 AM, Jun Rao wrote:
> Hi, Everyone,
>
> Joe Stein has kindly agreed to drive this release. We'd like to
> get KAFKA-937 committed to 0.8 (should happen today) a
Any error in state-change.log? Also, are you using the latest code in the
0.8 branch?
Thanks,
Jun
On Wed, Jun 12, 2013 at 9:27 AM, Alexandre Rodrigues <
alexan...@blismedia.com> wrote:
> Hi Jun,
>
> Thanks for your prompt answer. The producer yields those errors in the
> beginning, so I think
Yes, we just have a customized encoder that writes the first 4 bytes of the
md5 of the schema, followed by the Avro bytes.
Thanks,
Jun
On Wed, Jun 12, 2013 at 9:50 AM, Shone Sadler wrote:
> Jun,
> I like the idea of an explicit version field, if the schema can be derived
> from the topic name itself. T
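A minimal sketch of that encoding scheme (the helper names and the registry lookup are hypothetical; only the "4 bytes of md5, then Avro bytes" framing comes from the thread):

```python
import hashlib

def schema_id(schema_json):
    # First 4 bytes of the md5 of the schema text, as described above.
    return hashlib.md5(schema_json.encode("utf-8")).digest()[:4]

def encode(schema_json, avro_bytes):
    # 4-byte schema id prefix, followed by the Avro-encoded payload.
    return schema_id(schema_json) + avro_bytes

def decode(message, registry):
    # registry maps 4-byte ids back to full schemas; consumers need
    # some such lookup to resolve the prefix to a reader schema.
    return registry[message[:4]], message[4:]
```

The trade-off discussed in the thread: the md5 prefix is self-describing and collision-resistant with no version bookkeeping, while an explicit 1-4 byte version field is smaller but must be managed per topic.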
Dragos,
After the sbt upgrade 3-4 months ago, some of us are struggling to get the
Kafka code cleanly loaded into IntelliJ after doing "./sbt gen-idea". Were
you able to do that successfully?
Thanks,
Jun
On Wed, Jun 12, 2013 at 10:45 AM, Dragos Manolescu <
dragos.manole...@servicenow.com> wrote:
Yes, when the consumer is consuming multiple topics, if one thread stops
consuming topic 1, it can prevent new data from getting into the consumer for
topic 2.
Thanks,
Jun
On Wed, Jun 12, 2013 at 7:43 PM, Philip O'Toole wrote:
> Hello -- we're using 0.72. We're looking at the source, but want to b
Jun -- thanks.
But if the topic is the same, doesn't each thread get a partition?
Isn't that how it works?
Philip
On Wed, Jun 12, 2013 at 9:08 PM, Jun Rao wrote:
> Yes, when the consumer is consuming multiple topics, if one thread stops
> consuming topic 1, it can prevent new data getting into
Also, what is it in the ConsumerConnection that causes this behaviour?
I'm proposing here that we move to a model of ConsumerConnection per
thread. This will decouple the flow for each partition and allow each one
to keep flowing, right? We only have one topic on the cluster.
Thanks,
Philip
On Wed, Jun 12
Actually, you are right. This can happen on a single topic too, if you have
more than one consumer thread. Each consumer thread pulls data from a
blocking queue, one or more fetchers are putting data into the queue. Say,
you have two consumer threads and two partitions from the same broker.
There i
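The mechanics Jun describes can be sketched with bounded queues (a simplified pure-Python model, not Kafka code): one fetcher feeds per-thread queues, so when one consumer thread stops draining its queue, the fetcher stalls and the other thread's data stops arriving too.

```python
import queue

# One bounded queue per consumer thread, filled by a single fetcher
# (a simplified model of the consumer internals being discussed).
q_thread1 = queue.Queue(maxsize=1)
q_thread2 = queue.Queue(maxsize=1)

# Thread 1 has stopped consuming, so its queue fills up.
q_thread1.put_nowait(b"chunk-a")

# The fetcher's next chunk is also for thread 1. A real fetcher would
# block here on put(); put_nowait just makes the stall visible.
try:
    q_thread1.put_nowait(b"chunk-b")
    fetcher_stalled = False
except queue.Full:
    fetcher_stalled = True

# While the fetcher is stalled, thread 2's queue stays empty even
# though its partition has data waiting on the broker.
```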