Yes, the new producer in trunk is compatible with the 0.8.1.1 broker. In trunk,
we have evolved the format of OffsetCommitRequest, which is used by the old
consumer (and will be used by the new consumer). So, to use the consumer in
trunk, you will need to upgrade the broker first.
Thanks,
Jun
The goal of batching is mostly to reduce the # RPC calls to the broker. If
compression is enabled, a larger batch typically implies better compression
ratio.
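The compression effect is easy to see outside Kafka. The sketch below (plain JDK gzip, not Kafka's actual wire-level compression; message contents are made up) compresses N copies of a small JSON-ish message individually versus as one batch:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

public class BatchCompressionDemo {

    // Compressed size of a payload under gzip.
    static int gzipSize(byte[] data) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(data);
            }
            return bos.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Total bytes if each message is compressed (and sent) on its own.
    static int individualSize(String msg, int n) {
        return n * gzipSize(msg.getBytes());
    }

    // Bytes if the n messages are compressed together as one batch.
    static int batchedSize(String msg, int n) {
        return gzipSize(msg.repeat(n).getBytes());
    }

    public static void main(String[] args) {
        String msg = "{\"user\":\"alice\",\"action\":\"click\"}";
        // Redundancy across messages only helps the compressor when the
        // messages share one compression window, i.e. when batched.
        System.out.println("individual: " + individualSize(msg, 100));
        System.out.println("batched:    " + batchedSize(msg, 100));
    }
}
```

The batched payload compresses far better because the per-message gzip header overhead is paid once and repetition across messages stays inside one compression window.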
The reason that we have to fail the whole batch is that the error code in
the produce response is per partition, instead of per message.
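A toy model makes the consequence concrete. This is not the real wire format (the actual ProduceResponse is structured per topic-partition with numeric error codes; the partition names and code values below are illustrative): with only one error code per partition, a client has no way to tell which message was bad, so it must treat every message it sent to that partition in the batch as failed.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ProduceResponseDemo {

    // Given the messages grouped by partition and a per-partition error
    // code (0 = no error), return every message that must be considered
    // failed. Granularity is the partition, never the single message.
    static List<String> failedMessages(Map<String, List<String>> batchByPartition,
                                       Map<String, Short> errorCodeByPartition) {
        List<String> failed = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : batchByPartition.entrySet()) {
            Short code = errorCodeByPartition.get(e.getKey());
            if (code != null && code != 0) {
                // One bad message in this partition's batch taints them all:
                // the response cannot point at the offender.
                failed.addAll(e.getValue());
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        Map<String, List<String>> batch = new HashMap<>();
        batch.put("topicA-0", Arrays.asList("m1", "m2", "m3"));
        batch.put("topicA-1", Arrays.asList("m4"));

        Map<String, Short> errors = new HashMap<>();
        errors.put("topicA-0", (short) 2);  // some error on partition 0
        errors.put("topicA-1", (short) 0);  // partition 1 succeeded

        // m1, m2, m3 all fail even if only one of them was invalid.
        System.out.println(failedMessages(batch, errors));
    }
}
```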
Could you explain the goals of batches? I was assuming this was simply a
performance optimization, but this behavior makes me think I'm missing
something.
is a batch more than a list of *independent* messages?
Why would you reject the whole batch? One invalid message causes the loss
of up to batch.num.messages messages.
So it looks like you are depending on a Kafka artifact that was built
with Scala 2.9.2, while importing Scala 2.10.1.
Scala is not binary compatible between versions, so you need either
both on 2.10 or both on 2.9.
Gwen
On Fri, Aug 29, 2014 at 11:41 AM, Parin Jogani wrote:
so I always had this in my project:
>
> <dependency>
>     <groupId>org.scala-lang</groupId>
>     <artifactId>scala-library</artifactId>
>     <version>2.10.1</version>
> </dependency>
>
-Parin
On Thu, Aug 28, 2014 at 12:00 AM, Jun Rao wrote:
> Kafka is written in Scala, so to run Kafka you need a Scala jar.
>
> Thanks,
>
> Jun
>
>
> On Wed, Aug 27,
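One way to keep the versions aligned (a sketch, assuming Maven and the Scala 2.10 build of the Kafka artifact, published as kafka_2.10; adjust versions to your setup) is to pin both dependencies to the same Scala binary version:

```xml
<!-- Both artifacts on the same Scala binary version (2.10 here). -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.1.1</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.10.1</version>
</dependency>
```

Mixing kafka_2.9.2 with scala-library 2.10.x (or vice versa) is exactly the binary-incompatibility Gwen describes.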
Yes, the protocol for metadata, fetch, and produce is the same across
both clients and all 0.8.x versions.
-Jay
On Fri, Aug 29, 2014 at 10:09 AM, Jonathan Weeks
wrote:
> Thanks, Jay. Follow-up questions:
>
> Some of our services will produce and consume. Is there consumer code on
> trunk that is backwards compatible with an existing 0.8.1.1 broker cluster?
I think Jun was referring to "consumer" clients. The new producer is
compatible with existing brokers. However the new consumer requires a
server-side consumer coordinator.
On Fri, Aug 29, 2014 at 10:37:12AM -0700, Jonathan Weeks wrote:
> Hi Jun,
>
> Jay indicated that the new producer client on trunk is backwards compatible
> with 0.8.1.1 (see thread below).
Hi Jun,
Jay indicated that the new producer client on trunk is backwards compatible
with 0.8.1.1 (see thread below) — can you elaborate?
Given the consumer re-write for 0.9, I can definitely see how that would break
backwards compatibility, but Jay indicates that the producer on trunk will
remain backwards compatible.
The old clients will be compatible with the new broker. However, in order
to use the new clients, you will need to upgrade to the new broker first.
Thanks,
Jun
On Fri, Aug 29, 2014 at 10:09 AM, Jonathan Weeks
wrote:
> Thanks, Jay. Follow-up questions:
>
> Some of our services will produce and consume.
Thanks, Jay. Follow-up questions:
Some of our services will produce and consume. Is there consumer code on trunk
that is backwards compatible with an existing 0.8.1.1 broker cluster? If not
0.8.1.1, will the consumer code on trunk work with a 0.8.2 broker cluster when
0.8.2 is released?
A couple of points to keep in mind during the rolling update:
- "Controlled shutdown" should be used to bring brokers down (
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-1.ControlledShutdown),
so that brokers gracefully transfer leadership before actually going down.
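In the 0.8.x line, controlled shutdown can be enabled on the broker side; the property names below match the configuration described on the wiki page above, but check them against your broker version before relying on them:

```properties
# server.properties - hand off partition leadership before exiting
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
```

Earlier 0.8 releases also shipped an admin tool (kafka.admin.ShutdownBroker, invoked via kafka-run-class.sh) that triggers the same leadership transfer from the command line; the replication-tools wiki page linked above documents its usage.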