We're seeing a situation in one of our clusters where a message will
occasionally be duplicated on an incorrect topic. No identifiable issues
spotted in either the client application or the Kafka logs.
Has anyone else seen this? It seems like something that would raise concern.
Any recommendations for enh
I'm curious what the recommended best practice is for migrating a
production environment with replication from 0.7 to 0.8 given the protocol
upgrade. Some specific questions I have are:
a) Is it possible to mix 0.7 and 0.8 servers for a given partition during
the migration?
b) If we can't mix ser
Jay - Thanks for the call for comments. Here's some initial input:
- Make message serialization a client responsibility (making all messages
byte[]). Reflection-based loading makes it harder to use generic codecs
(e.g. Envelope) or build up codecs programmatically (see the sketch below).
Non-default partitioning should
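A minimal sketch of the byte[]-serialization point, assuming a hypothetical
Codec/Envelope pair (none of these names are from Kafka itself):

    // Clients own serialization and hand the producer plain byte[];
    // codecs compose programmatically, with no reflection-based loading.
    interface Codec<T> {
        byte[] encode(T value);
    }

    final class Envelope<T> {
        final long timestamp;
        final T payload;
        Envelope(long timestamp, T payload) {
            this.timestamp = timestamp;
            this.payload = payload;
        }
    }

    final class EnvelopeCodec<T> implements Codec<Envelope<T>> {
        private final Codec<T> payloadCodec;
        EnvelopeCodec(Codec<T> payloadCodec) { this.payloadCodec = payloadCodec; }
        @Override
        public byte[] encode(Envelope<T> e) {
            byte[] payload = payloadCodec.encode(e.payload);
            java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocate(8 + payload.length);
            buf.putLong(e.timestamp); // fixed-width envelope header
            buf.put(payload);         // wrapped payload bytes
            return buf.array();
        }
    }

    class CodecDemo {
        public static void main(String[] args) {
            Codec<Envelope<String>> codec = new EnvelopeCodec<String>(
                s -> s.getBytes(java.nio.charset.StandardCharsets.UTF_8));
            byte[] message = codec.encode(
                new Envelope<String>(System.currentTimeMillis(), "hello"));
            System.out.println(message.length + " bytes, ready for a byte[] API");
        }
    }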
emed cleaner with a special purpose object. I wasn't
> actually aware of plans for improved futures in java 8 or the other
> integrations. Maybe you could elaborate on this a bit and show how it would
> be used? Sounds promising, I just don't know a lot about it.
>
> -Jay
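To make the Java 8 futures point concrete: a callback-style send() adapts
directly to CompletableFuture, at which point lambdas and composition come
for free. Everything below (Producer, Callback, RecordMetadata) is a
hypothetical stand-in, not the proposed API:

    import java.util.concurrent.CompletableFuture;

    public class FutureAdapter {
        // Hypothetical stand-ins for the API under discussion.
        static class RecordMetadata {
            final long offset;
            RecordMetadata(long offset) { this.offset = offset; }
        }
        interface Callback { void onCompletion(RecordMetadata metadata, Exception error); }
        interface Producer { void send(byte[] message, Callback callback); }

        // Bridging a callback send to CompletableFuture enables Java 8 style
        // composition: thenAccept, thenCompose, exceptionally, allOf, ...
        static CompletableFuture<RecordMetadata> sendAsync(Producer producer, byte[] message) {
            CompletableFuture<RecordMetadata> future = new CompletableFuture<>();
            producer.send(message, (metadata, error) -> {
                if (error != null) future.completeExceptionally(error);
                else future.complete(metadata);
            });
            return future;
        }

        public static void main(String[] args) {
            Producer fake = (msg, cb) -> cb.onCompletion(new RecordMetadata(42), null);
            sendAsync(fake, "hello".getBytes())
                .thenAccept(m -> System.out.println("acked at offset " + m.offset));
        }
    }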
grade to a new kafka
> version with more configs those will be exposed too. If you realize that
> you need to change a default you can just go through your configs and
> change it everywhere as it will have the same name everywhere.
>
> -Jay
>
>
>
>
> On Sun, Jan 26
re: "Using package to avoid ambiguity" - Unlike Scala, this is really
cumbersome in Java as it doesn't support package imports or import aliases,
so the only way to distinguish is to use the fully qualified path.
re: Closeable - it can throw IOException but is not required to. Same with
AutoClosea
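Both points in a compilable sketch (class names are mine, not from the
thread):

    import java.io.Closeable;

    public class JavaApiNotes {
        // No package imports or import aliases in Java: two types sharing a
        // simple name can only be told apart by their fully qualified paths.
        static java.util.Date utilDate = new java.util.Date();
        static java.sql.Date sqlDate = new java.sql.Date(utilDate.getTime());

        // Closeable.close() declares IOException, but an implementation may
        // narrow the signature and throw no checked exception at all.
        static final class QuietResource implements Closeable {
            @Override
            public void close() { /* nothing to throw */ }
        }

        public static void main(String[] args) {
            try (QuietResource r = new QuietResource()) {
                System.out.println(sqlDate);
            } // no catch needed: this close() throws no checked exception
        }
    }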
I'm wrestling through the at-least/at-most-once semantics of my application
and was hoping for some confirmation of the semantics. I'm not sure I can
classify the high-level consumer as either type.
False ack scenario:
- Thread A: call next() on the ConsumerIterator, advancing the
PartitionTopicInfo o
Jay et al,
What are your current thoughts on ensuring that the next-generation APIs
play nicely with both lambdas and the extensions to the standard runtime in
Java 8?
My thoughts are that if folks are doing the work to reimplement/redesign
the API, it should be as compatible as possible with the
,
>
> In practice, the client app code needs to always commit offsets after it has
> processed the messages, and hence only the second case may happen, leading
> to "at least once".
>
> Guozhang
>
>
> On Wed, Jan 29, 2014 at 11:51 AM, Clark Breyman wrote:
>
> >
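A minimal commit-after-process loop against the 0.8 high-level consumer,
sketching the "at least once" pattern Guozhang describes above (topic,
group, and handler are placeholders):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class AtLeastOnceLoop {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder ZK
            props.put("group.id", "demo-group");
            props.put("auto.commit.enable", "false");         // commit manually

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("demo-topic", 1));

            ConsumerIterator<byte[], byte[]> it =
                streams.get("demo-topic").get(0).iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> record = it.next();
                process(record.message());  // process first...
                connector.commitOffsets();  // ...then commit: a crash in between
            }                               // replays the message ("at least once")
        }

        private static void process(byte[] message) { /* application logic */ }
    }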
ich will be smaller than the
> current value, still leading to duplicates but not data losses.
>
> Guozhang
>
>
> On Wed, Jan 29, 2014 at 12:31 PM, Clark Breyman wrote:
>
> > Guozhang,
> >
> > That makes sense except for the following:
> >
> > -
Thibaud,
Sounds like one of your issues will be upstream of Kafka. Robust and UDP
aren't something I usually think of together unless you have additional
bookkeeping to detect and request lost messages. 8MB/s shouldn't be much of
a problem unless the messages are very small and you're looking for individ
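The "additional bookkeeping" could be as simple as per-source sequence
numbers with gap detection, sketched below (names are illustrative, not
from the thread):

    import java.util.HashMap;
    import java.util.Map;

    // Tracks the last sequence number seen per source so the application can
    // detect gaps and request retransmission of lost UDP datagrams.
    public class GapDetector {
        private final Map<String, Long> lastSeq = new HashMap<>();

        /** Returns how many messages were missed before this one (0 if none). */
        public long record(String source, long seq) {
            Long prev = lastSeq.put(source, seq);
            if (prev == null) return 0;          // first message from this source
            // Duplicates and reordered datagrams count as 0 here; a fuller
            // implementation would track a window rather than a single value.
            return Math.max(0, seq - prev - 1);  // messages prev+1 .. seq-1 lost
        }
    }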
Is anyone running 0.8 (or pre-0.8.1) with the latest Zookeeper? Any known
compatibility issues? I didn't see any in JIRA but thought I'd give a
shout.
Guozhang,
All the patches for KAFKA-992 look like they were committed in August,
which was before 0.8 was shipped. Should we really be seeing this on 0.8?
Thanks, Clark
I'd definitely be interested in hearing your
> > results if you do. We're going to be experimenting with the latest
> version
> > soon to evaluate it.
> >
> > -Todd
> >
> > On 2/14/14 4:32 PM, "Clark Breyman" >
> > wrote:
> >
the exactly once
> of an aggregating consumer.
>
> Thank you,
> Robert
>
> > On Feb 15, 2014, at 12:55 PM, Clark Breyman wrote:
> >
> > Thanks Bae. I'll report back with our experiences.
> >
> >
> >> On Sat, Feb 15, 2014 at 10:48 AM,
Asif - Kafka was a writer.
https://twitter.com/jaykreps/status/421065665160548352
On Wed, Mar 19, 2014 at 10:05 AM, Muhammad Asif Abbasi <
asif.abb...@gmail.com> wrote:
> Hi,
>
> I was trying to understand why "Kafka" is called "Kafka".
>
> Any help would be highly appreciated.
>
> Best Regards,
>
Was there an answer for 0.8.1 getting stuck in preferred leader election?
I'm seeing this as well. Is there a JIRA ticket on this issue?
On Fri, Mar 21, 2014 at 1:15 PM, Ryan Berdeen wrote:
> So, for 0.8 without "controlled.shutdown.enable", why would ShutdownBroker
> and restarting cause under
I'm seeing a lot of this in my logs on a non-controller broker:
[2014-04-02 15:42:23,078] ERROR Error handling event ZkEvent[New session
event sent to
kafka.controller.KafkaController$SessionExpirationListener@204a18ac]
(org.I0Itec.zkclient.ZkEventThread)
java.lang.NullPointerException
at
kafka.con
Hey Tim. Small world :).
Kafka 0.8.1_2.10
On Wed, Apr 2, 2014 at 3:54 PM, Timothy Chen wrote:
> Hi Clark,
>
> What version of Kafka are you running this from?
>
> Thanks,
>
> Tim
>
>
> On Wed, Apr 2, 2014 at 3:49 PM, Clark Breyman wrote:
>
> > I
Tim - Actually I was looking at the wrong commit (0.8.1 has 0.8.1.1 commits
in it). The 0.8.1.0 tag has deleteTopicManager.shutdown() on
KafkaController.scala:340. Not sure how deleteTopicManager can skip
initialization.
On Wed, Apr 2, 2014 at 4:00 PM, Clark Breyman wrote:
> Hey Tim. Sm
our 0.8.1.1 release.
>
> I can actually repro the problem you're seeing; it seems like we're
> calling onControllerResignation assuming it's the controller while the
> broker might just be re-establishing its ZooKeeper session.
>
> I'll file a jira and fix this.
>
> Ti
are on the 0.8.1 branch. It will
> be great if you give it a try to see if your issue is resolved.
>
> Thanks,
> Neha
>
>
> On Wed, Apr 2, 2014 at 12:59 PM, Clark Breyman wrote:
>
> > Was there an answer for 0.8.1 getting stuck in preferred leader election?
> >
ile, you will have to just build the code yourself for now,
> unfortunately.
>
> Thanks,
> Neha
>
>
> On Thu, Apr 3, 2014 at 12:01 PM, Clark Breyman wrote:
>
> > Thanks Neha - Is there a Maven repo for pulling snapshot CI builds from?
> > Sorry if this is answe
I was under the impression that a KafkaStream would only own a single
topic/partition at a time. Is this correct, or will it multiplex multiple
topic-partitions into a single KafkaStream?
Thanks,
Clark
>
> Thanks,
>
> Joel
>
> On Fri, Apr 11, 2014 at 01:37:23PM -0700, Clark Breyman wrote:
> > I was under the impression that a KafkaStream would only own a single
> > topic/partition at a time. Is this correct, or will it multiplex multiple
> > topic-partitions into a single KafkaStream?
> >
> > Thanks,
> > Clark
>
>
ce semantics, so auto commit is false.
> >
> > Thanks again.
> > Clark
> >
> >
> >
> > On Fri, Apr 11, 2014 at 5:35 PM, Joel Koshy wrote:
> >
> > > A single stream (or consumer iterator) receives data from multiple
> > > partitions.
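One way to see Joel's point empirically: print the metadata of each message
coming off a single stream; with fewer streams than partitions, several
partition ids show up in one iterator. A small helper, assuming the 0.8
high-level consumer API:

    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.message.MessageAndMetadata;

    public class StreamPartitions {
        // Prints the origin of every message from one stream; with fewer
        // streams than partitions, multiple partition ids will appear here.
        static void dump(KafkaStream<byte[], byte[]> stream) {
            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> m = it.next();
                System.out.printf("topic=%s partition=%d offset=%d%n",
                                  m.topic(), m.partition(), m.offset());
            }
        }
    }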
I've got some consumers under decent GC pressure and, as a result, they are
having ZK sessions expire and the consumers never recover. I see a number
of rebalance failures in the log after the ZK session expiration followed
by silence (and unconsumed partitions).
My hypothesis is that, since the GC
Thanks David. One hypothesis we have is that using different
rebalance.backoff.ms settings for the different ConsumerConnectors on the
same JVM will desynchronize their rebalance attempts enough that one of them
can finish. Something like the sketch below.
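A hedged sketch of staggered backoffs (values illustrative, not tuned;
rebalance.backoff.ms is the 0.8.1 consumer config named above):

    import java.util.Properties;

    public class StaggeredBackoff {
        static Properties consumerProps(String groupId, String backoffMs) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder ZK
            props.put("group.id", groupId);
            props.put("rebalance.backoff.ms", backoffMs);     // per-connector stagger
            return props;
        }

        public static void main(String[] args) {
            // Different backoffs so the two connectors' retries interleave
            // instead of colliding on every attempt.
            Properties a = consumerProps("group-a", "2000");
            Properties b = consumerProps("group-b", "2600");
            System.out.println(a + "\n" + b);
        }
    }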
On Mon, Apr 14, 2014 at 12:58 PM, David DeMaagd wrote:
> Corre
Mike,
It's nowhere near a full broker, but I've had luck with using PowerMock
(Mockito version) since it can mock out the
static Consumer.createJavaConsumerConnector and the Java Producer
constructor. It doesn't guarantee that your mocks behave like Kafka but
it's something. If you find/create som
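For reference, that PowerMock setup looks roughly like this (test class and
values are illustrative; it stubs the static factory, it does not emulate a
broker):

    import static org.powermock.api.mockito.PowerMockito.mockStatic;
    import static org.powermock.api.mockito.PowerMockito.when;

    import java.util.Properties;

    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.mockito.Mockito;
    import org.powermock.core.classloader.annotations.PrepareForTest;
    import org.powermock.modules.junit4.PowerMockRunner;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(Consumer.class) // let PowerMock intercept the static factory
    public class ConsumerMockTest {

        @Test
        public void staticFactoryReturnsMock() {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // never contacted
            props.put("group.id", "test");
            ConsumerConfig config = new ConsumerConfig(props);

            ConsumerConnector mockConnector = Mockito.mock(ConsumerConnector.class);
            mockStatic(Consumer.class);
            when(Consumer.createJavaConsumerConnector(config)).thenReturn(mockConnector);

            // Code under test that calls the factory now receives the mock.
            Assert.assertSame(mockConnector,
                              Consumer.createJavaConsumerConnector(config));
        }
    }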
his mocking of
> Consumer.createJavaConsumer please?
>
> Thanks,
> Matt
>
> On Apr 30, 2014, at 6:39 AM, Clark Breyman wrote:
>
> > Mike,
> >
> > It's nowhere near a full broker, but I've had luck with using PowerMock
> > (Mockito version) since it can mock
David - We're currently running 3.4.6 without issues, though our load is
modest.
Clark
On Thu, May 1, 2014 at 2:26 PM, Neha Narkhede wrote:
> Through our experience of operating zookeeper in production at LinkedIn, we
> found 3.3.4 to be very stable. It is likely that 3.4.x is stable now, but we
>