Hi,
While investigating a Kafka data discrepancy, I came across a bunch of
recurring errors, shown below:
producer.log
> 2015-06-14 13:06:25,591 WARN [task-thread-9] (k.p.a.DefaultEventHandler:83) - Produce request with correlation id 624 failed due to [mytopic,21]: kafka.common.NotLeader
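These NotLeader warnings normally appear while partition leadership is
moving between brokers, and the 0.8-era producer can be told to ride them
out by retrying after a metadata refresh. A minimal sketch of the relevant
settings (broker names and values are illustrative, not taken from the
report above):

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.ProducerConfig;

    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092,broker2:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("message.send.max.retries", "5"); // retry sends that fail on NotLeader
    props.put("retry.backoff.ms", "200");       // give the new leader time to be elected

    Producer<String, String> producer = new Producer<>(new ProducerConfig(props));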
2015-04-30 8:50 GMT+03:00 Ewen Cheslack-Postava :
> They aren't going to get this anyway (as Jay pointed out) given the current
> broker implementation
>
Is it also incorrect to assume atomicity even if all messages in the batch
go to the same partition?
I must agree with @Roshan – it's hard to imagine anything more intuitive
and easy to use for atomic batching than the old sync batch API. It is
also fast. Coupled with a separate producer instance per
broker:port:topic:partition, it works very well. I would be glad to see it
find its way into the new producer.
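For reference, the old sync batch send being discussed looks roughly like
this (0.8-era Scala producer through its Java API; topic, key, and broker
names are illustrative):

    import java.util.Arrays;
    import java.util.List;
    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092");
    props.put("producer.type", "sync"); // synchronous send
    props.put("serializer.class", "kafka.serializer.StringEncoder");

    Producer<String, String> producer = new Producer<>(new ProducerConfig(props));

    // One batch, one key, hence one partition; even so, per the discussion
    // above, the broker does not promise the batch is applied atomically.
    List<KeyedMessage<String, String>> batch = Arrays.asList(
        new KeyedMessage<>("mytopic", "key-1", "value-1"),
        new KeyedMessage<>("mytopic", "key-1", "value-2"));
    producer.send(batch);
    producer.close();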
Does increasing PartitionFetchInfo.fetchSize help?
Speaking of the Kafka API, it looks like throwing an exception would be
less confusing when fetchSize is not large enough to fetch at least one
message at the requested offset.
2015-04-28 21:12 GMT+03:00 Laran Evans :
> I’ve got a simple consumer. According to GetOf
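On the fetchSize point: in the 0.8 simple-consumer API it is a
per-partition byte budget, and, as far as I can tell, when it is smaller
than the first message at the requested offset the broker returns an empty
message set rather than failing, which is the silent behavior that makes
it confusing. A sketch (host, partition, offset, and sizes are
illustrative):

    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;
    import kafka.javaapi.message.ByteBufferMessageSet;

    SimpleConsumer consumer =
        new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "client-1");

    FetchRequest req = new FetchRequestBuilder()
        .clientId("client-1")
        .addFetch("mytopic", 21, 624000L, 1024 * 1024) // fetchSize must cover one whole message
        .build();

    FetchResponse resp = consumer.fetch(req);
    ByteBufferMessageSet msgs = resp.messageSet("mytopic", 21);
    // With too small a fetchSize this set simply yields no complete
    // messages; no exception is thrown.
    consumer.close();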
Alex,
Just wondering, did you have any success patching and running MM with
exact partitioning support?
If so, could you possibly share the patch and, hopefully, your positive
experience with the process?
Thanks!
> per-topic basis (see the per-topic configuration section)."
>
> http://kafka.apache.org/documentation.html#brokerconfigs
>
> -James
>
>> On Mar 2, 2015, at 8:57 AM, Ivan Balashov wrote:
>>
>> Svante,
>>
>> Not sure if I understand your suggesti
Svante,
Not sure if I understand your suggestion correctly, but I do think
that enabling retention for deleted values would make a useful
addition to the "compact" policy. Otherwise some data is bound to
hang around unused.
Guozhang, could this potentially deserve a feature request?
Thanks,
Guozhang,
I agree, but upon restart the application still needs to initialize its
KV storage. And even though the values are empty, the keys will still
generate traffic (delaying application startup).
Besides, the idea of keeping needless data in Kafka forever, even if
only keys, sounds rather unsettling.
I guess we could try
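To make the startup cost concrete: a Kafka-backed KV store replays its
compacted topic on boot, so every retained key has to cross the wire even
when only tombstones remain. A rough sketch of such a bootstrap loop
(stream wiring omitted; assumes consumer.timeout.ms is set so iteration
stops once the topic is drained; all names are illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import kafka.consumer.ConsumerTimeoutException;
    import kafka.consumer.KafkaStream;
    import kafka.message.MessageAndMetadata;

    static Map<String, String> bootstrap(KafkaStream<String, String> stream) {
        Map<String, String> store = new HashMap<>();
        try {
            for (MessageAndMetadata<String, String> m : stream) {
                if (m.message() == null) {
                    store.remove(m.key());   // tombstone: the key was deleted
                } else {
                    store.put(m.key(), m.message());
                }
            }
        } catch (ConsumerTimeoutException e) {
            // no more messages: the topic has been replayed in full
        }
        return store;
    }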
Guozhang,
Thanks for the suggestion; however, I'm afraid the cardinality of keys
will grow indefinitely, and AFAIU keys are permanent with log
compaction. Any chance keys could also be removed during compaction?
Thanks,
2015-03-02 5:27 GMT+03:00 Guozhang Wang :
>
> From your description it seems Kafk
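For completeness: compaction does have a key-removal mechanism. Writing a
message with a null value (a tombstone) marks the key, and after the
topic's delete.retention.ms window the cleaner drops the key itself. A
minimal sketch with the 0.8.2 producer API (topic and key names are
illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    // A null value is a tombstone: after delete.retention.ms the log
    // cleaner removes both the value and, eventually, the key itself.
    producer.send(new ProducerRecord<String, String>("kv-topic", "obsolete-key", null));
    producer.close();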
2015-03-01 18:41 GMT+03:00 Jay Kreps :
> They are mutually exclusive. Can you expand on the motivation/use for
> combining them?
Thanks, Jay
Let's say we need to build a key-value store semantically connected
to data that is also stored in Kafka.
Once the particular pieces of data are gone due t
Hi,
Do I understand correctly that compaction and deletion are currently
mutually exclusive?
Is it possible to compact recent segments and delete older ones,
according to general deletion policies?
Thanks,
2014-11-30 15:10 GMT+03:00 Manikumar Reddy :
> Log cleaner does not support topics with
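At the time of this thread, cleanup.policy is a single per-topic value,
either "delete" (the default time/size-based retention) or "compact", so
the two cannot be combined; the policy is chosen per topic, e.g. at
creation (a sketch; names and counts are illustrative):

    # compacted topic: keeps the latest value per key indefinitely
    bin/kafka-topics.sh --zookeeper localhost:2181 --create \
      --topic kv-topic --partitions 8 --replication-factor 2 \
      --config cleanup.policy=compact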
David,
Thanks for sharing this. Any plans to include 0.8.2 in the list of
available packages?
Or, any chance you could share your packaging script? A Deb package for
Kafka 0.8.2 (or any other version, for that matter) is sorely missed.
Thanks,
2015-02-11 21:34 GMT+03:00 David Morales :
> Regarding
Hi,
It looks like it is general practice to avoid storing data in Kafka
keys. Some examples of this: neither Camus nor Secor uses keys. Even
such a Swiss-army tool as kafkacat doesn't seem to have the ability to
display keys (although I might be wrong). Also, the console producer does
not display keys
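On the tooling side, the stock console clients can at least be coaxed
into handling keys (property names as in the standard tools; I am not
certain every version discussed here supports them):

    # console consumer: print the key next to each value
    bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytopic \
      --property print.key=true

    # console producer: parse "key<separator>value" input lines
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic \
      --property parse.key=true --property key.separator=: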
Hi,
Is it possible to read all available messages with the HLC in a
non-blocking way? E.g., read all messages and not wait for more
messages to appear in the topic.
As far as I understand, one currently has to keep the high-level
consumer in a separate thread until it is shut down explicitly, but how can
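As far as I know, the closest thing to a non-blocking drain with the
high-level consumer is setting consumer.timeout.ms, so that iteration
throws ConsumerTimeoutException once no message arrives within the
window. A sketch (group, topic, and timeout are illustrative):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerTimeoutException;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    Properties props = new Properties();
    props.put("zookeeper.connect", "localhost:2181");
    props.put("group.id", "drain-once");
    props.put("auto.offset.reset", "smallest");
    props.put("consumer.timeout.ms", "1000"); // give up after 1s of silence

    ConsumerConnector connector =
        Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    Map<String, List<KafkaStream<byte[], byte[]>>> streams =
        connector.createMessageStreams(Collections.singletonMap("mytopic", 1));
    KafkaStream<byte[], byte[]> stream = streams.get("mytopic").get(0);

    try {
        for (MessageAndMetadata<byte[], byte[]> msg : stream) {
            System.out.println(new String(msg.message())); // or hand off to the app
        }
    } catch (ConsumerTimeoutException e) {
        // nothing arrived within consumer.timeout.ms: topic is drained for now
    } finally {
        connector.shutdown();
    }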
Hi,
Sorry if this has been answered before, although I couldn't find any
information besides "controlled shutdown of broker", which, I believe,
does not fully apply here.
Could anyone suggest the safest strategy to shut down a Kafka
cluster? Should brokers be brought down one-by-one or
s
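For what it's worth, the recipe I have seen most often is a rolling stop
with controlled shutdown enabled, so each broker hands off partition
leadership before exiting; the last broker then simply stops. The
broker-side settings involved (values shown are illustrative):

    # server.properties
    controlled.shutdown.enable=true
    controlled.shutdown.max.retries=3
    controlled.shutdown.retry.backoff.ms=5000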