I had enabled EOS through the Streams config and, as explained in the
documentation, I have not added anything else other than the following config:
streamsConfiguration.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
StreamsConfig.EXACTLY_ONCE);
As you explained, I think producer idempotence and retr
I had enabled EOS through the Streams config.
On Fri, Sep 29, 2017 at 11:12 PM, Matthias J. Sax
wrote:
> That's correct: If EOS is enabled, we enforce some producer configs:
>
> https://github.com/apache/kafka/blob/0.11.0.1/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L678-L
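For reference, a minimal sketch of what that looks like in application code (the application id and bootstrap servers below are placeholder values; only the processing guarantee has to be set by hand, and Streams enforces the idempotent/transactional producer settings itself):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class EosStreamsConfig {
    // Builds a Streams configuration with exactly-once processing enabled.
    // Only PROCESSING_GUARANTEE_CONFIG is set explicitly; the required
    // producer settings (idempotence, retries, transactions) are enforced
    // by Streams internally, per the StreamsConfig code linked above.
    static Properties exactlyOnceConfig() {
        Properties streamsConfiguration = new Properties();
        // Placeholder application id and broker address.
        streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-demo-app");
        streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        streamsConfiguration.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                StreamsConfig.EXACTLY_ONCE);
        return streamsConfiguration;
    }
}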
I apologize for sending this to dev. Reposting to the Users mailing list.
-- Forwarded message --
I was investigating some performance issues we're seeing in one of our
production clusters, and I ran into extremely unbalanced partitions
for the __consumer_offsets topic. I
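For context, the partition of __consumer_offsets that a consumer group's offsets land on is derived from the group id alone, so a skewed set of group ids skews the topic. A rough sketch of that mapping (the group names are made up; 50 is the default offsets.topic.num.partitions):

import org.apache.kafka.common.utils.Utils;

public class ConsumerOffsetsPartition {
    // Mirrors the broker-side mapping of a consumer group to a partition of
    // __consumer_offsets: abs(groupId.hashCode()) % offsets.topic.num.partitions.
    static int partitionFor(String groupId, int numOffsetsPartitions) {
        return Utils.abs(groupId.hashCode()) % numOffsetsPartitions;
    }

    public static void main(String[] args) {
        int numOffsetsPartitions = 50; // default offsets.topic.num.partitions
        // The group ids below are made-up examples.
        for (String group : new String[] {"billing-service", "clickstream-etl"}) {
            System.out.printf("group %s -> __consumer_offsets-%d%n",
                    group, partitionFor(group, numOffsetsPartitions));
        }
    }
}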
Hi, I've created a ticket for the situation we have now:
https://issues.apache.org/jira/browse/KAFKA-6003. I will file a ticket for
the original Exception that took down the replication fetcher thread after some
initial investigation - it might be the same issue after all.
I would still appreciate any hints.
Hi Stas,
Thanks for reporting this. It would be helpful to have a JIRA with more of
the server logs on the leaders and followers in the time leading up to this
OutOfOrderSequenceException.
The answers to the following questions would help, when you file the JIRA:
What are the retention settings fo
There is a native Kafka framework which runs on top of DC/OS.
https://docs.mesosphere.com/service-docs/kafka/
This will most likely be a better way to run Kafka on DC/OS than
running it as a Marathon framework.
On Mon, Oct 2, 2017 at 7:35 AM, David Garcia wrote:
> I’m not sure how y
I’m not sure how your requirements for Kafka relate to your requirements
for Marathon. Kafka is a streaming-log system and Marathon is a scheduler.
Mesos, as your resource manager, simply “manages” resources. Are you asking
about multitenancy? If so, I highly recommend that you separate
Hi,
TL;DR: I'd love to be able to make log compaction more "granular" than just
per-partition-key, so I was thinking about the concept of a "composite
key", where partitioning logic is using one part of the key, while
compaction uses the whole key - is this something desirable / doable /
worth a K
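One producer-side sketch of that composite-key idea (entirely hypothetical: the "partitionPart|compactionPart" key layout, the class name, and the separator are mine, not anything Kafka ships) would be a custom Partitioner that hashes only the prefix, so records sharing the prefix stay on one partition while the log cleaner still compacts per full key:

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Hypothetical: expects String keys of the form "<partitionPart>|<compactionPart>".
public class CompositeKeyPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        String fullKey = (String) key;
        int sep = fullKey.indexOf('|');
        String partitionPart = sep >= 0 ? fullKey.substring(0, sep) : fullKey;
        // Same murmur2-based hashing as the default partitioner, but applied
        // only to the partitioning part of the key.
        return Utils.toPositive(Utils.murmur2(partitionPart.getBytes())) % numPartitions;
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}

It would be plugged in via the producer's partitioner.class setting; compaction itself is untouched, since the broker keys the cleaner on the full message key.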
Hi Stas,
Thank you for reporting this. Can you please file an issue? Even if
KAFKA-5793 has fixed it for 1.0.0 (which needs to be verified), we should
consider whether a fix is needed for the 0.11.0 branch as well.
Ismael
On Mon, Oct 2, 2017 at 11:28 AM, Stas Chizhov wrote:
> Hi,
>
> We run 0.
Hi there,
Working in a huge company, we are about to install Kafka on DC/OS (Mesos) and
intend to use Marathon as the scheduler. Since I am new to DC/OS and Marathon, I
was wondering if this is a recommended way of using Kafka in the production
environment.
My doubts are:
- Kafka manages Broker r
Hi,
We run 0.11.0.1 and there was a problem with one replication fetcher on one of
the brokers - it experienced an out-of-order sequence problem for one
topic/partition and was stopped. It stayed stopped over the weekend. During
this time log cleanup was working and by now it has cleaned up all the data
i