You might want to send this to users-unsubscr...@kafka.apache.org .
On Tue, Feb 27, 2018 at 7:49 PM, Yuejie Chen wrote:
Hi everyone,
We are a university research group, and we are currently working on a project
on providing total ordering of messages across multiple partitions in a Kafka
topic. The project is in active development. We hope this project can benefit
users that require ordering of messages in their
Hi,
We need to distribute Kafka as part of our solution. While reading the license,
we came across this - ‘This distribution has a binary dependency on jersey,
which is available under the CDDL License. The source code of jersey can be
found at https://github.com/jersey/jersey’ . CDDL, on first
Dear Folks
We plan to implement Kafka and ZooKeeper metrics collection using the Java JMX API. We
were able to successfully implement metrics collection using the MBeans exposed
by Kafka. But when we try to do the same for ZooKeeper, we do not find much API
support like we have for Kafka. Can someone help if you
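For reference, a minimal sketch of reading one Kafka broker metric over JMX (not from the original message; the JMX port, host, and the MessagesInPerSec MBean name are assumptions about a typical broker setup):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KafkaJmxSample {
    public static void main(String[] args) throws Exception {
        // Assumes the broker was started with JMX enabled, e.g. JMX_PORT=9999.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // A well-known broker metric; other MBeans can be discovered via queryNames().
            ObjectName name = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            Object rate = conn.getAttribute(name, "OneMinuteRate");
            System.out.println("MessagesInPerSec.OneMinuteRate = " + rate);
        }
    }
}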
In the Kafka v1.0.0 Javadoc, the introduction to KafkaConsumer's commitAsync
method says that offsets committed through multiple calls to this API are
guaranteed to be sent in the same order as the invocations, and that the corresponding
commit callbacks are also invoked in the same order. This descrip
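As an illustration of that guarantee, a minimal consumer sketch (bootstrap server, topic, and group id are placeholders, not from the question); the callbacks registered by successive commitAsync() calls should fire in invocation order:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitOrderDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "commit-order-demo");         // placeholder
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));  // placeholder
            for (int i = 0; i < 3; i++) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                System.out.println("polled " + records.count() + " records");
                final int call = i;
                // Offsets from successive commitAsync() calls are sent in invocation
                // order, and these callbacks are invoked in that same order.
                consumer.commitAsync((offsets, exception) ->
                        System.out.println("commit #" + call + " done, error=" + exception));
            }
            consumer.commitSync(); // flush outstanding async commits before closing
        }
    }
}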
Those are basically all the logs we have. Nothing more interesting :)
OK, we'll try to upgrade, sounds fair.
On Tue, Feb 27, 2018 at 10:01 PM, Matthias J. Sax
wrote:
> Hard to say without logs...
>
> However, this was improved in 1.0 release -- there, if this error
> happens, Streams would not die but rec
Hard to say without logs...
However, this was improved in the 1.0 release -- there, if this error
happens, Streams would not die but would recover from the error automatically.
Thus, I would recommend upgrading to 1.0 eventually.
-Matthias
On 2/27/18 8:06 AM, Dmitriy Vsekhvalnov wrote:
> Good day every
Good day everybody,
We faced an unexpected kafka-streams application death after 3 months of operation,
with the exception below.
Our setup:
- 2 instances (both died) of kafka-stream app
- kafka-streams 0.11.0.0
- kafka broker 1.0.0
Sounds like a re-balance happened and something went terribly wrong this t
Hi Nicu,
To eliminate old records you'll want to add a call to `until(final long
durationMs)` to the window definition in your existing code, like so:
stream.groupByKey(Serdes.String(), Serdes.String())
      .windowedBy(SessionWindows.with(TimeUnit.HOURS.toMillis(1))
                                .until(TimeUnit.HOURS.toMillis(1)))
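Putting the pieces together, a hedged sketch of the full pattern on the 1.0.x Streams API (application id, topics, and serdes are placeholders; reduce((first, ignored) -> first) keeps the first value per key within each one-hour session, and until() bounds how long old windows are retained):

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.SessionWindows;

public class DedupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dedup-sketch");      // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input-topic");      // placeholder

        stream.groupByKey()
              .windowedBy(SessionWindows.with(TimeUnit.HOURS.toMillis(1))
                                        .until(TimeUnit.HOURS.toMillis(1)))
              .reduce((first, ignored) -> first)   // keep the first value seen per session
              .toStream()
              .map((windowedKey, value) -> KeyValue.pair(windowedKey.key(), value))
              .to("deduped-topic");                                          // placeholder

        new KafkaStreams(builder.build(), props).start();
    }
}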
1) We write out one recovery point per log directory, which practically
means topic-partition. So if your topic is called mytopic, then you will
have a file called recovery-point-offset-checkpoint in mytopic-0/, in
mytopic-1/, and in mytopic-2/.
2) Data deletion in Kafka is not related to what was re
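Not part of the original answer, but a small sketch for poking around: walk a broker's log directory (the path is a placeholder for whatever log.dirs points at) and print wherever a recovery-point-offset-checkpoint file turns up:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FindRecoveryCheckpoints {
    public static void main(String[] args) throws IOException {
        // Placeholder default; pass your actual log.dirs path as the first argument.
        Path logDir = Paths.get(args.length > 0 ? args[0] : "/tmp/kafka-logs");
        try (Stream<Path> paths = Files.walk(logDir)) {
            paths.filter(p -> p.getFileName().toString()
                               .equals("recovery-point-offset-checkpoint"))
                 .forEach(p -> System.out.println("checkpoint file: " + p));
        }
    }
}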
Hi,
In the code below I am attempting to deduplicate within one hour; any
duplicates further apart would remain duplicates. Does this look good to you?
Thanks
stream.groupByKey(Serdes.String(), Serdes.String())
.windowedBy(SessionWindows.with(TimeUnit.HOURS.toMillis(1)))
Hi,
From a programmatic perspective, doing a groupByKey.reduce((val1, val2) ->
val1) would deduplicate entries, but then I have a few questions: this state
would accumulate without limit, right? Would windowing be needed to eliminate
old records? Will the state accumulate jus
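A minimal fragment of the unwindowed variant being asked about (topic names are placeholders, and String default serdes are assumed to be set in the Streams config); its backing state store keeps one entry per distinct key and is never purged, which is exactly the unbounded-growth concern:

StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("input-topic")   // placeholder topic
       .groupByKey()
       .reduce((first, ignored) -> first)       // one state entry per key, never expired
       .toStream()
       .to("deduped-topic");                    // placeholder topic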