Hi,
Is the "rebalancing" callback also called for the partitions the consumer is
assigned when it first starts up and joins the consumer group?
Thank you,
Nicu Marasoiu
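To answer the question above: in the Kafka consumer, `onPartitionsAssigned` is indeed invoked for the initial assignment a consumer receives when it joins the group, not only for later rebalances. A plain-Java mock of that contract (`FakeConsumer`, `RebalanceListener`, and integer partition ids are illustrative stand-ins, not the real client API):

```java
import java.util.ArrayList;
import java.util.List;

// Models the contract of Kafka's ConsumerRebalanceListener: onPartitionsAssigned
// fires for the very first assignment on group join, not only on later rebalances.
// FakeConsumer and RebalanceListener are stand-ins for illustration only.
public class RebalanceSketch {
    interface RebalanceListener {
        void onPartitionsRevoked(List<Integer> partitions);
        void onPartitionsAssigned(List<Integer> partitions);
    }

    static class FakeConsumer {
        private final RebalanceListener listener;
        private List<Integer> owned = new ArrayList<>();

        FakeConsumer(RebalanceListener listener) { this.listener = listener; }

        // Called by the "group coordinator" on join AND on every later rebalance.
        void rebalance(List<Integer> newAssignment) {
            if (!owned.isEmpty()) listener.onPartitionsRevoked(owned);
            owned = new ArrayList<>(newAssignment);
            listener.onPartitionsAssigned(owned);  // fires on the first join too
        }
    }

    public static List<String> events = new ArrayList<>();

    public static void main(String[] args) {
        FakeConsumer consumer = new FakeConsumer(new RebalanceListener() {
            public void onPartitionsRevoked(List<Integer> p) { events.add("revoked:" + p); }
            public void onPartitionsAssigned(List<Integer> p) { events.add("assigned:" + p); }
        });
        consumer.rebalance(List.of(0, 1)); // initial join: assigned callback fires
        System.out.println(events);
    }
}
```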
Hi,
Is it correct that, out of all the consumer API calls, only poll does any real
activity?
Is it safe to assume that poll(0) will only fetch metadata, but will not have
time to get any response on select, returning an empty list of ConsumerRecords?
Excepting of course the case of thread being s
Hi,
We are considering one of two or three flows to ensure an "exactly once" process
from an input Kafka topic to a database storing results (using the Kafka
consumer, but we also evaluated Kafka Streams; details at the end) and wanted to
gather your input on them:
(for simplicity let's assume that any exc
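One commonly cited flow for this exactly-once goal is to write the consumed offset and the result in the same database transaction, and to skip already-applied offsets on redelivery; the sketch below simulates that pattern in plain Java and is not necessarily one of the flows the author had in mind (`TinyDb` and the record layout are hypothetical stand-ins for a real transactional store):

```java
// Sketch of the "store offsets with results" flow for exactly-once delivery
// from a Kafka topic into a database. TinyDb stands in for a real transactional
// store; in production the offset row and the result row would be written in
// one SQL transaction, and on restart the consumer would seek() to the stored
// offset instead of relying on Kafka's committed offsets.
public class ExactlyOnceSketch {
    static class TinyDb {
        long lastOffset = -1;  // highest offset applied (per partition in real life)
        long total = 0;        // the aggregated "result"

        // Atomically apply one record: skip it if already applied.
        synchronized void applyTx(long offset, long value) {
            if (offset <= lastOffset) return;  // duplicate from redelivery: no-op
            total += value;
            lastOffset = offset;               // same "transaction" as the result
        }
    }

    public static long run() {
        TinyDb db = new TinyDb();
        long[][] records = { {0, 10}, {1, 20}, {2, 30} };  // {offset, value}
        for (long[] r : records) db.applyTx(r[0], r[1]);   // first delivery
        for (long[] r : records) db.applyTx(r[0], r[1]);   // redelivery after a crash
        return db.total;                                   // each record counted once
    }

    public static void main(String[] args) {
        System.out.println(run()); // 60, not 120
    }
}
```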
p, iterate through partitions and
resume one partition, do poll, process, then the next partition, in rotation?
Thank you,
Nicu Marasoiu
____
From: Marasoiu, Nicu
Sent: Monday, March 12, 2018 8:58 AM
To: users@kafka.apache.org
Subject: transactional behavior offsets+effec
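The rotation described above can be simulated in plain Java; with the real client it would correspond to calling consumer.pause(...) on all assigned partitions and consumer.resume(...) on the current one around each poll(). Everything below is a simulation of that idea, not the consumer API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of processing partitions one at a time in rotation, mimicking the
// pause()/resume() pattern: pause everything, resume a single partition,
// poll, process, then move on. The consumer and its records are simulated.
public class RotationSketch {
    public static List<String> processInRotation(int partitions, int rounds) {
        List<String> processed = new ArrayList<>();
        int current = 0;                          // the one "resumed" partition
        for (int i = 0; i < rounds; i++) {
            // Real client: consumer.pause(all); consumer.resume(only current);
            // then poll(timeout) returns records only for the resumed partition.
            processed.add("p" + current);         // stand-in for processing the batch
            current = (current + 1) % partitions; // rotate to the next partition
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(processInRotation(3, 6)); // [p0, p1, p2, p0, p1, p2]
    }
}
```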
Hi,
Currently we have an aggregation system (without Kafka) where events are
aggregated into Cassandra tables holding aggregate results.
We are considering moving to a Kafka Streams solution with exactly-once
processing, but in this case it seems that all the aggregation tables (reaching
TB) need
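For context, exactly-once processing in Kafka Streams is switched on through the `processing.guarantee` config key (value `exactly_once`; newer versions also accept `exactly_once_v2`). A minimal config sketch, in which the application id and bootstrap servers are placeholders:

```java
import java.util.Properties;

// Minimal config sketch for a Kafka Streams app with exactly-once processing.
// "processing.guarantee" / "exactly_once" are the real config key and value;
// application.id and bootstrap.servers below are placeholders.
public class StreamsEosConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "aggregation-app");    // placeholder
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("processing.guarantee", "exactly_once"); // enables EOS
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("processing.guarantee"));
    }
}
```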
Hi,
When using Kafka Streams & stateful transformations (RocksDB) with Docker and
Kubernetes, what are the concerns for storage? I mean, I know that writing to
disk inside the Docker container is less performant than mounting a direct
volume. However, in our setup a separate team is handling, a
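On the storage question: the local RocksDB state lives under the Streams `state.dir` config, so in a Docker/Kubernetes setup that directory is the one you would back with a mounted volume. A minimal sketch, where the path is a placeholder:

```java
import java.util.Properties;

// Config sketch: point Kafka Streams' local state (RocksDB) at a directory
// backed by a mounted volume rather than the container's writable layer.
// "state.dir" is the real Streams config key; the path is a placeholder that
// would correspond to a Kubernetes volumeMount.
public class StateDirConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("state.dir", "/var/lib/kafka-streams"); // mount a volume here
        // With a persistent volume, a restarted pod can reuse its local state
        // instead of restoring everything from the changelog topics.
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("state.dir"));
    }
}
```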
)))
.reduce((val1, val2) -> val1)
From: Marasoiu, Nicu [nicu.maras...@metrosystems.net]
Sent: Tuesday, February 27, 2018 11:03 AM
To: users@kafka.apache.org
Subject: question on doing deduplication with KafkaStreams
Hi,
From a progr
Hi,
From a programmatic perspective, doing a groupByKey.reduce((val1, val2) ->
val1) would deduplicate entries, but then I have a few questions: this state
would accumulate without limit, right? Would windowing be needed to eliminate
old records? Will the state accumulate jus
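The keep-first behaviour of groupByKey().reduce((val1, val2) -> val1) can be simulated in plain Java, which also shows why the state grows with the number of distinct keys unless windowing caps it (`DedupSketch` and the in-memory map are illustrative stand-ins for the actual state store):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simulates the keep-first semantics of groupByKey().reduce((v1, v2) -> v1):
// for each key, the first value wins and later duplicates are ignored. The map
// plays the role of the state store, which is why state grows with the number
// of distinct keys unless the stream is windowed.
public class DedupSketch {
    public static Map<String, String> dedup(List<String[]> keyValues) {
        Map<String, String> store = new LinkedHashMap<>();
        for (String[] kv : keyValues) {
            store.putIfAbsent(kv[0], kv[1]); // reduce((v1, v2) -> v1): keep first
        }
        return store;
    }

    public static void main(String[] args) {
        Map<String, String> out = dedup(List.of(
                new String[]{"a", "first"},
                new String[]{"a", "dup"},
                new String[]{"b", "only"}));
        System.out.println(out); // {a=first, b=only}
    }
}
```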
hem?
It's tough to be more specific without knowing more details, but maybe
that helps a bit already?
Best regards,
Sönke
On Wed, Feb 21, 2018 at 11:57 AM, Marasoiu, Nicu <
nicu.maras...@metrosystems.net> wrote:
> Hi,
> In order to obtain exactly-once semantics, we are thin
Hi,
In order to obtain exactly-once semantics, we are thinking of doing
at-least-once processing, and then having a compensation mechanism that fixes
the results within a few minutes by subtracting the effects of the duplicates.
However, in order to do that, it seems that at least thi
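The compensation idea above can be sketched in plain Java: apply every delivery at-least-once, track how often each event id was applied, and subtract the effect of the extra applications afterwards. All names and the data layout below are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the compensation flow: process at-least-once (duplicates allowed),
// count how often each event id was applied, then correct the aggregate by
// subtracting the effect of the extra applications.
public class CompensationSketch {
    public static long runWithCompensation(List<long[]> deliveries) { // {id, value}
        long total = 0;
        Map<Long, long[]> seen = new HashMap<>(); // id -> {timesApplied, value}
        for (long[] d : deliveries) {
            total += d[1]; // at-least-once: every delivery is applied
            seen.merge(d[0], new long[]{1, d[1]},
                    (old, fresh) -> new long[]{old[0] + 1, old[1]});
        }
        // Compensation pass: subtract the effect of duplicate applications.
        for (long[] s : seen.values()) {
            total -= (s[0] - 1) * s[1];
        }
        return total;
    }

    public static void main(String[] args) {
        // Event id 1 (value 10) is delivered twice; compensation removes one copy.
        List<long[]> deliveries = List.of(
                new long[]{1, 10}, new long[]{2, 20}, new long[]{1, 10});
        System.out.println(runWithCompensation(deliveries)); // 30
    }
}
```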