Thanks for volunteering, Manikumar. +1
Ismael
On Mon, Aug 12, 2019 at 7:54 AM Manikumar wrote:
> Hi all,
>
> I would like to volunteer to be the release manager for our next time-based
> feature release (v2.4.0).
>
> If that sounds good, I'll post the release plan over the next few days.
>
> Thanks,
> Manikumar
Hi,
we are using Kafka in an environment where we have restricted access to the
Kafka brokers.
The access needs to happen via TCP proxies. So, in my setup I have 3 Kafka
brokers (broker1-3) for which I created 3 proxy instances and I have set these
proxy instances as the bootstrap servers:
boot
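For illustration, a minimal sketch of such a client configuration (the proxy hostnames and ports here are placeholders, not taken from the original message):

```properties
# Clients bootstrap through the TCP proxies instead of the brokers
bootstrap.servers=proxy1:9092,proxy2:9092,proxy3:9092
```

One caveat worth noting: bootstrap.servers is only used for the initial metadata request. After that, clients connect to the addresses the brokers advertise via advertised.listeners, so for all traffic to stay on the proxies, each broker must advertise its proxy's address rather than its own.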
Hi all,
I would like to volunteer to be the release manager for our next time-based
feature release (v2.4.0).
If that sounds good, I'll post the release plan over the next few days.
Thanks,
Manikumar
Thanks!
Tim Ward
-----Original Message-----
From: Bruno Cadonna
Sent: 12 August 2019 14:18
To: users@kafka.apache.org
Subject: Re: KSTREAM-AGGREGATE-STATE-STORE persistence?
Hi Tim,
Kafka Streams guarantees at-least-once processing semantics by
default. That means, a record is processed (e.g. added to an
aggregate) at least once but might be processed multiple times. The
cause of processing the same record multiple times is a crash, as you
described. Exactly-once proc
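The scenario Bruno describes can be illustrated without Kafka at all. The following is a hypothetical plain-Python simulation (not Kafka Streams code) of a consumer that crashes after updating an aggregate but before committing its offset, which is exactly how a record ends up counted twice under at-least-once:

```python
# Simulate at-least-once: a consumer that crashes after processing
# a record but before committing its offset re-processes that record
# on restart, so the aggregate can over-count.

def run(records, crash_after=None):
    committed = 0   # last committed offset
    count = 0       # the "aggregate": number of records seen
    processed = 0
    while committed < len(records):
        for offset in range(committed, len(records)):
            count += 1        # process (e.g. add to an aggregate)
            processed += 1
            if processed == crash_after:
                crash_after = None
                break         # crash BEFORE committing this offset
            committed = offset + 1  # commit after processing

    return count

print(run([10, 20, 30]))                 # no crash: 3
print(run([10, 20, 30], crash_after=2))  # crash: one record counted twice -> 4
```

The second run counts four processings for three records: the record in flight at crash time is replayed after the restart because its offset was never committed.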
I believe I have witnessed - at least twice - something like the following
happening, in a Kafka Streams application where I have a
.groupByKey().windowedBy().aggregate() sequence.
* Application runs for a while
* Application crashes
* Application restarts
* Aggregator.apply() i
I guess it's more about replication: you can push your data into one chosen
cluster and replicate it to the second one using MirrorMaker, Confluent
Replicator, or the newly open-sourced LinkedIn project, "Brooklin".
On Mon, Aug 12, 2019 at 10:19 AM, Garvit Sharma wrote:
> Thanks, Isroudi
I'm using groupByKey, and it causes repartitioning.
I suppose I could aggregate by parent ID, if the structure I aggregate into
is itself a map from child ID to what I'm really wanting to aggregate. Is
that what you had in mind? I think it would work!
Give or take a p
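For what it's worth, the "map from child ID" idea can be sketched in plain Python. This illustrates only the data shape, not the Kafka Streams API, and the record layout is invented:

```python
from collections import defaultdict

# Records keyed by child ID, each carrying a parent ID and a value
# (hypothetical layout for illustration).
records = [
    ("child1", {"parent": "p1", "value": 3}),
    ("child2", {"parent": "p1", "value": 5}),
    ("child1", {"parent": "p1", "value": 2}),
    ("child3", {"parent": "p2", "value": 7}),
]

# Aggregate keyed by parent ID; the aggregate itself is a map
# from child ID to the sum of that child's values.
agg = defaultdict(dict)
for child, rec in records:
    parent = rec["parent"]
    agg[parent][child] = agg[parent].get(child, 0) + rec["value"]

print(dict(agg))
# {'p1': {'child1': 5, 'child2': 5}, 'p2': {'child3': 7}}
```

In Streams terms this would mean re-keying by parent ID before aggregating, with each per-child total still recoverable from inside the parent's aggregate.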
I believe not, because that only causes the application to start reading from
latest when there is no recorded offset at application start, no?
What I need is to be able to specify, by topic, that when the application
starts it doesn't want to see anything other than new data, regardless of wha
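The distinction Tim draws can be made concrete with a small decision-logic sketch (plain Python, hypothetical names): `auto.offset.reset=latest` only applies when there is no committed offset, whereas the behaviour he wants always starts at the log end for the chosen topics:

```python
# For topics in skip_old, always start at the log-end offset;
# otherwise use the committed offset if one exists, else fall
# back to the configured reset position (here: earliest = 0).

def starting_offset(topic, committed, log_end, skip_old):
    if topic in skip_old:
        return log_end    # new data only, regardless of committed offset
    if committed is not None:
        return committed  # resume where we left off
    return 0              # auto.offset.reset=earliest fallback

skip_old = {"metrics"}
print(starting_offset("metrics", committed=42, log_end=100, skip_old=skip_old))   # 100
print(starting_offset("orders",  committed=42, log_end=100, skip_old=skip_old))   # 42
print(starting_offset("orders",  committed=None, log_end=100, skip_old=skip_old)) # 0
```

With the real Java consumer, the per-topic "new data only" behaviour would correspond to calling `seekToEnd` on the relevant partitions once they are assigned, ignoring any committed offset.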
Thanks, Isroudi. But in my use case there are two separate ZooKeepers as
well. Let me know how it would handle that.
On Mon, Aug 12, 2019 at 1:45 PM lsroudi abdel wrote:
If there are two clusters, that means you have two ZooKeepers, and I guess
you can't do that. If you have one ZooKeeper it will be OK; in that case it
is just an architecture with replicas across two data centers. I hope that
is clear for you.
On Mon, Aug 12, 2019 at 9:22 AM, Garvit Sharma wrote:
Hi Kafka users,
I've been trying to investigate this a bit further; in the documentation
for the Connect REST API, I found this paragraph:
> "Note that if you try to modify, update or delete a resource under
> connector which may require the request to be forwarded to the leader,
> Connect will retu
Hi,
I have 2 Kafka clusters in different data centers, so can I provide the DNS
hostnames of both clusters, separated by commas, in the *bootstrap.servers*
key in the producer config?
If I provide two different clusters in *bootstrap.servers*, then how would
the events get published?
Would events get pu
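For what it's worth, bootstrap.servers is a list of entry points into a single cluster: the client uses whichever endpoint answers first to fetch metadata and then talks only to that cluster, so mixing brokers from two clusters does not fan writes out to both. A sketch of the usual alternative, one producer per cluster (hostnames are placeholders):

```properties
# Producer A -> cluster in DC1
bootstrap.servers=dc1-broker1:9092,dc1-broker2:9092

# Producer B -> cluster in DC2
bootstrap.servers=dc2-broker1:9092,dc2-broker2:9092
```

Replication tools (MirrorMaker and the like, as suggested elsewhere in the thread) are the other common way to get the same events into both data centers.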