Hi Shengyi,
1) Unfortunately no, see the related docs:
https://kafka.apache.org/0110/documentation.html#upgrade_11_exactly_once_semantics
2) You cannot set the internal topic message format on the client side
(producer, consumer, streams, ...); it is decided on the broker side only.
3) You can re
Hi Rajini,
1. Oh, so truststores can't be updated dynamically? Is this planned for
any future release?
2. By dynamically updated, do you mean that if a broker was using keystore A,
we can now point it to use a different keystore B?
Thanks.
On Wed, Apr 18, 2018 at 10:51 PM, Darshan
wrote:
> H
> From your answer I understand that whenever a product is deleted, a
> message needs to be consumed by the Kafka Streams application and the
> specific entry for that product needs to be deleted from the KTable
> with aggregated data using a tombstone.
Exactly.
> If I don't do that the entry wi
Hi Darshan,
We currently allow only keystores to be dynamically updated, and you need
to use kafka-configs.sh to update the keystore config. See
https://kafka.apache.org/documentation/#dynamicbrokerconfigs.
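For reference, a rough Java AdminClient sketch of the same kind of keystore
update that kafka-configs.sh performs; the broker id "0", the listener name
"external", and the keystore path/password below are placeholders, not values
from this thread:

import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class UpdateBrokerKeystore {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Per-broker dynamic config; broker id "0" is a placeholder.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");

            // Listener name "external" and the keystore path/password are placeholders.
            Config keystoreConfig = new Config(Arrays.asList(
                    new ConfigEntry("listener.name.external.ssl.keystore.location",
                            "/path/to/new-keystore.jks"),
                    new ConfigEntry("listener.name.external.ssl.keystore.password",
                            "keystore-password")));

            // Note: alterConfigs replaces the full set of dynamically configured
            // values for the broker; kafka-configs.sh --add-config merges instead.
            admin.alterConfigs(Collections.singletonMap(broker, keystoreConfig))
                    .all()
                    .get();
        }
    }
}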
On Thu, Apr 19, 2018 at 6:51 AM, Darshan
wrote:
> Hi
>
> KIP-226 is released in 1.1. I ha
I will try to clarify what I mean by "old state that is no longer needed".
Let's say I consume messages about products that have been sold to customers,
and I keep a KTable with aggregated data for a product with a specific id and
the number of times it has been bought. At some point a product that has
I think this can be done in two ways:
1. A KStream or KTable filter in a topology (a rough sketch of this option
follows below).
2. Store the data in a persistent store elsewhere and expose it via an API
(like Cassandra).
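A minimal sketch of option 1, assuming made-up topic names and String keys
where a key prefix identifies the record type (none of these names come from
the thread):

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class FilterTopologySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // "machine-topic" and the "TEMP" key prefix are placeholders.
        KStream<String, String> machineEvents = builder.stream("machine-topic");

        // Keep only the record types we want to forward downstream.
        machineEvents
                .filter((key, value) -> key != null && key.startsWith("TEMP"))
                .to("filtered-machine-topic");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "filter-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}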
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
On Apr 19, 2018, 7:07 AM -0500, joe_delbri...@denso-diam.com, wrote:
> I am t
Not sure what you mean by "old state that is no longer needed"?
Key-value entries are kept forever, and there is no TTL. If you want to
delete something from the store, you can return `null` as the aggregation
result, though.
-Matthias
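A minimal sketch of that pattern, assuming a made-up topic where the key is a
product id and the value is an event type; returning null from the aggregator
for a "deleted" event removes that product's entry from the KTable:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;

public class ProductCountSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Placeholder topic: key = product id, value = event type ("sold" / "deleted").
        KTable<String, Long> purchasesPerProduct = builder
                .<String, String>stream("product-events")
                .groupByKey()
                .aggregate(
                        () -> 0L,
                        (productId, event, count) -> {
                            if ("deleted".equals(event)) {
                                // Returning null removes this product's entry from the
                                // KTable (a tombstone is written to its changelog).
                                return null;
                            }
                            return count + 1;
                        },
                        Materialized.with(Serdes.String(), Serdes.Long()));

        purchasesPerProduct.toStream()
                .to("purchases-per-product", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "product-count-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}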
On 4/19/18 2:28 PM, adrien ruffie wrote:
> Hi Mihaela,
I am trying to determine how our consumer will send data to a REST API.
The topics are machine-based topics, meaning they store all the information
about a specific machine in one topic. I then have keys that identify the
type of information stored. Here are some examples:
Topic: E347-8 K
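A rough sketch of one way to forward such records to a REST endpoint with a
plain consumer, routing by record key; it assumes String keys and values, a
Java 11+ runtime and a recent Kafka client, and the group id and endpoint URL
are made up (only the topic name E347-8 appears in the message above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MachineTopicToRest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "machine-rest-forwarder");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        HttpClient http = HttpClient.newHttpClient();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // One machine-based topic; "E347-8" is taken from the message above.
            consumer.subscribe(Collections.singletonList("E347-8"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Route by key, which identifies the type of information stored.
                    HttpRequest request = HttpRequest.newBuilder()
                            .uri(URI.create("http://localhost:8080/machines/E347-8/" + record.key()))
                            .header("Content-Type", "application/json")
                            .POST(HttpRequest.BodyPublishers.ofString(record.value()))
                            .build();
                    http.sendAsync(request, HttpResponse.BodyHandlers.discarding());
                }
            }
        }
    }
}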
Hi Mihaela,
by default, a KTable already has log-compacted behavior,
so you don't need to clean it up manually.
Best regards,
Adrien
From: Mihaela Stoycheva
Sent: Thursday, April 19, 2018 13:41:22
To: users@kafka.apache.org
Subject: Is KTable cleaned up automatically
Hello,
I have a Kafka Streams application that is consuming from two topics and
internally aggregating, transforming, and joining data. I am using a KTable
as the result of aggregation, and my question is whether KTables are cleaned
up by some mechanism of Kafka Streams or whether this is something that I
have to do manually.