Thanks, will have a look through!
-----Original Message-----
From: Yangze Guo
Sent: Wednesday, September 15, 2021 11:25 AM
To: Osada Paranaliyanage
Cc: David Morávek; user@flink.apache.org
Subject: Re: Streaming SQL support for redis streaming connector
Hi Leonard,
That’s awesome news. We are actually using DocumentDB. Any idea how much work
it would be to make it work with DocumentDB instead?
Thanks,
Osada.
From: Leonard Xu
Sent: Wednesday, September 15, 2021 1:08 PM
To: Osada Paranaliyanage
Cc: user@flink.apache.org
Subject: Re: Streaming SQL support for redis streaming connector
Hi David,
Confirmed with the RocksDB logs: Stephan's observation is the root cause;
compaction doesn't clean up the high-level SST files fast enough. Do you
think manual clean-up by registering a timer is the way to go, or is there a
RocksDB parameter that can be tuned to mitigate this issue?
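For what it's worth, a minimal sketch of the timer-based clean-up
alternative, assuming keyed state and an illustrative one-hour retention (the
function and state names below are made up, not from the thread):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Registers a processing-time timer per key and clears the state entry when
// it fires, so stale entries become tombstones that compaction can drop.
public class StateCleanupFn extends KeyedProcessFunction<String, String, String> {

    private static final long TTL_MS = 60 * 60 * 1000L; // illustrative: one hour

    private transient ValueState<String> cached;

    @Override
    public void open(Configuration parameters) {
        cached = getRuntimeContext().getState(
                new ValueStateDescriptor<>("cached", String.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        cached.update(value);
        // Schedule deletion of this key's entry TTL_MS from now.
        ctx.timerService().registerProcessingTimeTimer(
                ctx.timerService().currentProcessingTime() + TTL_MS);
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out)
            throws Exception {
        cached.clear();
    }
}

On the parameter side, the built-in StateTtlConfig with
cleanupInRocksdbCompactFilter may also be worth a look before hand-rolling
timers.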
Thanks for the feedback.
> May I ask why you have fewer partitions than the parallelism? I would be
> happy to learn more about your use-case to better understand the
> motivation.
The use case is that topic A contains just a few messages with product
metadata that rarely gets updated, while topic
True, that's a valid concern you raised here, Alexis. Thanks for pointing
that out.
On Thu, Sep 16, 2021 at 1:58 PM Alexis Sarda-Espinosa <
alexis.sarda-espin...@microfocus.com> wrote:
> Someone please correct me if I’m wrong but, until FLINK-16686 [1] is
> fixed, a class must be a POJO to be used
Someone please correct me if I’m wrong but, until FLINK-16686 [1] is fixed, a
class must be a POJO to be used in managed state with RocksDB, right? That’s
not to say that the approach with TypeInfoFactory won’t work, just that even
then it will mean none of the data classes can be used for managed state.
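For context, Flink's POJO requirements are roughly: a public class, a public
no-argument constructor, and fields that are either public or exposed via
getters and setters. A minimal sketch with made-up names:

// Satisfies Flink's POJO rules, so it can be used in managed state with
// RocksDB. Class and field names are illustrative.
public class ProductMetadata {

    // Fields must be public, or private with public getters/setters.
    public String productId;
    public long updatedAt;

    // A public no-argument constructor is required.
    public ProductMetadata() {}

    public ProductMetadata(String productId, long updatedAt) {
        this.productId = productId;
        this.updatedAt = updatedAt;
    }
}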
Hi Alex,
have you had a look at TypeInfoFactory? That might be the best way to come
up with a custom serialization mechanism. See the docs [1] for further
details.
Best,
Matthias
[1]
https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/types_ser
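For anyone skimming the thread, a minimal sketch of what the TypeInfoFactory
approach from the docs can look like; the data class and field names here are
made up for illustration:

import java.lang.reflect.Type;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// Hypothetical data class; @TypeInfo points Flink at the factory below.
@TypeInfo(MyEventTypeInfoFactory.class)
public class MyEvent {
    public String id;
    public long timestamp;
}

// Supplies explicit type information instead of relying on Flink's
// reflection-based analysis of MyEvent.
class MyEventTypeInfoFactory extends TypeInfoFactory<MyEvent> {
    @Override
    public TypeInformation<MyEvent> createTypeInfo(
            Type t, Map<String, TypeInformation<?>> genericParameters) {
        Map<String, TypeInformation<?>> fields = new HashMap<>();
        fields.put("id", Types.STRING);
        fields.put("timestamp", Types.LONG);
        return Types.POJO(MyEvent.class, fields);
    }
}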
Hi all,
The problem you are seeing, Lars, is somewhat intended behaviour, unfortunately.
With the batch/stream unification, every Kafka partition is treated
as a kind of workload assignment. If one subtask receives a signal that there is
no workload anymore, it goes into the FINISHED state.
As alread
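For illustration, one common mitigation is to keep the source parallelism at
or below the partition count so no subtask is left without a split. A minimal
sketch using the KafkaSource API (broker, topic, group id, and the
parallelism of 4 are all made up):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PartitionParallelismExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("topic-a")
                .setGroupId("example-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
                // Match the source parallelism to the partition count so every
                // subtask gets at least one split and none finishes early.
                .setParallelism(4)
                .print();

        env.execute("partition-parallelism-example");
    }
}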