That is doable via the state processor API, though Arvid's idea does sound
simpler :)
You could read the state of the operator that holds the rules, change the data
as necessary, and then write it out as a new savepoint from which to start the job.
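For illustration, here is a minimal Python model of that read-modify-rewrite flow, with plain dicts standing in for savepoint state. A real job would use Flink's State Processor API from Java/Scala; every name below is hypothetical, not Flink API:

```python
# Toy model of "read the rules operator, change the data, write a new
# savepoint": the savepoint is modeled as a dict of operator states.

def rewrite_rules(savepoint: dict, update) -> dict:
    """Return a new 'savepoint' with the rules state transformed."""
    new_savepoint = dict(savepoint)  # other operators carried over untouched
    new_savepoint["rules-operator"] = {
        rule_id: update(rule)
        for rule_id, rule in savepoint["rules-operator"].items()
    }
    return new_savepoint

old = {
    "rules-operator": {"r1": {"threshold": 10}, "r2": {"threshold": 20}},
    "other-operator": {"count": 42},
}
# Example change: double every threshold before restarting the job.
new = rewrite_rules(old, lambda r: {**r, "threshold": r["threshold"] * 2})
print(new["rules-operator"]["r1"])   # {'threshold': 20}
print(new["other-operator"])         # {'count': 42}  (unchanged)
```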
On Thu, Jul 30, 2020 at 5:24 AM Arvid Heise wrote:
Another idea: since your handling on Flink is idempotent, would it make
sense to also periodically send the whole rule set anew?
Going further, depending on the number of rules, their size, and the update
frequency: would it be possible to always transfer the complete rule set
and just discard the old one?
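Sketching that full-snapshot idea in Python (all names are illustrative): the point is that wholesale replacement is naturally idempotent, and rules absent from the new snapshot are implicitly deleted, so no separate delete events are needed.

```python
def apply_rule_snapshot(state: dict, snapshot: dict) -> dict:
    """Replace the current rule set with the complete snapshot.
    Rules missing from the snapshot are discarded, and re-applying
    the same snapshot is a no-op, so delivery is idempotent."""
    return dict(snapshot)

state = {"r1": "A", "r2": "B"}
snap = {"r1": "A", "r3": "C"}              # r2 was deleted upstream
state = apply_rule_snapshot(state, snap)
state = apply_rule_snapshot(state, snap)   # duplicate delivery is harmless
print(sorted(state))                       # ['r1', 'r3']
```

Whether this beats per-rule deltas depends on the trade-off Arvid names: a large rule set resent frequently costs bandwidth, while a small or rarely-changing one makes snapshots much simpler to reason about.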
Hi Kostas,
Thanks for your help!
On Fri, Jul 24, 2020 at 19:08 Kostas Kloudas wrote:
Hi Alex,
Maybe Seth (cc'ed) may have an opinion on this.
Cheers,
Kostas
On Thu, Jul 23, 2020 at 12:08 PM Александр Сергеенко
wrote:
Hi,
We use the so-called "control stream" pattern to deliver settings to the Flink
job using Apache Kafka topics. The settings are in fact an unlimited stream
of events originating from the master DBMS, which acts as a single point of
truth concerning the rules list.
It may seem odd, since Flink gua
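The control-stream setup described above can be modeled in a few lines of Python, with plain lists standing in for the Kafka topics and a dict standing in for Flink's broadcast state; everything here is illustrative, not the actual job:

```python
# Toy model of the control-stream pattern: one stream carries rule
# updates from the master DBMS (the "control" side), the other carries
# data events that are evaluated against the current rule set.

def run(control_events, data_events):
    rules = {}    # stands in for Flink's broadcast state
    matches = []
    # In a real Flink job the two streams interleave and a
    # BroadcastProcessFunction handles both; for simplicity we
    # apply all control events first.
    for op, rule_id, predicate in control_events:
        if op == "upsert":
            rules[rule_id] = predicate
        elif op == "delete":
            rules.pop(rule_id, None)
    for event in data_events:
        matches += [(rid, event) for rid, pred in rules.items() if pred(event)]
    return matches

control = [("upsert", "big", lambda x: x > 100),
           ("upsert", "even", lambda x: x % 2 == 0),
           ("delete", "even", None)]
print(run(control, [42, 150]))   # [('big', 150)]
```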