Hi Jameson,
Thanks very much for replying; I am really struggling with this.
I am using flowId as my key, which means each key is matched once and never
used again.
This seems like scenario 2. I didn't know it had not been fixed yet.
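In case the details help, the only mitigation I have tried so far is asking the
planner to expire idle state. Below is a minimal sketch of that configuration;
I am assuming the Table API's idle state retention setting here, and I am not
sure it reaches the MATCH_RECOGNIZE operator's state at all.

import java.time.Duration;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class IdleStateRetentionSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Ask the planner to drop state for keys idle longer than one hour.
        // Assumption: the CEP/MATCH_RECOGNIZE operator may ignore this,
        // which would match the behavior described in FLINK-19970 scenario 2.
        tEnv.getConfig().setIdleStateRetention(Duration.ofHours(1));
    }
}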
Thank you again. Do you have any solutions?

On 2021/07/07 01:47:00, Aeden Jameson <aeden.jame...@gmail.com> wrote: 
> Hi Li,
> 
>    How big is your keyspace? I had a similar problem, which turned out
> to be scenario 2 in this issue:
> https://issues.apache.org/jira/browse/FLINK-19970. It looks like the
> bug in scenario 1 got fixed, but scenario 2 did not. There is more
> detail in this thread:
> http://deprecated-apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-cep-checkpoint-size-td44141.html#a44168
> Hope that helps.
> 
> On Tue, Jul 6, 2021 at 12:44 AM Li Jim <lishijun121...@126.com> wrote:
> >
> > I am using Flink CEP to do some performance tests.
> >
> > Flink version 1.13.1.
> >
> > Below is the SQL:
> >
> > INSERT INTO to_kafka
> > SELECT bizName, wdName, wdValue, zbValue, flowId FROM kafka_source
> > MATCH_RECOGNIZE
> > (
> >     PARTITION BY flow_id
> >     ORDER BY proctime
> >     MEASURES
> >         A.biz_name AS bizName,
> >         A.wd_name  AS wdName,
> >         A.wd_value AS wdValue,
> >         MAP[
> >             A.zb_name, A.zb_value,
> >             B.zb_name, B.zb_value
> >         ] AS zbValue,
> >         A.flow_id AS flowId
> >     ONE ROW PER MATCH
> >     AFTER MATCH SKIP PAST LAST ROW
> >     PATTERN ( A B ) WITHIN INTERVAL '10' SECOND
> >     DEFINE
> >         B AS B.flow_id = A.flow_id
> > );
> >
> > I added the WITHIN clause to bound state growth, but it does not work
> > at all.
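> >
> > (My understanding is that WITHIN is the SQL counterpart of the DataStream
> > CEP API's time-bounded pattern, roughly as in the sketch below; the Event
> > type is a placeholder, not my real schema.)
> >
> > import org.apache.flink.cep.pattern.Pattern;
> > import org.apache.flink.streaming.api.windowing.time.Time;
> >
> > public class WithinSketch {
> >     // Placeholder event type, for illustration only.
> >     public static class Event { public String flowId; }
> >
> >     public static void main(String[] args) {
> >         // Rough equivalent of PATTERN ( A B ) WITHIN INTERVAL '10' SECOND:
> >         // a match must complete within 10 seconds of its first event.
> >         Pattern<Event, ?> pattern =
> >                 Pattern.<Event>begin("A")
> >                        .next("B")
> >                        .within(Time.seconds(10));
> >     }
> > }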
> >
> > The checkpoint size is growing fast.
> >
> > I am using RocksDB with incremental checkpoints.
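> >
> > For reference, checkpointing is configured roughly like this (a minimal
> > sketch against the Flink 1.13 API; the interval and path are illustrative,
> > not my production values):
> >
> > import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
> > import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> >
> > public class RocksDbCheckpointSketch {
> >     public static void main(String[] args) {
> >         StreamExecutionEnvironment env =
> >                 StreamExecutionEnvironment.getExecutionEnvironment();
> >
> >         // Checkpoint every 10 seconds; the storage path is a placeholder.
> >         env.enableCheckpointing(10_000L);
> >         env.getCheckpointConfig().setCheckpointStorage("file:///tmp/checkpoints");
> >
> >         // RocksDB state backend with incremental checkpoints enabled.
> >         env.setStateBackend(new EmbeddedRocksDBStateBackend(true));
> >     }
> > }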
> >
> >
> 
> 
> -- 
> Cheers,
> Aeden
> 
