Hi Hangxiang,

Thanks for your answer! We are using the RocksDB state backend with
incremental checkpoints enabled, and it is the incremental size that keeps
increasing. We didn't add any custom checkpoint configuration to our Flink
SQL jobs. Where can I see the log output of
StreamGraphHasherV2.generateDeterministicHash? And is there a default state
name?

Thanks,
Yifan
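
For anyone following along, here is a minimal sketch of what Hangxiang describes: reading keyed state from a SQL job's checkpoint with the State Processor API. This assumes a Flink 1.17-style API (SavepointReader / OperatorIdentifier); the checkpoint path, the operator hash, the state name "acc", and the key/value types below are all placeholders you would have to substitute with the real values from your own job and its logs.

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.state.api.OperatorIdentifier;
import org.apache.flink.state.api.SavepointReader;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class ReadSqlJobState {

    // Reads one ValueState<Long> per key. The state name "acc" and the
    // key/value types are placeholders -- take the real ones from the
    // operator's code, as Hangxiang suggests.
    static class Reader extends KeyedStateReaderFunction<Long, String> {
        transient ValueState<Long> acc;

        @Override
        public void open(Configuration parameters) {
            acc = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("acc", Types.LONG));
        }

        @Override
        public void readKey(Long key, Context ctx, Collector<String> out)
                throws Exception {
            out.collect(key + " -> " + acc.value());
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder path: point this at a retained checkpoint or savepoint.
        SavepointReader savepoint = SavepointReader.read(
                env,
                "hdfs:///flink/checkpoints/<job-id>/chk-1234",
                new EmbeddedRocksDBStateBackend());

        // SQL jobs have no user-assigned uid, only the generated hash, so use
        // forUidHash with the hash logged by generateDeterministicHash.
        savepoint
                .readKeyedState(
                        OperatorIdentifier.forUidHash("<operator-hash-from-log>"),
                        new Reader())
                .print();

        env.execute("read-sql-job-state");
    }
}
```

This won't run without a real checkpoint to point at, so treat it as a template rather than a finished tool; double-check the class and method names against the State Processor API docs for your exact Flink version.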

On 2023/09/06 07:12:05 Hangxiang Yu wrote:
> Hi, Yifan.
> Unfortunately, the State Processor API currently only supports the
> DataStream API, but you can still use it to read your SQL job's state.
> The most difficult part is that you have to find the operator ID, which
> you can get from the log output of
> StreamGraphHasherV2.generateDeterministicHash, and the state name, which
> you can get from the operator's code.
>
> BTW, about investigating why the checkpoint size keeps growing:
> 1. Which state backend are you using?
> 2. Have you enabled incremental checkpoints? Is the checkpoint size you
> mentioned the incremental size or the full size?
> 3. If it is the full size, have you evaluated whether it matches the
> theoretical size of your state?
>
>
> On Wed, Sep 6, 2023 at 1:11 PM Yifan He via user <us...@flink.apache.org>
> wrote:
>
> > Hi team,
> >
> > We are investigating why the checkpoint size of our FlinkSQL jobs keeps
> > growing and we want to look into the checkpoint file to know what is
> > causing the problem. I know we can use the state processor api to read
the
> > state of jobs using datastream api, but how can I read the state of jobs
> > using table api & sql?
> >
> > Thanks,
> > Yifan
> >
>
>
> --
> Best,
> Hangxiang.
>
