Hi Xuannan, thanks for drafting this FLIP.
One immediate thought: from what I've seen of interactive data exploration with Spark, most people tend to use the higher-level APIs, which allow for faster prototyping (the Table API in Flink's case). Should the Table API also be covered by this FLIP? (I've put a rough sketch of the DataStream-level workflow I have in mind at the bottom of this mail.)

Best,
D.

On Wed, Dec 29, 2021 at 10:36 AM Xuannan Su <suxuanna...@gmail.com> wrote:
> Hi devs,
>
> I'd like to start a discussion about adding support for caching the
> intermediate result at the DataStream API for batch processing.
>
> As the DataStream API now supports batch execution mode, we see users
> using the DataStream API to run batch jobs. Interactive programming is
> an important use case of Flink batch processing, and the ability to
> cache intermediate results of a DataStream is crucial to the
> interactive programming experience.
>
> Therefore, we propose to support caching a DataStream in batch
> execution. We believe that users can benefit greatly from this change,
> and it will encourage them to use the DataStream API for their
> interactive batch processing work.
>
> Please check out FLIP-205 [1] and feel free to reply to this email
> thread. Looking forward to your feedback!
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-205%3A+Support+Cache+in+DataStream+for+Batch+Processing
>
> Best,
> Xuannan
>
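P.S. To make the interactive use case concrete, here is a rough sketch of the kind of DataStream-level workflow I have in mind for batch execution mode. The cache() call is only my placeholder for whatever the FLIP ends up exposing, so I've left it and the follow-up jobs as comments; everything else is the existing DataStream batch API:

    // Sketch only: cache() is a placeholder name, not the FLIP's final API.
    import org.apache.flink.api.common.RuntimeExecutionMode;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class InteractiveCacheSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.setRuntimeMode(RuntimeExecutionMode.BATCH);

            // Expensive preprocessing whose result we want to reuse interactively.
            DataStream<String> cleaned = env
                    .fromElements("a", "", "bb", "ccc")
                    .filter(line -> !line.isEmpty());

            // Hypothetical: materialize the intermediate result once ...
            // DataStream<String> cached = cleaned.cache();
            //
            // ... then run several cheap, exploratory jobs against it without
            // re-running the preprocessing each time, e.g.:
            // cached.map(String::length).print();
            // env.execute("count lengths");
            // cached.filter(s -> s.startsWith("c")).print();
            // env.execute("find c-words");
        }
    }

The point is simply that the second and third job should be able to consume the materialized intermediate result instead of recomputing the upstream transformations, which is exactly the part that matters for interactive exploration.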