Hi Timo,

I was thinking the same thing regarding the two-step approach.
+1 for changing the default planner for the Table API in 1.11.

Best,
Jark


On Fri, 3 Jan 2020 at 17:06, Timo Walther <twal...@apache.org> wrote:

> Hi Jark,
>
> +1 for making the Blink planner the default planner for the SQL Client.
>
> I think for the Table API, we should give the planner a bit more
> exposure and target changing the default planner in 1.11.
>
> What do you think about this two-step approach?
>
> Regards,
> Timo
>
> On 03.01.20 09:37, Jingsong Li wrote:
> > Hi Jark,
> >
> > +1 for making the Blink planner the default in the SQL CLI.
> > I believe this new planner can be put into production use.
> > We've worked hard on it for nearly a year, while the old planner hasn't moved forward.
> >
> > And I'd like to cc u...@flink.apache.org.
> > If anyone finds that the Blink planner has any significant defects or a larger
> > regression than the old planner, please let us know. We would be very grateful.
> >
> > Best,
> > Jingsong Lee
> >
> > On Fri, Jan 3, 2020 at 4:14 PM Leonard Xu <xbjt...@gmail.com> wrote:
> >
> >> +1 for this.
> >> We have brought many SQL/API features and stability enhancements in the 1.10
> >> release, and almost all of them are in the Blink planner.
> >> SQL CLI is the most convenient entrypoint for me; I believe many users
> >> will have a better experience if we set the Blink planner as the default
> >> planner.
> >>
> >> Best,
> >> Leonard
> >>
> >>> On Jan 3, 2020, at 15:16, Terry Wang <zjuwa...@gmail.com> wrote:
> >>>
> >>> Since what the Blink planner can do is a superset of the Flink planner, a big
> >>> +1 for changing the default planner to the Blink planner from my side.
> >>>
> >>> Best,
> >>> Terry Wang
> >>>
> >>>
> >>>
> >>>> On Jan 3, 2020, at 15:00, Jark Wu <imj...@gmail.com> wrote:
> >>>>
> >>>> Hi everyone,
> >>>>
> >>>> In the 1.10 release, Flink SQL supports many awesome features and
> >>>> improvements, including:
> >>>> - support for watermark statements and computed columns in DDL
> >>>> - full support for all Hive data types
> >>>> - batch SQL performance improvements (TPC-DS runs 7x faster than Hive MR)
> >>>> - support for INSERT OVERWRITE and INSERT PARTITION
> >>>>
> >>>> However, all these features and improvements are only available in the
> >>>> Blink planner, not in the old planner.
> >>>> There are also some other features that are limited to the Blink planner,
> >>>> e.g. Dimension Table Join [1], TopN [2], Deduplicate [3], streaming
> >>>> aggregates optimization [4], and so on.
> >>>>
> >>>> But the old planner is still the default planner in Table API & SQL. It is
> >>>> frustrating for users to have to switch to the Blink planner manually every
> >>>> time they start a SQL CLI. And it's surprising to hit an unsupported-feature
> >>>> exception when they try out the new features without switching the planner.
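> >>>>
> >>>> For reference, in the Table API the Blink planner currently has to be
> >>>> selected explicitly via EnvironmentSettings. A minimal sketch (assuming
> >>>> the 1.10 Java API), not a complete job, looks roughly like this:
> >>>>
> >>>>   import org.apache.flink.table.api.EnvironmentSettings;
> >>>>   import org.apache.flink.table.api.TableEnvironment;
> >>>>
> >>>>   public class BlinkPlannerExample {
> >>>>       public static void main(String[] args) {
> >>>>           // Explicitly request the Blink planner in streaming mode;
> >>>>           // without this, the old planner is picked by default.
> >>>>           EnvironmentSettings settings = EnvironmentSettings.newInstance()
> >>>>               .useBlinkPlanner()
> >>>>               .inStreamingMode()
> >>>>               .build();
> >>>>           TableEnvironment tEnv = TableEnvironment.create(settings);
> >>>>       }
> >>>>   }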
> >>>>
> >>>> SQL CLI is a very important entrypoint for users to try out new features
> >>>> and do prototyping.
> >>>> In order to give the new planner more exposure, I would like to suggest
> >>>> setting the default planner for the SQL Client to the Blink planner before
> >>>> the 1.10 release.
> >>>>
> >>>> The approach is just to change the default SQL CLI yaml configuration [5].
> >>>> In this way, existing user environment files remain compatible and unaffected.
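> >>>>
> >>>> Concretely, a sketch of the change (assuming the current layout of
> >>>> sql-client-defaults.yaml) would be flipping the planner entry in the
> >>>> execution section:
> >>>>
> >>>>   execution:
> >>>>     # use the Blink planner by default instead of the old planner
> >>>>     planner: blink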
> >>>>
> >>>> Changing the default planner for the whole Table API & SQL is another topic
> >>>> and is out of the scope of this discussion.
> >>>>
> >>>> What do you think?
> >>>>
> >>>> Best,
> >>>> Jark
> >>>>
> >>>> [1]: https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/joins.html#join-with-a-temporal-table
> >>>> [2]: https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#top-n
> >>>> [3]: https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication
> >>>> [4]: https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tuning/streaming_aggregation_optimization.html
> >>>> [5]: https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/conf/sql-client-defaults.yaml#L100
> >>>
> >>
> >>
> >
>
>
