k forwards to your feedback. I'd love to hear the feedback from
> > community to take the next steps.
> >
> > [1]:
> >
> https://github.com/apache/flink/blob/678370b18e1b6c4a23e5ce08f8efd05675a0cc17/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParser.java#L348
> > [2]:https://issues.apache.org/jira/browse/FLINK-26681
> > [3]:https://issues.apache.org/jira/browse/FLINK-31413
> > [4]:https://issues.apache.org/jira/browse/FLINK-30064
> >
> >
> >
> > Best regards,
> > Yuxia
> >
>
>
> --
>
> Best,
> Benchao Li
>
--
Best regards!
Rui Li
> > >> >> * all artifacts to be deployed to the Maven Central Repository [4],
> > >> >> * source code tag "release-1.10.0-rc3" [5],
> > >> >> * website pull request listing the new release and adding
> > announcement
> > >> blog
> > >> >> post [6][7].
> > >> >>
> > >> >> The vote will be open for at least 72 hours. It is adopted by
> > majority
> > >> >> approval, with at least 3 PMC affirmative votes.
> > >> >>
> > >> >> Thanks,
> > >> >> Yu & Gary
> > >> >>
> > >> >> [1]
> > >> >>
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345845
> > >> <
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345845
> > >> >
> > >> >> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.10.0-rc3/
> <
> > >> https://dist.apache.org/repos/dist/dev/flink/flink-1.10.0-rc3/>
> > >> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS <
> > >> https://dist.apache.org/repos/dist/release/flink/KEYS>
> > >> >> [4]
> > >>
> https://repository.apache.org/content/repositories/orgapacheflink-1333
> > <
> > >>
> https://repository.apache.org/content/repositories/orgapacheflink-1333>
> > >> >> [5]
> https://github.com/apache/flink/releases/tag/release-1.10.0-rc3
> > <
> > >> https://github.com/apache/flink/releases/tag/release-1.10.0-rc3>
> > >> >> [6] https://github.com/apache/flink-web/pull/302 <
> > >> https://github.com/apache/flink-web/pull/302>
> > >> >> [7] https://github.com/apache/flink-web/pull/301 <
> > >> https://github.com/apache/flink-web/pull/301>
> > >> >
> > >>
> > >>
> >
>
--
Best regards!
Rui Li
very
> active in both dev
> and user mailing lists, helped discuss designs and answer users'
> questions, and also
> helped verify various releases.
>
> Congratulations Jingsong!
>
> Best, Kurt
> (on behalf of the Flink PMC)
>
>
>
--
Best regards!
Rui Li
to Flink, with
fewer SQL statements that need to be changed.
Please find more details in the FLIP wiki [1]. Feedbacks and suggestions
are appreciated.
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-123%3A+DDL+and+DML+compatibility+for+Hive+connector
--
Cheers,
Rui Li
> > > >>> - Konstantin Knauf joined as a committer. You may know him,
> > for
> > > > > > >> example,
> > > > > > >>> from the weekly community updates.
> > > > > > >>>
> > > > > > >>> - Dawid Wysakowicz joined the PMC. Dawid is one of the main
> > > > > developers
> > > > > > >> on
> > > > > > >>> the Table API.
> > > > > > >>>
> > > > > > >>> - Zhijiang Wang joined the PMC. Zhijiang is a veteran of
> > Flink's
> > > > > > >> network
> > > > > > >>> / data shuffle system.
> > > > > > >>>
> > > > > > >>> A warm welcome to your new roles in the Flink project!
> > > > > > >>>
> > > > > > >>> Best,
> > > > > > >>> Stephan
> > > > > > >>>
> > > > > > >>
> > > > > > >>
> > > > > > >> --
> > > > > > >> Best, Jingsong Lee
> > > > > > >>
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
> Best Regards
>
> Jeff Zhang
>
--
Best regards!
Rui Li
get what will be supported exactly. I can imagine
> other users will also have such confusion.
> Could you add a table or a list of syntax which will be supported?
>
> Best,
> Kurt
>
>
> On Wed, Apr 1, 2020 at 4:24 PM Rui Li wrote:
>
> > Hi devs,
> >
> &
s is quite different
> from HQL.
>
> Do you think we really need to import `FlinkHiveSqlParserImpl`? This will bother
> the planner code; if possible, I think it is better to keep dialect things in
> sql-parser.
> What do you think?
>
> Best,
> Jingsong Lee
>
> On Thu, Apr 9,
ut hive things to planner.
> Because the planner has set the conformance, the parser already knows the
> dialect things.
>
> A simple way is to implement a Flink SqlParserImplFactory with the conformance. We
> can limit dialect things to the parser module.
>
> Best,
> Jingsong Lee
>
>
+connector
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-123-DDL-and-DML-compatibility-for-Hive-connector-td39633.html
--
Cheers,
Rui Li
> > Jingsong Lee
> >
> > On Tue, Apr 14, 2020 at 9:29 PM Kurt Young wrote:
> >
> > > +1
> > >
> > > Best,
> > > Kurt
> > >
> > >
> > > On Mon, Apr 13, 2020 at 9:26 PM Rui Li wrote:
> > >
> > >
ed, Apr 15, 2020 at 10:38 AM Rui Li wrote:
> Hey Rong,
>
> Thanks for the reminder. I have updated the main page to add a row for
> FLIP-123.
>
> On Tue, Apr 14, 2020 at 11:02 PM Rong Rong wrote:
>
>> Great feature, I kinda like how the dialect / conformance is han
A+Rework+of+the+Expression+Design
> > > >>>>>>
> > > >>>>>
> > > >>>>>
> > > >>>>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-63%3A+Rework+table+partition+support
> > > >>>>>
> > > >>>>> Discussion thread:
> > > >>>>> <
> > > >>>>>
> > > >>>>
> > > >>
> > >
> >
> https://lists.apache.org/thread.html/65078bad6e047578d502e1e5d92026f13fd9648725f5b74ed330@%3Cdev.flink.apache.org%3E
> > > >>>>>>
> > > >>>>> <
> > > >>>>>
> > > >>>>
> > > >>
> > >
> >
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-51-Rework-of-the-Expression-Design-td31653.html
> > > >>>>>>
> > > >>>>>
> > > >>>>>
> > > >>>>
> > > >>
> > >
> >
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-63-Rework-table-partition-support-td32770.html
> > > >>>>>
> > > >>>>> Google Doc:
> > > >>>>> <
> > > >>>>>
> > > >>>>
> > > >>
> > >
> >
> https://docs.google.com/document/d/1yFDyquMo_-VZ59vyhaMshpPtg7p87b9IYdAtMXv5XmM/edit?usp=sharing
> > > >>>>>>
> > > >>>>>
> > > >>>>>
> > > >>>>
> > > >>
> > >
> >
> https://docs.google.com/document/d/15R3vZ1R_pAHcvJkRx_CWleXgl08WL3k_ZpnWSdzP7GY/edit?usp=sharing
> > > >>>>>
> > > >>>>> Thanks,
> > > >>>>>
> > > >>>>> Best,
> > > >>>>> Jingsong Lee
> > > >>>>>
> > > >>>>
> > > >>>
> > > >>>
> > > >>> --
> > > >>> Best, Jingsong Lee
> > > >>>
> > > >>
> > > >>
> > > >> --
> > > >> Best, Jingsong Lee
> > > >>
> > >
> > >
> >
>
>
> --
> Xuefu Zhang
>
> "In Honey We Trust!"
>
--
Best regards!
Rui Li
built-in function.
> > > >> */
> > > >> Optional<ObjectIdentifier> getObjectIdentifier()
> > > >> }
> > > >>
> > > >> class ObjectIdentifier implements FunctionIdentifier {
> > > >> Optional<ObjectIdentifier> getObjectIdentifier() {
> > > >> return Optional.of(this);
> > > >> }
> > > >> }
> > > >>
> > > >> class SystemFunctionIdentifier implements FunctionIdentifier {...}
> > > >>
> > > >> WDYT?
> > > >>
> > > >> On Wed, 25 Sep 2019, 04:50 Xuefu Z, wrote:
> > > >>
> > > >> > +1. LGTM
> > > >> >
> > > >> > On Tue, Sep 24, 2019 at 6:09 AM Terry Wang
> > > wrote:
> > > >> >
> > > >> > > +1
> > > >> > >
> > > >> > > Best,
> > > >> > > Terry Wang
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > > On Sep 24, 2019 at 10:42 AM, Kurt Young wrote:
> > > >> > > >
> > > >> > > > +1
> > > >> > > >
> > > >> > > > Best,
> > > >> > > > Kurt
> > > >> > > >
> > > >> > > >
> > > >> > > > On Tue, Sep 24, 2019 at 2:30 AM Bowen Li >
> > > >> wrote:
> > > >> > > >
> > > >> > > >> Hi all,
> > > >> > > >>
> > > >> > > >> I'd like to start a voting thread for FLIP-57 [1], which
> we've
> > > >> reached
> > > >> > > >> consensus in [2].
> > > >> > > >>
> > > >> > > >> This vote will be open for a minimum of 3 days, till 6:30pm UTC,
> > Sep
> > > >> 26.
> > > >> > > >>
> > > >> > > >> Thanks,
> > > >> > > >> Bowen
> > > >> > > >>
> > > >> > > >> [1]
> > > >> > > >>
> > > >> > > >>
> > > >> > >
> > > >> >
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-57%3A+Rework+FunctionCatalog
> > > >> > > >> [2]
> > > >> > > >>
> > > >> > > >>
> > > >> > >
> > > >> >
> > > >>
> > >
> >
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-57-Rework-FunctionCatalog-td32291.html#a32613
> > > >> > > >>
> > > >> > >
> > > >> > >
> > > >> >
> > > >> > --
> > > >> > Xuefu Zhang
> > > >> >
> > > >> > "In Honey We Trust!"
> > > >> >
> > > >>
> > > >
> > >
> >
>
--
Best regards!
Rui Li
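For context, the `FunctionIdentifier` hierarchy sketched in the FLIP-57 thread above can be fleshed out into a compilable sketch. Note the assumptions: the generic parameter on `Optional` was stripped by the mail archive and is restored here, and the body of `SystemFunctionIdentifier` was elided in the original (`{...}`), so its contents below are invented for illustration; none of this is Flink's final API.

```java
import java.util.Optional;

// Sketch of the proposed hierarchy. Optional<ObjectIdentifier> is restored
// (the archive stripped the generics); SystemFunctionIdentifier's body is
// an assumption, since the original only showed "{...}".
interface FunctionIdentifier {
    /** Non-empty only when the function is identified by a catalog object path. */
    Optional<ObjectIdentifier> getObjectIdentifier();
}

class ObjectIdentifier implements FunctionIdentifier {
    @Override
    public Optional<ObjectIdentifier> getObjectIdentifier() {
        // A catalog object identifier identifies itself.
        return Optional.of(this);
    }
}

class SystemFunctionIdentifier implements FunctionIdentifier {
    private final String name;

    SystemFunctionIdentifier(String name) {
        this.name = name;
    }

    @Override
    public Optional<ObjectIdentifier> getObjectIdentifier() {
        // System (built-in) functions are identified by name only.
        return Optional.empty();
    }

    String getName() {
        return name;
    }
}
```

The point of the design is that callers can handle both kinds of functions uniformly and branch on `getObjectIdentifier().isPresent()` when they need the catalog path.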
>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> Best Regards
> >>>>>>>>>>>> Peter Huang
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Mon, Oct 28, 2019 at 9:19 AM Rong Rong <
> walter...@gmail.com
> >>>
> >>>>>>> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Congratulations Becket!!
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> --
> >>>>>>>>>>>>> Rong
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On Mon, Oct 28, 2019, 7:53 AM Jark Wu
> >> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> Congratulations Becket!
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Best,
> >>>>>>>>>>>>>> Jark
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On Mon, 28 Oct 2019 at 20:26, Benchao Li <
> >> libenc...@gmail.com>
> >>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Congratulations Becket.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On Mon, Oct 28, 2019 at 7:22 PM, Dian Fu wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Congrats, Becket.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> On Oct 28, 2019 at 6:07 PM, Fabian Hueske
> >> wrote:
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Hi everyone,
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> I'm happy to announce that Becket Qin has joined the
> Flink
> >>>>>> PMC.
> >>>>>>>>>>>>>>>>> Let's congratulate and welcome Becket as a new member of
> >> the
> >>>>>>>>>>>> Flink
> >>>>>>>>>>>>>> PMC!
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Cheers,
> >>>>>>>>>>>>>>>>> Fabian
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Benchao Li
> >>>>>>>>>>>>>>> School of Electronics Engineering and Computer Science,
> >> Peking
> >>>>>>>>>>>>> University
> >>>>>>>>>>>>>>> Tel:+86-15650713730
> >>>>>>>>>>>>>>> Email: libenc...@gmail.com; libenc...@pku.edu.cn
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> --
> >>>>>>>>>> Xuefu Zhang
> >>>>>>>>>>
> >>>>>>>>>> "In Honey We Trust!"
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>
> >>>
> >>
> >> --
> >> Best, Jingsong Lee
> >>
> >>
>
>
--
Best regards!
Rui Li
n the discussion thread[2].
> > >>>>
> > >>>> The vote will be open for at least 72 hours. I'll try to close it by
> > >>>> 2019-11-08 14:30 UTC, unless there is an objection or not enough
> > votes.
> > >>>>
> > >>>> [1]
> > >>>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
> > >>>> <
> > >>>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
> > >>>>>
> > >>>> [2]
> > >>>>
> > >>>
> > >>
> >
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html
> > >>>> <
> > >>>>
> > >>>
> > >>
> >
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html
> > >>>>>
> > >>>> Best,
> > >>>> Terry Wang
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>
> > >>> --
> > >>> Xuefu Zhang
> > >>>
> > >>> "In Honey We Trust!"
> > >>>
> > >>
> >
> >
>
--
Best regards!
Rui Li
iption.
>
> Best,
> Terry Wang
>
>
>
> > On Nov 7, 2019 at 15:40, Rui Li wrote:
> >
> > Thanks Terry for driving this forward.
> > Got one question about DESCRIBE DATABASE: the results display comment and
> > description of a database. While comment can be sp
k.apache.org, wrote:
> > >
> > > Congrats Jark!
> >
>
--
Best regards!
Rui Li
e an interface to switch dialects.
>> >
>> > Because it directly hinders the SQL-CLI's insert syntax in hive
>> integration
>> > and seriously hinders the practicability of SQL-CLI.
>> > And we have introduced these two grammars in FLIP-63 [1] to Flink.
>> > Here are my question:
>> > 1.Should we remove hive dialect limitation for these two grammars?
>> > 2.Should we fix this in 1.10?
>> >
>> > [1]
>> >
>> >
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-63%3A+Rework+table+partition+support
>> >
>> > Best,
>> > Jingsong Lee
>> >
>>
>
--
Best regards!
Rui Li
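Concretely, the two statements that the thread says are rejected outside the Hive dialect look like this (table, column, and partition names below are made up for illustration):

```sql
-- Illustration only: hypothetical table/column/partition names.
INSERT OVERWRITE TABLE dest SELECT * FROM src;

INSERT INTO TABLE dest PARTITION (dt = '2019-12-11', region = 'us')
SELECT id, name FROM src;
```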
>
> > >>>> So I’m
> > >>>>
> > >>>> +1 to remove the hive dialect limitations for INSERT OVERWRITE and
> > >> INSERT
> > >>>> PARTITION
> > >>>> +0 to add yaml dialect conf to SQL-CLI because FLIP-89 is not
> finished
> > >>>> yet, we better do this until FLIP-89 is resolved.
> > >>>>
> > >>>> Best,
> > >>>> Danny Chan
> > >>>> On Dec 11, 2019 at 5:29 PM +0800, Jingsong Li wrote:
> > >>>>> Hi Dev,
> > >>>>>
> > >>>>> After cutting out the branch of 1.10, I tried the following
> functions
> > >>> of
> > >>>>> SQL-CLI and found that it does not support:
> > >>>>> - insert overwrite
> > >>>>> - PARTITION (partcol1=val1, partcol2=val2 ...)
> > >>>>> The SQL pattern is:
> > >>>>> INSERT { INTO | OVERWRITE } TABLE tablename1 [PARTITION
> > >> (partcol1=val1,
> > >>>>> partcol2=val2 ...)] select_statement1 FROM from_statement;
> > >>>>> It is a surprise to me.
> > >>>>> The reason is that we only allow these two grammars in hive
> dialect.
> > >>> And
> > >>>>> SQL-CLI does not have an interface to switch dialects.
> > >>>>>
> > >>>>> Because it directly hinders the SQL-CLI's insert syntax in hive
> > >>>> integration
> > >>>>> and seriously hinders the practicability of SQL-CLI.
> > >>>>> And we have introduced these two grammars in FLIP-63 [1] to Flink.
> > >>>>> Here are my question:
> > >>>>> 1.Should we remove hive dialect limitation for these two grammars?
> > >>>>> 2.Should we fix this in 1.10?
> > >>>>>
> > >>>>> [1]
> > >>>>>
> > >>>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-63%3A+Rework+table+partition+support
> > >>>>>
> > >>>>> Best,
> > >>>>> Jingsong Lee
> > >>>>
> > >>>
> > >>
> > >>
> > >> --
> > >> Best, Jingsong Lee
> > >>
> > >
> >
> >
>
> --
> Best, Jingsong Lee
>
--
Best regards!
Rui Li
t;>>>>> >> to blink planner manually when every time start a SQL CLI. And
>>>>>>>> it's
>>>>>>>> >> surprising to see unsupported
>>>>>>>> >> exception if they trying out the new features but not switch
>>>>>>>> planner.
>>>>>>>> >>
>>>>>>>> >> SQL CLI is a very important entrypoint for trying out new
>>>>>>>> feautures and
>>>>>>>> >> prototyping for users.
>>>>>>>> >> In order to give new planner more exposures, I would like to
>>>>>>>> suggest to set
>>>>>>>> >> default planner
>>>>>>>> >> for SQL Client to Blink planner before 1.10 release.
>>>>>>>> >>
>>>>>>>> >> The approach is just changing the default SQL CLI yaml
>>>>>>>> configuration[5]. In
>>>>>>>> >> this way, the existing
>>>>>>>> >> environment is still compatible and unaffected.
>>>>>>>> >>
>>>>>>>> >> Changing the default planner for the whole Table API & SQL is
>>>>>>>> another topic
>>>>>>>> >> and is out of scope of this discussion.
>>>>>>>> >>
>>>>>>>> >> What do you think?
>>>>>>>> >>
>>>>>>>> >> Best,
>>>>>>>> >> Jark
>>>>>>>> >>
>>>>>>>> >> [1]:
>>>>>>>> >>
>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/joins.html#join-with-a-temporal-table
>>>>>>>> >> [2]:
>>>>>>>> >>
>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#top-n
>>>>>>>> >> [3]:
>>>>>>>> >>
>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication
>>>>>>>> >> [4]:
>>>>>>>> >>
>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tuning/streaming_aggregation_optimization.html
>>>>>>>> >> [5]:
>>>>>>>> >>
>>>>>>>> https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/conf/sql-client-defaults.yaml#L100
>>>>>>>> >
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best, Jingsong Lee
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards
>>>>>>
>>>>>> Jeff Zhang
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Benoît Paris
>>>>> Ingénieur Machine Learning Explicable
>>>>> Tél : +33 6 60 74 23 00
>>>>> http://benoit.paris
>>>>> http://explicable.ml
>>>>>
>>>>
>>
>> --
>>
>> Benchao Li
>> School of Electronics Engineering and Computer Science, Peking University
>> Tel:+86-15650713730
>> Email: libenc...@gmail.com; libenc...@pku.edu.cn
>>
>>
--
Best regards!
Rui Li
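Concretely, Jark's proposed change would touch only the default value in the SQL CLI yaml configuration [5]; a sketch (the exact keys are recalled from that file and may differ):

```yaml
# Sketch of the proposed default change; user environment files that set an
# explicit planner remain unaffected.
execution:
  planner: blink      # previously "old"
  type: streaming
```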
the release.
>
> Please join in me congratulating Dian for becoming a Flink committer !
>
> Best,
> Jincheng(on behalf of the Flink PMC)
>
--
Best regards!
Rui Li
t available on every machine. We need to upload so
> many jars.
> - A separate classloader may be hard to make work too; our flink-connector-hive
> needs hive jars, so we may need to deal with the flink-connector-hive jar specially
> too.
> CC: Rui Li
>
> I think the best system to integrate with
> >> >>>>> return a List or ConfigOptionGroup that contains the
> >> >>>>> validation logic as mentioned in the validation part of
> FLIP-54[1].
> >> But
> >> >>>>> currently our config options are not rich enough to have a unified
> >> >>>>> validation. Additionally, the factory should return some
> properties
> >> >>>>> such
> >> >>>>> as "supports event-time" for the schema validation outside of the
> >> >>>>> factory itself.
> >> >>>>>
> >> >>>>> Regards,
> >> >>>>> Timo
> >> >>>>>
> >> >>>>> [1]
> >> >>>>>
> >> >>>>>
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-54%3A+Evolve+ConfigOption+and+Configuration
> >> >>>>>
> >> >>>>>
> >> >>>>>
> >> >>>>> On 16.01.20 00:51, Bowen Li wrote:
> >> >>>>>> Hi Jingsong,
> >> >>>>>>
> >> >>>>>> The 1st and 2nd pain points you described are very valid, as I'm
> >> more
> >> >>>>>> familiar with them. I agree these are shortcomings of the current
> >> >>>>> Flink SQL
> >> >>>>>> design.
> >> >>>>>>
> >> >>>>>> A couple comments on your 1st proposal:
> >> >>>>>>
> >> >>>>>> 1. is it better to have explicit APIs like
> >> >>>>> "createBatchTableSource(...)"
> >> >>>>>> and "createStreamingTableSource(...)" in TableSourceFactory
> (would
> >> be
> >> >>>>>> similar for sink factory) to let planner handle which mode
> >> (streaming
> >> >>>>> vs
> >> >>>>>> batch) of source should be instantiated? That way we don't need
> to
> >> >>>>> always
> >> >>>>>> let connector developers handling an if-else on isStreamingMode.
> >> >>>>>> 2. I'm not sure of the benefits to have a CatalogTableContext
> >> class.
> >> >>>>> The
> >> >>>>>> path, table, and config are fairly independent of each other. So
> >> why
> >> >>>>> not
> >> >>>>>> pass the config in as 3rd parameter as
> `createXxxTableSource(path,
> >> >>>>>> catalogTable, tableConfig)?
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> On Tue, Jan 14, 2020 at 7:03 PM Jingsong Li <
> >> jingsongl...@gmail.com>
> >> >>>>> wrote:
> >> >>>>>>
> >> >>>>>>> Hi dev,
> >> >>>>>>>
> >> >>>>>>> I'd like to kick off a discussion on the improvement of
> >> >>>>> TableSourceFactory
> >> >>>>>>> and TableSinkFactory.
> >> >>>>>>>
> >> >>>>>>> Motivation:
> >> >>>>>>> Now the main needs and problems are:
> >> >>>>>>> 1.Connector can't get TableConfig [1], and some behaviors really
> >> >>>>> need to be
> >> >>>>>>> controlled by the user's table configuration. In the era of
> >> catalog,
> >> >>>>> we
> >> >>>>>>> can't put these config in connector properties, which is too
> >> >>>>> inconvenient.
> >> >>>>>>> 2.Connector can't know if this is batch or stream execution
> mode.
> >> >>>>> But the
> >> >>>>>>> sink implementation of batch and stream is totally different. I
> >> >>>>> understand
> >> >>>>>>> there is an update mode property now, but it splits the batch
> and
> >> >>>>> stream in
> >> >>>>>>> the catalog dimension. In fact, this information can be obtained
> >> >>>>> through
> >> >>>>>>> the current TableEnvironment.
> >> >>>>>>> 3.No interface to call validation. Now our validation is mostly
> util
> >> >>>>> classes.
> >> >>>>>>> It depends on whether or not the connector calls. Now we have
> some
> >> >>>>> new
> >> >>>>> validations to add, such as [2], which really confuses users,
> >> even
> >> >>>>>>> developers. Another problem is that our SQL update (DDL) does
> not
> >> >>>>> have
> >> >>>>>>> validation [3]. It is better to report an error when executing
> >> DDL,
> >> >>>>>>> otherwise it will confuse the user.
> >> >>>>>>>
> >> >>>>>>> Proposed change draft for 1 and 2:
> >> >>>>>>>
> >> >>>>>>> interface CatalogTableContext {
> >> >>>>>>> ObjectPath getTablePath();
> >> >>>>>>> CatalogTable getTable();
> >> >>>>>>> ReadableConfig getTableConfig();
> >> >>>>>>> boolean isStreamingMode();
> >> >>>>>>> }
> >> >>>>>>>
> >> >>>>>>> public interface TableSourceFactory<T> extends TableFactory {
> >> >>>>>>>
> >> >>>>>>> default TableSource<T> createTableSource(CatalogTableContext
> >> >>>>> context) {
> >> >>>>>>> return createTableSource(context.getTablePath(),
> >> >>>>> context.getTable());
> >> >>>>>>> }
> >> >>>>>>>
> >> >>>>>>> ..
> >> >>>>>>> }
> >> >>>>>>>
> >> >>>>>>> Proposed change draft for 3:
> >> >>>>>>>
> >> >>>>>>> public interface TableFactory {
> >> >>>>>>>
> >> >>>>>>> TableValidators validators();
> >> >>>>>>>
> >> >>>>>>> interface TableValidators {
> >> >>>>>>> ConnectorDescriptorValidator connectorValidator();
> >> >>>>>>> TableSchemaValidator schemaValidator();
> >> >>>>>>> FormatDescriptorValidator formatValidator();
> >> >>>>>>> }
> >> >>>>>>> }
> >> >>>>>>>
> >> >>>>>>> What do you think?
> >> >>>>>>>
> >> >>>>>>> [1] https://issues.apache.org/jira/browse/FLINK-15290
> >> >>>>>>> [2]
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>
> >>
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-A-mechanism-to-validate-the-precision-of-columns-for-connectors-td36552.html#a36556
> >> >>>>>>> [3] https://issues.apache.org/jira/browse/FLINK-15509
> >> >>>>>>>
> >> >>>>>>> Best,
> >> >>>>>>> Jingsong Lee
> >> >>>>>>>
> >> >>>>>>
> >> >>>>>
> >> >>>>>
> >> >>>>
> >> >>>> --
> >> >>>> Best, Jingsong Lee
> >> >>>>
> >> >>>
> >> >>>
> >> >>> --
> >> >>> Best, Jingsong Lee
> >> >>>
> >> >>
> >> >>
> >> >> --
> >> >> Best, Jingsong Lee
> >> >>
> >> >
> >> >
> >>
> >>
> >
> > --
> > Best, Jingsong Lee
> >
>
>
> --
> Best, Jingsong Lee
>
--
Best regards!
Rui Li
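The `CatalogTableContext` / `TableSourceFactory` draft in the proposal quoted above can be turned into a self-contained sketch. All types below are simplified stand-ins for the real Flink classes (`ObjectPath`, `CatalogTable`, `ReadableConfig`, `TableSource` are reduced to minimal stubs), so this shows the shape of the proposal, not Flink's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.flink.configuration.ReadableConfig.
interface ReadableConfig {
    String get(String key);
}

// Stand-in for the catalog object path (database + table name).
final class ObjectPath {
    final String database;
    final String table;

    ObjectPath(String database, String table) {
        this.database = database;
        this.table = table;
    }
}

// Stand-in for CatalogTable; only the options map is modeled.
final class CatalogTable {
    final Map<String, String> options = new HashMap<>();
}

// Marker stand-in for TableSource.
interface TableSource {}

// The proposed context bundling everything a factory needs, including the
// execution mode (pain point #2 in the thread) and TableConfig (pain point #1).
interface CatalogTableContext {
    ObjectPath getTablePath();
    CatalogTable getTable();
    ReadableConfig getTableConfig();
    boolean isStreamingMode();
}

interface TableSourceFactory {
    // New context-based entry point; the default implementation delegates to
    // the existing (path, table) method so current factories keep working.
    default TableSource createTableSource(CatalogTableContext context) {
        return createTableSource(context.getTablePath(), context.getTable());
    }

    TableSource createTableSource(ObjectPath path, CatalogTable table);
}
```

The default method is the backward-compatibility trick: the planner can always call the context variant, while old factories that only implement the two-argument method still work unchanged.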
Hi team,
Could someone add me as a contributor? My JIRA username is lirui.
Thanks!
--
Best regards!
Rui Li
rc/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L136
--
Best regards!
Rui Li
taging_jars.sh [1].
>
> Regards,
> Dian
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release
>
> On Sep 8, 2020 at 7:19 PM, Rui Li wrote:
>
> maven artifacts
>
>
>
--
Best regards!
Rui Li
Verified the issue was related to the building environment. The published
jar is good. Thanks Dian for the help.
On Tue, Sep 8, 2020 at 7:49 PM Rui Li wrote:
> Thanks Dian. The script looks all right to me. I'll double check with the
> user whether the issue is related to h
> On Tue, Sep 15, 2020 at 7:11 PM, Leonard Xu wrote:
> >
> >> Congrats, Yun!
> >>
> >> Best,
> >> Leonard
> >>> On Sep 15, 2020 at 19:01, Yangze Guo wrote:
> >>>
> >>> Congrats, Yun!
> >>
> >>
>
>
--
Best regards!
Rui Li
571,414 lines which is quite outstanding.
> > Godfrey has put essential effort into SQL optimization and helped a lot
> > during the blink merging.
> > Besides that, he is also quite active with community work especially in
> > Chinese mailing list.
> >
> > Please join me in congratulating Godfrey for becoming a Flink committer!
> >
> > Cheers,
> > Jark Wu
> >
>
>
--
Best regards!
Rui Li
> SQL Connectors" document was extracted separately, Hive is a little out
> of
> >> place.
> >> And Hive's code is also in "flink-connector-hive", which should be a
> >> connector.
> >> Hive also includes the concept of HiveCatalog. Is catalog a part of the
> >> connector? I think so.
> >>
> >> What do you think? If you don't object, I think we can move it.
> >>
> >> Best,
> >> Jingsong Lee
> >>
> >
>
>
--
Best regards!
Rui Li
eam here would be a step back in the wrong
> > direction.
> > >
> > > I think the solution to the existing user requirement of using
> > > DataStream sources and sinks with the Table API should be better
> > > interoperability between the two APIs, which is being tackled right now
> > > in FLIP-136 [2]. If FLIP-136 is not adequate for the use cases that
> > > we're trying to solve here, maybe we should think about FLIP-136 some
> > more.
> > >
> > > What do you think?
> > >
> > > Best,
> > > Aljoscha
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-95%3A+New+TableSource+and+TableSink+interfaces
> > > [2]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-136%3A++Improve+interoperability+between+DataStream+and+Table+API
> > >
> > >
> >
>
>
> --
> Best, Jingsong Lee
>
--
Best regards!
Rui Li
s. We have a large number of bash e2e tests that are just
> >>>> parameterized
> >>>>> differently. If we would start migrating them to Java, we could move
> a
> >>>>> larger proportion of tests over to the new Java framework, and tackle
> >>> the
> >>>>> more involved bash tests later (kerberized yarn, kubernetes, ...).
> >>>>>
> >>>>> Let me know what you think!
> >>>>>
> >>>>> Best,
> >>>>> Robert
> >>>>>
> >>>>>
> >>>>> PS: If you are wondering why I'm bringing this up now: I'm spending
> >>>> quite a
> >>>>> lot of time trying to figure out really hard to debug issues with our
> >>>> bash
> >>>>> testing infra.
> >>>>> Also, it is very difficult to introduce something generic for all
> tests
> >>>>> (such as a test-timeout, using docker as the preferred deployment
> >>> method
> >>>>> etc.) since the tests often don't share common tooling.
> >>>>> Speaking about tooling: there are a lot of utilities everywhere,
> >>>> sometimes
> >>>>> duplicated, with different features / stability etc.
> >>>>> I believe bash is not the right tool for a project this size (in
> terms
> >>> of
> >>>>> developers and lines of code)
> >>>>>
> >>>>
> >>>
> >
>
>
--
Best regards!
Rui Li
d makes it even
easier for users to migrate to Flink. More details are in the FLIP wiki
page [1]. Looking forward to your feedback!
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-152%3A+Hive+Query+Syntax+Compatibility
--
Best regards!
Rui Li
> > >>
> > >> The vote will be open for at least 72 hours. It is adopted by majority
> > >> approval, with at least 3 PMC affirmative votes.
> > >>
> > >> Thanks,
> > >> Dian & Robert
> > >>
> > >> [1a]
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348263
> > >> [1b] https://github.com/apache/flink/pull/14195
> > >> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.12.0-rc3/
> > >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > >> [4]
> > >>
> https://repository.apache.org/content/repositories/orgapacheflink-1404
> > >> [5] https://github.com/apache/flink/releases/tag/release-1.12.0-rc3
> > >>
> > >>
> >
>
--
Best regards!
Rui Li
be a temporary table or saved in default
> memory catalog.
>
>
> [1] http://apache-flink.147419.n8.nabble.com/calcite-td9059.html#a9118
> [2] http://apache-flink.147419.n8.nabble.com/hive-sql-flink-11-td9116.html
> [3]
>
> http://apache-flink-user-mailing-list-archive.2336050.
t to stream data from
> Kafka
> > to Hive, fully use hive's dialect including
> > query part. The kafka table could be a temporary table or saved in
> default
> > memory catalog.
> >
> >
> > [1] http://apache-flink.147419.n8.nabble.com/calcite-td9059.
tml
> >>>
> >>> Please check out the release blog post for an overview of the
> improvements
> >>> for this bugfix release:
> >>> https://flink.apache.org/news/2020/12/10/release-1.12.0.html
> >>>
> >>> The full release notes are available in Jira:
> >>>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348263
> >>>
> >>> We would like to thank all contributors of the Apache Flink community
> who
> >>> made this release possible!
> >>>
> >>> Regards,
> >>> Dian & Robert
> >>
> >
>
--
Best regards!
Rui Li
y, we have to make sure temp table/function is processed by
the TableFactory/FunctionDefinitionFactory associated with the catalog.
Looking forward to your suggestions.
--
Best regards!
Rui Li
>>
>>
>> Both options have to handle how to specify the HiveConf to use. In Hive
>> connector, user could specify both hiveConfDir and hadoopConfDir when
>> creating HiveCatalog. The hadoopConfDir may not be the same as the Hadoop
>> configuration in HadoopModule.
>>
>> Looking forward to your suggestions.
>>
>> --
>> Best regards!
>> Jie Wang
>>
>>
--
Best regards!
Rui Li
> > > provider
> > > could be placed in connector related code, so reflection is not needed
> and
> > > is more extendable.
> > >
> > >
> > >
> > > Both options have to handle how to specify the HiveConf to use. In Hive
> > > connector, user could specify both hiveConfDir and hadoopConfDir when
> > > creating HiveCatalog. The hadoopConfDir may not be the same as the Hadoop
> > > configuration in HadoopModule.
> > >
> > > Looking forward to your suggestions.
> > >
> > > --
> > > Best regards!
> > > Jie Wang
> > >
> > >
> >
>
--
Best regards!
Rui Li
hanges, please refer to FLIP-163[1].
> >
> > Look forward to your feedback.
> >
> >
> > Best,
> > Shengkai
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-163%3A+SQL+Client+Improvements
> >
>
>
> --
>
> *With kind regards
>
> Sebastian Liu 刘洋
> Institute of Computing Technology, Chinese Academy of Science
> Mobile\WeChat: +86—15201613655
> E-mail: liuyang0...@gmail.com
> QQ: 3239559*
>
--
Best regards!
Rui Li
e used
> to analyze the data. Users should use queries in the interactive mode.
>
> Best,
> Shengkai
>
> On Fri, Jan 29, 2021 at 3:18 PM, Rui Li wrote:
>
>> Thanks Shengkai for bringing up this discussion. I think it covers a lot
>> of useful features which will dramatically impr
e in the batch mode. Users can set
> this option true and the client will not process the next job until the
> current job finishes. The default value of this option is false, which
> means the client will execute the next job when the current job is
> submitted.
>
> Best,
> She
ary and starting the discussion in the mailing
> list.
>
> Here are my thoughts:
>
> 1) syntax to reorder modules
> I agree with Rui Li it would be quite useful if we can have some syntax to
> reorder modules.
> I slightly prefer `USE MODULES x, y, z` than `RELOAD MODULES x, y,
`Y`, `Z`, `X`.
> > > >
> > > > Regarding #3, I'm fine with mapping modules purely by name, and I
> think
> > > > Jark raised a good point on making the module name a simple
> identifier
> > > > instead of a string li
> be
> > a
> > > > HiveCatalog.
> > > > 2. Queries cannot involve tables/views from multiple catalogs.
> > > > I assume this is because the hive parser and analyzer don't support
> > > > referring to a name with "x.y.z" fashion? Sinc
QLs, we don't need to copy
> > Calcite parser code, and the framework and the code will be very simple.
> >
> > Regarding the "Go Beyond Hive" section, is that the scope of this FLIP ?
> > Could you list all the extensions and give some examples ?
> >
> > One
> >>
> >> > >>>>> I would like to add the module to the enabled list by default,
> the
> >> > >> main
> >> > >>>>> reasons are:
> >> > >>>>> 1) Reordering is an advanced requirement, adding modul
>
> >>> 4) LIST JAR
> >>>
> >>> This should be `SHOW JARS` according to other SQL commands such as
> `SHOW
> >>> CATALOGS`, `SHOW TABLES`, etc. [2].
> >>>
> >>> 5) EXPLAIN [ExplainDetail[, ExplainDetail]*]
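Put together, the renamed commands discussed in the points above would read as follows (the syntax was still under discussion in this thread, so the final form may differ):

```sql
-- Reorder/enable modules: hive resolves functions before core here.
USE MODULES hive, core;

-- Named after SHOW CATALOGS / SHOW TABLES rather than "LIST JAR".
SHOW JARS;
```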
Hi Jingsong, Godfrey and Jark,
I have updated the FLIP to incorporate your suggestions. Please let me know
if you have further comments. Thank you.
On Wed, Feb 3, 2021 at 4:51 PM Rui Li wrote:
> Thanks Godfrey & Jark for your comments!
>
> Since both of you mentioned the naming
"the sql client, we will maintain two parsers", I want to
> > >>> give
> > >>>>> more inputs:
> > >>>>> We want to introduce sql-gateway into the Flink project (see
> FLIP-24
> > &
> > >>>>> FLIP-91
play/FLINK/FLIP-152%3A+Hive+Query+Syntax+Compatibility
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-152-Hive-Query-Syntax-Compatibility-td46928.html
--
Best regards!
Rui Li
sync behavior?
On Mon, Feb 8, 2021 at 11:28 PM Jark Wu wrote:
> Ah, I just forgot the option name.
>
> I'm also fine with `table.dml-async`.
>
> What do you think @Rui Li @Shengkai Fang
> ?
>
> Best,
> Jark
>
> On Mon, 8 Feb 2021 at 23:06, Timo Walther wrote:
leEnvironment (including `executeSql`, `StatementSet`,
> `Table#executeInsert`, etc.).
> This also makes it easy for the SQL CLI to support this configuration by passing
> through to the TableEnv.
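Assuming the option ends up named `table.dml-async` as discussed here (the final name was still open at this point in the thread), using it from the SQL CLI could look like:

```sql
-- make executeSql / StatementSet wait for DML jobs to finish
SET 'table.dml-async' = 'false';
INSERT INTO sink_table SELECT * FROM source_table;
```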
>
> Best,
> Jark
>
> On Tue, 9 Feb 2021 at 10:07, Rui Li wrote:
>
>> Hi,
>>
>&
e `table.multi-dml-sync` `table.multi-stmt-sync`? Or other
> opinions?
> > >>
> > >> Regards,
> > >> Timo
> > >>
> > >> On 09.02.21 08:50, Shengkai Fang wrote:
> > >>> Hi, all.
> > >>>
> > >>> I think it may confuse users. The main problem is that we have no
> > means
> > >>> to detect conflicting configuration, e.g. users set the option to true
> > and
> > >>> use `TableResult#await` together.
> > >>>
> > >>> Best,
> > >>> Shengkai.
> > >>>
> > >>
> > >>
> > >
> >
> >
>
--
Best regards!
Rui Li
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-164%3A+Improve+Schema+Handling+in+Catalogs
> >>>
> >>>
> >>> The FLIP updates the class hierarchy to achieve the following goals:
> >>>
> >>> - make it visible whether a schema is resolved or unresolved and when
> >>> the resolution happens
> >>> - offer a unified API for FLIP-129, FLIP-136, and catalogs
> >>> - allow arbitrary data types and expressions in the schema for
> >>> watermark spec or columns
> >>> - have access to other catalogs for declaring a data type or
> >>> expression via CatalogManager
> >>> - a cleaned up TableSchema
> >>> - remain backwards compatible in the persisted properties and API
> >>>
> >>> Looking forward to your feedback.
> >>>
> >>> Thanks,
> >>> Timo
> >>
> >>
> >
>
>
--
Best regards!
Rui Li
hose
> properties in again the schema can be resolved in a later stage.
>
> Regards,
> Timo
>
> On 09.02.21 14:07, Rui Li wrote:
> > Hi Timo,
> >
> > Thanks for the FLIP. It looks good to me overall. I have two questions.
> > 1. When should we use a resolved
vid Heise <
> > >> ar...@apache.org>
> > >>>>>>>> wrote:
> > >>>>>>>>>>>>> Congrats! Well deserved.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> On Wed, Feb 10, 2021 at 1:54 PM Yun Gao
> > >>>>>>>> > >>>>>>>>>>>
> > >>>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Congratulations Roman!
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Best,
> > >>>>>>>>>>>>>> Yun
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> --Original Mail --
> > >>>>>>>>>>>>>> Sender:Till Rohrmann
> > >>>>>>>>>>>>>> Send Date:Wed Feb 10 20:53:21 2021
> > >>>>>>>>>>>>>> Recipients:dev
> > >>>>>>>>>>>>>> CC:Khachatryan Roman , Roman
> > >>>>>>>>>> Khachatryan
> > >>>>>>>>>>>> <
> > >>>>>>>>>>>>>> ro...@apache.org>
> > >>>>>>>>>>>>>> Subject:Re: [ANNOUNCE] Welcome Roman Khachatryan as a new
> > >> Apache
> > >>>>>>> Flink
> > >>>>>>>>>>>>>> Committer
> > >>>>>>>>>>>>>> Congratulations Roman :-)
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Cheers,
> > >>>>>>>>>>>>>> Till
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> On Wed, Feb 10, 2021 at 1:01 PM Konstantin Knauf <
> > >>>>>>>> kna...@apache.org>
> > >>>>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> Congratulations Roman!
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> On Wed, Feb 10, 2021 at 11:29 AM Piotr Nowojski <
> > >>>>>>>>>>>> pnowoj...@apache.org>
> > >>>>>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> Hi everyone,
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> I'm very happy to announce that Roman Khachatryan has
> > >>>> accepted
> > >>>>>>>> the
> > >>>>>>>>>>>>>>>> invitation to
> > >>>>>>>>>>>>>>>> become a Flink committer.
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> Roman has been recently active in the runtime parts of
> > >> the
> > >>>>>>> Flink.
> > >>>>>>>>>>>> He is
> > >>>>>>>>>>>>>>> one
> > >>>>>>>>>>>>>>>> of the main developers behind FLIP-76 Unaligned
> > >>> Checkpoints,
> > >>>>>>>>>>>> FLIP-151
> > >>>>>>>>>>>>>>>> Incremental Heap/FS State Backend [3] and providing a
> > >>> faster
> > >>>>>>>>>>>>>>> checkpointing
> > >>>>>>>>>>>>>>>> mechanism in FLIP-158.
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> Please join me in congratulating Roman for becoming a
> > >> Flink
> > >>>>>>>>>>>> committer!
> > >>>>>>>>>>>>>>>> Best,
> > >>>>>>>>>>>>>>>> Piotrek
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> [1]
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>
> > >>>>>>>
> > >>>>>
> > >>>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-76%3A+Unaligned+Checkpoints
> > >>>>>>>>>>>>>>>> [2]
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>
> > >>>>>>>
> > >>>>>
> > >>>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-151%3A+Incremental+snapshots+for+heap-based+state+backend
> > >>>>>>>>>>>>>>>> [3]
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>
> > >>>>>>>
> > >>>>>
> > >>>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-158%3A+Generalized+incremental+checkpoints
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> --
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> Konstantin Knauf
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> https://twitter.com/snntrable
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> https://github.com/knaufk
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>
> > >>>>>>>
> > >>>>>
> > >>>>>
> > >>>>
> > >>>
> > >>
> >
> >
>
--
Best regards!
Rui Li
>
> > > +1
> > >
> > > Best,
> > > Kurt
> > >
> > >
> > > On Sun, Feb 7, 2021 at 7:24 PM Rui Li wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > I think we have reached some con
t;>
> >> The vote will be open for 72 hours, until Feb. 25 2021 12:00 AM UTC+8,
> >> unless there's an objection.
> >>
> >> Best,
> >> Shengkai
> >>
> >
>
>
--
Best regards!
Rui Li
s
> > > > > UDAF support (FLIP-137), Python UDTF support (FLINK-14500),
> row-based
> > > > > Operations support in Python Table API (FLINK-20479), etc. He is
> also
> > > > > actively helping on answering questions in the user mailing list,
> > > helping
> > > > > on the release check, monitoring the status of the azure pipeline,
> > etc.
> > > > >
> > > > > Please join me in congratulating Wei Zhong and Xingbo Huang for
> > > becoming
> > > > > Flink committers!
> > > > >
> > > > > Regards,
> > > > > Dian
> > > >
> > >
> >
>
--
Best regards!
Rui Li
ard Xu wrote:
> >
> > > +1 (non-binding)
> > >
> > > The updated FLIP looks well.
> > >
> > > Best,
> > > Leonard
> > >
> > >
> > > > On Mar 2, 2021, at 22:27, Shengkai Fang wrote:
> > > >
> > > > already updated the FLIP[2]. It seems the vote has lasted for a long
> > > time.
> > >
> > >
> >
>
--
Best regards!
Rui Li
>>>
> >>>> flink code:
> >>>> ```
> >>>> StreamExecutionEnvironment bsEnv =
> >>>> StreamExecutionEnvironment.getExecutionEnvironment();
> >>>> bsEnv.enableCheckpointing(1);
> >>>> StreamTableEnvironment tEnv = StreamTableEnvironment.create(bsEnv);
> >>>> DataStream dataStream = bsEnv.addSource(new MySource());
> >>>>
> >>>> // create the Hive catalog
> >>>> String name = "myhive";
> >>>> String defaultDatabase = "default";
> >>>> String hiveConfDir = "/Users/user/work/hive/conf"; // a local path
> >>>> String version = "3.1.2";
> >>>>
> >>>> HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir,
> >>>> version);
> >>>> tEnv.registerCatalog("myhive", hive);
> >>>> tEnv.useCatalog("myhive");
> >>>> tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
> >>>> tEnv.useDatabase("db1");
> >>>>
> >>>> tEnv.createTemporaryView("users", dataStream);
> >>>>
> >>>> String hiveSql = "CREATE external TABLE fs_table (\n" +
> >>>> " user_id STRING,\n" +
> >>>> " order_amount DOUBLE" +
> >>>> ") partitioned by (dt string,h string,m string) " +
> >>>> "stored as ORC " +
> >>>> "TBLPROPERTIES (\n" +
> >>>> " 'partition.time-extractor.timestamp-pattern'='$dt $h:$m:00',\n" +
> >>>> " 'sink.partition-commit.delay'='0s',\n" +
> >>>> " 'sink.partition-commit.trigger'='partition-time',\n" +
> >>>> " 'sink.partition-commit.policy.kind'='metastore'" +
> >>>> ")";
> >>>> tEnv.executeSql(hiveSql);
> >>>>
> >>>> String insertSql = "SELECT * FROM users";
> >>>> tEnv.executeSql(insertSql);
> >>>> ```
> >>>>
> >>>> And this is my flink configuration:
> >>>> ```
> >>>> jobmanager.memory.process.size: 1600m
> >>>> taskmanager.memory.process.size: 4096m
> >>>> taskmanager.numberOfTaskSlots: 1
> >>>> parallelism.default: 1
> >>>> ```
> >>>>
> >>>> And the exception is: java.util.concurrent.CompletionException:
> >>>> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException,
> >>>> and the request to ResourceManager for a new slot failed.
> >>>>
> >>>> According to the exception message, the resources are insufficient,
> >>>> but the Hadoop cluster has enough resources: memory is 300+ GB, there
> >>>> are 72 cores, and the usage rate is only about 30%.
> >>>>
> >>>> I have tried to increase the TaskManager slots in the flink run command
> >>>> with `flink run -ys`, but it is not effective.
> >>>>
> >>>> Here is the environment:
> >>>> flink version: 1.12.0
> >>>> java: 1.8
> >>>>
> >>>> Please check what the problem is, I really appreciate it. Thanks.
>
>
--
Best regards!
Rui Li
Thanks everyone! Looking forward to making more contributions~
On Thu, Apr 22, 2021 at 6:23 PM Shengkai Fang wrote:
> Congratulations Rui! =-=
>
> Best,
> Shengkai
>
> Leonard Xu wrote on Thu, Apr 22, 2021 at 3:09 PM:
>
> > Congratulations Rui Li !
> >
> > Best,
&
flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1420/
> [5] https://github.com/apache/flink/tree/release-1.13.0-rc2
> [6] https://github.com/apache/flink-web/pull/436
>
--
Best regards!
Rui Li
an even be updated after 1.13 is released.
> So we don't need to cancel the RC for this.
>
> Best,
> Jark
>
> [1]:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/hive/overview/
>
> On Sun, 25 Apr 2021 at 13:47, Rui Li wrote:
>
convert the file type to textfile type to run. But ORC throws this exception.
>
--
Best regards!
Rui Li
>> >> distributed, high-performing, always-available, and accurate data
>> streaming
>> >> applications.
>> >>
>> >> The release is available for download at:
>> >> https://flink.apache.org/downloads.html
>> >>
>> >> Please check out the release blog post for an overview of the
>> >> improvements for this bugfix release:
>> >> https://flink.apache.org/news/2021/05/03/release-1.13.0.html
>> >>
>> >> The full release notes are available in Jira:
>> >>
>> >>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349287
>> >>
>> >> We would like to thank all contributors of the Apache Flink community
>> who
>> >> made this release possible!
>> >>
>> >> Regards,
>> >> Guowei & Dawid
>> >>
>> >>
>> >
>> >
>>
>
--
Best regards!
Rui Li
> > > failure
> > > > > > > > > > > > > for
> > > > > > > > > > > > > > > the
> > > > > > > > > > > >
> > >>>>>>
> > >>>>>> Xintong Song
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>> On Wed, Jul 7, 2021 at 9:31 AM Qingsheng Ren
> > >>>> wrote:
> > >>>>>>
> > >>>>>>> Congratulations Guowei!
> > >>>>>>>
> > >>>>>>> --
> > >>>>>>> Best Regards,
> > >>>>>>>
> > >>>>>>> Qingsheng Ren
> > >>>>>>> Email: renqs...@gmail.com
> > >>>>>>> On Jul 7, 2021 at 09:30 +0800, Leonard Xu wrote:
> > >>>>>>>> Congratulations! Guowei Ma
> > >>>>>>>>
> > >>>>>>>> Best,
> > >>>>>>>> Leonard
> > >>>>>>>>
> > >>>>>>>>> On Jul 6, 2021, at 21:56, Kurt Young wrote:
> > >>>>>>>>>
> > >>>>>>>>> Hi all!
> > >>>>>>>>>
> > >>>>>>>>> I'm very happy to announce that Guowei Ma has joined the
> > >> Flink
> > >>>> PMC!
> > >>>>>>>>>
> > >>>>>>>>> Congratulations and welcome Guowei!
> > >>>>>>>>>
> > >>>>>>>>> Best,
> > >>>>>>>>> Kurt
> > >>>>>>>>
> > >>>>>>>
> > >>>>>>
> > >>>>>
> > >>>>
> > >>>
> > >>
> > >
> >
> >
>
> --
>
> Best,
> Benchao Li
>
--
Best regards!
Rui Li
; >>> Yang has been a very active contributor for more than two years,
> > mainly
> > > >>> focusing on Flink's deployment components. He's a main contributor
> > and
> > > >>> maintainer of Flink's native Kubernetes deployment and native
> > > Kubernetes
> > > >>> HA. He's also very active on the mailing lists, participating in
> > > >>> discussions and helping with user questions.
> > > >>>
> > > >>> Please join me in congratulating Yang Wang for becoming a Flink
> > > >> committer!
> > > >>>
> > > >>> Thank you~
> > > >>>
> > > >>> Xintong Song
> > > >>>
> > > >>
> > > >>
> > >
> > >
> >
>
>
> --
>
> Best,
> Benchao Li
>
--
Best regards!
Rui Li
nity activities such as helping manage releases, discussing
> > questions on dev@list, supporting users and giving talks at conferences.
> >
> > Please join me in congratulating Yuan for becoming a Flink committer!
> >
> > Cheers,
> > Yu
> >
>
--
Best regards!
Rui Li
).
> > > > He is also the maintainer of flink-cdc-connectors[1] project which
> > helps
> > > a
> > > > lot for users building a real-time data warehouse and data lake.
> > > >
> > > > Please join me in congratulating Leonard for becoming a Flink
> > committer!
> > > >
> > > > Cheers,
> > > > Jark Wu
> > > >
> > > > [1]: https://github.com/ververica/flink-cdc-connectors
> > >
> >
>
--
Best regards!
Rui Li
> > > > in miscellaneous contributions, including PR reviews, document
> > > enhancement,
> > > > mailing list services and meetup/FF talks.
> > > >
> > > > Please join me in congratulating Yangze Guo for becoming a Flink
> > > committer!
> > > >
> > > > Thank you~
> > > >
> > > > Xintong Song
> > >
> >
>
--
Best regards!
Rui Li
20 at 14:32, Danny Chan wrote:
> >
> > > Congratulations Xintong !
> > >
> > > Best,
> > > Danny Chan
> > > On Jun 5, 2020 at 2:20 PM +0800, dev@flink.apache.org wrote:
> > > >
> > > > Congratulations Xintong
> > >
> >
>
--
Best regards!
Rui Li
> >>>>>>
> >>>>>>>>>>> Best,
> >>>>>>>>>>> Wenlong
> >>>>>>>>>>>
> >>>>>>>>>>> On Wed, 15 Apr 2020 at 22:46, Jingsong Li <
> >> jingsongl...@gmail.com>
> >
> > Sender:Danny Chan
> > Date:2020/06/10 20:01:01
> > Recipient:
> > Theme:Re: [ANNOUNCE] New Flink Committer: Benchao Li
> >
> > Congrats Benchao!
> >
> > Best,
> > Danny Chan
> > On Jun 10, 2020 at 11:57 AM +0800, dev@flink.apache.org wrote:
> > >
> > > Congrats Benchao!
> >
> >
>
--
Best regards!
Rui Li
> > > >>>> Congratulations Yu!
> > > >>>>
> > > >>>> Best,
> > > >>>> Haibo
> > > >>>>
> > > >>>>
> > > >>>> At 2020-06-17 09:15:02, "jincheng sun"
> > > >> wrote:
> > > >>>>> Hi all,
> > > >>>>>
> > > >>>>> On behalf of the Flink PMC, I'm happy to announce that Yu Li is
> now
> > > >>>>> part of the Apache Flink Project Management Committee (PMC).
> > > >>>>>
> > > >>>>> Yu Li has been very active on Flink's Statebackend component,
> > working
> > > >> on
> > > >>>>> various improvements, for example the RocksDB memory management
> for
> > > >> 1.10.
> > > >>>>> and keeps checking and voting for our releases, and also has
> > > >> successfully
> > > >>>>> produced two releases (1.10.0 & 1.10.1) as RM.
> > > >>>>>
> > > >>>>> Congratulations & Welcome Yu Li!
> > > >>>>>
> > > >>>>> Best,
> > > >>>>> Jincheng (on behalf of the Flink PMC)
> > > >>>>
> > > >>>>
> > > >>
> > > >>
> > > >>
> > > >
> > > > --
> > > > Best Regards
> > > >
> > > > Jeff Zhang
> > >
> > >
> >
>
--
Best regards!
Rui Li
ke hive sql testing [1] to
> sql-client oriented. In this way, the testing can cover more ground, both
> horizontally and vertically, and it is easy to migrate tests from other
> systems too. And I think Flink's DDLs are strong enough to support pure
> SQL testing.
>
> What do you think?
>
> [1]https://github.com/apache/hive/tree/master/ql/src/test/queries
>
> Best,
> Jingsong
>
--
Best regards!
Rui Li
> > > > Many of you may know Piotr from the work he does on the data
> processing
> > > > > runtime and the network stack, from the mailing list, or the
> release
> > > > > manager work.
> > > > >
> > > > > Congrats, Piotr!
> > > > >
> > > > > Best,
> > > > > Stephan
> > > > >
> > > >
> > >
>
--
Best regards!
Rui Li
nk all contributors of the Apache Flink community who made
> this release possible!
>
> Cheers,
> Piotr & Zhijiang
>
--
Best regards!
Rui Li
ist/dev/flink/flink-1.11.1-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1378/
> [5]
> https://github.com/apache/flink/commit/7eb514a59f6fd117c3535ec4bebc40a375f30b63
> [6] https://github.com/apache/flink-web/pull/359
--
Best regards!
Rui Li
INK-15794,
> and FLINK-15794 has been fixed in master and 1.11.1, just waiting to be fixed
> in 1.10.2.
> - the web PR looks good
>
> For FLINK-18588, I also agree with Timo to put it to 1.11.2 because it's
> a `Major` bug rather than `Blocker`.
>
> Best,
> Leonard
--
Best regards!
Rui Li
+1 (non-binding)
- Built from source
- Verified hive connector tests for all hive versions
- Played some simple cases with hive connector and everything seems fine
On Sat, Jul 18, 2020 at 12:24 AM Rui Li wrote:
> OK, I agree FLINK-18588 can wait for the next release.
>
> On Fri, Jul
> > > >>
>> > > > >> Apache Flink® is an open-source stream processing framework for
>> > > distributed, high-performing, always-available, and accurate data
>> > streaming
>> > > applications.
>> > > > >>
>> > > > >> The release is available for download at:
>> > > > >> https://flink.apache.org/downloads.html
>> > > > >>
>> > > > >> Please check out the release blog post for an overview of the
>> > > improvements for this bugfix release:
>> > > > >> https://flink.apache.org/news/2020/07/21/release-1.11.1.html
>> > > > >>
>> > > > >> The full release notes are available in Jira:
>> > > > >>
>> > >
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348323
>> > > > >>
>> > > > >> We would like to thank all contributors of the Apache Flink
>> > community
>> > > who made this release possible!
>> > > > >>
>> > > > >> Regards,
>> > > > >> Dian
>> > > > >
>> > > >
>> > >
>> >
>>
>>
>> --
>>
>> Konstantin Knauf
>>
>> https://twitter.com/snntrable
>>
>> https://github.com/knaufk
>>
>>
>>
>
> --
> Best, Jingsong Lee
>
--
Best regards!
Rui Li
> > > Leonard Xu wrote on Wed, Aug 12, 2020 at 4:01 PM:
> > > >
> > > >> Congratulations! David
> > > >>
> > > >> Best
> > > >> Leonard
> > > >>> On Aug 12, 2020, at 15:59, Till Rohrmann wrote:
> > > >>>
> > > >>> Congratulations, David!
> > > >>
> > > >>
> > > >
> > > > --
> > > >
> > > > Best,
> > > > Benchao Li
> > >
> > >
> >
> >
>
--
Best regards!
Rui Li
KEY would be join key.
> >>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> 2) Isn't it the time attribute in the ORDER BY clause of the
> VIEW
> >>>>>>>>> definition that defines
> >>>>>>>>>> whether an event-time or processing-time temporal table join is
> >>>>>> used?
> >>>>>>>>>
> >>>>>>>>> I think event-time or processing-time temporal table join
> depends on
> >>>>>>> the fact
> >>>>>>>>> table’s time attribute in the temporal join, rather than the temporal
> >>>>>> table
> >>>>>>>>> side, the event-time or processing time in temporal table is just
> >>>>>> used
> >>>>>>> to
> >>>>>>>>> split the validity period of versioned snapshot of temporal
> table.
> >>>>>> The
> >>>>>>>>> processing time attribute is not necessary for temporal table
> >>>>>> without
> >>>>>>>>> version, only the primary key is required, the following VIEW is
> also
> >>>>>>>> valid
> >>>>>>>>> for temporal table without version.
> >>>>>>>>> CREATE VIEW latest_rates AS
> >>>>>>>>> SELECT currency, LAST_VALUE(rate) -- only keep the
> latest
> >>>>>>>>> version
> >>>>>>>>> FROM rates
> >>>>>>>>> GROUP BY currency; -- inferred primary
> key
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> 3) A "Versioned Temporal Table DDL on source" is always
> versioned
> >>>>>> on
> >>>>>>>>>> operation_time regardless of the lookup table attribute
> (event-time
> >>>>>>> or
> >>>>>>>>>> processing time attribute), correct?
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Yes, the semantics of `FOR SYSTEM_TIME AS OF o.time` is using the
> >>>>>>> o.time
> >>>>>>>>> value to lookup the version of the temporal table.
> >>>>>>>>> If the fact table has a processing time attribute, it means we only
> >>>>>> look up
> >>>>>>>> the
> >>>>>>>>> latest version of the temporal table, and we can do some optimization
> in
> >>>>>>>>> the implementation, like only keeping the latest version.
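Putting the pieces above together, a temporal join against the `latest_rates` view could be sketched as follows (the `orders` fact table and its `order_time` attribute are assumed for illustration, not taken from the thread):

```sql
-- the fact table's time attribute (o.order_time) decides which
-- version of the temporal table is looked up
SELECT o.order_id, o.price * r.rate AS converted_price
FROM orders AS o
JOIN latest_rates FOR SYSTEM_TIME AS OF o.order_time AS r
ON o.currency = r.currency;
```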
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Best
> >>>>>>>>> Leonard
> >>>>>>>>
> >>>>>>>
> >>>>>>
> >>>
> >>
> >
>
>
--
Best regards!
Rui Li
t;>>>>>>>>> and the semantics make sense to me.
> >>>>>>>>>>
> >>>>>>>>>> +1
> >>>>>>>>>>
> >>>>>>>>>> Seth
> >>>>>>>>>>
> >>>>>>>>>> On Wed, Jul 29, 2020 at 11:36 AM Leonard Xu <xbjt...@gmail.com>
> >> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> Hi, Konstantin
> >>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> 1) A "Versioned Temporal Table DDL on source" can only be
> >> joined
> >>>>>>>> on
> >>>>>>>>>> the
> >>>>>>>>>>>> PRIMARY KEY attribute, correct?
> >>>>>>>>>>> Yes, the PRIMARY KEY would be join key.
> >>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> 2) Isn't it the time attribute in the ORDER BY clause of the
> >> VIEW
> >>>>>>>>>>> definition that defines
> >>>>>>>>>>>> whether an event-time or processing-time temporal table join is
> >>>>>>>> used?
> >>>>>>>>>>>
> >>>>>>>>>>> I think event-time or processing-time temporal table join
> >> depends on
> >>>>>>>>> the fact
> >>>>>>>>>>> table’s time attribute in the temporal join, rather than the
> temporal
> >>>>>>>> table
> >>>>>>>>>>> side, the event-time or processing time in temporal table is
> just
> >>>>>>>> used
> >>>>>>>>> to
> >>>>>>>>>>> split the validity period of versioned snapshot of temporal
> >> table.
> >>>>>>>> The
> >>>>>>>>>>> processing time attribute is not necessary for temporal table
> >>>>>>>> without
> >>>>>>>>>>> version, only the primary key is required, the following VIEW
> is
> >> also
> >>>>>>>>>> valid
> >>>>>>>>>>> for temporal table without version.
> >>>>>>>>>>> CREATE VIEW latest_rates AS
> >>>>>>>>>>> SELECT currency, LAST_VALUE(rate) -- only keep the
> >> latest
> >>>>>>>>>>> version
> >>>>>>>>>>> FROM rates
> >>>>>>>>>>> GROUP BY currency; -- inferred
> primary
> >> key
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> 3) A "Versioned Temporal Table DDL on source" is always
> >> versioned
> >>>>>>>> on
> >>>>>>>>>>>> operation_time regardless of the lookup table attribute
> >> (event-time
> >>>>>>>>> or
> >>>>>>>>>>>> processing time attribute), correct?
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> Yes, the semantics of `FOR SYSTEM_TIME AS OF o.time` is using
> the
> >>>>>>>>> o.time
> >>>>>>>>>>> value to lookup the version of the temporal table.
> >>>>>>>>>>> If the fact table has a processing time attribute, it means we only
> >>>>>>>> look up
> >>>>>>>>>> the
> >>>>>>>>>>> latest version of the temporal table, and we can do some
> optimization
> >> in
> >>>>>>>>>>> the implementation, like only keeping the latest version.
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> Best
> >>>>>>>>>>> Leonard
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>
> >>>>
> >>>
> >>
> >>
> >
> > --
> > Best regards!
> > Rui Li
>
>
--
Best regards!
Rui Li
>>>> On Aug 19, 2020, at 04:46, Seth Wiesman wrote:
>>>>
>>>> +1 to the updated design.
>>>>
>>>> I agree with Fabian that the naming of "temporal table without version"
>>>> is
>>>> a bit confusing but the actual seman
ence/display/FLINK/FLIP-132+Temporal+Table+DDL+and+Temporal+Table+Join
>>>
>>> [2]
>>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-132-Temporal-Table-DDL-td43483.html
>>>
>>>
>>>
--
Best regards!
Rui Li
t;
> >
> >
> > On Mon, Jul 27, 2020 at 8:42 PM Robert Metzger
> wrote:
> >
> >> Hi team,
> >>
> >> We would like to use this thread as a permanent thread for
> >> regularly syncing on stale blockers (need to have somebody assigned
> within
> >> a week and progress, or a good plan) and build instabilities (need to
> check
> >> if its a blocker).
> >>
> >> Recent test-instabilities:
> >>
> >> - https://issues.apache.org/jira/browse/FLINK-17159 (ES6 test)
> >> - https://issues.apache.org/jira/browse/FLINK-16768 (s3 test
> unstable)
> >> - https://issues.apache.org/jira/browse/FLINK-18374 (s3 test
> unstable)
> >> - https://issues.apache.org/jira/browse/FLINK-17949
> >> (KafkaShuffleITCase)
> >> - https://issues.apache.org/jira/browse/FLINK-18634 (Kafka
> >> transactions)
> >>
> >>
> >> It would be nice if the committers taking care of these components could
> >> look into the test failures.
> >> If nothing happens, we'll personally reach out to the people we believe
> >> could look into the ticket.
> >>
> >> Best,
> >> Dian & Robert
> >>
>
>
--
Best regards!
Rui Li
blocker.
>
> Regards,
> Dian
>
> > On Aug 25, 2020, at 2:58 PM, Rui Li wrote:
> >
> > Hi Dian,
> >
> > FLINK-18682 has been fixed. Is there any other blocker in the hive
> > connector?
> >
> > On Tue, Aug 25, 2020 at 2:41 PM Dian Fu dian0511...@gmail.com&
a possible alternative is to add an *isTemporary* field
to TableSourceFactory.Context & TableSinkFactory.Context, so that
HiveTableFactory knows how to handle such tables. What do you think?
[1] https://issues.apache.org/jira/browse/FLINK-18999
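The problematic scenario can be sketched as follows (table and connector names are made up for illustration): a temporary table created while a HiveCatalog is the current catalog, which the factory must not treat as a Hive table:

```sql
USE CATALOG myhive;
-- the temporary table lives in the session, not in the Hive metastore;
-- the factory needs the isTemporary flag to tell it apart
CREATE TEMPORARY TABLE kafka_source (
  user_id STRING,
  order_amount DOUBLE
) WITH (
  'connector' = 'kafka'
);
```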
--
Best regards!
Rui Li
;> >
>> > Best,
>> > Jingsong
>> >
>> > On Tue, Aug 25, 2020 at 3:55 PM Dawid Wysakowicz <
>> dwysakow...@apache.org>
>> > wrote:
>> >
>> > > Hi Rui,
>> > >
>> > > My take is that temporary tab
Hi everyone,
According to the feedback, it seems adding `isTemporary` to factory context
is the preferred way to fix the issue. I'll go ahead and make the change if
no one objects.
On Tue, Aug 25, 2020 at 5:36 PM Rui Li wrote:
> Hi,
>
> Thanks everyone for your inputs.
>
>
>>>
>>> Please join me in congratulating Dian Fu for becoming a Flink PMC Member!
>>>
>>> Best,
>>> Jincheng(on behalf of the Flink PMC)
>>>
>>
--
Best regards!
Rui Li
Rui Li created FLINK-13068:
--
Summary: HiveTableSink should implement PartitionableTableSink
Key: FLINK-13068
URL: https://issues.apache.org/jira/browse/FLINK-13068
Project: Flink
Issue Type: Sub
Rui Li created FLINK-13069:
--
Summary: HiveTableSink should implement OverwritableTableSink
Key: FLINK-13069
URL: https://issues.apache.org/jira/browse/FLINK-13069
Project: Flink
Issue Type: Sub
Rui Li created FLINK-13090:
--
Summary: Test Hive connector with hive runner
Key: FLINK-13090
URL: https://issues.apache.org/jira/browse/FLINK-13090
Project: Flink
Issue Type: Sub-task
Rui Li created FLINK-13110:
--
Summary: Fix Hive-1.2.1 build
Key: FLINK-13110
URL: https://issues.apache.org/jira/browse/FLINK-13110
Project: Flink
Issue Type: Sub-task
Reporter: Rui Li