Re: Question about runtime filter

2020-03-01 Thread JingsongLee
Hi, does the runtime filter's probe side wait for the runtime filter to be built? Can you check the start times of the build side and the probe side? Best, Jingsong Lee -- From:faaron zheng Send Time:2020-03-02 (Monday) 14:55 To:user Subject:Question abou

Re: java.time.LocalDateTime in POJO type

2020-03-02 Thread JingsongLee
Hi, I've introduced LocalDateTime type information to flink-core. But for compatibility reasons, I reverted the modification in TypeExtractor. It seems that at present you can only use Types.LOCAL_DATE_TIME explicitly. [1] http://jira.apache.org/jira/browse/FLINK-12850 Best, Jingsong Lee -
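
A minimal sketch of using Types.LOCAL_DATE_TIME explicitly, assuming a Flink version where it is available in org.apache.flink.api.common.typeinfo.Types per [1] (the input data and job structure are invented for illustration):

  import java.time.LocalDateTime
  import org.apache.flink.api.common.typeinfo.Types
  import org.apache.flink.streaming.api.scala._

  object LocalDateTimeTypeInfoSketch {
    def main(args: Array[String]): Unit = {
      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env
        .fromElements("2020-03-02T10:15:30", "2020-03-02T11:00:00")
        // Pass Types.LOCAL_DATE_TIME explicitly instead of relying on the TypeExtractor.
        .map((s: String) => LocalDateTime.parse(s))(Types.LOCAL_DATE_TIME)
        .print()
      env.execute("LocalDateTime type info sketch")
    }
  }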

Re: SHOW CREATE TABLE in Flink SQL

2020-03-02 Thread JingsongLee
Hi, there was some previous discussion in [1], FYI. [1] https://issues.apache.org/jira/browse/FLINK-10230 Best, Jingsong Lee -- From:Jark Wu Send Time:2020-03-02 (Monday) 22:42 To:Jeff Zhang Cc:"Gyula Fóra" ; user Subject:Re: SHOW CREATE TABLE

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread JingsongLee
Congrats Hequn! Best, Jingsong Lee -- From:Biao Liu Send Time:2019-08-07 (Wednesday) 12:05 To:Zhu Zhu Cc:Zili Chen ; Jeff Zhang ; Paul Lam ; jincheng sun ; dev ; user Subject:Re: [ANNOUNCE] Hequn becomes a Flink committer Congrats Heq

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread JingsongLee
Congratulations~~~ Thanks Gordon and everyone~ Best, Jingsong Lee -- From:Oytun Tez Send Time:2019-08-22 (Thursday) 14:06 To:Tzu-Li (Gordon) Tai Cc:dev ; user ; announce Subject:Re: [ANNOUNCE] Apache Flink 1.9.0 released Congratulati

Re: question

2019-09-03 Thread JingsongLee
It should be schema.field("msg", Types.ROW(...)). And you should select msg.f1 from the table. Best, Jingsong Lee Sent from Alibaba Mail for iPhone --Original Mail -- From:圣眼之翼 <2463...@qq.com> Date:2019-09-03 09:22:41 Recipient:user Subject:question How do you do: My problem is flink t
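
A minimal sketch of that suggestion, assuming the 1.x connector descriptor API (the table name my_source and the nested field types are invented for illustration):

  import org.apache.flink.api.common.typeinfo.Types
  import org.apache.flink.table.descriptors.Schema

  object NestedRowSchemaSketch {
    // "msg" declared as a ROW with two nested fields; by default they are named f0 and f1.
    val schema: Schema = new Schema()
      .field("msg", Types.ROW(Types.STRING, Types.LONG))
    // Attach `schema` to the connector descriptor when registering the table, then query:
    //   tableEnv.sqlQuery("SELECT msg.f1 FROM my_source")
  }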

Re: Streaming write to Hive

2019-09-05 Thread JingsongLee
Hi luoqi: With partition support [1], I want to introduce a FileFormatSink to cover streaming exactly-once and partition-related logic for the Flink file connectors and the Hive connector. You can take a look. [1] https://docs.google.com/document/d/15R3vZ1R_pAHcvJkRx_CWleXgl08WL3k_ZpnWSdzP7GY/edit?usp=

Re: count distinct not supported in batch?

2019-09-19 Thread JingsongLee
Hi fanbin: That is a "distinct aggregate for a group window" in batch SQL mode. Currently, neither the legacy planner nor the blink planner supports it, and there is no clear plan yet. But if the demand is strong, we can consider supporting it. Best, Jingsong Lee
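
For clarity, the following sketch shows the kind of batch query the reply refers to, a DISTINCT aggregate inside a group window (table and column names are invented); per the reply, neither planner accepted this at the time:

  import org.apache.flink.table.api.{Table, TableEnvironment}

  object DistinctGroupWindowSketch {
    // Illustration only: the COUNT(DISTINCT ...) inside a group window is the unsupported pattern.
    def distinctUvPerHour(tableEnv: TableEnvironment): Table =
      tableEnv.sqlQuery(
        """
          |SELECT TUMBLE_START(ts, INTERVAL '1' HOUR) AS window_start,
          |       COUNT(DISTINCT user_id) AS uv
          |FROM clicks
          |GROUP BY TUMBLE(ts, INTERVAL '1' HOUR)
        """.stripMargin)
  }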

Re: Is it possible to handle late data when using table API?

2019-04-16 Thread JingsongLee
Hi @Lasse Nedergaard, the Table API doesn't have an allowedLateness API. But you can set rowtime.watermarks.delay on the source to slow down the watermark clock. -- From:Lasse Nedergaard Send Time:2019-04-16 (Tuesday) 16:20 To:user Subject:Is it possible to

Re: Is it possible to handle late data when using table API?

2019-04-16 Thread JingsongLee
To set the rowtime watermark delay of the source you can:

  val desc = Schema()
    .field("a", Types.INT)
    .field("e", Types.LONG)
    .field("f", Types.STRING)
    .field("t", Types.SQL_TIMESTAMP)
    .rowtime(Rowtime().timestampsFromField("t").watermarksPeriodicBounded(1000))

Use watermarksPeriodicBounded a

Re: PatternFlatSelectAdapter - Serialization issue after 1.8 upgrade

2019-04-19 Thread JingsongLee
, JingsongLee -- From:Oytun Tez Send Time:2019-04-19 (Friday) 03:38 To:user Subject:PatternFlatSelectAdapter - Serialization issue after 1.8 upgrade Hi all, We are just migrating from 1.6 to 1.8. I encountered a serialization error

Re: Generic return type on a user-defined scalar function

2019-05-20 Thread JingsongLee
it to this way to support generic return type:

  val functionCallCode =
    s"""
      |${parameters.map(_.code).mkString("\n")}
      |$resultTypeTerm $resultTerm = ($resultTypeTerm) $functionReference.eval(
      |  ${parameters.

Re: Flink SQL: Execute DELETE queries

2019-05-28 Thread JingsongLee
Or you can build your own sink, in which you can delete rows from the DB table. Best, JingsongLee -- From:Papadopoulos, Konstantinos Send Time:2019-05-28 (Tuesday) 22:54 To:Vasyl Bervetskyi Cc:user@flink.apache.org Subject:RE: Flink SQL:
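
A minimal sketch of such a sink, not the thread's actual code (the JDBC URL, table name, and key column are assumptions; batching and error handling are omitted):

  import java.sql.{Connection, DriverManager, PreparedStatement}

  import org.apache.flink.configuration.Configuration
  import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
  import org.apache.flink.types.Row

  class DeleteRowSink(jdbcUrl: String) extends RichSinkFunction[Row] {
    @transient private var conn: Connection = _
    @transient private var stmt: PreparedStatement = _

    override def open(parameters: Configuration): Unit = {
      conn = DriverManager.getConnection(jdbcUrl)
      stmt = conn.prepareStatement("DELETE FROM my_table WHERE id = ?")
    }

    override def invoke(row: Row): Unit = {
      stmt.setObject(1, row.getField(0)) // assumes field 0 is the key column
      stmt.executeUpdate()
    }

    override def close(): Unit = {
      if (stmt != null) stmt.close()
      if (conn != null) conn.close()
    }
  }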

Re: Clean way of expressing UNNEST operations

2019-06-03 Thread JingsongLee
th similar UNNEST functions to try it out. (Use JOIN LATERAL TABLE.) Best, JingsongLee -- From:Piyush Narang Send Time:2019-06-04 (Tuesday) 00:20 To:user@flink.apache.org Subject:Clean way of expressing UNNEST operations Hi folks,
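
A minimal sketch of the JOIN LATERAL TABLE approach (the UDTF, table, and column names are invented): register a table function that explodes an array column, then join against it in SQL:

  import org.apache.flink.table.functions.TableFunction

  // Emits one row per element of the input array.
  class ExplodeTags extends TableFunction[String] {
    def eval(tags: Array[String]): Unit = tags.foreach(collect)
  }

  // Registration and query, assuming an existing table `orders` with an ARRAY<STRING> column `tags`:
  //   tableEnv.registerFunction("explode_tags", new ExplodeTags)
  //   tableEnv.sqlQuery(
  //     "SELECT o.id, t.tag " +
  //     "FROM orders AS o, LATERAL TABLE(explode_tags(o.tags)) AS t(tag)")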

Re: Clean way of expressing UNNEST operations

2019-06-03 Thread JingsongLee
, JingsongLee -- From:JingsongLee Send Time:2019-06-04 (Tuesday) 13:35 To:Piyush Narang ; user@flink.apache.org Subject:Re: Clean way of expressing UNNEST operations Hi @Piyush Narang It seems that Calcite's type inference is not pe

Re: can flink sql handle udf-generated timestamp field

2019-06-05 Thread JingsongLee
below document: [1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/time_attributes.html#event-time Best, JingsongLee -- From:Yu Yang Send Time:2019-06-05 (Wednesday) 14:57 To:user Subject:can flink

Re: TableException

2019-06-12 Thread JingsongLee
RetractStreamTableSink or UpsertStreamTableSink. (Unfortunately, we don't have a Retract/Upsert JDBC sink yet; you can try to implement one yourself.) [1]https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/sourceSinks.html#appendstreamtablesink Best, Jingso

Re: Best Flink SQL length proposal

2019-06-26 Thread JingsongLee
Hi Simon: Does your code include the PR [1]? If it does: try setting TableConfig.setMaxGeneratedCodeLength smaller (default 64000). If it doesn't: can you wrap some fields into a nested Row field to reduce the field count? 1.https://github.com/apache/flink/pull/5613 ---
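
A minimal sketch of the first suggestion (the threshold value 4000 is only illustrative):

  import org.apache.flink.table.api.TableEnvironment

  object CodeGenLengthSketch {
    // Lowering the limit makes the code generator split large generated code sooner,
    // which helps avoid the JVM's method size limit for very wide queries.
    def tuneCodeGen(tableEnv: TableEnvironment): Unit =
      tableEnv.getConfig.setMaxGeneratedCodeLength(4000) // default is 64000
  }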

Re: Hello-world example of Flink Table API using a edited Calcite rule

2019-06-26 Thread JingsongLee
/CorrelateTest.scala#L168 2.https://github.com/apache/flink/blob/release-1.8/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/calcite/CalciteConfigBuilderTest.scala Best, JingsongLee -- From:Felipe Gutierrez Send

Re: Hello-world example of Flink Table API using a edited Calcite rule

2019-06-26 Thread JingsongLee
flink-table-planner/src/test/scala/org/apache/flink/table/api/ExternalCatalogTest.scala [2] https://github.com/apache/flink/blob/release-1.8/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/descriptors/OldCsv.scala Best, Jingso

Re: Best Flink SQL length proposal

2019-06-26 Thread JingsongLee
and make appropriate segmentation when compiling) to solve this problem thoroughly in the blink planner. Maybe it will be in release-1.10. Best, JingsongLee -- From:Simon Su Send Time:2019-06-27 (Thursday) 11:22 To:JingsongLee Cc:user

Re: Hello-world example of Flink Table API using a edited Calcite rule

2019-06-27 Thread JingsongLee
Got it, it's clear. TableStats is one of the important functions of ExternalCatalog. That is the right way. Best, JingsongLee -- From:Felipe Gutierrez Send Time:2019-06-27 (Thursday) 14:53 To:JingsongLee Cc:user Subject:Re: Hello-world examp

Re: LookupableTableSource question

2019-06-28 Thread JingsongLee
, JingsongLee -- From:Flavio Pompermaier Send Time:2019-06-28 (Friday) 21:04 To:user Subject:LookupableTableSource question Hi to all, I have a use case where I'd like to enrich a stream using a rarely updated lookup table. Basically, I

Re: LookupableTableSource question

2019-06-28 Thread JingsongLee
for (Row cachedRow : cachedRows) { collect(cachedRow); } return; } } } ... Am I missing something? On Fri, Jun 28, 2019 at 4:18 PM JingsongLee wrote: Hi Flavio: I just implemented a JDBCLookupFunction [1]. You

Re: LookupableTableSource question

2019-07-02 Thread JingsongLee
rent from temporal table. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/udfs.html#table-functions Best, JingsongLee -- From:Flavio Pompermaier Send Time:2019-07-01 (Monday) 21:26 To:JingsongLee Cc:user S

Re: [ANNOUNCE] Apache Flink 1.8.1 released

2019-07-02 Thread JingsongLee
Thanks jincheng for your great job. Best, JingsongLee -- From:Congxian Qiu Send Time:2019-07-03 (Wednesday) 14:35 To:d...@flink.apache.org Cc:Dian Fu ; jincheng sun ; Hequn Cheng ; user ; announce Subject:Re: [ANNOUNCE] Apache Flink

Re: Providing Custom Serializer for Generic Type

2019-07-03 Thread JingsongLee
Hi Andrea: Why not make your MyClass a POJO? [1] If it is a POJO, then Flink will use PojoTypeInfo and PojoSerializer, which already have a good implementation. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/types_serialization.html#rules-for-pojo-types Best, JingsongLee
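
A minimal sketch of what that could look like (the fields are invented; only the class name MyClass comes from the thread), following the POJO rules in [1]:

  import scala.beans.BeanProperty

  // Public class with a no-argument constructor and fields reachable via getters/setters,
  // so Flink picks PojoTypeInfo/PojoSerializer instead of falling back to a generic type.
  class MyClass {
    @BeanProperty var id: Long = 0L    // generates getId/setId
    @BeanProperty var name: String = _ // generates getName/setName
  }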

Re: Flink Table API and Date fields

2019-07-07 Thread JingsongLee
Hi Flavio: It looks like you use java.util.Date in your POJO. Currently the Flink Table API does not support BasicTypeInfo.DATE_TYPE_INFO because of the limitations of some checks in the code. Can you use java.sql.Date? Best, JingsongLee

Re: Flink Table API and Date fields

2019-07-08 Thread JingsongLee
The Flink 1.9 blink runner will support it as a generic type, but I don't recommend it. After all, there are java.sql.Date and java.time.* in Java. Best, JingsongLee -- From:Flavio Pompermaier Send Time:2019-07-08 (Monday)

Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread JingsongLee
Congratulations Rong. Rong Rong has done a lot of nice work for the Flink community. Best, JingsongLee -- From:Rong Rong Send Time:2019-07-12 (Friday) 08:09 To:Hao Sun Cc:Xuefu Z ; dev ; Flink ML Subject:Re

Re: Stream to CSV Sink with SQL Distinct Values

2019-07-15 Thread JingsongLee
Hi caizhi and kali: I think this table should use toRetractStream instead of toAppendStream, and you should handle the retract messages. (If you just use distinct, the messages should always be accumulate messages.) Best, JingsongLee
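
A minimal sketch of that suggestion, assuming a 1.8-era Scala Table API setup (the query, table name, and sink bodies are placeholders):

  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.table.api.scala._
  import org.apache.flink.types.Row

  object RetractStreamSketch {
    // tableEnv and a registered `events` table are assumed to exist already.
    def writeDistinctUsers(tableEnv: StreamTableEnvironment): Unit = {
      val distinctUsers = tableEnv.sqlQuery("SELECT DISTINCT user_id FROM events")
      tableEnv
        .toRetractStream[Row](distinctUsers)
        .addSink { case (isAccumulate, row) =>
          if (isAccumulate) { /* append/insert `row` into the output */ }
          else              { /* `row` was retracted; remove or ignore it */ }
        }
    }
  }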