Hi,
Does the runtime filter probe side wait for the runtime filter to be built?
Can you check the start times of the build side and the probe side?
Best,
Jingsong Lee
--
From:faaron zheng
Send Time:Mar 2, 2020 (Monday) 14:55
To:user
Subject:Question abou
Hi,
I've introduced LocalDateTime type information to flink-core.
But for compatibility reasons, I reverted the modification in TypeExtractor.
It seems that at present you can only use Types.LOCAL_DATE_TIME explicitly.
[1] http://jira.apache.org/jira/browse/FLINK-12850
Best,
Jingsong Lee
--
Hi,
Some previous discussion in [1], FYI
[1] https://issues.apache.org/jira/browse/FLINK-10230
Best,
Jingsong Lee
--
From:Jark Wu
Send Time:Mar 2, 2020 (Monday) 22:42
To:Jeff Zhang
Cc:"Gyula Fóra" ; user
Subject:Re: SHOW CREATE TABLE
Congrats Hequn!
Best,
Jingsong Lee
--
From:Biao Liu
Send Time:Aug 7, 2019 (Wednesday) 12:05
To:Zhu Zhu
Cc:Zili Chen ; Jeff Zhang ; Paul Lam
; jincheng sun ; dev
; user
Subject:Re: [ANNOUNCE] Hequn becomes a Flink committer
Congrats Heq
Congratulations! Thanks Gordon and everyone!
Best,
Jingsong Lee
--
From:Oytun Tez
Send Time:Aug 22, 2019 (Thursday) 14:06
To:Tzu-Li (Gordon) Tai
Cc:dev ; user ; announce
Subject:Re: [ANNOUNCE] Apache Flink 1.9.0 released
Congratulati
It should be schema.field("msg", Types.ROW(...)).
And you should select msg.f1 from the table.
Best
Jingsong Lee
Sent from Alibaba Mail for iPhone
--Original Mail --
From:圣眼之翼 <2463...@qq.com>
Date:2019-09-03 09:22:41
Recipient:user
Subject:question
How do you do:
My problem is flink t
Hi luoqi:
With partition support [1], I want to introduce a FileFormatSink to
cover streaming exactly-once and partition-related logic for the Flink
file connectors and the Hive connector. You can take a look.
[1]
https://docs.google.com/document/d/15R3vZ1R_pAHcvJkRx_CWleXgl08WL3k_ZpnWSdzP7GY/edit?usp=
Hi fanbin:
It is "distinct aggregates for group window" in batch SQL mode.
Currently:
legacy planner: not supported.
blink planner: not supported.
There is no clear plan yet,
but if the demand is strong, we can consider supporting it.
Best,
Jingsong Lee
Hi @Lasse Nedergaard, the Table API doesn't have an allowedLateness API.
But you can set rowtime.watermarks.delay on the source to slow down the
watermark clock.
--
From:Lasse Nedergaard
Send Time:Apr 16, 2019 (Tuesday) 16:20
To:user
Subject:Is it possible to
To set the rowtime watermark delay of a source, you can:
val desc = new Schema()
  .field("a", Types.INT)
  .field("e", Types.LONG)
  .field("f", Types.STRING)
  .field("t", Types.SQL_TIMESTAMP)
  .rowtime(new Rowtime().timestampsFromField("t").watermarksPeriodicBounded(1000))
Use watermarksPeriodicBounded a
, JingsongLee
--
From:Oytun Tez
Send Time:Apr 19, 2019 (Friday) 03:38
To:user
Subject:PatternFlatSelectAdapter - Serialization issue after 1.8 upgrade
Hi all,
We are just migrating from 1.6 to 1.8. I encountered a serialization error
it to this way to support generic return type:
val functionCallCode =
s"""
|${parameters.map(_.code).mkString("\n")}
|$resultTypeTerm $resultTerm = ($resultTypeTerm) $functionReference.eval(
| ${parameters.
Or you can build your own sink, where you can delete rows from the DB table.
Best, JingsongLee
--
From:Papadopoulos, Konstantinos
Send Time:May 28, 2019 (Tuesday) 22:54
To:Vasyl Bervetskyi
Cc:user@flink.apache.org
Subject:RE: Flink SQL:
th similar UNNEST functions to try it out. (Use JOIN LATERAL
TABLE)
Best, JingsongLee
--
From:Piyush Narang
Send Time:Jun 4, 2019 (Tuesday) 00:20
To:user@flink.apache.org
Subject:Clean way of expressing UNNEST operations
Hi folks,
, JingsongLee
--
From:JingsongLee
Send Time:Jun 4, 2019 (Tuesday) 13:35
To:Piyush Narang ; user@flink.apache.org
Subject:Re: Clean way of expressing UNNEST operations
Hi @Piyush Narang
It seems that Calcite's type inference is not pe
below document:
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/time_attributes.html#event-time
Best, JingsongLee
--
From:Yu Yang
Send Time:Jun 5, 2019 (Wednesday) 14:57
To:user
Subject:can flink
RetractStreamTableSink or UpsertStreamTableSink.
(Unfortunately, we don't have a Retract/Upsert JDBC sink yet; you can try to
implement one yourself.)
[1]https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/sourceSinks.html#appendstreamtablesink
Best, Jingso
Hi Simon:
Does your code include the PR [1]?
If it does: try setting TableConfig.setMaxGeneratedCodeLength smaller (default
64000).
If it does not: can you wrap some fields into a nested Row field to reduce the
field count?
1.https://github.com/apache/flink/pull/5613
---
/CorrelateTest.scala#L168
2.https://github.com/apache/flink/blob/release-1.8/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/calcite/CalciteConfigBuilderTest.scala
Best, JingsongLee
--
From:Felipe Gutierrez
Send
flink-table-planner/src/test/scala/org/apache/flink/table/api/ExternalCatalogTest.scala
[2]
https://github.com/apache/flink/blob/release-1.8/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/descriptors/OldCsv.scala
Best, Jingso
and
make appropriate segmentation at compile time) to solve this problem
thoroughly in the blink planner. Maybe it will land in release-1.10.
Best, JingsongLee
--
From:Simon Su
Send Time:Jun 27, 2019 (Thursday) 11:22
To:JingsongLee
Cc:user
Got it, that's clear. TableStats is one of the important functions of
ExternalCatalog. That is the right way.
Best, JingsongLee
--
From:Felipe Gutierrez
Send Time:Jun 27, 2019 (Thursday) 14:53
To:JingsongLee
Cc:user
Subject:Re: Hello-world examp
, JingsongLee
--
From:Flavio Pompermaier
Send Time:Jun 28, 2019 (Friday) 21:04
To:user
Subject:LookupableTableSource question
Hi to all,
I have a use case where I'd like to enrich a stream using a rarely updated
lookup table.
Basically, I
            for (Row cachedRow : cachedRows) {
                collect(cachedRow);
            }
            return;
        }
    }
}
...
Am I missing something?
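For reference, a minimal Flink-free stand-in for the caching pattern in the loop above can be sketched as follows. All class, method, and variable names here are invented for illustration; this is not the Flink LookupableTableSource API, just the "emit cached rows, skip the database" idea:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupCache {
    // Cache from lookup key to the previously fetched rows for that key.
    private final Map<String, List<String>> cache = new HashMap<>();

    public void put(String key, List<String> rows) {
        cache.put(key, new ArrayList<>(rows));
    }

    // Mirrors the eval() pattern above: if the key is cached, emit every
    // cached row (the stand-in for collect(cachedRow)) and return without
    // querying the database.
    public List<String> lookup(String key) {
        List<String> collected = new ArrayList<>();
        List<String> cachedRows = cache.get(key);
        if (cachedRows != null) {
            for (String cachedRow : cachedRows) {
                collected.add(cachedRow);
            }
        }
        return collected;
    }

    public static void main(String[] args) {
        LookupCache cache = new LookupCache();
        cache.put("user-1", Arrays.asList("row-a", "row-b"));
        System.out.println(cache.lookup("user-1"));
    }
}
```

In the real lookup function, a cache miss would fall through to the database query; the sketch only shows the hit path discussed above.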
On Fri, Jun 28, 2019 at 4:18 PM JingsongLee wrote:
Hi Flavio:
I just implemented a JDBCLookupFunction[1]. You
rent from temporal table.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/udfs.html#table-functions
Best, JingsongLee
--
From:Flavio Pompermaier
Send Time:Jul 1, 2019 (Monday) 21:26
To:JingsongLee
Cc:user
S
Thanks jincheng for your great work.
Best, JingsongLee
--
From:Congxian Qiu
Send Time:Jul 3, 2019 (Wednesday) 14:35
To:d...@flink.apache.org
Cc:Dian Fu ; jincheng sun ;
Hequn Cheng ; user ; announce
Subject:Re: [ANNOUNCE] Apache Flink
Hi Andrea:
Why not make your MyClass a POJO [1]? If it is a POJO, then Flink
will use PojoTypeInfo and PojoSerializer, which already have a good
implementation.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/types_serialization.html#rules-for-pojo-types
Best, JingsongLee
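As a small illustration of the POJO rules linked above, a class like the following qualifies: it is public, has a public no-argument constructor, and every field is either public or exposed through a public getter/setter pair. The class and field names are invented for the example:

```java
// Hypothetical example class; the names are made up for illustration.
// Flink analyzes such a class with PojoTypeInfo and serializes it with
// PojoSerializer instead of falling back to generic (Kryo) serialization.
public class SensorReading {
    // Public field: accessed directly.
    public String sensorId;

    // Private field: must expose a public getter and setter instead.
    private double temperature;

    // The public no-argument constructor is mandatory for POJO status.
    public SensorReading() {}

    public SensorReading(String sensorId, double temperature) {
        this.sensorId = sensorId;
        this.temperature = temperature;
    }

    public double getTemperature() { return temperature; }

    public void setTemperature(double temperature) { this.temperature = temperature; }

    public static void main(String[] args) {
        SensorReading r = new SensorReading("s1", 21.5);
        System.out.println(r.sensorId + " " + r.getTemperature());
    }
}
```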
Hi Flavio:
It looks like you use java.util.Date in your POJO. Currently the Flink Table API
does not support BasicTypeInfo.DATE_TYPE_INFO because of the limitations of
some checks in the code.
Can you use java.sql.Date instead?
Best, JingsongLee
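If the POJO already holds a java.util.Date, converting it is a one-liner, since java.sql.Date is constructed from the same epoch milliseconds. A minimal sketch (the class and method names are invented for the example):

```java
import java.util.Date;

public class DateConversion {
    // java.util.Date carries a full timestamp; java.sql.Date is the
    // date-only SQL type, built from the same epoch-millisecond value.
    public static java.sql.Date toSqlDate(Date utilDate) {
        return new java.sql.Date(utilDate.getTime());
    }

    public static void main(String[] args) {
        // Epoch millisecond 0 corresponds to 1970-01-01T00:00:00Z.
        System.out.println(toSqlDate(new Date(0L)).getTime());
    }
}
```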
The Flink 1.9 blink runner will support it as a generic type,
but I don't recommend it. After all, Java has java.sql.Date and java.time.* as
alternatives.
Best, JingsongLee
--
From:Flavio Pompermaier
Send Time:Jul 8, 2019 (Monday)
Congratulations, Rong.
Rong Rong has done a lot of nice work for the Flink community.
Best, JingsongLee
--
From:Rong Rong
Send Time:Jul 12, 2019 (Friday) 08:09
To:Hao Sun
Cc:Xuefu Z ; dev ; Flink ML
Subject:Re
Hi caizhi and kali:
I think this table should use toRetractStream instead of toAppendStream, and
you should handle the retract messages. (If you just use distinct, the messages
should always be accumulate messages.)
Best, JingsongLee
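To make the advice concrete: a retract stream delivers (flag, row) pairs, where flag = true means "add this row" and flag = false means "remove a previously emitted copy of it". A minimal Flink-free consumer that maintains the current table view could look like this; the class and method names are invented, and a String stands in for the real row type:

```java
import java.util.HashMap;
import java.util.Map;

public class RetractConsumer {
    // Current view of the table: row value -> number of live copies.
    private final Map<String, Integer> counts = new HashMap<>();

    // Mirrors the (Boolean, Row) pairs of a retract stream:
    // accumulate = true adds one copy of the row, false retracts one.
    public void onMessage(boolean accumulate, String row) {
        counts.merge(row, accumulate ? 1 : -1, Integer::sum);
        // Rows whose count dropped to zero are no longer in the result.
        counts.values().removeIf(c -> c <= 0);
    }

    public boolean contains(String row) {
        return counts.containsKey(row);
    }

    public static void main(String[] args) {
        RetractConsumer consumer = new RetractConsumer();
        consumer.onMessage(true, "a");   // add "a"
        consumer.onMessage(false, "a");  // retract "a"
        System.out.println(consumer.contains("a"));
    }
}
```

With an append-only query (e.g. a plain distinct), only accumulate messages arrive and the retract branch is never taken; the retract handling matters once updates or deletions can occur upstream.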