We don't support 'PROCTIME()' in a temporal table join. Please use the left
table's proctime field. [1]
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/streaming/joins.html#usage-1
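For example, a minimal sketch (table and field names are hypothetical, assuming
a TableEnvironment `tEnv` and an Orders table whose DDL declares a
processing-time attribute via `proc AS PROCTIME()`):

    // Reference the left table's proctime field instead of calling PROCTIME():
    Table result = tEnv.sqlQuery(
        "SELECT o.order_id, r.rate " +
        "FROM Orders AS o " +
        "JOIN LatestRates FOR SYSTEM_TIME AS OF o.proc AS r " +
        "ON o.currency = r.currency");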
*Best Regards,*
*Zhenghua Gao*
On Fri, Mar 13, 2020 at 11:57 AM wang
You are right.
The product on alibaba cloud is based on an internal branch.
There are many discrepancies between them.
*Best Regards,*
*Zhenghua Gao*
On Fri, Mar 13, 2020 at 1:09 PM wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:
> Thanks, works now.
>
> Seems the open
Hi izual,
There is a workaround: you could implement your own sink which writes each
record to sink1 and sink2 in turn.
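A minimal sketch of such a wrapper (names are hypothetical, and the
open()/close() lifecycle wiring for the delegates is omitted for brevity):

    import org.apache.flink.streaming.api.functions.sink.SinkFunction;

    // Hypothetical wrapper sink that forwards every record to two delegates in turn.
    public class DualSink<T> implements SinkFunction<T> {
        private final SinkFunction<T> sink1;
        private final SinkFunction<T> sink2;

        public DualSink(SinkFunction<T> sink1, SinkFunction<T> sink2) {
            this.sink1 = sink1;
            this.sink2 = sink2;
        }

        @Override
        public void invoke(T value, Context context) throws Exception {
            sink1.invoke(value, context); // write to sink1 first
            sink2.invoke(value, context); // then to sink2
        }
    }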
*Best Regards,*
*Zhenghua Gao*
On Wed, Mar 25, 2020 at 10:41 PM Benchao Li wrote:
> Hi izual,
>
> AFAIK, there is no way to do this in pure SQL.
>
TableEnvironment and use `registerFunction`s. Please make sure you pass in
the correct `isStreamingMode = false`.
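For example, a 1.10-era sketch (`inBatchMode()` is what sets
`isStreamingMode = false` under the hood; `MyAggFunction` is a hypothetical
UDAF, and the exact `registerFunction` overload for aggregate functions may
depend on your Flink version):

    EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .useBlinkPlanner()
            .inBatchMode()   // isStreamingMode = false
            .build();
    TableEnvironment tEnv = TableEnvironment.create(settings);
    tEnv.registerFunction("myAgg", new MyAggFunction()); // hypothetical UDAF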
*Best Regards,*
*Zhenghua Gao*
On Tue, Apr 14, 2020 at 5:58 PM Dmytro Dragan
wrote:
> Hi All,
>
> Could you please tell how to register a custom aggregation function in bl
FLINK-16471 introduces a JDBCCatalog, which implements the Catalog interface.
Currently we only support PostgresCatalog and listTables().
If you want to get the list of views, you can implement listViews()
(which currently returns an empty list).
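A rough sketch of what such an implementation could look like (the
baseUrl/username/pwd fields are hypothetical; the Catalog interface declares
listViews(String databaseName)):

    @Override
    public List<String> listViews(String databaseName) throws CatalogException {
        // Query PostgreSQL's information_schema for the view names.
        List<String> views = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(baseUrl + databaseName, username, pwd);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT table_name FROM information_schema.views "
                             + "WHERE table_schema NOT IN ('pg_catalog', 'information_schema')")) {
            while (rs.next()) {
                views.add(rs.getString(1));
            }
        } catch (SQLException e) {
            throw new CatalogException("Failed to list views in database " + databaseName, e);
        }
        return views;
    }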
*Best Regards,*
*Zhenghua Gao*
On Thu, Apr 23, 2020 at 8:48
sqlQuery).
*Best Regards,*
*Zhenghua Gao*
On Fri, Aug 9, 2019 at 12:38 PM Tony Wei wrote:
> forgot to send to user mailing list.
>
> Tony Wei wrote on Fri, Aug 9, 2019 at 12:36 PM:
>
>> Hi Zhenghua,
>>
>> I didn't get your point. It seems that `isEagerOperationTranslation`
pache/flink/blob/master/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/WindowJoinITCase.scala
*Best Regards,*
*Zhenghua Gao*
On Mon, Aug 12, 2019 at 10:49 PM Theo Diefenthal <
theo.diefent...@scoop-software.de> wrote:
> Hi there,
>
I wrote a demo example for a time-windowed join which you can pick up [1].
[1] https://gist.github.com/docete/8e78ff8b5d0df69f60dda547780101f1
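The core of it is an interval predicate over both tables' time attributes,
roughly (table and field names are hypothetical):

    Table result = tEnv.sqlQuery(
        "SELECT o.id, s.ship_time " +
        "FROM Orders o, Shipments s " +
        "WHERE o.id = s.order_id " +
        "AND s.rowtime BETWEEN o.rowtime AND o.rowtime + INTERVAL '4' HOUR");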
*Best Regards,*
*Zhenghua Gao*
On Tue, Aug 13, 2019 at 4:13 PM Zhenghua Gao wrote:
> You can check the plan after optimization to verify it's a regu
rce of the coming release
1.9.0[2].
[1] https://issues.apache.org/jira/browse/FLINK-3033
[2]
https://github.com/apache/flink/blob/release-1.9/flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/LookupableTableSource.java
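A rough sketch of an implementation (declared abstract so the remaining
TableSource methods can be elided; MyLookupFunction is a hypothetical
TableFunction that queries the dimension store):

    public abstract class MyLookupSource implements LookupableTableSource<Row> {

        @Override
        public TableFunction<Row> getLookupFunction(String[] lookupKeys) {
            return new MyLookupFunction(lookupKeys); // synchronous lookups
        }

        @Override
        public AsyncTableFunction<Row> getAsyncLookupFunction(String[] lookupKeys) {
            throw new UnsupportedOperationException("async lookup not implemented");
        }

        @Override
        public boolean isAsyncEnabled() {
            return false; // tells the runtime to use getLookupFunction()
        }
    }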
*Best Regards,*
*Zhenghua Gao*
On Thu, Aug 15, 20
-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/StreamingFileSink.java
[4]
https://github.com/apache/flink/blob/master/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/ParquetBulkWriter.java
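Put together, a bulk-encoded Parquet sink looks roughly like this (MyPojo and
the output path are hypothetical, and checkpointing must be enabled for the
files to be committed):

    StreamingFileSink<MyPojo> sink = StreamingFileSink
            .forBulkFormat(
                    new Path("hdfs:///tmp/output"), // hypothetical output path
                    ParquetAvroWriters.forReflectRecord(MyPojo.class))
            .build();
    stream.addSink(sink);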
*Best Regards,*
*Zhenghua Gao*
On Fri, Aug 16
*Best Regards,*
*Zhenghua Gao*
On Fri, Aug 16, 2019 at 11:52 PM Lian Jiang wrote:
> Thanks. Which API (DataSet or DataStream) is recommended for file handling
> (no window operation required)?
>
> We have a similar scenario for real-time processing. Might it make sense to
> use the DataStream API for both batc
/blob/master/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/AsyncLookupJoinITCase.scala
*Best Regards,*
*Zhenghua Gao*
On Mon, Sep 16, 2019 at 9:23 PM srikanth flink wrote:
> Hi there,
>
> I'm working with streaming in Flin
POJO keys are supported via KeySelector [1].
Could you provide more information about your problem? Version of Flink?
Error messages?
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/api_concepts.html#define-keys-using-key-selector-functions
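For example (Order is a hypothetical POJO):

    // Key the stream by a field of the POJO via a KeySelector:
    KeyedStream<Order, String> keyed = stream.keyBy(new KeySelector<Order, String>() {
        @Override
        public String getKey(Order order) {
            return order.getCurrency(); // any deterministic key derived from the POJO
        }
    });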
*Best Regards,*
*Zhenghua Gao*
On Mon, Sep 16
The reason might be that the parallelism of your task is only 1, which is too low.
See [1] to specify a proper parallelism for your job, and the execution time
should be reduced significantly.
[1] https://ci.apache.org/projects/flink/flink-docs-stable/dev/parallel.html
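For example (the numbers are placeholders; pick values that match your cluster):

    // Job-wide default parallelism:
    env.setParallelism(8);

    // Or per operator:
    stream.map(new MyMapper()).setParallelism(16); // MyMapper is hypothetical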
*Best Regards,*
*Zhenghua Gao*
I think more runtime information would help figure out where the problem is:
1) how many parallel instances are actually working
2) the metrics for each operator
3) the JVM profiling information, etc.
*Best Regards,*
*Zhenghua Gao*
On Wed, Oct 30, 2019 at 8:25 PM Habib Mostafaei
wrote:
> Thanks Gao
I will try to reproduce your scenario and dig into the root causes.
*Best Regards,*
*Zhenghua Gao*
On Thu, Oct 31, 2019 at 9:05 PM Habib Mostafaei
wrote:
> I enclosed all logs from the run and for this run I used parallelism one.
> However, for other runs I checked and found that all parallel worke
ified) or taskmanager.out.
It's large (about 4 GB in my case) and causes high disk writes.
*Best Regards,*
*Zhenghua Gao*
On Fri, Nov 1, 2019 at 4:40 PM Habib Mostafaei
wrote:
> I used streaming WordCount provided by Flink and the file contains text
> like "This is some te
https://docs.oracle.com/javase/7/docs/api/java/util/Collections.html#synchronizedList(java.util.List)
[2] https://issues.apache.org/jira/browse/FLINK-14650
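I.e. wrap the shared list, roughly (the Row element type is just an example):

    // Concurrent access from the sink thread is then safe:
    List<Row> results = Collections.synchronizedList(new ArrayList<>());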
*Best Regards,*
*Zhenghua Gao*
On Thu, Nov 7, 2019 at 12:12 AM Romain Gilles
wrote:
> Hi all,
> I think the code example in following se
The JDBC connector can read data from PostgreSQL for Table/SQL users.
For pyflink, cc @Hequn
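For the Table/SQL route, a DDL sketch (connection options are placeholders,
using the legacy 'connector.*' option style):

    tEnv.sqlUpdate(
        "CREATE TABLE pg_source (" +
        "  id BIGINT," +
        "  name STRING" +
        ") WITH (" +
        "  'connector.type' = 'jdbc'," +
        "  'connector.url' = 'jdbc:postgresql://localhost:5432/mydb'," +
        "  'connector.table' = 'my_table'," +
        "  'connector.username' = 'user'," +
        "  'connector.password' = 'secret'" +
        ")");
    Table t = tEnv.sqlQuery("SELECT id, name FROM pg_source");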
*Best Regards,*
*Zhenghua Gao*
On Wed, Nov 20, 2019 at 7:56 PM Yu Watanabe wrote:
> Hello.
>
> I would like to ask a question about the possibility of stream-reading table rows
> from PostgreSQL u
[1] https://github.com/apache/flink/pull/10268
[2] https://issues.apache.org/jira/browse/FLINK-14599
*Best Regards,*
*Zhenghua Gao*
On Sun, Nov 24, 2019 at 8:44 PM Jark Wu wrote:
> Hi,
>
> +1 to disable it in 1.10. I think it's time to disable and correct the
>
The Kafka connector jar is missing from your classpath.
*Best Regards,*
*Zhenghua Gao*
On Mon, Dec 2, 2019 at 2:14 PM srikanth flink wrote:
> Hi there,
>
> I'm following the link
> <https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html>
+1 for dropping.
*Best Regards,*
*Zhenghua Gao*
On Thu, Dec 5, 2019 at 11:08 AM Dian Fu wrote:
> +1 for dropping them.
>
> Just FYI: there was a similar discussion few months ago [1].
>
> [1]
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/DISCUSS-Drop
+1 for making the blink planner the default planner for the SQL Client, since
we have made huge improvements in 1.10.
*Best Regards,*
*Zhenghua Gao*
On Sun, Jan 5, 2020 at 2:42 PM Benchao Li wrote:
> +1
>
> We have used blink planner since 1.9.0 release in our production
> environ
Congrats Jingsong!
*Best Regards,*
*Zhenghua Gao*
On Fri, Feb 21, 2020 at 11:59 AM godfrey he wrote:
> Congrats Jingsong! Well deserved.
>
> Best,
> godfrey
>
> Jeff Zhang wrote on Fri, Feb 21, 2020 at 11:49 AM:
>
>> Congratulations, Jingsong! You deserve it.
>>
Maybe you're generating non-standard JSON records.
So what you want is the count for every key?
Why not use a count aggregation?
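I.e. something like (table/column names are hypothetical):

    Table counts = tEnv.sqlQuery(
        "SELECT my_key, COUNT(*) AS cnt FROM MyTable GROUP BY my_key");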
Actually, the flink-connectors/flink-jdbc module provides a JDBCInputFormat to
read data from a database.
You can have a try.
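A rough sketch of its builder (connection details are placeholders; this is the
legacy flink-jdbc API):

    JDBCInputFormat inputFormat = JDBCInputFormat.buildJDBCInputFormat()
            .setDrivername("org.postgresql.Driver")
            .setDBUrl("jdbc:postgresql://localhost:5432/mydb")
            .setUsername("user")
            .setPassword("secret")
            .setQuery("SELECT id, name FROM my_table")
            .setRowTypeInfo(new RowTypeInfo(
                    BasicTypeInfo.LONG_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO))
            .finish();
    DataSet<Row> rows = env.createInput(inputFormat);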
Just try: filter("f_date <= '1998-10-02'.toDate")
Seems like there is something wrong with the RestServer and the RestClient
couldn't connect to it.
You can check the standalonesession log to investigate the cause.
Btw: the cause of "no cluster was found" is that your PID information was
cleaned for some reason. The PID information is stored in your TMP directory.