Hi Steve,
When you call env.readFile(…), internally env creates:

ContinuousFileMonitoringFunction<OUT> monitoringFunction =
    new ContinuousFileMonitoringFunction<>(inputFormat, monitoringMode,
        getParallelism(), interval);

ContinuousFileReaderOperator<OUT> reader =
    new ContinuousFileReaderOperator<>(inputFormat);
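The two pieces are then wired together roughly like this (simplified from StreamExecutionEnvironment#createFileInput in the Flink sources; sourceName and typeInfo come from the surrounding method):

SingleOutputStreamOperator<OUT> source =
    addSource(monitoringFunction, sourceName)
        .transform("Split Reader: " + sourceName, typeInfo, reader);

The monitoring function runs with parallelism 1 and hands the file splits it discovers to the reader operator, which runs with the configured parallelism.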
Hi Aissa,
1. To join 3 streams you can chain 2 coflatmap functions (a sketch follows this list):
https://stackoverflow.com/questions/54277910/how-do-i-join-two-streams-in-apache-flink
2. If your aggregation function can also be applied partially, you can chain 2 joins:
https://ci.apache.org/projects/flink/flink-docs
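For option 1, a minimal sketch of chaining two connect()/CoFlatMapFunction stages. All class and variable names here are made up for illustration, and a real join would key the streams and buffer unmatched elements in state (that is what the StackOverflow answer above covers):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class ThreeWayJoinSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStream<String> names = env.fromElements("a", "b");
    DataStream<Integer> counts = env.fromElements(1, 2);
    DataStream<Double> prices = env.fromElements(0.5, 1.5);

    // Stage 1: combine the first two streams with one CoFlatMapFunction.
    DataStream<Tuple2<String, Integer>> stage1 = names.connect(counts)
        .flatMap(new CoFlatMapFunction<String, Integer, Tuple2<String, Integer>>() {
          @Override
          public void flatMap1(String name, Collector<Tuple2<String, Integer>> out) {
            out.collect(Tuple2.of(name, 0)); // tag-only; a real join matches via keyed state
          }
          @Override
          public void flatMap2(Integer count, Collector<Tuple2<String, Integer>> out) {
            out.collect(Tuple2.of("", count));
          }
        });

    // Stage 2: connect the intermediate stream with the third one.
    stage1.connect(prices)
        .flatMap(new CoFlatMapFunction<Tuple2<String, Integer>, Double, String>() {
          @Override
          public void flatMap1(Tuple2<String, Integer> pair, Collector<String> out) {
            out.collect(pair.toString());
          }
          @Override
          public void flatMap2(Double price, Collector<String> out) {
            out.collect(price.toString());
          }
        })
        .print();

    env.execute("three-way join sketch");
  }
}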
Table res = tableEnv.sqlQuery(
    "select a(CAST(price AS INT), 12) as max_price from A group by symbol");
res.execute().print();
Regards,
Timo
On 28.07.20 17:20, Dmytro Dragan wrote:
> Hi Timo,
>
> I have switched
The error about the "… function signature fun()" was a bug that got fixed in 1.11.1:
https://issues.apache.org/jira/browse/FLINK-18520
This bug only exists for catalog functions, not temporary system functions.
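For reference, the two registration paths look roughly like this in the 1.11 TableEnvironment API (fun and MyFunction are placeholders; exact support for each function kind differs between 1.11.x releases):

// temporary system function: resolved before any catalog, not affected by the bug
tableEnv.createTemporarySystemFunction("fun", MyFunction.class);

// temporary catalog function: lives in the current catalog/database and is hit by FLINK-18520 on 1.11.0
tableEnv.createTemporaryFunction("fun", MyFunction.class);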
Regards,
Timo
On 27.07.20 16:35, Dmytro Dragan wrote:
Hi All,
I am seeing strange behavior with UDAF functions.
Let's say we have a simple table:
EnvironmentSettings settings =
EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
TableEnvironment t = TableEnvironment.create(settings);
Table table = t.fromValues(DataTypes.ROW(
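A minimal continuation consistent with the query above (field names and sample values are illustrative guesses, not the original):

import static org.apache.flink.table.api.Expressions.row;

Table table = t.fromValues(
    DataTypes.ROW(
        DataTypes.FIELD("symbol", DataTypes.STRING()),
        DataTypes.FIELD("price", DataTypes.DOUBLE())),
    row("ACME", 12.5),
    row("ACME", 17.0));
t.createTemporaryView("A", table);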
Hi All,
We are working on migrating existing pipelines from Flink 1.10 to Flink 1.11.
We are using the Blink planner and have unified pipelines which can be used in
stream and batch mode.
Stream pipelines work as expected, but batch ones fail on Flink 1.11 if they
have any table aggregation transformations.
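For context, the unified setup meant here is typically just a mode switch when building the environment (isBatch is a hypothetical flag):

EnvironmentSettings.Builder builder = EnvironmentSettings.newInstance().useBlinkPlanner();
EnvironmentSettings settings = isBatch
    ? builder.inBatchMode().build()
    : builder.inStreamingMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);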
Hi Jingsong,
Thank you for the detailed clarification.
Best regards,
Dmytro Dragan | dd...@softserveinc.com | Lead Big Data Engineer | Big Data &
Analytics | SoftServe
From: Jingsong Li
Sent: Thursday, June 18, 2020 4:58:22 AM
To: Dmytro Dragan
Cc:
Hi guys,
In our use case we are considering writing data to AWS S3 in Parquet format using
Blink batch mode.
As far as I can see, a valid approach to writing Parquet files is to use
StreamingFileSink with a Parquet bulk-encoded format, but based on the
documentation and tests it works only with OnCheckpointRollingPolicy.
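For illustration, the kind of setup meant here (MyPojo, stream, and the S3 path are placeholders):

import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

StreamingFileSink<MyPojo> sink = StreamingFileSink
    .forBulkFormat(new Path("s3://bucket/output"),
        ParquetAvroWriters.forReflectRecord(MyPojo.class))
    .withRollingPolicy(OnCheckpointRollingPolicy.build()) // bulk formats roll only on checkpoints
    .build();
stream.addSink(sink);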
Hi All,
Could you please tell me how to register a custom aggregation function in a Blink
batch app?
In case of streaming mode we create:

EnvironmentSettings bsSettings =
    EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, bsSettings);
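For the batch case, the unified TableEnvironment is used instead of StreamTableEnvironment. A sketch of the batch-mode counterpart (MyAggFunction is a placeholder, and how far aggregate functions are supported through each registration path varies across 1.1x releases):

EnvironmentSettings settings =
    EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
TableEnvironment tableEnv = TableEnvironment.create(settings);
tableEnv.createTemporarySystemFunction("myAgg", MyAggFunction.class);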