Hi Gordon,
Thanks for the information; that is exactly what I needed.
I have also responded to the Kafka connector RC vote email.
Best regards,
Xianxun
> On Oct 28, 2023, at 04:13, Tzu-Li (Gordon) Tai wrote:
>
> Hi Xianxun,
>
> You can find the list of supported Flink versions for each connector here:
> https://flink.apache.org/downloads/#apache-flink-connectors
Is it possible to trigger a window without changing window-start and
window-end dates?
I have a lot of jobs running with 3-hour tumbling windows, and when they are
all triggered at the same time it causes performance problems. If I could
somehow delay some of them by 10-15 minutes without changing the original
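One possible direction, if the jobs are on the DataStream API: a custom trigger can postpone firing without touching the window boundaries themselves. This is only a rough sketch; the class name and delay are hypothetical, and allowedLateness must be at least as large as the delay, otherwise the window state is purged before the delayed timer fires.

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Hypothetical trigger: fires only once the watermark passes window end + delay.
public class DelayedFireTrigger extends Trigger<Object, TimeWindow> {

    private final long delayMillis;

    public DelayedFireTrigger(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) {
        // Window start/end stay untouched; only the firing time is shifted.
        ctx.registerEventTimeTimer(window.maxTimestamp() + delayMillis);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return time == window.maxTimestamp() + delayMillis
                ? TriggerResult.FIRE
                : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(window.maxTimestamp() + delayMillis);
    }
}

It would be attached with something like
.window(TumblingEventTimeWindows.of(Time.hours(3))).allowedLateness(Time.minutes(15)).trigger(new DelayedFireTrigger(15 * 60 * 1000L)),
with a different delay per job. This only applies to the DataStream API, not to SQL window TVFs.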
Hi Xianxun,
You can find the list of supported Flink versions for each connector here:
https://flink.apache.org/downloads/#apache-flink-connectors
Specifically for the Kafka connector, we're in the process of releasing a
new version of the connector that works with Flink 1.18.
The release candidate
> I wonder if you could use this fact to query the committed checkpoints
and move them away after the job is done.
This is not a robust solution; I would advise against it.
Best,
Alexander
On Fri, 27 Oct 2023 at 16:41, Andrew Otto wrote:
> For moving the files:
> > It will keep the files as is
* With regard to the empty string: the null check is still a bit defensive, and
one could return false in test(), but it does not really matter since
String.substring in getName() can never return null.
On Fri, 27 Oct 2023 at 16:32, Alexander Fedulov wrote:
> Actually, this is not even "defensive pr
Actually, this is not even "defensive programming", but is the required
logic for processing directories.
See here:
https://github.com/apache/flink/blob/release-1.18/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/src/enumerate/NonSplittingRecursiveEnumerator.ja
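For readers without the file open, the predicate under discussion looks roughly like the following (paraphrased from the default filter used by that enumerator, so the class name here is not the real one): an empty name, which is what getName() yields for the root directory "/", has to be accepted, otherwise directories would never be enumerated at all.

import java.util.function.Predicate;
import org.apache.flink.core.fs.Path;

// Paraphrase of the default filter: skip hidden/underscore-prefixed files,
// accept everything else, including paths whose name is empty (directories
// such as "/").
final class HiddenFileFilter implements Predicate<Path> {
    @Override
    public boolean test(Path path) {
        final String fileName = path.getName();
        if (fileName == null || fileName.isEmpty()) {
            return true; // required for directories, not just defensiveness
        }
        final char first = fileName.charAt(0);
        return first != '.' && first != '_';
    }
}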
Hello Team,
After the release of Flink 1.18, I found that most connectors have been
externalized, e.g. the Kafka, ES, HBase, JDBC, and Pulsar connectors. But I
couldn't find any manual or code indicating which versions of Flink these
connectors work with.
Best regards,
Xianxun
Hi Matthias,
Thanks for the response. I guess the specific question would be, if I work
with an existing savepoint and pass an empty DataStream to
OperatorTransformation#bootstrapWith, will the new savepoint end up with an
empty state for the modified operator, or will it maintain the existing
sta
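To make the question concrete, a minimal sketch of the setup being described; the paths, uid, and no-op bootstrap function are placeholders, and the SavepointWriter/OperatorIdentifier calls assume the 1.17+/1.18-style State Processor API (older releases use slightly different signatures).

import java.util.Collections;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.state.api.OperatorIdentifier;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.SavepointWriter;
import org.apache.flink.state.api.StateBootstrapTransformation;
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EmptyBootstrapSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // An empty stream; assumes fromCollection with explicit type info
        // accepts an empty collection.
        DataStream<String> empty =
                env.fromCollection(Collections.<String>emptyList(), Types.STRING);

        StateBootstrapTransformation<String> transformation =
                OperatorTransformation.bootstrapWith(empty)
                        .keyBy((KeySelector<String, String>) value -> value)
                        .transform(
                                new KeyedStateBootstrapFunction<String, String>() {
                                    @Override
                                    public void processElement(String value, Context ctx) {
                                        // never invoked for an empty stream
                                    }
                                });

        SavepointWriter.fromExistingSavepoint(env, "file:///tmp/old-savepoint") // placeholder path
                .withOperator(OperatorIdentifier.forUid("my-operator-uid"), transformation)
                .write("file:///tmp/new-savepoint"); // placeholder path

        env.execute("bootstrap-sketch");
    }
}

Whether the resulting savepoint ends up with empty state for that operator or keeps the existing state is exactly the open question here, so the sketch only pins down the setup, not the answer.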
For moving the files:
> It will keep the files as is and remember the names of the files read in
checkpointed state to ensure it doesn't read the same file twice.
I wonder if you could use this fact to query the committed checkpoints and
move them away after the job is done. I think it should even b
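For reference, the behavior being described corresponds to something like the following (the input path is made up): the FileSource never moves or deletes the files it reads; in continuous monitoring mode it instead tracks already-processed paths in checkpointed state.

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        FileSource<String> source =
                FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("file:///tmp/input"))
                        // keep scanning the directory; processed file names live in checkpointed state
                        .monitorContinuously(Duration.ofMinutes(1))
                        .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source").print();
        env.execute("file-source-sketch");
    }
}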
Hi, this node exists when you are using a windowing TVF[1] together with
mini-batch; the planner then optimizes the plan tree with local-global
aggregation[2]. You can find the benefits of the local-global optimization in
the doc above.
If you don't need this optimization, set 'table.optimizer.agg-ph
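Assuming the option that got cut off above is the aggregation phase strategy (an assumption on my part), forcing a single-phase plan would look roughly like this in the Table API; in the SQL client the same key can be set with a SET statement. With ONE_PHASE the local/global split, and thus the GlobalWindowAggregate node, goes away, at the cost of the pre-aggregation benefit described in the docs.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AggPhaseSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Assumed option name: 'table.optimizer.agg-phase-strategy' (default AUTO).
        // ONE_PHASE keeps a single aggregation operator instead of local + global.
        tEnv.getConfig().set("table.optimizer.agg-phase-strategy", "ONE_PHASE");
    }
}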
Hi,
Can someone tell me what GlobalWindowAggregate is?
It is always 100% busy in my job graph.
GlobalWindowAggregate(groupBy=[deviceId, fwVersion, modelName,
manufacturer, phoneNumber], window=[TUMBLE(slice_end=[$slice_end], size=[3
h])], select=[deviceId, fwVersion, modelName, manufacturer, phoneN