Great, thanks for letting me know.
On Fri, 19 Mar 2021 at 20:24, Alexey Trenikhun wrote:
> Hi Piotrek,
>
> Thanks for the information. It looks like isBackPressured (and, in the
> future, backPressuredTimeMsPerSecond) is more useful for our simple
> monitoring purposes. Looking forward to the updated blog post.
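For simple monitoring like that, task metrics can also be fetched from the
JobManager's REST API under /jobs/:jobid/vertices/:vertexid/metrics. A
minimal sketch with Java 11's HttpClient, assuming the REST endpoint is on
localhost:8081; the job and vertex ids below are hypothetical placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListTaskMetrics {
    public static void main(String[] args) throws Exception {
        // Hypothetical ids: jobs are listed by GET /jobs, and a job's
        // vertices by GET /jobs/<jobid>.
        String jobId = "0123456789abcdef0123456789abcdef";
        String vertexId = "fedcba9876543210fedcba9876543210";

        // Without a query parameter this lists the available metric ids;
        // appending ?get=<id> (e.g. ?get=isBackPressured) returns values.
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/jobs/" + jobId
                        + "/vertices/" + vertexId + "/metrics"))
                .GET()
                .build();

        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}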
Hi,
Maybe you can look at this test [1] for reference.
[1]
https://github.com/apache/flink/blob/a33e6bd390a9935c3e25b6913bed0ff6b4a78818/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointMetadataLoadingTest.java#L55
Best,
Congxian
Abdullah bin Omar wrote on Mon, 22 Mar 2021 at 11:25 AM:
Hi,
(My task is to inspect the state, and I have the savepoint metadata file
where the state is saved.)
*What is the way to read the metadata files that savepoints write on the
local machine?* I guess the file is in binary format.
Thank you
Regards,
Abdullah
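The _metadata file is in Flink's internal binary format, so it is not meant
to be parsed by hand. One supported way to inspect the state behind it is
the State Processor API. A minimal sketch, assuming the
flink-state-processor-api dependency is on the classpath; the savepoint
path, operator uid, and state name below are hypothetical:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

public class ReadSavepoint {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Point at the savepoint directory that contains the _metadata
        // file (hypothetical path, simple in-memory state backend).
        ExistingSavepoint savepoint = Savepoint.load(
                env, "file:///tmp/savepoints/savepoint-xxxx", new MemoryStateBackend());

        // Read one operator's list state; the uid and state name must
        // match the job that took the savepoint (hypothetical values).
        DataSet<String> state =
                savepoint.readListState("my-operator-uid", "my-state-name", Types.STRING);
        state.print();
    }
}

For keyed state there is savepoint.readKeyedState(...) with a
KeyedStateReaderFunction; the test linked above shows the lower-level
metadata loading path instead.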
How can I send a request from Java code in my project to submit a jar
to run on a single-node Flink cluster?
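One common approach is the JobManager's REST API: upload the jar with
POST /jars/upload, then start it with POST /jars/:jarid/run. A minimal
sketch of the run step using Java 11's HttpClient; the host, jar id, and
entry class below are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubmitJar {
    public static void main(String[] args) throws Exception {
        // The jar id is returned by a prior POST to /jars/upload
        // (hypothetical value here).
        String jarId = "a1b2c3d4-5678-90ab-cdef-1234567890ab_job.jar";

        // Start the uploaded jar; the entry class and program arguments
        // go into the JSON body.
        HttpRequest run = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/jars/" + jarId + "/run"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"entryClass\":\"com.example.MyJob\"}"))
                .build();

        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(run, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // contains the job id on success
    }
}

The upload itself can be done the same way with a multipart body, or simply
with curl: curl -X POST -H "Expect:" -F "jarfile=@/path/to/job.jar"
http://localhost:8081/jars/upload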
Hello,
I am trying to find some examples of how to use the OrcTableSource and
query it.
I got to the documentation here:
https://ci.apache.org/projects/flink/flink-docs-release-1.12/api/java/org/apache/flink/orc/OrcTableSource.html
and it says that an OrcTableSource is used as below:
OrcTableSou
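A minimal sketch of the builder usage that the javadoc describes, assuming
the flink-orc dependency is available; the file path and ORC schema string
below are hypothetical:

import org.apache.flink.orc.OrcTableSource;

// Build a table source over a local ORC file; the schema string lists
// the file's column names and types.
OrcTableSource orcSrc = OrcTableSource.builder()
        .path("file:///path/to/data.orc")
        .forOrcSchema("struct<name:string,age:int>")
        .build();

The source can then be registered with a table environment and queried with
SQL, though the exact registration call varies by Flink version.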
Hi,
I have a use case where I need to process incoming records on a Kafka topic
based on a certain record field that defines the record type.
What I'm thinking is to split the incoming datastream into record-type-specific
streams and then apply record-type-specific stream processing on
each. What
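Since the old DataStream#split API is deprecated in recent Flink versions,
one idiomatic way to do that split is side outputs with a ProcessFunction.
A minimal, self-contained sketch; the Event class, its type field, and the
inline source standing in for the Kafka consumer are hypothetical:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SplitByType {

    // Hypothetical record with a field that defines the record type.
    public static class Event {
        public String type;
        public String payload;
        public Event() {}
        public Event(String type, String payload) {
            this.type = type;
            this.payload = payload;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the Kafka source; in the real job this would be
        // env.addSource(new FlinkKafkaConsumer<>(...)).
        DataStream<Event> input = env.fromElements(
                new Event("A", "first"), new Event("B", "second"));

        // Side-output tag for type-B records; the anonymous subclass
        // preserves the generic type information.
        final OutputTag<Event> typeB = new OutputTag<Event>("type-b") {};

        SingleOutputStreamOperator<Event> mainStream = input
                .process(new ProcessFunction<Event, Event>() {
                    @Override
                    public void processElement(Event value, Context ctx, Collector<Event> out) {
                        if ("B".equals(value.type)) {
                            ctx.output(typeB, value); // route type B to the side output
                        } else {
                            out.collect(value);       // everything else stays on the main stream
                        }
                    }
                });

        // Apply record-type-specific processing to each stream.
        mainStream.print("type-A");
        mainStream.getSideOutput(typeB).print("type-B");

        env.execute("split-by-record-type");
    }
}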