Hi Jun,
I think the release notes should only include the issues that cause user-visible
changes. Also, by design, flink-file-sink-common is not meant to be used
directly by users; it only serves as a module shared by the legacy
StreamingFileSink and the new FileSink.
Best,
Yun
--
Hi Qing,
I'm afraid CheckpointedFunction cannot be applied to the new Source API, but
could you share the abstractions of your source implementation, e.g. which
component a split maps to? Maybe we can find some workarounds.
Best,
Qingsheng
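For context: in the new Source API (FLIP-27), per-reader progress is checkpointed through the splits themselves. `SourceReader#snapshotState(long)` returns the reader's current splits with their positions folded in, and `SplitEnumerator#snapshotState` covers the not-yet-assigned work. Below is a minimal stand-alone model of that idea in plain Java; `FileSplit` and `Reader` are illustrative stand-ins, not the real Flink interfaces:

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified model of a split that carries its own read position (not real Flink API). */
class FileSplit {
    final String path;
    long offset; // progress is folded into the split itself
    FileSplit(String path, long offset) { this.path = path; this.offset = offset; }
}

/** Simplified model of a source reader whose checkpoint state IS its splits. */
class Reader {
    private final List<FileSplit> assigned = new ArrayList<>();

    void addSplit(FileSplit s) { assigned.add(s); }

    /** Record progress on one split. */
    void advance(String path, long newOffset) {
        for (FileSplit s : assigned) {
            if (s.path.equals(path)) s.offset = newOffset;
        }
    }

    /** On checkpoint, return a copy of the splits; restoring them restores progress. */
    List<FileSplit> snapshotState() {
        List<FileSplit> copy = new ArrayList<>();
        for (FileSplit s : assigned) copy.add(new FileSplit(s.path, s.offset));
        return copy;
    }
}
```

So instead of a CheckpointedFunction, any state you need per unit of work usually ends up inside the split type; state that spans splits belongs in the enumerator checkpoint.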
> On May 30, 2022, at 20:09, Qing Lim wrote:
I'm afraid not. I can still find it in the main repository [1].
[1]
https://github.com/apache/flink/tree/master/flink-connectors/flink-file-sink-common
Best regards,
Yuxia
- Original Message -
From: "Jun Qin"
To: "User"
Sent: Tuesday, May 31, 2022, 5:24:10 AM
Subject: Status of File Sink Common (flink-file-sink-common)
Hi,
Has File Sink Common (flink-file-sink-common) been dropped? If so, since which
version? I do not seem to find anything related in the release notes of 1.13.x,
1.14.x and 1.15.0.
Thanks
Jun
It seems that you are looking for a way to do custom checkpointing with the new
Source API. I'm not sure whether this [1][2] can help you. You can customize the
checkpointing just like KafkaSource does.
[1]
https://github.com/apache/flink/blob/9be49ff871feace87aed9d4e3f8132bcf0cd3945/flink-connectors/flink-con
Hi, is it possible to use CheckpointedFunction with the new Source API? (The
one in package org.apache.flink.api.connector.source)
My use case:
I have a custom source that emits individual node updates from a tree, and I
wish to create a stream of whole-tree snapshots, so I will have to
accumulate
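The accumulation step itself can live in a stateful operator downstream of the source, so it gets checkpointed with the job. A minimal sketch of just that step in plain Java; `TreeSnapshotter` and the path-based keys are hypothetical names, not Flink classes:

```java
import java.util.Collections;
import java.util.Map;
import java.util.TreeMap;

/** Accumulates individual node updates into full-tree snapshots (hypothetical sketch). */
class TreeSnapshotter {
    // Node path -> value; a TreeMap keeps snapshot iteration order stable.
    private final TreeMap<String, String> tree = new TreeMap<>();

    /** Apply one node update and return an immutable snapshot of the whole tree. */
    Map<String, String> apply(String nodePath, String value) {
        tree.put(nodePath, value);
        return Collections.unmodifiableMap(new TreeMap<>(tree));
    }
}
```

In a Flink job the `tree` map would be held in operator state (e.g. keyed state in a KeyedProcessFunction) so it survives failures; the stream of snapshots is then simply the operator's output after each update.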
Maybe you can use jstack or a flame graph to analyze what the bottleneck is.
BTW, for generating flame graphs, arthas [1] is a good tool.
[1] https://github.com/alibaba/arthas
Best regards,
Yuxia
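A sketch of that workflow as a shell script. The PID is a placeholder (find the real one with `jps -l`), and the script only prints the commands rather than running them, since the target JVM is environment-specific:

```shell
#!/bin/sh
# Hypothetical TaskManager PID; replace with the real one (see `jps -l`).
PID=12345

# 1) Thread-dump sampling: take a few dumps several seconds apart; threads that
#    stay RUNNABLE in the same stack across dumps are the likely bottleneck.
for i in 1 2 3; do
  echo "jstack $PID > /tmp/tm-dump-$i.txt && sleep 5"
done

# 2) Or attach arthas and record a CPU flame graph of the hot code paths:
echo "java -jar arthas-boot.jar $PID"
echo "profiler start && sleep 30 && profiler stop --format html"
```

Either approach answers the same question: which operator or user function the TaskManager threads are actually spending their time in.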
From: "Christopher Gustafson"
To: "User"
Sent: Monday, May 30, 2022, 2:29:19 PM
Subject: Large back
Hi Clayton,
Could you also provide the topology of the job?
Also, if convenient, could you have a look at
the back-pressure status of each node? We could
then locate which node is getting slow and might
be causing the lag.
Best,
Yun