Hi Weihua, Shammon,
> What kind of session cluster are you using? Standalone or Native?
We are running a native deployment on Kubernetes. Is there any optimization
we can do?
> Maybe you need to take a heap dump and analyze the memory usage if there
> are no other obvious problems
I was thinking of enabling tas
Hi Jing,
That sounds good to me; we can add an option for it.
Best,
Shammon
On Fri, Feb 17, 2023 at 3:13 PM Jing Ge wrote:
> Hi,
>
> It makes sense to offer this feature of catching and ignoring exceptions
> with a config on/off switch, when we put ourselves in users' shoes. WDYT? I
> will create a ticket if most of you consider it a good feature to help
> users.
Hi,
It makes sense to offer this feature of catching and ignoring exceptions
with a config on/off switch, when we put ourselves in users' shoes. WDYT? I
will create a ticket if most of you consider it a good feature to help users.
Best regards,
Jing
On Fri, Feb 17, 2023 at 6:01 AM Shammon FY wrote:
> Hi Hatem
Hi Reme,
The code you provided looks good to me. Maybe you can add some logs in the
getKey() and join() functions for debugging, to observe whether any records
are successfully joined. By the way, the metrics in the Web UI dashboard
might also be of help.
Best,
Shuiqiang
On Feb 16, 2023, Reme Ajayi wrote:
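A minimal sketch of the kind of logging suggested here, using the DataStream
window-join API; the event types, field names, window size, and stream names
are assumptions, not taken from Reme's actual code:

    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    // Log the key extracted on each side (the getKey() logic) and every
    // joined pair, to see whether the two streams ever share a key inside
    // the same window. Use a proper logger instead of println in real code.
    orders.join(shipments)
        .where(order -> {
            System.out.println("order key: " + order.orderId);
            return order.orderId;
        })
        .equalTo(shipment -> {
            System.out.println("shipment key: " + shipment.orderId);
            return shipment.orderId;
        })
        .window(TumblingEventTimeWindows.of(Time.minutes(5)))
        .apply((order, shipment) -> {
            System.out.println("joined: " + order.orderId);
            return order.orderId + "," + shipment.orderId;
        });

If the key logs appear on both sides but "joined:" never does, mismatched
keys, or watermarks that never close the window, are the usual suspects.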
Hi
Maybe you need to take a heap dump and analyze the memory usage if there
are no other obvious problems
Best,
Shammon
On Fri, Feb 17, 2023 at 10:41 AM Weihua Hu wrote:
> Hi, Meghajit
>
> What kind of session cluster are you using? Standalone or Native?
> If it's standalone, maybe you can check whether the TaskManagers with
> heavy GC are running more tasks than others. If so, you can enable
> "cluster.evenly-spread-out-slots=true" to balance tasks across all
> TaskManagers.
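For reference, a TaskManager heap dump can be taken with the standard JDK
jmap tool and analyzed in something like Eclipse MAT; the pod name below is
a placeholder, and the JVM is assumed to be PID 1 as in the official Flink
images:

    kubectl exec <taskmanager-pod> -- jmap -dump:live,format=b,file=/tmp/tm.hprof 1
    kubectl cp <taskmanager-pod>:/tmp/tm.hprof ./tm.hprof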
Hi Hatem,
As mentioned above, you can extend the KafkaSink, or create a UDF and
process the records before the sink.
Best,
Shammon
On Fri, Feb 17, 2023 at 9:54 AM yuxia wrote:
> Hi, Hatem.
> I think there is no way to catch the exception and then ignore it in the
> current implementation of KafkaSink. You may need to extend the KafkaSink.
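A minimal sketch of the "process the record before the sink" idea, assuming
String records and a kafkaSink already built; the 1 MB cutoff mirrors
Kafka's default max.request.size:

    import java.nio.charset.StandardCharsets;

    final int maxRecordBytes = 1024 * 1024;
    records
        // drop records the producer would reject as too large
        .filter(value -> value.getBytes(StandardCharsets.UTF_8).length <= maxRecordBytes)
        .sinkTo(kafkaSink);

Instead of dropping them silently, oversized records could also be diverted
to a side output or a dead-letter topic for inspection.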
Thank you for your reply.
But in my local test environment (Flink 1.15 and Flink 1.16), when the chain
between the Writer and Committer is disabled, the back pressure is reduced.
The specific phenomenon is as follows:
1. After checkpoint ck-4 completes, the commit execution is very slow.
2. At this time, the [Sink
Hi, Meghajit
What kind of session cluster are you using? Standalone or Native?
If it's standalone, maybe you can check whether the TaskManagers with heavy
GC are running more tasks than others. If so, you can enable
"cluster.evenly-spread-out-slots=true" to balance tasks across all
TaskManagers.
Best,
Weihua
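For a standalone session cluster, the option goes into flink-conf.yaml and
takes effect once the cluster is restarted:

    cluster.evenly-spread-out-slots: true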
Hi, Hatem.
I think there is no way to catch the exception and then ignore it in the
current implementation of KafkaSink. You may need to extend the KafkaSink.
Best regards,
Yuxia
From: "Hatem Mostafa"
To: "User"
Sent: Thursday, February 16, 2023, 9:32:44 PM
Subject: KafkaSink handling message size
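Besides extending the sink, the producer's own size limit can be raised
through the sink's Kafka properties, provided the broker-side
message.max.bytes allows it too. A sketch with placeholder broker, topic,
and serializer:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
    import org.apache.flink.connector.kafka.sink.KafkaSink;

    Properties producerProps = new Properties();
    // 5 MB instead of the 1 MB producer default
    producerProps.setProperty("max.request.size", "5242880");

    KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
            .setTopic("output-topic")
            .setValueSerializationSchema(new SimpleStringSchema())
            .build())
        .setKafkaProducerConfig(producerProps)
        .build();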
Hi everyone,
In my Flink application, I have a table created with ChangelogMode.all().
One of the sinks I want to use requires ChangelogMode.insertOnly().
The only solution that comes to mind is converting my table to a DataStream
of Rows, filtering using RowKind, and converting it back to a Table.
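That conversion can be written with the standard Table/DataStream bridge; a
sketch assuming a StreamTableEnvironment tableEnv and a Table table are in
scope:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.types.Row;
    import org.apache.flink.types.RowKind;

    DataStream<Row> changelog = tableEnv.toChangelogStream(table);
    // keep only +I rows; updates, deletes, and retractions are discarded
    DataStream<Row> insertsOnly =
        changelog.filter(row -> row.getKind() == RowKind.INSERT);
    Table insertOnlyTable = tableEnv.fromDataStream(insertsOnly);

Note that this drops every change after the initial insert, which may or may
not match the semantics the sink is supposed to see.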
Hi wudi,
I'm afraid it cannot reduce back pressure. If the Writer and Committer are
not chained together and the Committer task runs slowly, it can still block
the upstream Writer tasks through back pressure.
On the other hand, you can try to increase the parallelism of the sink node
to speed up the Committer operation.
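For the second suggestion, the parallelism can be raised on the sink alone;
the stream name, sink, and factor below are illustrative:

    // more Committer subtasks share the commit work while upstream
    // operators keep their own parallelism
    stream.sinkTo(mySink).setParallelism(8);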
Thanks for your replies.
I think that even if the Writer and Committer are not chained together, data
consistency can still be guaranteed, right?
Because when the Committer does not block the Writer, at the next
checkpoint, if the commit is not completed, SinkWriter.precommit will not be
triggered.
In additio
Hello,
We have a Flink session cluster deployment in Kubernetes of around 100
TaskManagers. It processes around 20-30 Kafka Source jobs at the moment.
The jobs are all run from the same jar and only differ in the SQL query
used and other UDFs. We are using the official flink:1.14.3 image.
We obs
Hi,
I am trying to join two Kafka data streams and output the result to another
Kafka topic; however, my joined stream does not output any data. After some
time,
my program crashes and runs out of memory, which I think is a result of the
join not working. My code doesn't throw any errors, but the joins d
As far as I know, the chain between the Committer and Writer is also
required for correctness.
On 16/02/2023 14:53, weijie guo wrote:
Hi wu,
I don't think it is a good choice to directly change the chaining strategy.
Operator chains usually have better performance and resource utilization. If
we directly change the chaining policy between them, users can no longer
chain them together, which is not a good starting point.
Hi wu,
I don't think it is a good choice to directly change the chaining strategy.
Operator chains usually have better performance and resource utilization. If
we directly change the chaining policy between them, users can no longer
chain them together, which is not a good starting point.
Best regards
Thank you for your reply.
Currently, in the custom sink connector, the Flink task combines the Writer
and Committer into one thread, with a thread name similar to [Sink: Writer ->
Sink: Committer (1/1)#0].
In this way, when the Committer.commit() method is very slow, it blocks the
SinkWriter
Hi
Do you mean how to disable the operator `chain` in your custom sink
connector? Can you give an example of what you want?
On Wed, Feb 15, 2023 at 10:42 PM di wu <676366...@qq.com> wrote:
> Hello
>
> The current Sink operator will be split into two operations, Writer and
> Committer. By default, they will be chained together.
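For context, "disabling the chain" in user code usually refers to the
DataStream-level API sketched below; the Writer -> Committer chain inside
the sink operator is assembled by the framework and, as noted above in the
thread, breaking it is discouraged. MyMapper and env are placeholders:

    // start a new chain at this operator only
    stream.map(new MyMapper()).disableChaining();

    // or disable operator chaining for the whole job
    env.disableOperatorChaining();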