Hi,
I’m trying to enable GC logging (-Xlog:gc*) on the Task Managers in a Kubernetes
session.
I tried this:
-Dkubernetes.taskmanager.entrypoint.args="-Xlog:gc*=debug:file=/tmp/gc.log:filecount=5,filesize=30m"
but it appended the argument to the end of the java command, which does not seem
to work.
How can I properly add -Xlog:gc*=debug?
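A possible alternative, sketched under the assumption that your Flink version
supports the documented env.java.opts.taskmanager option: flags set there land
among the JVM options rather than after the main class, where entrypoint args
end up.

    # flink-conf.yaml used when invoking kubernetes-session.sh
    env.java.opts.taskmanager: "-Xlog:gc*=debug:file=/tmp/gc.log:filecount=5,filesize=30m"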
Hi,
I’m getting an exception at stop-with-savepoint. The savepoint is still created,
but the job fails. I’d like to know what the implications and consequences of
the failure are (the job being configured as exactly-once) and how it can be
avoided. Starting the job from that savepoint looks to work…
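For readers of the archive, the commands involved, assuming the standard Flink
CLI with a placeholder job id and savepoint location:

    # stop with savepoint; the reported failure happens during this step
    ./bin/flink stop --savepointPath s3://bucket/savepoints <jobId>
    # resuming from the savepoint that was still created
    ./bin/flink run -s s3://bucket/savepoints/savepoint-xxxx job.jar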
That’s great news.
Thanks.
From: Leonard Xu
Date: Thursday, 23 May 2024 at 04:42
To: Vararu, Vadim
Cc: user, Danny Cranmer
Subject: Re: Flink kinesis connector 4.3.0 release estimated date
Hey, Vararu,
The Kinesis connector 4.3.0 release is in the vote phase and we hope to finalize
the…
Hi guys,
Any idea when the 4.3.0 Kinesis connector is estimated to be released?
Cheers,
Vadim.
Yes, the dynamic log level modification worked great for me.
Thanks a lot,
Vadim
From: Biao Geng
Date: Tuesday, 14 May 2024 at 10:07
To: Vararu, Vadim
Cc: user@flink.apache.org
Subject: Re: Proper way to modify log4j config file for kubernetes-session
Hi Vararu,
Does this document meet your needs…
Hi,
I’m trying to configure loggers in the log4j-console.properties file (which is
mounted from the host where kubernetes-session.sh is invoked and referenced by
the TM processes via -Dlog4j.configurationFile).
Is there a proper (documented) way to do that, meaning to append to or modify
the log4j config…
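For the archives, a minimal sketch of the kind of change in question, using
log4j2 properties syntax; the logger name is an example, not taken from the
original message. The file is edited in the client’s conf/ directory before
kubernetes-session.sh is invoked:

    # conf/log4j-console.properties (example logger)
    logger.kinesis.name = org.apache.flink.streaming.connectors.kinesis
    logger.kinesis.level = DEBUG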
I’ve been investigating a data duplication issue in a Kinesis -> Flink -> Kafka
exactly-once setup.
I found that at stop-with-savepoint the following happens:
* The Kafka transaction is committed, with the last processed events being
written
* The Kinesis sequence number is written in the…
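For context, a minimal sketch of the exactly-once sink side of such a setup,
using Flink’s KafkaSink builder (1.15+); broker, topic, record type, and prefix
are placeholders, not values from the original report:

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.base.DeliveryGuarantee;
    import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
    import org.apache.flink.connector.kafka.sink.KafkaSink;

    // Exactly-once Kafka sink: records are written inside Kafka transactions
    // that are committed only when the corresponding checkpoint completes.
    KafkaSink<String> sink = KafkaSink.<String>builder()
            .setBootstrapServers("broker:9092")                  // placeholder
            .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                    .setTopic("output-topic")                    // placeholder
                    .setValueSerializationSchema(new SimpleStringSchema())
                    .build())
            .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
            .setTransactionalIdPrefix("my-job")                  // placeholder, unique per job
            .build();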
Hi community,
I need your help to understand whether there is a misconfiguration or it’s a
Flink bug.
I’ve drawn a schema for better understanding, but here is the problem in a few
steps: …
OutputFileConfig.builder()
        .withPartPrefix(String.format("part-%s", UUID.randomUUID()))
        .withPartSuffix(String.format("%s.parquet",
                compressionCodecName.getExtension()))
        .build())
.build();
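For readers of the archive, a possible reconstruction of the truncated builder
chain above; only the OutputFileConfig part comes from the original message,
while the sink type, record class, writer factory, and output path are
assumptions:

    import org.apache.flink.connector.file.sink.FileSink;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.formats.parquet.avro.AvroParquetWriters;
    import org.apache.flink.streaming.api.functions.sink.filesystem.OutputFileConfig;
    import org.apache.parquet.hadoop.metadata.CompressionCodecName;

    import java.util.UUID;

    // Hypothetical context: a bulk Parquet FileSink whose part files carry a
    // random prefix and a codec-dependent suffix, as in the snippet above.
    static FileSink<MyRecord> buildSink(CompressionCodecName compressionCodecName) {
        return FileSink.forBulkFormat(
                    new Path("s3://my-bucket/output"),               // assumed path
                    AvroParquetWriters.forReflectRecord(MyRecord.class)) // MyRecord is hypothetical
                .withOutputFileConfig(
                    OutputFileConfig.builder()
                        .withPartPrefix(String.format("part-%s", UUID.randomUUID()))
                        .withPartSuffix(String.format("%s.parquet",
                                compressionCodecName.getExtension()))
                        .build())
                .build();
    }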
Thanks,
Vararu Vadim.
Hi all,
I’m trying to build an S3 Parquet bulk writer with a file roll policy based on a
size limit plus checkpoints. For that I’ve extended CheckpointRollingPolicy and
overridden shouldRollOnEvent to return true if the part size is greater than the
limit.
The problem is that the part size that I…
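For the archives, a minimal sketch of such a policy, assuming Flink’s
CheckpointRollingPolicy base class. One caveat that may explain an inaccurate
size: for bulk formats, PartFileInfo.getSize() reflects only bytes already
flushed to the part file, while the Parquet writer buffers row groups in memory
before flushing.

    import org.apache.flink.streaming.api.functions.sink.filesystem.PartFileInfo;
    import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.CheckpointRollingPolicy;

    import java.io.IOException;

    // Rolls on every checkpoint (inherited behaviour) and additionally when
    // the in-progress part file exceeds maxPartSize bytes.
    public class SizeLimitedCheckpointRollingPolicy<IN, BucketID>
            extends CheckpointRollingPolicy<IN, BucketID> {

        private final long maxPartSize;

        public SizeLimitedCheckpointRollingPolicy(long maxPartSize) {
            this.maxPartSize = maxPartSize;
        }

        @Override
        public boolean shouldRollOnEvent(PartFileInfo<BucketID> partFileState, IN element)
                throws IOException {
            // May under-report for bulk formats: only flushed bytes are counted.
            return partFileState.getSize() >= maxPartSize;
        }

        @Override
        public boolean shouldRollOnProcessingTime(PartFileInfo<BucketID> partFileState,
                long currentTime) throws IOException {
            return false;
        }
    }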
Hi all,
We have some jobs that write Parquet files to S3, bucketing by processing time
into a structure like /year/month/day/hour.
On the 13th of September we migrated our Flink runtime from 1.14.5 to 1.15.2,
and now some jobs are crashing at checkpointing because they are unable to find
some S3…
Hi all,
I need to configure a keyed global window that triggers a reduce function for
all the events in each key group before processing finishes and the job closes.
I have something similar for the realtime (streaming) version of the job,
configured with a processing-time gap:
.keyB…
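For the archives, a rough sketch of the streaming version referenced above; the
key selector, gap length, and reduce function are placeholders, since the
original snippet is cut off at .keyB:

    import org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    // Keyed session windows with a processing-time gap: the reduce result for
    // a key is emitted once that key has seen no events for the whole gap.
    stream
        .keyBy(event -> event.getKey())                  // placeholder key selector
        .window(ProcessingTimeSessionWindows.withGap(Time.minutes(10)))
        .reduce((a, b) -> a.merge(b));                   // placeholder reduce function

For the end-of-job requirement, one hedged option is to run the bounded version
of the job in BATCH execution mode, where windows fire when the input ends
rather than on timers.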
Hi all,
I’m trying to migrate a job from Flink 1.14 to 1.15.1. The job consumes from a
Kinesis stream and writes to S3.
The stop operation works well on 1.14; however, on 1.15.1 it fails (both with
and without a savepoint).
The job fails with different exceptions when there is data flowing through…