Hi Oscar,
> but we don't understand why this incremental checkpoint keeps increasing
AFAIK, when performing an incremental checkpoint, the RocksDBStateBackend will
upload the newly created SST files to remote storage. The total size of these
files is the incremental checkpoint size. However, the new c
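For context, incremental checkpointing for RocksDB is driven by configuration; a minimal sketch of the relevant flink-conf.yaml entries (the checkpoint directory is a placeholder):
```
state.backend: rocksdb
state.backend.incremental: true
state.checkpoints.dir: s3://<bucket>/checkpoints
```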
Hi team,
I am using the JdbcSink from the flink-connector-jdbc artifact, version
3.1.0-1.17. I am trying to write a Sink wrapper that will internally call
the invoke and open methods of the JDBC sink. While implementing, I see
that JdbcSink.sink() returns a SinkFunction, which only exposes the inv
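If wrapping is still the goal, one possible direction is to delegate to the returned SinkFunction and forward the lifecycle calls through FunctionUtils. A minimal sketch, assuming the instance returned by JdbcSink.sink() is a RichSinkFunction internally (the wrapper class name here is hypothetical):
```java
import org.apache.flink.api.common.functions.util.FunctionUtils;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class DelegatingJdbcSink<T> extends RichSinkFunction<T> {

    private final SinkFunction<T> delegate;

    public DelegatingJdbcSink(SinkFunction<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Forward the runtime context and open() to the wrapped sink if it is a rich function.
        FunctionUtils.setFunctionRuntimeContext(delegate, getRuntimeContext());
        FunctionUtils.openFunction(delegate, parameters);
    }

    @Override
    public void invoke(T value, Context context) throws Exception {
        // Custom wrapper logic could go here before/after delegating.
        delegate.invoke(value, context);
    }

    @Override
    public void close() throws Exception {
        FunctionUtils.closeFunction(delegate);
    }
}
```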
Hi, Prashant.
I think Yu Chen has given professional troubleshooting ideas. Another thing I
want to ask is whether you use some
user-defined function to store some objects. You could first dump the memory
and get more details to check for memory leaks.
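For example, a heap dump of the task manager JVM can be captured with jmap and then inspected in a tool like Eclipse MAT (a minimal sketch; the PID and output path are placeholders):
```
jmap -dump:live,format=b,file=/tmp/taskmanager-heap.hprof <taskmanager-pid>
```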
--
Best!
Xuyang
On 2023-1
Hi, Dan.
Can you provide more details?
> I'm seeing unexpected behavior where it appears like the sql is executed
> locally.
Did you find a MiniCluster started locally running your program?
> In my case the remote environment is inside AWS and it doesn't appear to pick
> up the region and cr
Hi, Tauseef,
AFAIK, the most common way to get a list of tasks that a particular
job executes is through Flink's Web UI or REST API.
Using the Flink Web UI:
When you run a Flink cluster, a Web UI is launched by default on port
8081 of the JobManager. By accessing this Web UI through a browser,
y
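For example, the same information can be retrieved via the REST API (a minimal sketch; host, port, and job ID are placeholders):
```
curl http://<jobmanager-host>:8081/jobs            # list job IDs and their status
curl http://<jobmanager-host>:8081/jobs/<job-id>   # job details, including its task vertices
```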
Hi Dulce,
Thanks for your clarification. From my experience it is usually a
dependency problem, and you may verify this by checking whether you
have correctly included the dependencies in your jar (like `jar -tf
xxx.jar | grep ExecutionConfig`), and whether the dependencies in your
local environme
Hi Prashant,
The OOMKill is most likely caused by the working set memory exceeding the pod limit.
You could first increase the JVM overhead memory via the following parameters and
observe whether the problem is resolved.
```
taskmanager.memory.jvm-overhead.max=1536m
taskmanager.memory.jvm-overhead.min=1536m
```
And if
Thanks! Yeah, I am not sure why it's handled so differently from non-native
k8s mode.
If it's possible I think this would be a huge improvement.
On Mon, Nov 20, 2023, 12:55 PM Alexis Sarda-Espinosa <
sarda.espin...@gmail.com> wrote:
> Hi Trystan, I'm actually not very familiar with the operator's i
Hi,
If I use StreamExecutionEnvironment.createRemoteEnvironment and then
var tEnv = StreamTableEnvironment.create(env) from the resulting remote
StreamExecutionEnvironment, will any SQL executed using tEnv.executeSql be
executed remotely inside the Flink cluster?
I'm seeing unexpected behavior wh
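For reference, the pattern described above looks roughly like this (a minimal sketch; host, port, and jar path are placeholders):
```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class RemoteSqlExample {
    public static void main(String[] args) {
        // Connect to a remote Flink cluster; the jar containing the user code is shipped along.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host", 8081, "/path/to/job.jar");
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // The question is whether statements submitted here run on the remote cluster.
        tEnv.executeSql("SHOW TABLES").print();
    }
}
```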
As per the docs, the `inputQueueLength` metric refers to the number of
queued input buffers, and cannot be used on its own in order to
determine buffered records.
For instance, if I know that there are 5 queued input buffers, I cannot
conclude anything regarding buffered records if the size of
Hello,
We have been facing this OOMKill issue, where task managers are getting
restarted with this error.
I am seeing memory consumption increasing in a linear manner. I have given
memory and CPU as high as possible but am still facing the same issue.
We are using RocksDB for the state backend, is t
Hi Dimitris
Maybe you can use the `inputQueueLength` metric.
Best,
Feng
On Tue, Nov 28, 2023 at 12:07 AM Dimitris Mpanelas via user <
user@flink.apache.org> wrote:
> Hello,
>
> I am trying to determine the buffered records in the input buffers of a
> task. I found the inputQueueSize metric. Ac
Hi Oscar
Did you set
state.backend.latency-track.keyed-state-enabled=true;
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/config/#state-backend-latency-track-keyed-state-enabled
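For reference, the related latency-tracking options look like this in flink-conf.yaml (a minimal sketch; the sample interval and history size shown are the documented defaults):
```
state.backend.latency-track.keyed-state-enabled: true
state.backend.latency-track.sample-interval: 100
state.backend.latency-track.history-size: 128
```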
Best,
Feng
On Mon, Nov 27, 2023 at 8:42 PM Oscar Perez via user
wrote:
> Hi,
>
> We ar
Hello,
I am trying to determine the buffered records in the input buffers of a task. I
found the inputQueueSize metric. According to the docs it is "The real size of
queued input buffers in bytes". The docs also state that "The size for local
input channels is always 0 since the local channel ta
Hello, Tauseef,
Can you give more details? Are your task manager and job manager on the
same VM?
How did you configure the JobManager address in the task manager conf file?
Did you modify the binding in the configuration files?
Benoit
On Mon, Nov 27, 2023 at 14:29, Tauseef Janvekar wrote:
Hi Lasse,
The default flink-conf.yaml file bundled in the distribution should already
have a preset env.java.opts.all config for Java 17. Have you tried that?
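For illustration, the Java 17 setup relies on a set of --add-opens/--add-exports JVM flags; an abbreviated sketch of what such an entry looks like (the bundled default contains the complete list):
```
env.java.opts.all: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED
```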
Best,
Zhanghao Chen
From: Lasse Nedergaard
Sent: Monday, November 27, 2023 21:20
To: user
Subject: Fli
Hi!
I am currently working with Flink's Table API (Flink version 1.17, Java 11). I
am pulling streaming data from a Kafka topic, processing it, and I want to write
the processed data to Azure Blob Storage. I am using the Filesystem SQL
connector (following this page:
https://nightlies.apache.or
Hi Hang,
Thanks for the link. I will wait for the 3.1 connector release and hope it will be included.
Med venlig hilsen / Best regards
Lasse Nedergaard
On 27 Nov 2023 at 12:00, Hang Ruan wrote:
Hi, Lasse.
There is already a discussion about the connector releases for 1.18[1].
Best,
Hang
[1] https://lists.a
Hi
I need some help to figure out how to get Flink 1.18 running on Java 17.
According to the documentation for Java compatibility, I have to set
env.java.opts.all, as I use data types and generic lists and maps from the JDK.
I need to configure it so it works both for tests using a mini cluster and o
Hi,
We are using Flink 1.16 and we would like to monitor the state metrics of a
certain job. Looking at the documentation I see that there are some state
access latencies:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/ops/metrics/
Namely, I would like to access the following:
*
Hi,
We have a long running job in production and we are trying to understand
the metrics for this job, see attached screenshot.
We have enabled incremental checkpoint for this job and we use RocksDB as a
state backend.
When deployed from a fresh state, the initial checkpoint size is about 2.41G.
Dear Team,
How do we get a list of the tasks that a particular job executes?
If I go to Task Manager, I do not see any tasks. I am also facing an
issue where the job manager is not able to access the task manager, but my jobs are
completing with no issues.
Any help is appreciated.
Thanks,
Tauseef
Hi, Lasse.
There is already a discussion about the connector releases for 1.18[1].
Best,
Hang
[1] https://lists.apache.org/thread/r31f988m57rtjy4s75030pzwrlqybpq2
Lasse Nedergaard wrote on Fri, Nov 24, 2023 at 22:57:
> Hi
>
> From the documentation I can see there isn’t any ES support in Flink 1.18
> righ
Dear Team,
We are getting the error messages below in our logs.
Any help on how to resolve would be greatly appreciated.
2023-11-27 08:14:29,712 INFO org.apache.pekko.remote.transport.ProtocolStateActor [] - No response from remote for outbound association. Associate timed out after [2