since there are other metrics reported correctly.
How do I check whether the number of requests exceeds that capacity?
On Tue, Jun 23, 2020 at 11:32 PM seeksst seek...@163.com wrote:
Hi,
If you don’t care about losing some metrics, you can edit log4j.properties to
ignore it.
log4j.logger.org.apache.flink.runtime.metrics=ERROR
BTW, can all machines telnet to the Datadog port? Could the number of
requests exceed Datadog's processing capacity?
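For reference, a minimal connectivity check from each machine might look
like this (the host and port are assumptions; use whatever endpoint your
reporter actually posts to, e.g. the Datadog HTTP endpoint on 443):

    telnet app.datadoghq.com 443

and the Datadog reporter itself is configured in flink-conf.yaml along
these lines (the apikey value is a placeholder):

    metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
    metrics.reporter.dghttp.apikey: <your-api-key>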
Original message
From: Fanbin Bu
Hi,
Recently I found a problem when a job failed in 1.10.0: Flink didn't
release the resources first.
As you can see, I use Flink on YARN, and it doesn't allocate a task
manager because there is no more memory left.
If I cancel the job, the cluster has more memory.
In 1.8.2, the job restarts normally.
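(For reference, the cancel in question would be issued against the YARN
session roughly like this; the application id and job id are placeholders:

    ./bin/flink cancel -yid application_XXXX_YYYY <jobId> )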
What is the exception thrown?
Best,
Yun Tang
From: seeksst seek...@163.com
Sent: Wednesday, April 22, 2020 18:17
To: user user@flink.apache.org
Subject: Flink 1.10.0 stop command
Hi,
When I test 1.10.0, I find I must set a savepoint path, otherwise I can't
stop the job. I am confused about this because, as far as I know, a
savepoint is often larger than a checkpoint, so I usually resume a job
from a checkpoint. Another problem is that sometimes a job throws an
exception and I can't trigger a savepoint.
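A sketch of the two CLI commands in question (the savepoint directory is
a placeholder):

    ./bin/flink stop -p hdfs:///flink/savepoints <jobId>   # stop: drains the job and takes a savepoint
    ./bin/flink cancel <jobId>                             # cancel: no savepoint required

In 1.10, stop always goes through a savepoint (the path can also come from
state.savepoints.dir in flink-conf.yaml), which is why the command fails
when neither is set.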
may decide many things, and is limited by Calcite, so the conversion may
pick up information that I don't want and can't change. It seems hard to
solve, and I have no idea.
Best,
L
Original message
From: Jark Wu
To: Till Rohrmann; Danny Chan
Cc: seeksst; user; Timo Walther
Sent: Friday, April 17, 2020 22:53
Subject: Re
Hi, All
Recently, I tried to upgrade Flink from 1.8.2 to 1.10, but I met some
problems with functions. In 1.8.2, there are just built-in functions and
user-defined functions, but in 1.10 there are four categories of functions.
I defined a function named JSON_VALUE in my system; it doesn't exist
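For context, this is roughly how such a scalar function is registered in
the 1.8/1.10 Table API (the JsonValue class, its eval signature, and the
tableEnv variable are made-up placeholders for illustration):

    import org.apache.flink.table.functions.ScalarFunction;

    // hypothetical scalar UDF standing in for the user's JSON_VALUE
    public class JsonValue extends ScalarFunction {
        public String eval(String json, String path) {
            // a real implementation would parse `json` and extract `path`
            return null;
        }
    }

    // registration under the name that now clashes with the built-in category
    tableEnv.registerFunction("JSON_VALUE", new JsonValue());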
Hi:
According to my experience, there are several possible reasons for
checkpoint failure:
1. If you use RocksDB as the backend, insufficient disk space will cause
it, because files are saved on the local disk, and you may see an exception.
2. The sink can't be written to, so not all parallel subtasks can complete,
Hi, everyone:
I'm a Flink SQL user, and the version is 1.8.2.
Recently I have been confused about memory and backpressure. I have two
jobs on YARN; because they exceed their memory limits, they are frequently
killed by YARN.
For one job, I have 3 task managers and a parallelism of 6; each task
manager has 8G memory. It reads from Kafka, one minute
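For reference, the memory knobs that usually matter for YARN kills on 1.8
look like this in flink-conf.yaml (the values are examples, not
recommendations):

    taskmanager.heap.size: 8192m           # per-TM memory requested from YARN
    containerized.heap-cutoff-ratio: 0.25  # fraction kept outside the JVM heap for off-heap use (e.g. RocksDB)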