Hi Arnaud,
It seems that the TaskExecutor terminated exceptionally. I think you need
to check the logs of
container_e38_1604477334666_0960_01_04 to figure out why it crashed or
shut down.
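If log aggregation is enabled on the cluster, the logs of a single container can usually be fetched with the `yarn logs` CLI. A minimal sketch; the IDs below are hypothetical placeholders (substitute the real application and container IDs from your cluster), and the command is only assembled and printed here rather than executed:

```shell
# Hypothetical IDs for illustration only; substitute your own.
APP_ID="application_1111111111_0001"
CONTAINER_ID="container_e01_1111111111_0001_01_000001"
# Assemble the command to run on a cluster gateway node (yarn itself is not invoked here):
echo "yarn logs -applicationId ${APP_ID} -containerId ${CONTAINER_ID}"
```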
Best,
Yang
LINZ, Arnaud wrote on Monday, November 16, 2020 at 7:11 PM:
Hello,
I'm running Flink 1.10 on a yarn cluster. I have a streaming application that, when under heavy load, fails from time to time with this unique error message in the whole yarn log:
(...)
2020-11-15 16:18:42,202 WARN org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Receiv (...)
Hi Yang,
After I copied the logic from `YarnLogConfigUtil` into my own deployer (calling its logic instead of copying it might be a better option), the logs now show up normally.
Thanks again for the kind help.
At 2020-11-16 17:28:47, "马阳阳" wrote:
Hi Yang,
I checked the `YarnLogConfigUtil`; it does some work to set up the log configuration.
Should I copy the logic to my deployer?
At 2020-11-16 17:21:07, "马阳阳" wrote:
Hi Yang,
Thank you for your reply.
I set the value of "$internal.deployment.config-dir" to the Flink configuration directory, and the configuration showed up on the Flink web UI. But it still does not work, so I wonder: what should I set as the value of "$internal.deployment.config-dir"?
If you are using your own deployer (i.e., a Java program that calls the Flink client API to submit Flink jobs), you need to check in the JobManager configuration shown in the web UI whether "$internal.yarn.log-config-file" is correctly set. If not, maybe you need to set "$internal.deployment.config-dir" in your deployer (...)
If you have set the environment variable FLINK_CONF_DIR, then it will have a higher priority. I think that could be why you changed the log4j.properties in the conf directory but it did not take effect.
Yes, if you have changed the log4j.properties, you need to relaunch the Flink application. Although we (...)
Hi Yang,
I'm able to see the taskmanager and jobmanager logs after I changed the log4j.properties file (/usr/lib/flink/conf). Thank you!
I updated the file as shown below. I had to kill the app (yarn application -kill) and start the Flink job again to get the logs. This doesn't seem like an efficient w (...)
You could issue "ps -ef | grep container_id_for_some_tm". And then you will find the following java options about log4j:
-Dlog.file=/var/log/hadoop-yarn/containers/application_xx/container_xx/taskmanager.log
-Dlog4j.configuration=file:./log4j.properties
-Dlog4j.configurationFile=file:./log4j.properties
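The log-related `-D...` options can be picked out of the `ps` output mechanically. A minimal sketch that filters them from one captured process line; the `ps_line` value below is a stand-in for real `ps -ef` output on the NodeManager machine, not an actual process listing:

```shell
# Stand-in for one line of real `ps -ef | grep container_...` output.
ps_line='java -Dlog.file=/var/log/hadoop-yarn/containers/application_xx/container_xx/taskmanager.log -Dlog4j.configuration=file:./log4j.properties org.apache.flink.runtime.taskexecutor.TaskManagerRunner'
# Split on whitespace and keep only the log-related -D options.
printf '%s\n' $ps_line | grep '^-Dlog'
```

In practice you would replace the sample line with a pipe from the real `ps -ef` output.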
Sure. I will check that and get back to you. Could you please share how to check the java dynamic options?
Best,
Diwakar
On Mon, Nov 2, 2020 at 1:33 AM Yang Wang wrote:
If you have already updated the log4j.properties and it still does not work, then I suggest logging in to the Yarn NodeManager machine and checking that the log4j.properties in the container workdir is correct. You could also have a look at whether the java dynamic options are correctly set.
I think it should work.
Hi Yang,
Thank you so much for taking a look at the log files. I changed my log4j.properties. Below is the actual file that I got from the EMR 6.1.0 distribution of Flink 1.11. I observed that it is different from the Flink 1.11 that I downloaded, so I changed it. Still, I didn't see any logs.
*Actual*
log (...)
Hi Diwakar Jha,
From the logs you have provided, everything seems to be working as expected. The JobManager and TaskManager java processes have been started with the correct dynamic options, especially for the logging.
Could you share the content of $FLINK_HOME/conf/log4j.properties? I think there's something (...)
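For comparison, the stock log4j.properties shipped with Flink 1.11 (which uses Log4j 2 with a properties-format configuration) looks roughly like the sketch below. This is an illustration from memory, not the canonical file; the exact contents in a given distribution (e.g. EMR's) may differ, so always diff against the file in your own Flink tarball:

```properties
# Log everything at INFO into the file the launcher points at via -Dlog.file.
rootLogger.level = INFO
rootLogger.appenderRef.main.ref = MainAppender

appender.main.name = MainAppender
appender.main.type = File
appender.main.append = false
appender.main.fileName = ${sys:log.file}
appender.main.layout.type = PatternLayout
appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```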