Hi,

when I stop my Flink application on a standalone cluster, one of the tasks 
cannot exit gracefully, and the task managers become lost (or detached?): I 
can no longer see them in the web UI, although the task manager processes are 
still running on the slave servers.


What could be the possible cause? My application runs an over-window 
aggregation on a datastream table, and the results are written to a custom 
MySQL sink that uses org.apache.commons.dbcp.BasicDataSource. The close 
methods are called for the PreparedStatement, Connection, and BasicDataSource 
within AbstractRichFunction::close().
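

For reference, here is a minimal sketch of how my sink is structured (the 
class name, driver, URL, and SQL below are placeholders, not my actual code):

import org.apache.commons.dbcp.BasicDataSource;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.types.Row;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class MySqlSink extends RichSinkFunction<Row> {

    private transient BasicDataSource dataSource;
    private transient Connection conn;
    private transient PreparedStatement stmt;

    @Override
    public void open(Configuration parameters) throws Exception {
        dataSource = new BasicDataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");  // placeholder
        dataSource.setUrl("jdbc:mysql://host:3306/db");          // placeholder
        conn = dataSource.getConnection();
        stmt = conn.prepareStatement("INSERT INTO t (v) VALUES (?)"); // placeholder
    }

    @Override
    public void invoke(Row value) throws Exception {
        stmt.setObject(1, value.getField(0));
        stmt.executeUpdate();
    }

    @Override
    public void close() throws Exception {
        // Close in reverse order of acquisition.
        if (stmt != null) stmt.close();
        if (conn != null) conn.close();
        if (dataSource != null) dataSource.close();
    }
}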


Could it be that the MySQL JDBC driver doesn't handle interrupts properly? 
Should I call PreparedStatement::cancel()? I found a similar issue here [1]. 
Thank you for your help.
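

If cancel() is the right approach, I imagine close() in the sketch above 
would become something like this (again only a sketch, untested):

    @Override
    public void close() throws Exception {
        if (stmt != null) {
            try {
                stmt.cancel();  // ask the driver to abort a statement that is stuck
            } catch (SQLException ignored) {
                // cancel() is best-effort; close everything regardless
            }
            stmt.close();
        }
        if (conn != null) conn.close();
        if (dataSource != null) dataSource.close();
    }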



[1]: https://stackoverflow.com/questions/40127228/flink-cannot-cancel-a-running-job-streaming


Best

Yan
