I guess the problem is pretty much what the exception message says: the HDFS
output fails because all datanodes are reported as bad.
If you activate execution retries, the system will re-execute the job; if the
datanodes are healthy by then, the job should succeed.
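In case it helps, a minimal sketch of enabling retries cluster-wide in flink-conf.yaml — the key names below are what I recall from the 0.10.x docs, so please verify them against your release (retries can also be set per job via ExecutionEnvironment.setNumberOfExecutionRetries(n)):

```yaml
# flink-conf.yaml — assumed 0.10.x key names, double-check for your version
execution-retries.default: 3   # re-run a failed job up to 3 times
execution-retries.delay: 10 s  # pause between attempts so the datanodes can recover
```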
Greetings,
Stephan
On Wed, Dec 16, 2015
Hi,
I am receiving the following exception while trying to run the TeraSort
program on Flink. My configuration is as follows:
Hadoop: 2.6.2
Flink: 0.10.1
Server 1:
Hadoop data and name node
Flink job and task manager
Server 2:
Flink task manager
org.apache.flink.client.program.ProgramI