Max,
Thanks for the answer...
What I am saying is that my program is indeed not running, yet garbage
collection does not seem to occur after cancelling the job. As you saw in the
log, the memory is still 99% used even though I cancelled the job, and I
cannot seem to run another job. I've had to
Thank you Ufuk! That helped a lot.
But I have another problem now.
Am I missing something?
Caused by: java.net.UnknownHostException: MYBUCKETNAME
at java.net.InetAddress.getAllByName0(InetAddress.java:1250)
at java.net.InetAddress.getAllByName(InetAddress.java:1162)
at
Hi Emmanuel,
In Java, the garbage collector will always run periodically. So remotely
executing it won't make any difference.
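To illustrate why (a minimal standalone sketch, not Flink-specific code): the
garbage collector can only reclaim objects that are no longer reachable, so
forcing a collection cannot free memory that a still-running job holds
references to.

```java
import java.util.ArrayList;
import java.util.List;

public class GcReachabilityDemo {
    public static void main(String[] args) {
        // A buffer that is still referenced by a live variable,
        // standing in for state held by a running job.
        List<byte[]> live = new ArrayList<>();
        live.add(new byte[10 * 1024 * 1024]); // 10 MB, still reachable

        // System.gc() is only a hint to the JVM, and even when it runs
        // it can only reclaim unreachable objects. The buffer above
        // survives because 'live' still references it.
        System.gc();

        // The buffer is still fully allocated after the GC run.
        System.out.println(live.get(0).length);
    }
}
```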
If you want to reuse the existing Java process without restarting it, you
have to stop executing the program code that is causing the
OutOfMemoryError. Usually, this
Hey Pietro!
You have to add the following lines to your flink-conf.yaml:
fs.s3.accessKey:
fs.s3.secretKey:
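For reference, the relevant section of flink-conf.yaml would look like the
following (the values below are placeholders, not real credentials):

```
# AWS credentials for Flink's S3 file system (placeholder values)
fs.s3.accessKey: YOUR_AWS_ACCESS_KEY_ID
fs.s3.secretKey: YOUR_AWS_SECRET_ACCESS_KEY
```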
I will fix the error message to include a hint on how to configure this
correctly.
– Ufuk
On Tue, Mar 31, 2015 at 10:53 AM, pietro wrote:
> Dear all,
> I have been developing a Flink
Dear all,
I have been developing a Flink application that has to run on Amazon Elastic
Map Reduce.
For convenience the data that the application has to read and write are on
the S3.
But I have not been able to access S3. This is the error I got:
org.apache.flink.client.program.ProgramInvocationE