Hi Stephan, thanks for your support.
I was able to track down the problem a few days ago. Unirest was the one to
blame: I was using it in some map functions to connect to external services
and, for some reason, it was using insane amounts of virtual memory.
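In case it helps anyone hitting the same thing, below is a minimal sketch of one
way to keep Unirest under control inside a Flink map function: reuse the shared
client and shut it down in close() so its background threads and connections are
released when the task ends. This assumes the classic com.mashape Unirest 1.x
API; the endpoint URL and the String-to-String record types are placeholders,
not the actual job.

import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.Unirest;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Sketch: enrich records via an external REST service while making sure
// Unirest's internal HttpClient threads are released when the task shuts down.
public class EnrichMapFunction extends RichMapFunction<String, String> {

    @Override
    public void open(Configuration parameters) throws Exception {
        // Keep the connection pool small; the defaults can be generous.
        Unirest.setConcurrency(10, 2); // max total connections, max per route
    }

    @Override
    public String map(String id) throws Exception {
        // Hypothetical endpoint, used only for illustration.
        HttpResponse<String> response =
                Unirest.get("http://example.com/api/items/" + id).asString();
        return response.getBody();
    }

    @Override
    public void close() throws Exception {
        // Releases Unirest's background threads and pooled connections.
        Unirest.shutdown();
    }
}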
Paulo Cezar
On Mon, Dec 19, 2016 at 11:30 AM
- Are you using RocksDB?
No.
- What is your flink configuration, especially around memory settings?
I'm using the default config with 2GB for the JobManager and 5GB for the TaskManagers.
I'm starting Flink via "./bin/yarn-session.sh -d -n 5 -jm 2048 -tm 5120 -s 4 -nm 'Flink'"
- What do you use for T
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Best regards,
Paulo Cezar
ne to avoid
reprocessing.
[]'s
Paulo Cezar
On Mon, Aug 1, 2016 at 9:15 PM, Paulo Cezar wrote:
>
>> Hi folks,
>>
>>
>> I'm trying to run a DataSet program but after around 200k records are
>> processed a "java.lang.OutOfMemoryError: unable to create new native thread"
>> stops me.
My map functions connect to external services via RPC or REST APIs
to enrich the raw data with info from other sources.
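For reference, a quick way to see whether threads are piling up before the OS
refuses to create new ones is to log the JVM's live thread count from inside the
job. The sketch below is illustrative only: the class name and the pass-through
types are hypothetical, and ThreadMXBean.getThreadCount() is standard JDK.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import org.apache.flink.api.common.functions.MapFunction;

// Sketch: log the live JVM thread count every N records so a thread leak
// shows up in the TaskManager logs before the native-thread OOM is thrown.
public class ThreadCountLoggingMapper implements MapFunction<String, String> {

    private static final ThreadMXBean THREADS = ManagementFactory.getThreadMXBean();
    private long seen = 0;

    @Override
    public String map(String value) throws Exception {
        if (++seen % 10_000 == 0) {
            System.out.println("live JVM threads: " + THREADS.getThreadCount()
                    + " after " + seen + " records");
        }
        return value; // pass-through; the actual enrichment logic would go here
    }
}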
Might I be doing something wrong, or should I really have more memory available?
Thanks,
Paulo Cezar