Hi,

We have a custom Hive UDAF that aggregates a large amount of data per group. The reduce task fails with the stack trace below. Any suggestion would be very helpful.

The MR job had 5 map tasks, all of which completed fine. Of the 6 reduce tasks, only 5 completed. Here is a sample MR job: job_1476197655848_2037226
Error:
INFO communication thread org.apache.hadoop.mapred.Task: Communication
exception: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.io.BufferedReader.<init>(BufferedReader.java:105)
at java.io.BufferedReader.<init>(BufferedReader.java:116)
at org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:525)
at org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.updateProcessTree(ProcfsBasedProcessTree.java:223)
at org.apache.hadoop.mapred.Task.updateResourceCounters(Task.java:847)
at org.apache.hadoop.mapred.Task.updateCounters(Task.java:986)
at org.apache.hadoop.mapred.Task.access$500(Task.java:79)
at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:735)
at java.lang.Thread.run(Thread.java:745)

Thank you,
Srinivas Pogiri