Thanks for the hint! The problem appears to be a corrupted input file, not
a Hadoop issue.
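
For anyone who hits the same trace: the fix on our side was to stop
indexing the split fields blindly and to skip (and count) corrupt records
instead. A rough sketch of the guard, written against the old mapred API
from the stack trace (the tab delimiter, the four-field layout, and the
class name are assumptions about our log format, not anything Hadoop
requires):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class SafeMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    String[] fields = value.toString().split("\t");
    if (fields.length < 4) {
      // Count and skip corrupt records instead of letting
      // fields[3] throw ArrayIndexOutOfBoundsException.
      reporter.incrCounter("LogHandler", "corruptRecords", 1);
      return;
    }
    output.collect(new Text(fields[0]), new Text(fields[3]));
  }
}

The counter shows up on the job details page, so you can see at a glance
how many records were dropped.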

Paul


Hi Paul,

Looking at the stack trace, the exception is being thrown from your
map method. Can you put some debugging in there to diagnose it?
Logging the size of the array and the index you are trying to access
should help. You can write to standard error and check the task logs,
or use Reporter's setStatus() method as a quick way to surface messages
in the web UI.
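
Here is roughly what I mean, as a sketch against the old mapred API
(splitting on tabs and the index 3 are guesses based on your stack
trace, not on your actual code):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class DebugMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    String[] fields = value.toString().split("\t");
    if (fields.length <= 3) {
      // Standard error ends up in the per-task logs.
      System.err.println("Only " + fields.length + " fields at byte offset "
          + key + ": " + value);
      // The status string is visible right away in the web UI.
      reporter.setStatus("bad record at offset " + key);
      return;
    }
    output.collect(new Text(fields[0]), new Text(fields[3]));
  }
}

Note that setStatus() only keeps the most recent message, so use stderr
for anything you want a permanent record of.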

Cheers,
Tom

On Mon, Mar 16, 2009 at 11:51 PM, psterk <paul.st...@sun.com> wrote:
>
> Hi,
>
> I have been running a Hadoop cluster successfully for a few months.
> During today's run, I am seeing a new error and it is not clear to me
> how to resolve it. Below are the stack traces and the configuration
> file I am using. Please share any tips you may have.
>
> Thanks,
> Paul
>
> 09/03/16 16:28:25 INFO mapred.JobClient: Task Id :
> task_200903161455_0003_m_000127_0, Status : FAILED
> java.lang.ArrayIndexOutOfBoundsException: 3
>        at com.sun.pinkdots.LogHandler$Mapper.map(LogHandler.java:71)
>        at com.sun.pinkdots.LogHandler$Mapper.map(LogHandler.java:22)
>        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:219)
>        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
>
