I found the reason; it was related to sc (the SparkContext). Thanks
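
For anyone else who hits this trace: the note above doesn't spell out what the sc problem was, but a common cause of this kind of shutdown-time noise is a SparkContext that is stopped too early or never stopped, so executors are killed (exit status 143, i.e. SIGTERM) while FileAppender is still draining their stderr. A minimal sketch, assuming the fix is simply to stop the context exactly once after all jobs finish (the object name CleanShutdown is made up for illustration):

    import org.apache.spark.{SparkConf, SparkContext}

    object CleanShutdown {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("clean-shutdown"))
        try {
          // The actual job; this stands in for whatever the application does.
          val total = sc.parallelize(1 to 100).map(_ * 2).reduce(_ + _)
          println(s"total = $total")
        } finally {
          // Stop the context exactly once, only after every job has finished,
          // so executors are released cleanly instead of being killed mid-write.
          sc.stop()
        }
      }
    }

If the driver exits without a clean sc.stop(), the worker kills the executors, and the "Stream closed" ERROR above is typically just noise from the executor's stderr pipe closing during the kill.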

On Tue, Jul 14, 2015 at 9:45 PM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> Someone else also reported this error with Spark 1.4.0.
>
> Thanks
> Best Regards
>
> On Tue, Jul 14, 2015 at 6:57 PM, Arthur Chan <arthur.hk.c...@gmail.com>
> wrote:
>
>> Hi, below is the log from the worker.
>>
>>
>> 15/07/14 17:18:56 ERROR FileAppender: Error writing stream to file /spark/app-20150714171703-0004/5/stderr
>> java.io.IOException: Stream closed
>>         at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170)
>>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:283)
>>         at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>>         at java.io.FilterInputStream.read(FilterInputStream.java:107)
>>         at org.apache.spark.util.logging.FileAppender.appendStreamToFile(FileAppender.scala:70)
>>         at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply$mcV$sp(FileAppender.scala:39)
>>         at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
>>         at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
>>         at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
>>         at org.apache.spark.util.logging.FileAppender$$anon$1.run(FileAppender.scala:38)
>>
>> 15/07/14 17:18:57 INFO Worker: Executor app-20150714171703-0004/5 finished with state KILLED exitStatus 143
>>
>> 15/07/14 17:18:57 INFO Worker: Cleaning up local directories for application app-20150714171703-0004
>>
>> 15/07/14 17:18:57 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkExecutor@10.10.10.1:52635] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
>>
>
>
