Thanks for the information. My problem is resolved now.


I have one more issue.



I am not able to save the core dump file. It always shows *"# Failed to write core
dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c
unlimited" before starting Java again"*



I set the core dump limit to unlimited on all nodes, using the setting below:
   Edit the /etc/security/limits.conf file and add the line
   "* soft core unlimited".
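
For reference, a minimal /etc/security/limits.conf entry that raises both the
soft and the hard limit might look like this (the whitespace between fields is
not significant):

   *    soft    core    unlimited
   *    hard    core    unlimited

Note that pam_limits applies these values at login time; a daemon started by
init/systemd does not go through PAM, so it needs the limit set in its own
startup environment.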

I rechecked using:  $ ulimit -a

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 241204
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 241204
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
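
The executors on YARN are launched by the NodeManager rather than by a login
shell, so the limit that actually applies to them is whatever the NodeManager
process is running with, which can differ from what ulimit reports in an
interactive session. A quick way to confirm (a sketch; <nm-pid> is a
placeholder for the NodeManager's process id, e.g. taken from jps):

   $ grep -i core /proc/<nm-pid>/limits

If the "Max core file size" line there still shows 0, the limits.conf change
never reached the daemon or its child JVMs.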

But when my Spark application crashes, it still shows the error "Failed to
write core dump. Core dumps have been disabled. To enable core dumping, try
"ulimit -c unlimited" before starting Java again".


Regards

Prateek





On Wed, Jun 29, 2016 at 9:30 PM, dhruve ashar <dhruveas...@gmail.com> wrote:

> You can look at the yarn-default configuration file.
>
> Check your log-related settings to see whether log aggregation is enabled, and
> also check the log retention duration to see if it is too small and files are
> being deleted.
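>
> For example (a sketch; the property names are from yarn-default.xml and the
> values shown are only illustrative), the relevant entries in yarn-site.xml
> would look something like:
>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.log.retain-seconds</name>
>     <!-- only used when log aggregation is disabled; default is 10800 (3 hours) -->
>     <value>10800</value>
>   </property>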
>
> On Wed, Jun 29, 2016 at 4:47 PM, prateek arora <prateek.arora...@gmail.com
> > wrote:
>
>>
>> Hi
>>
>> My Spark application crashed and showed the following information:
>>
>> LogType:stdout
>> Log Upload Time:Wed Jun 29 14:38:03 -0700 2016
>> LogLength:1096
>> Log Contents:
>> #
>> # A fatal error has been detected by the Java Runtime Environment:
>> #
>> #  SIGILL (0x4) at pc=0x00007f67baa0d221, pid=12207, tid=140083473176320
>> #
>> # JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build
>> 1.7.0_67-b01)
>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode
>> linux-amd64 compressed oops)
>> # Problematic frame:
>> # C  [libcaffe.so.1.0.0-rc3+0x786221]  sgemm_kernel+0x21
>> #
>> # Failed to write core dump. Core dumps have been disabled. To enable core
>> dumping, try "ulimit -c unlimited" before starting Java again
>> #
>> # An error report file with more information is saved as:
>> #
>>
>> /yarn/nm/usercache/ubuntu/appcache/application_1467236060045_0001/container_1467236060045_0001_01_000003/hs_err_pid12207.log
>>
>>
>>
>> but I am not able to find the
>>
>> "/yarn/nm/usercache/ubuntu/appcache/application_1467236060045_0001/container_1467236060045_0001_01_000003/hs_err_pid12207.log"
>> file. It is deleted automatically after the Spark application
>> finishes.
>>
>>
>> How can I retain the report file? I am running Spark on YARN.
>>
>> Regards
>> Prateek
>>
>>
>>
>>
>>
>>
>
>
> --
> -Dhruve Ashar
>
>
