Sure, here it is. I'm pretty sure it's something else. Any suggestions on other
avenues to investigate from folks who've seen this?



#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f543716cce9, pid=8260, tid=139999226316544
#
# JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x632ce9]  jni_GetByteArrayElements+0x89
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/scox/skylr/skylr-analytics/hs_err_pid8260.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#
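As a side note, the report above says core dumps are disabled. A minimal sketch (assuming a Linux shell whose hard limit permits raising the soft limit) of enabling them before launching the JVM, so the next crash leaves a core file to inspect:

```shell
# Raise the core-file size limit for this shell session; child
# processes (including the JVM) inherit it.
ulimit -c unlimited

# Confirm the new soft limit before starting Java.
ulimit -c
```

The setting only applies to processes started from this shell, so it must be done in the same session (or init script) that launches the executor.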


________________________________
From: andy petrella [andy.petre...@gmail.com]
Sent: Thursday, April 17, 2014 3:40 PM
To: user@spark.apache.org
Subject: Re: Spark 0.9.1 core dumps on Mesos 0.18.0

If you can test it quickly, an option would be to try with the exact same
version that Sean used (1.7.0_51)?

Maybe it was a bug fixed in 51 and a regression has been introduced in 55 :-D

Andy
On Thu, Apr 17, 2014 at 9:36 PM, Steven Cox <s...@renci.org> wrote:
FYI, I've tried older versions (JDK 6.x) and OpenJDK. Also, here's a fresh core
dump on jdk7u55-b13:


# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f7c6b718d39, pid=7708, tid=140171900581632
#
# JRE version: Java(TM) SE Runtime Environment (7.0_55-b13) (build 1.7.0_55-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x632d39]  jni_GetByteArrayElements+0x89
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/scox/skylr/skylr-analytics/hs_err_pid7708.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp


Steve


________________________________
From: andy petrella [andy.petre...@gmail.com]
Sent: Thursday, April 17, 2014 3:21 PM
To: user@spark.apache.org
Subject: Re: Spark 0.9.1 core dumps on Mesos 0.18.0

No, of course not, but my guess was that some native libs imported by the
project (to communicate with Mesos) could miserably crash the JVM.

Anyway, so you're telling us that with this Oracle version you don't have any
issues using Spark on Mesos 0.18.0. That's interesting, because AFAIR my last
test (done late at night, so my memory of it is fuzzy) was using this
particular version as well.

Just to make things clear, Sean: you're using Spark 0.9.1 on Mesos 0.18.0 with
Hadoop 2.x (x >= 2), without any modification other than specifying which
version of Hadoop to build against when running make-distribution?

Thanks for your help,

Andy

On Thu, Apr 17, 2014 at 9:11 PM, Sean Owen <so...@cloudera.com> wrote:
I don't know if it's anything you or the project is missing... that's
just a JDK bug.
FWIW I am on 1.7.0_51 and have not seen anything like that.

I don't think it's a protobuf issue -- you don't crash the JVM with
simple version incompatibilities :)
--
Sean Owen | Director, Data Science | London


On Thu, Apr 17, 2014 at 7:29 PM, Steven Cox <s...@renci.org> wrote:
> So I tried a fix found on the list...
>
>    "The issue was due to mesos version mismatch as I am using latest mesos
> 0.17.0, but spark uses 0.13.0.
>     Fixed by updating the SparkBuild.scala to latest version."
>
> I changed this line in SparkBuild.scala
>         "org.apache.mesos"         % "mesos"            % "0.13.0",
> to
>         "org.apache.mesos"         % "mesos"            % "0.18.0",
>
> ...ran make-distribution.sh, repackaged and redeployed the tar.gz to HDFS.
>
> It still core dumps like this:
> https://gist.github.com/stevencox/11002498
>
> In this environment:
>   Ubuntu 13.10
>   Mesos 0.18.0
>   Spark 0.9.1
>   JDK 1.7.0_45
>   Scala 2.10.1
>
> What am I missing?
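For an environment like the one listed above, a couple of hypothetical sanity checks may help narrow things down (the library path below is an assumption, not taken from this thread): confirm the exact JDK build in use, since the thread suspects the update level matters, and confirm the Mesos native library the JNI frame would be loading is present.

```shell
# Print the exact JDK build (e.g. 1.7.0_45 vs 1.7.0_51); the crash may
# depend on the specific update level.
java -version 2>&1 | head -n 1 || echo "java not on PATH"

# Look for the Mesos native library at a common install location
# (hypothetical path; adjust MESOS_NATIVE_LIBRARY for your install).
ls /usr/local/lib/libmesos*.so 2>/dev/null \
  || echo "libmesos not found at /usr/local/lib"
```

If the library is elsewhere, pointing `MESOS_NATIVE_LIBRARY` at the exact `.so` built against Mesos 0.18.0 rules out a stale native library being picked up.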

