Cool.  So Michael's hunch was correct: it is a threading issue.  I'm
currently using a tarball build, but I'll do a Spark build with the patch as
soon as I have a chance and test it out.

Keith


On Tue, Jul 15, 2014 at 4:14 PM, Zongheng Yang <zonghen...@gmail.com> wrote:

> Hi Keith & gorenuru,
>
> This patch (https://github.com/apache/spark/pull/1423) fixes the
> errors for me in my local tests. If possible, can you guys try it
> out to see whether it fixes your test programs as well?
>
> Thanks,
> Zongheng
>
> On Tue, Jul 15, 2014 at 3:08 PM, Zongheng Yang <zonghen...@gmail.com>
> wrote:
> > - user@incubator
> >
> > Hi Keith,
> >
> > I did reproduce this using local-cluster[2,2,1024], and the errors
> > look almost the same.  Just wondering: despite the errors, did your
> > program output any result for the join?  On my machine, I could see
> > the correct output.
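> >
> > For reference, a minimal join along these lines (run in spark-shell
> > against local-cluster[2,2,1024]; the case class and table names are
> > just an example, not taken from your program) should exercise the
> > same path:
> >
> >   import org.apache.spark.sql.SQLContext
> >
> >   case class Rec(key: Int, value: String)
> >
> >   val sqlContext = new SQLContext(sc)
> >   import sqlContext.createSchemaRDD
> >
> >   sc.parallelize(1 to 100).map(i => Rec(i, "a" + i)).registerAsTable("a")
> >   sc.parallelize(1 to 100).map(i => Rec(i, "b" + i)).registerAsTable("b")
> >
> >   sqlContext.sql("SELECT a.key, b.value FROM a JOIN b ON a.key = b.key")
> >     .collect().foreach(println)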
> >
> > Zongheng
> >
> > On Tue, Jul 15, 2014 at 1:46 PM, Michael Armbrust
> > <mich...@databricks.com> wrote:
> >> Thanks for the extra info.  At a quick glance the query plan looks
> >> fine to me.  The class IntegerType does build a type tag... I wonder
> >> if you are seeing the Scala issue manifest in some new way.  We will
> >> attempt to reproduce locally.
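> >>
> >> For anyone who wants to poke at the Scala side without Spark
> >> involved: runtime reflection in Scala 2.10 has known thread-safety
> >> problems (SI-6240), and a stress loop like this sketch can sometimes
> >> trip them:
> >>
> >>   import scala.reflect.runtime.universe._
> >>
> >>   // Materializing type tags concurrently touches shared state in
> >>   // the runtime universe, which can fail sporadically on 2.10.
> >>   val threads = (1 to 8).map { _ =>
> >>     new Thread(new Runnable {
> >>       def run(): Unit =
> >>         (1 to 100000).foreach(_ => typeOf[Map[Int, String]])
> >>     })
> >>   }
> >>   threads.foreach(_.start())
> >>   threads.foreach(_.join())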
> >>
> >>
> >> On Tue, Jul 15, 2014 at 1:41 PM, gorenuru <goren...@gmail.com> wrote:
> >>>
> >>> Just my "two cents" on this.
> >>>
> >>> I'm having the same problems with v1.0.1, but the bug is sporadic
> >>> and looks like it is related to object initialization.
> >>>
> >>> What's more, I'm not using SQL at all. I just have a utility class
> >>> like this:
> >>>
> >>> object DataTypeDescriptor {
> >>>   type DataType = String
> >>>
> >>>   val BOOLEAN = "BOOLEAN"
> >>>   val STRING = "STRING"
> >>>   val TIMESTAMP = "TIMESTAMP"
> >>>   val LONG = "LONG"
> >>>   val INT = "INT"
> >>>   val SHORT = "SHORT"
> >>>   val BYTE = "BYTE"
> >>>   val DECIMAL = "DECIMAL"
> >>>   val DOUBLE = "DOUBLE"
> >>>   val FLOAT = "FLOAT"
> >>>
> >>>   def $$(name: String, format: Option[String] = None) =
> >>>     DataTypeDescriptor(name, format)
> >>>
> >>>   private lazy val nativeTypes: Map[String, NativeType] = Map(
> >>>     BOOLEAN -> BooleanType, STRING -> StringType,
> >>>     TIMESTAMP -> TimestampType, LONG -> LongType, INT -> IntegerType,
> >>>     SHORT -> ShortType, BYTE -> ByteType, DECIMAL -> DecimalType,
> >>>     DOUBLE -> DoubleType, FLOAT -> FloatType
> >>>   )
> >>>
> >>>   lazy val defaultValues: Map[String, Any] = Map(
> >>>     BOOLEAN -> false, STRING -> "", TIMESTAMP -> null, LONG -> 0L,
> >>>     INT -> 0, SHORT -> 0.toShort, BYTE -> 0.toByte,
> >>>     DECIMAL -> BigDecimal(0d), DOUBLE -> 0d, FLOAT -> 0f
> >>>   )
> >>>
> >>>   def apply(dataType: String): DataTypeDescriptor = {
> >>>     DataTypeDescriptor(dataType.toUpperCase, None)
> >>>   }
> >>>
> >>>   def apply(dataType: SparkDataType): DataTypeDescriptor = {
> >>>     nativeTypes
> >>>       .find { case (_, descriptor) => descriptor == dataType }
> >>>       .map { case (name, _) => DataTypeDescriptor(name, None) }
> >>>       .get
> >>>   }
> >>>
> >>> .....
> >>>
> >>> and some tests that check each of these methods.
> >>>
> >>> The problem is that these tests fail randomly with this error.
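> >>>
> >>> For what it's worth, the tests are no more exotic than this sketch
> >>> (a hypothetical example, not the real test file; it assumes
> >>> DataTypeDescriptor is a case class):
> >>>
> >>>   import org.scalatest.FunSuite
> >>>   // In 1.0.x, IntegerType and friends live in the catalyst package:
> >>>   import org.apache.spark.sql.catalyst.types.IntegerType
> >>>
> >>>   class DataTypeDescriptorSpec extends FunSuite {
> >>>     import DataTypeDescriptor._
> >>>
> >>>     test("maps a Spark type back to its descriptor") {
> >>>       // Case-class equality: both sides should be ("INT", None).
> >>>       assert(DataTypeDescriptor(IntegerType) == $$(INT))
> >>>       assert(defaultValues(INT) == 0)
> >>>     }
> >>>   }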
> >>>
> >>> P.S.: I did not have this problem in Spark 1.0.0.
> >>>
> >>>
> >>>
> >>> --
> >>> View this message in context:
> >>> http://apache-spark-user-list.1001560.n3.nabble.com/Error-while-running-Spark-SQL-join-when-using-Spark-1-0-1-tp9776p9817.html
> >>> Sent from the Apache Spark User List mailing list archive at
> >>> Nabble.com.
> >>
> >>
>
