Just my two cents on this.

I'm having the same problem with v1.0.1, but the bug is sporadic and looks
like it's related to object initialization.
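
By "related to object initialization" I mean the classic Scala pitfall:
a lazy val is guarded by a lock on its enclosing object, so if two objects'
lazy initializers reference each other and two threads force them at the
same time, each thread ends up waiting on the other's lock. A minimal
sketch of that pattern (made-up objects, nothing to do with Spark's actual
internals):

object A { lazy val x: Int = { Thread.sleep(100); B.y + 1 } }
object B { lazy val y: Int = { Thread.sleep(100); A.x + 1 } }

object DeadlockDemo extends App {
  // One thread forces A.x (taking A's lock) while the main thread forces
  // B.y (taking B's lock); each then blocks on the other's lock. The hang
  // only happens when the timing lines up, which is why it looks sporadic.
  new Thread(new Runnable { def run(): Unit = { A.x; () } }).start()
  B.y
}

Whether that is what Spark 1.0.1 hits internally I can't say, but it would
match the sporadic behaviour.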

What's more, I'm not even using SQL or anything like it. I just have a
utility class like this:

// Imports as they would be on Spark 1.0.x, where these types live in
// catalyst (I'm assuming SparkDataType is a rename of catalyst's DataType):
import org.apache.spark.sql.catalyst.types.{DataType => SparkDataType, _}

object DataTypeDescriptor {
  type DataType = String

  val BOOLEAN = "BOOLEAN"
  val STRING = "STRING"
  val TIMESTAMP = "TIMESTAMP"
  val LONG = "LONG"
  val INT = "INT"
  val SHORT = "SHORT"
  val BYTE = "BYTE"
  val DECIMAL = "DECIMAL"
  val DOUBLE = "DOUBLE"
  val FLOAT = "FLOAT"

  def $$(name: String, format: Option[String] = None) =
    DataTypeDescriptor(name, format)

  private lazy val nativeTypes: Map[String, NativeType] = Map(
    BOOLEAN -> BooleanType, STRING -> StringType, TIMESTAMP -> TimestampType,
    LONG -> LongType, INT -> IntegerType, SHORT -> ShortType,
    BYTE -> ByteType, DECIMAL -> DecimalType, DOUBLE -> DoubleType,
    FLOAT -> FloatType
  )

  lazy val defaultValues: Map[String, Any] = Map(
    BOOLEAN -> false, STRING -> "", TIMESTAMP -> null, LONG -> 0L,
    INT -> 0, SHORT -> 0.toShort, BYTE -> 0.toByte,
    DECIMAL -> BigDecimal(0d), DOUBLE -> 0d, FLOAT -> 0f
  )

  def apply(dataType: String): DataTypeDescriptor = {
    DataTypeDescriptor(dataType.toUpperCase, None)
  }

  // Reverse lookup: find the name registered for a catalyst type; .get
  // will throw if the type isn't one of the ten entries above.
  def apply(dataType: SparkDataType): DataTypeDescriptor = {
    nativeTypes
      .find { case (_, nativeType) => nativeType == dataType }
      .map { case (name, _) => DataTypeDescriptor(name, None) }
      .get
  }

.....
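
The "....." elides the rest of the file; the case class those apply methods
construct isn't shown, but from the call sites it presumably looks like
this:

case class DataTypeDescriptor(name: String, format: Option[String])

so that both DataTypeDescriptor("int") and DataTypeDescriptor(IntegerType)
come out as DataTypeDescriptor("INT", None).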

There are also some tests that check each of these methods.

The problem is that these tests fail randomly with the same error.
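
If it really is an initialization race, one way to make the flakiness
reproducible is to hammer the object's first access from many threads at
once instead of relying on test ordering. A rough sketch with plain
futures (a hypothetical harness, not my actual tests):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object InitRaceCheck extends App {
  // Many concurrent first touches of DataTypeDescriptor: if its
  // initialization is racy, this fails or hangs far more often than
  // sequential test runs do.
  val all = Future.traverse(1 to 100) { _ =>
    Future(DataTypeDescriptor("string"))
  }
  println(Await.result(all, 30.seconds).distinct)
}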

P.S.: I did not have this problem with Spark 1.0.0.


