That explains it. Thanks Reynold.
Justin
On Mon, Apr 13, 2015 at 11:26 PM, Reynold Xin wrote:
> I think what happened was applying the narrowest possible common type. Type
> widening is required, and as a result, the narrowest common type between a
> string and an int is string.
>
>
> https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
I think what happened was applying the narrowest possible common type. Type
widening is required, and as a result, the narrowest common type between a
string and an int is string.
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
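[For reference, the widening is easy to see directly in spark-shell. This is a minimal sketch, not from the original mail: it assumes a recent shell (where spark.implicits._ is pre-imported and unionAll has been renamed union) and the default, non-ANSI coercion rules; in current Spark the rule lives in TypeCoercion rather than HiveTypeCoercion.]
scala> val ints = Seq(1, 2, 3).toDF("a")   // a: integer
scala> val strs = Seq("x", "y").toDF("a")  // a: string
scala> ints.union(strs).printSchema()      // int and string widen to string
root
|-- a: string (nullable = true)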
Hello,
I am experimenting with the DataFrame API. I constructed two DataFrames from:
1. case class A(a: Int, b: String)
scala> adf.printSchema()
root
|-- a: integer (nullable = false)
|-- b: string (nullable = true)
2. case class B(a: String, c: Int)
scala> bdf.printSchema()
root
|-- a: string (nullable = true)
|-- c: integer (nullable = false)
When I unionAll these two DataFrames, column a in the result comes out as
string, with the integer values from adf converted to strings. Why doesn't
this raise a type mismatch error?
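[For completeness, a self-contained spark-shell reproduction of the setup above. The sample rows and the unionAll call are assumptions; only the case classes and schemas come from the thread.]
scala> case class A(a: Int, b: String)
scala> case class B(a: String, c: Int)
scala> val adf = Seq(A(1, "x"), A(2, "y")).toDF()
scala> val bdf = Seq(B("one", 3), B("two", 4)).toDF()
scala> adf.unionAll(bdf).printSchema()   // unionAll is union in newer releases
root
|-- a: string (nullable = true)
|-- b: string (nullable = true)
[unionAll matches columns by position: column a widens from int to string, and column b (string in adf, paired with c: int in bdf) likewise comes out as string.]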