Hi,
When I use a DataFrame with a table schema that contains CharType, it fails:
val test_schema = StructType(Array(
  StructField("id", IntegerType, false),
  StructField("flag", CharType(1), false),
  StructField("time", DateType, false)))

val df = spark.read.format("com.databricks.spark.csv")
  .schema(test_schema)
  .option("header", "false")
  .option("inferSchema", "false")
  .option("delimiter", ",")
  .load("file:///Users/name/b")
The stack trace is below:
Exception in thread "main" scala.MatchError: CharType(1) (of class org.apache.spark.sql.types.CharType)
  at org.apache.spark.sql.catalyst.encoders.RowEncoder$.org$apache$spark$sql$catalyst$encoders$RowEncoder$$serializerFor(RowEncoder.scala:73)
  at org.apache.spark.sql.catalyst.encoders.RowEncoder$$anonfun$2.apply(RowEncoder.scala:158)
  at org.apache.spark.sql.catalyst.encoders.RowEncoder$$anonfun$2.apply(RowEncoder.scala:157)
Why does this happen? Is it a bug?

Interestingly, Spark translates the char type to string when using the CREATE TABLE command:

create table test(flag char(1));
-- desc test shows: flag string
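For what it's worth, a workaround that avoids the MatchError in my case is to declare the column as StringType directly, mirroring what CREATE TABLE does internally (a sketch, assuming the goal is just to load the CSV; the length constraint of char(1) is not enforced this way):

```scala
import org.apache.spark.sql.types._

// Use StringType in place of CharType(1): RowEncoder has no case for
// CharType, but handles StringType fine. Any char(1) semantics (fixed
// length) would have to be enforced separately, e.g. by a filter.
val test_schema = StructType(Array(
  StructField("id", IntegerType, false),
  StructField("flag", StringType, false),
  StructField("time", DateType, false)))

val df = spark.read.format("com.databricks.spark.csv")
  .schema(test_schema)
  .option("header", "false")
  .option("delimiter", ",")
  .load("file:///Users/name/b")
```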
Regards
Wendy He