In your JSON snippet, 111 and 222 are quoted, i.e. they are strings.
They are therefore automatically inferred as string rather than tinyint
by jsonRDD. Try this in the Spark shell:
val sparkContext = sc
import org.apache.spark.sql._
import sparkContext._
val sqlContext = new SQLContext(sparkContext)
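To see the difference concretely, compare the two record shapes below (a minimal sketch; the field names "a" and "b" are assumptions, since the original JSON snippet was not reproduced in this thread):

```json
{"a": "111", "b": "222"}
{"a": 111, "b": 222}
```

For the first record, jsonRDD infers both fields as string; for the second, it infers integer types. Inserting the string-typed result into a table whose columns expect tinyint is the kind of mismatch that can surface as NULL values.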
Sorry, a mistake on my part.
The code above generates a result exactly like the one seen from Hive.
Now my question is: can a Hive table be used with the insertInto function?
Why do I keep getting 111,NULL instead of 111,222?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.na
It seems that the second problem is a dependency issue,
and it now works exactly like the first one.
*This is the complete code:*
JavaSchemaRDD schemas = ctx.jsonRDD(arg0);
schemas.insertInto("test", true);
JavaSchemaRDD teenagers = ctx.hql("SELECT a, b FROM test");
// the last line was cut off; completing it with the standard
// map/collect pattern from the Spark SQL examples:
List<String> teenagerNames = teenagers.map(
    new Function<Row, String>() {
        public String call(Row row) { return "a: " + row.getString(0); }
    }).collect();