The complete code is as follows:

    JavaHiveContext ctx;
    JavaSchemaRDD schemas = ctx.jsonRDD(arg0);
    schemas.insertInto("test", true);
    JavaSchemaRDD teeagers = ctx.hql("SELECT a, b FROM test");
    List<String> teeagerNames1 = teeagers.map(new Function<Row, String>() {
        private static final long serialVersionUID = 1L;

        @Override
        public String call(Row arg1) throws Exception {
            System.out.println(arg1.length());
            System.out.println("Name:" + arg1.getString(0));
            System.out.println("Name:" + arg1.getString(1));
            return "Name:" + arg1.getString(0);
        }
    }).collect();

The input is:

    {"a":"111","b":"222"}
    {"a":"111","b":"222"}

The output is:

    2
    Name:111
    Name:NULL

while from Hive:

    hive> select * from test;
    OK
    111	NULL
    111	NULL

The schema of the test table is:

    create table test (
        a tinyint,
        b tinyint
    ) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe';

So, what is going wrong? Can the insertInto function be applied to a Hive table? Why do I keep getting 111,NULL instead of 111,222?

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/a-hivectx-insertinto-issue-can-inertinto-function-be-applied-to-a-hive-table-tp16738.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
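[Editor's note, not part of the original post: one thing worth checking is the declared column type. Hive's tinyint is a signed 8-bit integer (range -128..127, the same as Java's byte), so the value 222 cannot be represented in column b, while 111 fits in column a; that would match the observed 111/NULL rows. A minimal standalone Java sketch of the range limit, assuming only the standard library:]

    public class TinyintRangeCheck {
        public static void main(String[] args) {
            // Hive's tinyint corresponds to Java's byte: signed 8-bit, -128..127.
            System.out.println("byte range: " + Byte.MIN_VALUE + ".." + Byte.MAX_VALUE);

            // "111" fits in a tinyint/byte...
            byte a = Byte.parseByte("111");
            System.out.println("a = " + a);

            // ...but "222" does not; parsing it as a byte fails, which is one
            // plausible reason a tinyint column reads back as NULL for that value.
            try {
                Byte.parseByte("222");
                System.out.println("222 fits (unexpected)");
            } catch (NumberFormatException e) {
                System.out.println("222 is out of tinyint range");
            }
        }
    }

[If this is the cause, declaring the columns as string (or at least smallint/int) in the table DDL would be the thing to try.]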