Re: Re: Serialization issue when using HBase with Spark

2014-12-23 Thread yangliuyu
…classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable], classOf[org.apache.hadoop.hbase.client.Result]). And if using MultiTableInputFormat, it is not possible for the driver to put all rowkeys into the HBaseConfiguration. Option 2: sc.newAPIHadoopRDD(conf, classOf[MultiTableInputFormat], classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable], classOf[org.apache.hadoop.hbase.client.Result])…
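The preview above only shows the tail of the newAPIHadoopRDD call, so here is a minimal sketch of how option 2 could be wired up end to end, assuming 0.98-era HBase APIs. The table name "myTable", the example rowkeys, and the crude stop-row trick are placeholders, and attaching the table name to each Scan and handing the serialized scans to MultiTableInputFormat.SCANS is this sketch's assumption about the setup, not something shown in the thread.

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Result, Scan}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.MultiTableInputFormat
import org.apache.hadoop.hbase.protobuf.ProtobufUtil
import org.apache.hadoop.hbase.util.{Base64, Bytes}
import org.apache.spark.{SparkConf, SparkContext}

object Option2Sketch {
  // Serialize a Scan into the Base64 string form that the HBase input formats
  // read back out of the Configuration.
  def scanToString(scan: Scan): String =
    Base64.encodeBytes(ProtobufUtil.toScan(scan).toByteArray)

  def main(args: Array[String]): Unit = {
    val sc   = new SparkContext(new SparkConf().setAppName("hbase-option2-sketch"))
    val conf = HBaseConfiguration.create()

    // One Scan per rowkey; each Scan carries the table it targets as an attribute.
    val rowkeys = Seq("row1", "row2")   // placeholder rowkeys
    val scans = rowkeys.map { key =>
      val scan = new Scan(Bytes.toBytes(key), Bytes.toBytes(key + "~")) // crude exclusive stop row
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes("myTable"))
      scanToString(scan)
    }
    conf.setStrings(MultiTableInputFormat.SCANS, scans: _*)

    // Option 2 from the thread: read through MultiTableInputFormat.
    val rdd = sc.newAPIHadoopRDD(conf, classOf[MultiTableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])
    println(rdd.count())
    sc.stop()
  }
}
```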

Re: Serialization issue when using HBase with Spark

2014-12-15 Thread Aniket Bhatnagar
…and if using MultiTableInputFormat, it is not possible for the driver to put all rowkeys into the HBaseConfiguration. Option 2: sc.newAPIHadoopRDD(conf, classOf[MultiTableInputFormat], classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable], classOf[org.apache.hadoop.hbase.client.Result])…
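The archived preview cuts off before Aniket's actual reply, so the following is not a summary of it. It is a minimal sketch of one common fix for serialization failures around these classes, under the assumption that the error stems from ImmutableBytesWritable (and Result) not implementing java.io.Serializable: switch the job to Kryo and register the offending classes. registerKryoClasses requires Spark 1.2+; on earlier versions the equivalent is a custom KryoRegistrator set via spark.kryo.registrator.

```scala
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.spark.{SparkConf, SparkContext}

// Switch the job to Kryo and register the HBase value classes that otherwise
// trigger NotSerializableException when they are shuffled, cached, or collected.
val sparkConf = new SparkConf()
  .setAppName("hbase-kryo-sketch")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[ImmutableBytesWritable], classOf[Result]))

val sc = new SparkContext(sparkConf)
```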

Re: Serialization issue when using HBase with Spark

2014-12-15 Thread Shixiong Zhu
…it is not possible for the driver to put all rowkeys into the HBaseConfiguration. Option 2: sc.newAPIHadoopRDD(conf, classOf[MultiTableInputFormat], classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable], classOf[org.apache.hadoop.hbase.client.Result])…
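Again, the preview truncates Shixiong's actual answer. A different pattern often used for this class of problem (an assumption here, not a paraphrase of the reply) is to convert the non-serializable HBase types into plain Scala values immediately after the Hadoop RDD is created, so nothing downstream ever has to serialize ImmutableBytesWritable or Result. The column family "cf" and qualifier "col" are placeholders.

```scala
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.rdd.RDD

// Map the raw (ImmutableBytesWritable, Result) pairs from newAPIHadoopRDD into
// plain (rowkey, value) strings; the resulting RDD can be cached, shuffled,
// or collected without touching HBase's non-serializable classes.
def toPlainPairs(hbaseRdd: RDD[(ImmutableBytesWritable, Result)]): RDD[(String, String)] =
  hbaseRdd.map { case (key, result) =>
    val rowkey = Bytes.toStringBinary(key.copyBytes())
    val value  = Option(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col")))
      .map(v => Bytes.toString(v))
      .orNull
    (rowkey, value)
  }
```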

Re: Serialization issue when using HBase with Spark

2014-12-14 Thread Yanbo
…It may be possible to divide all rowkey ranges into several parts and then use option 2, but I prefer option 1. So is there any solution for option 1?…
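One way to realize the "divide the rowkey ranges into several parts" idea, sketched here with per-range TableInputFormat scans unioned together rather than the MultiTableInputFormat of option 2 (a deliberate substitution for illustration, not what the thread prescribes). The table name, the ranges, and the Scan-encoding trick mirror the assumptions of the earlier sketch.

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Result, Scan}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.protobuf.ProtobufUtil
import org.apache.hadoop.hbase.util.{Base64, Bytes}
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Build one RDD per [start, stop) rowkey range on the same table and union them,
// so each range is scanned independently instead of shipping every rowkey
// through a single HBaseConfiguration.
def rddForRanges(sc: SparkContext, table: String,
                 ranges: Seq[(String, String)]): RDD[(ImmutableBytesWritable, Result)] = {
  val rdds = ranges.map { case (start, stop) =>
    val conf = HBaseConfiguration.create()
    conf.set(TableInputFormat.INPUT_TABLE, table)
    val scan = new Scan(Bytes.toBytes(start), Bytes.toBytes(stop))
    conf.set(TableInputFormat.SCAN,
      Base64.encodeBytes(ProtobufUtil.toScan(scan).toByteArray))
    sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])
  }
  sc.union(rdds)
}

// e.g. rddForRanges(sc, "myTable", Seq(("a", "g"), ("g", "n"), ("n", "z")))  // illustrative ranges
```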

Re: Serialization issue when using HBase with Spark

2014-12-12 Thread Akhil Das
…classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable], classOf[org.apache.hadoop.hbase.client.Result]). It may be possible to divide all rowkey ranges into several parts and then use option 2, but I prefer option 1. So is there any solution for option 1?…
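What "option 1" looked like is not recoverable from this archive page, so no attempt is made to reconstruct it. As a further hedged alternative that sidesteps putting rowkeys into the HBaseConfiguration altogether (an assumption of this sketch, not something visible in the replies), the rowkeys can be parallelized as an RDD and fetched with batched Gets from each partition, creating the HBase connection on the executor side. HTable, the "cf"/"col" column, and the table name are 0.98-era placeholders.

```scala
import scala.collection.JavaConverters._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Get, HTable}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.rdd.RDD

// Distribute the rowkeys themselves, then issue a batched multi-Get per
// partition. The HBaseConfiguration and HTable are created inside the closure,
// on the executor, so nothing non-serializable crosses the driver/executor boundary.
def fetchByRowkeys(rowkeys: RDD[String], tableName: String): RDD[(String, Array[Byte])] =
  rowkeys.mapPartitions { keys =>
    val conf  = HBaseConfiguration.create()
    val table = new HTable(conf, tableName)
    val gets  = keys.map(k => new Get(Bytes.toBytes(k))).toList
    val results = table.get(gets.asJava)   // batched multi-Get for this partition's keys
    table.close()
    results.iterator.map { r =>
      (Bytes.toStringBinary(r.getRow),
       r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col")))   // placeholder family/qualifier
    }
  }
```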

Serialization issue when using HBase with Spark

2014-12-12 Thread yangliuyu
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Serialization-issue-when-using-HBase-with-Spark-tp20655.html