Hi,

I have a routine in Spark that iterates through HBase rows and tries to
read columns.

My question is: how can I make sure I read the columns in the correct order?

Example:

import org.apache.hadoop.hbase.CellUtil
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes

val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

// Walk the cells of each Result in the order HBase returns them
// (sorted by column family, then column qualifier)
val parsed = hBaseRDD.map { case (_, result) =>
  val iter = result.listCells().iterator()
  (Bytes.toString(result.getRow),
   Bytes.toString(CellUtil.cloneValue(iter.next())),
   Bytes.toString(CellUtil.cloneValue(iter.next())),
   Bytes.toString(CellUtil.cloneValue(iter.next())),
   Bytes.toString(CellUtil.cloneValue(iter.next())))
}

The above reads the columns of the column family sequentially, in whatever
order the iterator returns them. How can I force it to read specific,
named columns only?
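
For illustration, this is the sort of thing I am after, as a rough, untested
sketch: restrict the scan itself to the columns of interest via
TableInputFormat.SCAN_COLUMNS, and fetch each value by family and qualifier
with Result.getValue instead of relying on iterator order. The family "cf"
and the qualifiers "col1" and "col2" below are placeholders for the actual
schema.

import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes

// Restrict the scan to specific columns; must be set on conf
// before calling sc.newAPIHadoopRDD ("family:qualifier" pairs,
// space delimited)
conf.set(TableInputFormat.SCAN_COLUMNS, "cf:col1 cf:col2")

val cf = Bytes.toBytes("cf")

// Address each column by family and qualifier rather than by position;
// getValue returns null when a row has no cell for that column
val parsed = hBaseRDD.map { case (_, result) =>
  (Bytes.toString(result.getRow),
   Option(result.getValue(cf, Bytes.toBytes("col1"))).map(v => Bytes.toString(v)).orNull,
   Option(result.getValue(cf, Bytes.toBytes("col2"))).map(v => Bytes.toString(v)).orNull)
}

Here getValue returns the value of the latest version of the given column,
so the tuple fields would no longer depend on how many cells each row
happens to have.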


Thanks


Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.
