Hi everybody.
I'm totally new to Spark, and there's one thing I haven't managed to find out. I have a full Ambari install with HBase, Hadoop, and Spark. My code reads and writes to HDFS via HBase, so, as I understand it, all the data stored in HDFS is in byte format. Now, I know that it's possible to query HDFS directly via Spark, but I don't know whether Spark will support the format of the data that HBase has stored there.
I know that it's possible to access HBase from Spark, but I want to query HDFS directly.
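For context, this is roughly what I mean by accessing HBase from Spark, which I'd like to avoid. It's a minimal sketch assuming the standard HBase `TableInputFormat` connector is on the classpath; the table name `"my_table"` is just a placeholder:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.client.Result
import org.apache.spark.{SparkConf, SparkContext}

object HBaseReadSketch {
  def main(args: Array[String]): Unit = {
    // Configure the HBase client; "my_table" is a placeholder table name.
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table")

    val sc = new SparkContext(new SparkConf().setAppName("hbase-read"))

    // Read the table through the HBase client layer rather than
    // touching HBase's files in HDFS directly.
    val rdd = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result]
    )

    println(s"Row count: ${rdd.count()}")
    sc.stop()
  }
}
```

This goes through the HBase region servers; what I'm asking about is whether I can skip that layer and read the underlying HDFS files with Spark.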
Could you confirm whether this is possible and, if so, tell me how to do it? Thanks.