Does anyone know whether Spark can work with HBase tables through Spark SQL? I know that in Hive we can create tables on top of an underlying HBase table and access them via MapReduce jobs. Can the same be done using HiveContext or SQLContext?

We are trying to set up a way to GET and POST data to and from an HBase table using the Spark SQL JDBC Thrift server, called from our RESTful API endpoints and/or HTTP web farms. If we can get this to work, we can load balance the Thrift servers. It would also let us abstract the data storage layer away from the presentation-layer code, which matters because we may swap out the storage technology in the future. We are currently experimenting with Kudu.
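For context, this is the kind of Hive-on-HBase mapping I'm hoping to reproduce through HiveContext (table and column names below are placeholders, not our real schema):

```sql
-- Hypothetical Hive table backed by an HBase table named 'events',
-- using the standard Hive HBase storage handler.
CREATE EXTERNAL TABLE hbase_events (
  rowkey  STRING,
  payload STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,d:payload")
TBLPROPERTIES ("hbase.table.name" = "events");
```

The question is whether a table defined this way can be queried through HiveContext (e.g. `hiveContext.sql("SELECT * FROM hbase_events")`) and served by the Spark SQL Thrift server.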
Thanks,
Ben