Hi,

I am working on an application that reads a single Hive table, performs some
manipulation on each row, and finally constructs an XML document from the results.
The Hive table will be a large data set with no chance of fitting in memory. I
intend to use Spark SQL 1.2.1 (due to project limitations).
Any pointers on handling this large data set (fetch size, …) would be helpful.
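
For concreteness, here is roughly the kind of sketch I have in mind (the table
name, column names, and output path are just placeholders, not my real schema):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HiveToXml {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HiveToXml"))
    val hiveContext = new HiveContext(sc)

    // In 1.2.1, sql() returns a SchemaRDD (an RDD[Row]); rows are
    // processed partition by partition, so the whole table never has
    // to fit in memory on a single node.
    val rows = hiveContext.sql("SELECT id, name FROM my_table")

    // mapPartitions keeps the per-row transformation lazy and avoids
    // collecting anything to the driver.
    val xmlFragments = rows.mapPartitions { iter =>
      iter.map { row =>
        val id = row.getLong(0)
        val name = row.getString(1)
        s"<record><id>$id</id><name>${scala.xml.Utility.escape(name)}</name></record>"
      }
    }

    // One XML fragment per line, streamed straight out to storage.
    xmlFragments.saveAsTextFile("/output/path/xml")

    sc.stop()
  }
}

Is this the right approach, or is there a better way to control how much data
each task pulls at a time?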

Thanks in advance.

Kind Regards,
Meetu Maltiar