The replies from Sanjay and Swagatika are perfect.
Plus, fundamentally, if you are able to run the Hive query from the CLI or some internal API like HiveDriver, the flow will be this:
>> Compile the query
>> Get the info from the Hive Metastore using Thrift or JDBC
>> Optimize it (if required and ca
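
For example (hostname, port and table name below are placeholders, not from this thread), the compile and optimize steps can be inspected with EXPLAIN, and the same flow applies when the query goes through the HiveServer2 JDBC driver instead of the classic CLI:

# show the compiled/optimized plan without actually running the query (table name is a placeholder)
hive -e "EXPLAIN SELECT count(*) FROM my_table"

# same query submitted over JDBC via beeline; host, port and database are assumptions
beeline -u "jdbc:hive2://localhost:10000/default" -e "SELECT count(*) FROM my_table"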
Hi,
You can also use Oozie's fork feature, which acts as a workflow scheduler, to run jobs in parallel. You just need to define all your HQLs inside the workflow.xml to make them run in parallel.
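
A rough sketch of such a workflow.xml, assuming two scripts hive1.hql and hive2.hql and the usual ${jobTracker}/${nameNode} job properties (all names and the schema versions here are illustrative, not taken from this thread):

<!-- fork starts both hive actions in parallel; join waits for both to finish -->
<workflow-app name="parallel-hive" xmlns="uri:oozie:workflow:0.4">
  <start to="fork-queries"/>
  <fork name="fork-queries">
    <path start="run-hive1"/>
    <path start="run-hive2"/>
  </fork>
  <action name="run-hive1">
    <hive xmlns="uri:oozie:hive-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <script>hive1.hql</script>  <!-- illustrative script name -->
    </hive>
    <ok to="join-queries"/>
    <error to="fail"/>
  </action>
  <action name="run-hive2">
    <hive xmlns="uri:oozie:hive-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <script>hive2.hql</script>  <!-- illustrative script name -->
    </hive>
    <ok to="join-queries"/>
    <error to="fail"/>
  </action>
  <join name="join-queries" to="end"/>
  <kill name="fail">
    <message>Hive action failed</message>
  </kill>
  <end name="end"/>
</workflow-app>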
On Apr 22, 2014 3:14 AM, "Subramanian, Sanjay (HQP)" <sanjay.subraman...@roberthalf.com> wrote:
Hey
Instead of going into the Hive CLI, I would propose 2 ways:
NOHUP
nohup hive -f path/to/query/file/hive1.hql >> ./hive1.hql_`date +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
nohup hive -f path/to/query/file/hive2.hql >> ./hive2.hql_`date +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
nohup hive -f path/to/query/file/hive3.hql >> ./hive3.hql_`date +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
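
The trailing & puts each hive process in the background so the three queries actually run at the same time, and a small wrapper makes it easy to block until all of them finish; the file names and paths below just follow the pattern above and are otherwise assumptions:

# hypothetical wrapper: launch each .hql in the background, then wait for all of them
for f in hive1 hive2 hive3; do
  nohup hive -f path/to/query/file/${f}.hql >> ./${f}.hql_`date +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
done
wait
echo "all hive jobs finished"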