Hey,
Is it possible to start a job on a Hadoop cluster remotely? For example, we have a web application which runs on an Apache Tomcat server, and we would like to start a MapReduce job on our cluster from within the webapp.
Is this possible? And if yes, what are the steps to get there?
D
Hi,
Since you didn't get an answer... yes, you can.
I'm working from memory, so I may be a bit fuzzy on the details...
Your external app has to be 'cloud aware'. Essentially, create a config file for your application that you can read in, which lets your app know where the JobTracker (JT) and NameNode (NN) are.
Then you build a job configuration that points at those addresses and submit the job over RPC, along the lines of the sketch below.
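A minimal sketch with the old mapred API, assuming the Hadoop client jars are on the webapp's classpath; the host names, ports, and HDFS paths are placeholders you would read from your own config file:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class RemoteSubmit {
    public static void runFromWebapp() throws Exception {
        // Passing the class lets Hadoop locate the jar to ship to the cluster.
        JobConf conf = new JobConf(RemoteSubmit.class);

        // Point the client at the cluster; in practice read these from
        // your app's config file rather than hard-coding them.
        conf.set("fs.default.name", "hdfs://namenode-host:8020");   // NN
        conf.set("mapred.job.tracker", "jobtracker-host:8021");     // JT

        conf.setJobName("submitted-from-tomcat");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(conf, new Path("/user/web/in"));
        FileOutputFormat.setOutputPath(conf, new Path("/user/web/out"));

        // Submits the job over RPC to the JobTracker and blocks until done.
        JobClient.runJob(conf);
    }
}

JobClient.runJob() blocks until the job finishes; from a webapp you would more likely call JobClient.submitJob(conf) and poll the returned RunningJob, so the servlet thread isn't tied up for the life of the job.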
Need a separate metrics per garbage collector
-
Key: HADOOP-6887
URL: https://issues.apache.org/jira/browse/HADOOP-6887
Project: Hadoop Common
Issue Type: Improvement
Components: metrics
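For context on what per-collector metrics could draw on: the JVM already exposes one MXBean per garbage collector, so a metrics source can report counts and times per collector instead of a single aggregate. A minimal sketch (not the HADOOP-6887 patch itself, just the underlying JMX data):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class PerGcMetrics {
    public static void main(String[] args) {
        // One MXBean per collector (e.g. "ParNew", "ConcurrentMarkSweep"),
        // each with its own cumulative collection count and time.
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d timeMs=%d%n",
                gc.getName(), gc.getCollectionCount(),
                gc.getCollectionTime());
        }
    }
}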
Being able to close all cached FileSystem objects for a given UGI
-
Key: HADOOP-6888
URL: https://issues.apache.org/jira/browse/HADOOP-6888
Project: Hadoop Common
Issue Type: Bug
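The gist of the issue: FileSystem.get() hands out cached instances keyed by scheme, authority, and the calling UGI, and the only bulk teardown is FileSystem.closeAll(), which closes every user's handles; the issue asks for a per-UGI variant. A minimal sketch of the existing cache behavior, using the local file system so it runs without a cluster:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCacheDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        URI uri = URI.create("file:///");

        // FileSystem.get() caches instances keyed by scheme, authority,
        // and the calling UGI, so repeated calls return the same object.
        FileSystem a = FileSystem.get(uri, conf);
        FileSystem b = FileSystem.get(uri, conf);
        System.out.println("same cached instance: " + (a == b));

        // Today the only bulk teardown closes every cached FileSystem for
        // every user; a per-UGI close would let a multi-user server drop
        // one user's handles in isolation.
        FileSystem.closeAll();
    }
}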