On Sep 14, 2011, at 1:27 PM, Bharath Ravi wrote:

> Hi all,
>
> I'm a newcomer to Hadoop development, and I'm planning to work on an idea
> that I wanted to run by the dev community.
>
> My apologies if this is not the right place to post this.
>
> Amazon has an "Elastic MapReduce" service
> (http://aws.amazon.com/elasticmapreduce/) that runs on Hadoop.
> The service allows dynamic/runtime changes in resource allocation: more
> specifically, varying the number of compute nodes that a job is running on.
>
> I was wondering if such a facility could be added to the publicly available
> Hadoop MapReduce.
For a long while now you have been able to bring up either DataNodes or TaskTrackers, point them (via config) at the NameNode/JobTracker, and they will join the cluster. Similarly, you can simply kill a DataNode or TaskTracker and the respective masters will deal with its loss.

Arun
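P.S. In case a concrete example helps, this is roughly what pointing a new worker node at the existing masters looks like with 0.20/1.x-style configs; the hostnames and ports below are placeholders, so adjust for your setup:

  <!-- core-site.xml on the new node: where the NameNode lives -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>

  <!-- mapred-site.xml on the new node: where the JobTracker lives -->
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:8021</value>
  </property>

Then start (or later stop) the slave daemons on that node:

  $ bin/hadoop-daemon.sh start datanode
  $ bin/hadoop-daemon.sh start tasktracker

  $ bin/hadoop-daemon.sh stop tasktracker
  $ bin/hadoop-daemon.sh stop datanode

The masters pick up new workers via their heartbeats, and when a worker disappears they re-replicate its blocks and re-run its tasks as needed.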