On Nov 21, 2011, at 4:43 PM, Arun C Murthy wrote:

> Hi Ralph,
> 
> Welcome!
> 
> We'd absolutely love to have Open MPI integrated with Hadoop!
> 
> In fact, there have already been several discussions about running Open MPI 
> on what we call MR2 (aka YARN), documented here: 
> https://issues.apache.org/jira/browse/MAPREDUCE-2911.
> 
> YARN is our effort to re-imagine Hadoop MapReduce as a general-purpose, 
> distributed data-processing system supporting MapReduce, MPI, and other 
> programming paradigms on the same Hadoop cluster.
> 
> Would love to collaborate - shall we discuss this on that JIRA?

Sure! I'll poke my nose over there...thanks!

> 
> thanks,
> Arun
> 
> On Nov 21, 2011, at 3:35 PM, Ralph Castain wrote:
> 
>> Hi folks
>> 
>> I am a lead developer in the Open MPI community, mostly focused on 
>> integrating that package with various environments. Over the last few 
>> months, I've had a couple of people ask me about MPI support within Hadoop - 
>> i.e., they want to run MPI applications under the Hadoop umbrella. I've 
>> spent a little time studying Hadoop, and it would seem a good fit for such a 
>> capability.
>> 
>> I'm willing to do the integration work, but wanted to check first whether 
>> (a) someone in the Hadoop community is already doing so, and (b) you would 
>> be interested in seeing such a capability and would be willing to accept 
>> the code contribution.
>> 
>> Establishing MPI support requires the following steps:
>> 
>> 1. Wireup support. MPI processes need to exchange endpoint info (e.g., for 
>> TCP connections, the IP address and port) so that each process knows how to 
>> connect to any other process in the application. This is typically done in a 
>> collective "modex" operation. There are several ways of doing it - if we 
>> proceed, I will outline them in a separate email to solicit your input on 
>> the most desirable approach (a rough sketch of one such exchange appears 
>> after this list).
>> 
>> 2. Binding support. One can achieve significant performance improvements by 
>> binding processes to specific cores, sockets, and/or NUMA regions (whether 
>> or not MPI is used, though it is certainly important for MPI applications). 
>> This requires not only the binding code itself, but also some logic to 
>> ensure that one doesn't "overload" specific resources (see the binding 
>> sketch after this list).
>> 
>> 3. Process mapping. I haven't verified it yet, but I suspect that Hadoop 
>> provides each executing instance with an identifier that is unique within 
>> that job - e.g., we typically assign an integer "rank" that ranges from 0 to 
>> N-1, where N is the number of instances being executed. This identifier is 
>> critical for MPI applications, and the relative placement of processes 
>> within a job often dictates overall performance. Thus, we would provide a 
>> mapping capability that allows users to specify patterns of process 
>> placement for their job - e.g., "place one process on each socket on every 
>> node" (a worked example appears after this list).
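>> 
>> To make step 1 concrete, here is a minimal wireup sketch written against 
>> the PMI-1 key-value interface that MPI runtimes commonly use for the modex. 
>> It is purely illustrative - the service Hadoop would actually provide is 
>> exactly what we'd need to discuss, and the endpoint string is a 
>> placeholder:
>> 
>>   /* Modex sketch: publish my endpoint, barrier, then look up every
>>    * peer so any-to-any connections become possible. */
>>   #include <stdio.h>
>>   #include <pmi.h>
>> 
>>   int main(void)
>>   {
>>       int spawned, rank, size;
>>       char kvs[256], key[64], val[128];
>> 
>>       PMI_Init(&spawned);
>>       PMI_Get_rank(&rank);
>>       PMI_Get_size(&size);
>>       PMI_KVS_Get_my_name(kvs, sizeof(kvs));
>> 
>>       /* Publish my endpoint; a real transport would report its actual
>>        * IP address and listening port here. */
>>       snprintf(key, sizeof(key), "mpi.ep.%d", rank);
>>       snprintf(val, sizeof(val), "tcp://<my-ip>:<my-port>");
>>       PMI_KVS_Put(kvs, key, val);
>>       PMI_KVS_Commit(kvs);
>>       PMI_Barrier();                  /* all endpoints now published */
>> 
>>       /* Collect every peer's endpoint. */
>>       for (int peer = 0; peer < size; peer++) {
>>           snprintf(key, sizeof(key), "mpi.ep.%d", peer);
>>           PMI_KVS_Get(kvs, key, val, sizeof(val));
>>           printf("rank %d sees peer %d at %s\n", rank, peer, val);
>>       }
>> 
>>       PMI_Finalize();
>>       return 0;
>>   }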
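>> 
>> For step 2, a sketch of the binding piece using hwloc (the topology 
>> library Open MPI itself uses for binding). The rank-to-core policy here is 
>> a toy round-robin, and the MPI_RANK environment variable is a hypothetical 
>> stand-in for however the launcher tells a process its rank; a real 
>> launcher must also track which cores are already taken so none get 
>> overloaded:
>> 
>>   /* Bind the current process to one core, chosen round-robin by rank. */
>>   #include <stdio.h>
>>   #include <stdlib.h>
>>   #include <hwloc.h>
>> 
>>   int main(void)
>>   {
>>       const char *r = getenv("MPI_RANK");   /* hypothetical env var */
>>       int rank = r ? atoi(r) : 0;
>> 
>>       hwloc_topology_t topo;
>>       hwloc_topology_init(&topo);
>>       hwloc_topology_load(topo);
>> 
>>       int ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
>>       hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE,
>>                                                rank % ncores);
>> 
>>       /* Restrict the whole process to that core's cpuset. */
>>       if (hwloc_set_cpubind(topo, core->cpuset, HWLOC_CPUBIND_PROCESS) < 0)
>>           perror("hwloc_set_cpubind");
>>       else
>>           printf("rank %d bound to core %d of %d\n",
>>                  rank, rank % ncores, ncores);
>> 
>>       hwloc_topology_destroy(topo);
>>       return 0;
>>   }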
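>> 
>> And for step 3, a worked example of expanding the pattern "one process per 
>> socket on every node" into explicit placements. The cluster shape is 
>> assumed for illustration; a real mapper would obtain it from the resource 
>> manager:
>> 
>>   /* Expand a "one process per socket" pattern into rank placements,
>>    * assigning ranks 0..N-1 in node-major order. */
>>   #include <stdio.h>
>> 
>>   int main(void)
>>   {
>>       const int nnodes = 4;             /* assumed cluster shape */
>>       const int sockets_per_node = 2;   /* assumed */
>>       int nprocs = nnodes * sockets_per_node;
>> 
>>       for (int rank = 0; rank < nprocs; rank++)
>>           printf("rank %d -> node %d, socket %d\n",
>>                  rank, rank / sockets_per_node, rank % sockets_per_node);
>>       return 0;
>>   }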
>> 
>> I have written the code to implement the above support on a number of 
>> systems, and don't foresee major problems doing it for Hadoop (though I 
>> would welcome a chance to get a brief walk-through of the code from 
>> someone). Please let me know if this would be of interest to the Hadoop 
>> community.
>> 
>> Thanks
>> Ralph Castain
>> 
>> 
> 
