On 21 Sep 2010, at 09:54, Mikael Lavoie wrote:

> Hi,
> 
> Sorry, but I got lost in what I want to do. I have built a small home cluster 
> with Pelican_HPC, which uses Open MPI, and I was trying to find a way to make a 
> multithreaded program work in a multi-process way without taking the time to 
> learn MPI. My vision was a sort of wrapper that takes C POSIX app source 
> code and converts it from pthreads to a multi-process MPI app. But the problem 
> is the remote memory access, which will only be implemented in MPI 3.0 (from 
> what I've read of it).
> 
> So, after 12 hours of intensive reading about MPI and POSIX, the best way to 
> deal with my problem (running a C pthreaded app on my cluster) is to convert 
> the source to an SPMD form.
> I didn't mention that, basically, my program opens a huge text file, takes each 
> string and processes it through lots of cryptographic iterations, then saves 
> the result in an output.out-like file.
> So I will need to make the master process split the input file and then send 
> the pieces as input to the worker processes.
> 
> But maybe you or someone else know of a kind of interpreter-like program that 
> can run a multithreaded C program and convert it logically to a master/worker 
> multi-process MPI job, which would be sent by ssh to the interpreter on the 
> worker side and then launched.
> 
> This is what I tried to explain in the last message. A dream for the hobbyist 
> who wants to get the full power of a night-time cluster without having to 
> learn all the MPI syntax and structure.
> 
> If it doesn't exist, this would be a really great tool, I think.
> 
> Thank you for your reply, but I think I have answered my own question... No 
> Pain, No Gain...

What you are thinking of is, I believe, something more like ScaleMP or Mosix, 
neither of which I have first-hand experience with.  It's a hard problem to solve 
and I don't believe there is any general solution available.

It sounds like your application would be a fairly easy conversion to MPI, but to 
do that you will need to re-code areas of your application.  It almost sounds 
like you could get away with just using MPI_Init, MPI_Scatter and MPI_Gather.  
Typically you would use the head node to launch the job but not do any 
computation on it; rank 0 in the job would then do the marshalling of data, and 
all ranks would be started simultaneously.  You'll find this easier than having 
one single-rank job spawn more ranks as required.
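
Purely as a sketch of that pattern, not taken from your code: the fixed record 
length, the process_line() placeholder for your cryptographic step and the file 
names below are all my assumptions, but the MPI_Init / MPI_Scatter / MPI_Gather 
shape would look roughly like this (one record per rank, rank 0 reading the 
input and writing the output):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LINE_LEN 128                 /* assumed fixed record size */

/* Placeholder for the real cryptographic work done on each string. */
static void process_line(const char *in, char *out)
{
    snprintf(out, LINE_LEN, "processed:%s", in);
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *all_in = NULL, *all_out = NULL;
    if (rank == 0) {
        /* Rank 0 reads one fixed-length record per rank from the input file. */
        all_in  = malloc((size_t)size * LINE_LEN);
        all_out = malloc((size_t)size * LINE_LEN);
        FILE *fp = fopen("input.txt", "r");
        for (int i = 0; i < size; i++) {
            char *rec = all_in + i * LINE_LEN;
            memset(rec, 0, LINE_LEN);
            if (fp && fgets(rec, LINE_LEN, fp))
                rec[strcspn(rec, "\n")] = '\0';   /* strip trailing newline */
        }
        if (fp) fclose(fp);
    }

    /* Every rank (including rank 0) receives one record to work on... */
    char my_in[LINE_LEN], my_out[LINE_LEN];
    MPI_Scatter(all_in, LINE_LEN, MPI_CHAR,
                my_in,  LINE_LEN, MPI_CHAR, 0, MPI_COMM_WORLD);

    process_line(my_in, my_out);

    /* ...and rank 0 collects the results and writes them out. */
    MPI_Gather(my_out, LINE_LEN, MPI_CHAR,
               all_out, LINE_LEN, MPI_CHAR, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        FILE *out = fopen("output.out", "w");
        if (out) {
            for (int i = 0; i < size; i++)
                fprintf(out, "%s\n", all_out + i * LINE_LEN);
            fclose(out);
        }
        free(all_in);
        free(all_out);
    }

    MPI_Finalize();
    return 0;
}

In practice you would loop over the file in chunks (or use MPI_Scatterv for 
variable-length records) rather than handing out a single record per rank, but 
the marshalling structure stays the same.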

Ashley,

-- 

Ashley Pittman, Bath, UK.

Padb - A parallel job inspection tool for cluster computing
http://padb.pittman.org.uk

