Thanks for the feedback. More below:
Are there any MPI implementations which meet the following requirements:
1. It doesn't terminate the whole job when a node dies?
2. It allows a spare node to replace the dead node and take over the dead
node's work?
As far as I know, FT-MPI meets the
Thanks for the feedback.
Rui
2010/6/29 Changsheng Jiang
> I am a learner, too, please correct me.
>
> Changsheng Jiang
>
>
> On Tue, Jun 29, 2010 at 15:44, 王睿 wrote:
>
>> Hi, all
>>
>> I'm now learning MPI, but I'm not clear with the follow
OpenMPI version: 1.3.3
Platform: IBM P5
Built Open MPI 64-bit (i.e., CFLAGS=-q64, CXXFLAGS=-q64, FFLAGS=-q64,
FCFLAGS=-q64)
FORTRAN 90 test program:
- Create a large array (3.6 GB of 32-bit INTs)
- Initialize MPI
- Create a large window to encompass large array (3
Open MPI currently has very limited cartesian support -- it actually doesn't
remap anything.
That being said, it is *very* easy to extend Open MPI's algorithms for
cartesian partitioning. As you probably already know, Open MPI is all about
its plugins -- finding and selecting a good set of plu
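For reference, the reorder flag under discussion is the fifth argument to MPI_Cart_create. A minimal sketch (the 2-D decomposition and periodicity here are illustrative assumptions, not from the original posts); with Open MPI's current behavior described above, the cartesian rank will simply match the world rank even when reorder is enabled:

```c
/* Minimal sketch: create a 2-D cartesian communicator with reorder
 * enabled. Per the discussion above, Open MPI does not actually remap
 * ranks, so world rank and cartesian rank will coincide. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[2]    = {0, 0};   /* let MPI_Dims_create pick a balanced factorization */
    int periods[2] = {1, 1};   /* periodic in both dimensions (illustrative) */
    int nprocs, world_rank, cart_rank;

    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Dims_create(nprocs, 2, dims);

    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1 /* reorder */, &cart);

    MPI_Comm_rank(cart, &cart_rank);
    printf("world rank %d -> cartesian rank %d\n", world_rank, cart_rank);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```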
On Jun 29, 2010, at 3:44 AM, 王睿 wrote:
> 1. Suppose an MPI program involves several nodes; if one node dies, will
> the program terminate?
Open MPI will terminate the whole job, yes.
> 2. Is there any possibility to extend or shrink the size of an MPI
> communicator? If so, we can use spare
I don't know exactly what it means in boost.mpi. What it means in MPI is that
you posted a receive that was too short to accommodate the incoming message.
For example, you posted a receive of 4 bytes, but the incoming message was 1024
bytes long. That's a truncate error.
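Concretely, the mismatch looks like the sketch below (an illustrative two-process example, not from the original thread; the default error handler would abort the job, so it switches to MPI_ERRORS_RETURN to observe the error class):

```c
/* Sketch of an MPI truncate error: the receiver posts a 4-int buffer,
 * but the incoming message carries 1024 ints. Run with exactly 2
 * processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Return error codes instead of aborting, so the error is visible. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    if (rank == 0) {
        int big[1024] = {0};
        MPI_Send(big, 1024, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int small[4];
        MPI_Status status;
        int err = MPI_Recv(small, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        if (err != MPI_SUCCESS) {
            int cls;
            MPI_Error_class(err, &cls);
            printf("receive failed, error class %s\n",
                   cls == MPI_ERR_TRUNCATE ? "MPI_ERR_TRUNCATE" : "other");
        }
    }

    MPI_Finalize();
    return 0;
}
```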
On Jun 28, 2010, at
I would advise pre-posting MPI_Irecv's if you know that messages will be coming.
Barriers should only be used if you need to synchronize between groups of
processes. Keep in mind that MPI is a lossless/reliable message delivery
mechanism, so there's no need for you to call additional error-corr
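The pre-posting advice above can be sketched roughly as follows (a two-rank example with an illustrative message size and tag, assumed for demonstration):

```c
/* Sketch of pre-posting a receive: rank 1 posts MPI_Irecv before the
 * matching send is issued, so the message can land directly in the
 * user buffer when it arrives. Run with exactly 2 processes. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[256];

    if (rank == 1) {
        MPI_Request req;
        /* Post the receive early, before the sender starts. */
        MPI_Irecv(buf, 256, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD, &req);

        /* ... overlap other computation here ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete when the message arrives */
    } else if (rank == 0) {
        for (int i = 0; i < 256; i++) buf[i] = (double)i;
        MPI_Send(buf, 256, MPI_DOUBLE, 1, 42, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```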
Dear OpenMPI list,
I am using an MPI-parallelized simulation program with a
domain decomposition in 6 dimensions.
In order to improve the scalability of my program, I would like to know
by what criteria MPI distributes the ranks when using MPI_Cart_create
(with reorder allowed).
To e
Hi, all
I'm now learning MPI, but I'm not clear on the following questions:
1. Suppose an MPI program involves several nodes; if one node dies, will the
program terminate?
2. Is there any possibility to extend or shrink the size of an MPI
communicator? If so, we can use a spare node to replace