Are you sure that you have exactly the same version of Open MPI on all your
nodes?
On May 14, 2013, at 11:39 AM, Hayato KUNIIE wrote:
> Hello, I'm kuni255.
>
> I built a Beowulf-type PC cluster (CentOS release 6.4), and I am studying
> MPI (Open MPI ver. 1.6.4). I tried the following sample, which uses MPI_REDUCE. ...
I added 'implicit none' to disable implicit typing,
and I changed include 'mpif.h' to use mpi.
But unfortunately I couldn't get any more information.
It makes me uneasy that the error occurs only on the slave nodes,
but I have no other ideas.
(2013/05/15 1:11), Andrea Negri wrote:
> I'm not an expert on MPI, but I strongly encourage you to use ...
On May 14, 2013, at 3:02 PM, Ralph Castain wrote:
>
> On May 14, 2013, at 12:56 PM, Damien Kick wrote:
>
>>
>> On May 14, 2013, at 1:46 PM, Ralph Castain wrote:
>>
>>> Problem is that comm_accept isn't thread safe in the 1.6 series - we have a
>>> devel branch that might solve it, but it is still under evaluation
On May 14, 2013, at 12:56 PM, Damien Kick wrote:
>
> On May 14, 2013, at 1:46 PM, Ralph Castain wrote:
>
>> Problem is that comm_accept isn't thread safe in the 1.6 series - we have a
>> devel branch that might solve it, but it is still under evaluation
>
> So then probably the only way to implement an MPI server which handles
> multiple concurrent clients with ...
On May 14, 2013, at 1:46 PM, Ralph Castain wrote:
> Problem is that comm_accept isn't thread safe in the 1.6 series - we have a devel
> branch that might solve it, but it is still under evaluation
So then probably the only way to implement an MPI server which handles multiple
concurrent clients with ...
Problem is that comm_accept isn't thread safe in the 1.6 series - we have a devel
branch that might solve it, but it is still under evaluation
On May 14, 2013, at 11:15 AM, Damien Kick wrote:
> I've been playing with some code to try and become familiar with
> MPI_Comm_accept and MPI_Comm_connect to implement an MPI client/server. ...
I've been playing with some code to try and become familiar with
MPI_Comm_accept and MPI_Comm_connect to implement an MPI
client/server. The code that I have simply sends a single MPI_INT,
the client process pid, to the server and then disconnects. The code
that I have works for a few test runs, but ...
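(Damien's actual code is not included in the digest. Purely as an illustration of the
pattern he describes, one server accepting a single connection and one client that
connects, sends one integer, and disconnects, a minimal Fortran sketch might look like
the following. The port-name hand-off via standard input and the placeholder value
12345 are assumptions of this sketch; the original sends the client's pid.)

! --- server.f90: accept one client and receive one integer ---
program accept_server
  use mpi
  implicit none
  integer :: ierr, client_comm, value
  integer :: status(MPI_STATUS_SIZE)
  character(len=MPI_MAX_PORT_NAME) :: port_name

  call MPI_Init(ierr)
  call MPI_Open_port(MPI_INFO_NULL, port_name, ierr)   ! obtain a port name
  print *, 'server port: ', trim(port_name)            ! hand this string to the client
  call MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, client_comm, ierr)
  call MPI_Recv(value, 1, MPI_INTEGER, 0, 0, client_comm, status, ierr)
  print *, 'received: ', value
  call MPI_Comm_disconnect(client_comm, ierr)
  call MPI_Close_port(port_name, ierr)
  call MPI_Finalize(ierr)
end program accept_server

! --- client.f90: connect, send one integer, disconnect ---
program connect_client
  use mpi
  implicit none
  integer :: ierr, server_comm, value
  character(len=MPI_MAX_PORT_NAME) :: port_name

  call MPI_Init(ierr)
  read (*, '(A)') port_name        ! port name copied from the server's output (assumption)
  call MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, server_comm, ierr)
  value = 12345                    ! placeholder; the original code sends the client's pid
  call MPI_Send(value, 1, MPI_INTEGER, 0, 0, server_comm, ierr)
  call MPI_Comm_disconnect(server_comm, ierr)
  call MPI_Finalize(ierr)
end program connect_client

The server prints its port name; copying that string to the client before it runs stands
in for whatever hand-off (a file, a command-line argument, or MPI_Publish_name) the real
code uses.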
I'm not an expert on MPI, but I strongly encourage you to use:

use mpi
implicit none

This can save a LOT of time when debugging.
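(As a tiny sketch of where those two lines go, with arbitrary names: use mpi pulls in
the Fortran module, which lets the compiler check many MPI argument lists, and
implicit none forces every variable to be declared.)

program example            ! arbitrary name, just a sketch
  use mpi                  ! replaces: include 'mpif.h'
  implicit none            ! every variable below must be declared
  integer :: ierr, rank

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  ! ... rest of the program ...
  call MPI_Finalize(ierr)
end program example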
Hello, I'm kuni255.

I built a Beowulf-type PC cluster (CentOS release 6.4), and I am studying
MPI (Open MPI ver. 1.6.4). I tried the following sample, which uses
MPI_REDUCE. Then an error occurred.

This cluster system consists of one head node and 2 slave nodes, and the
home directory on the head node is shared over NFS.
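(The sample itself is not reproduced in the digest. As a rough sketch of the kind of
MPI_REDUCE program being described, with made-up variable names and values, it might
look like this:)

program reduce_sample
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, local, total

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  local = rank + 1                     ! made-up per-rank value
  ! Sum every rank's value onto rank 0
  call MPI_Reduce(local, total, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'sum over', nprocs, 'ranks =', total

  call MPI_Finalize(ierr)
end program reduce_sample

Whether rank 0 ends up on the head node or on a slave depends on the hostfile given to
mpirun.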