[OMPI users] adding library (libmpi_cxx.so.1)

2013-09-13 Thread amirreza Hashemi
Hi All,

I have a problem adding the MPI library libmpi_cxx.so.1 to the code I am
working on. I export the library path to LD_LIBRARY_PATH, but whenever I do
this, I get this error:
Fatal error in MPI_Comm_dup: Invalid communicator, error stack:
MPI_Comm_dup(168): MPI_Comm_dup(comm=0x0, new_comm=0x7fff39826eac) failed
MPI_Comm_dup(96).: Invalid communicator

It may be that I have several MPI installations on my Linux machine and
the code is not picking up the right one. I have this library in only two
paths, /usr/lib64/openmpi/lib/ and /usr/local/lib/, but neither of them
works in my case. The Open MPI installed on my machine is
openmpi-1.5.4-3.fc16.x86_64, and I am working on Fedora.
Can anybody help me figure out this problem?

Thanks,
Ami
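A quick way to narrow this down is to test whether the installed Open MPI can
duplicate MPI_COMM_WORLD at all, independently of the larger code. The minimal
sketch below assumes it is built with Open MPI's mpif90 wrapper and launched
with the matching mpirun (on Fedora these typically live under
/usr/lib64/openmpi/bin); those details are assumptions, not taken from the
post above.

! Minimal sanity check: duplicate MPI_COMM_WORLD, the same call that
! fails in the error stack above. If this runs cleanly, the Open MPI
! installation itself is fine and the problem is more likely in how the
! larger code is compiled or linked.
program dup_test
  use mpi
  implicit none
  integer :: newcomm, rank, ierr

  call MPI_Init(ierr)
  call MPI_Comm_dup(MPI_COMM_WORLD, newcomm, ierr)
  call MPI_Comm_rank(newcomm, rank, ierr)
  print *, 'rank', rank, 'duplicated MPI_COMM_WORLD successfully'
  call MPI_Comm_free(newcomm, ierr)
  call MPI_Finalize(ierr)
end program dup_test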

Re: [OMPI users] adding library (libmpi_cxx.so.1)

2013-09-13 Thread Jeff Squyres (jsquyres)
This isn't quite enough information to figure out what's going on.  Can you
provide all the information listed here:

   http://www.open-mpi.org/community/help/




-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



Re: [OMPI users] adding library (libmpi_cxx.so.1)

2013-09-13 Thread amirreza Hashemi

Jeff,

Sorry about that; actually I am not sure exactly what more I can provide
here. I am using openmpi-1.5.4-3.fc16.x86_64 on a Fedora machine, which I
installed with the yum install openmpi command. But when I try to link the
library libmpi_cxx.so.1 into the main code, it gives me an error saying
there is no valid communicator:
Fatal error in MPI_Comm_dup: Invalid communicator, error stack:
MPI_Comm_dup(168): MPI_Comm_dup(comm=0x0, new_comm=0x7fffde7fb24c) failed
MPI_Comm_dup(96).: Invalid communicator

The MPI libraries are found in two paths, /usr/lib64/openmpi/lib/ and
/usr/local/lib/.
I would like to ask how I can link against the MPI library on a Fedora
Linux machine. I did indeed add libmpi_cxx.so.1, and it resolves to
/usr/lib64/openmpi/lib/libmpi_cxx.so.1 (0x7f120836e000).
I don't know whether that is enough information or not.
I would really appreciate it if you could help me figure out this problem!

Thanks,
Ami




[OMPI users] any deadlocks in this sets of MPI_send and MPI_recv ?

2013-09-13 Thread Huangwei
Dear All,

I have a question about using MPI_send and MPI_recv.

*The objective is as follows:*
I would like to send an array Q from ranks 1 to N-1 to rank 0, and rank 0
receives Q from all the other processors. Q is put into a new array Y on
rank 0 (of course this copy is not done by MPI), and then MPI_bcast is used
(from rank 0) to broadcast Y to all the processors.

*Fortran Code is like:*
if(myrank .eq. 0) then
   itag = myrank
   call mpi_send(Q.., 0, itag, .)
else
   do i=1, N-1
      itag = i
      call mpi_recv(QRECS..., i, itag, .)
   enddo
endif

call mpi_bcast(YVAR., 0, ..)

*The problem I met is:*
In my simulation, time marching is performed. These mpi_send and mpi_recv
calls are fine for the first three time steps. However, at the fourth time
step, the loop only completes from i=1 to i=13 (there are 48 processors in
total). That means that from the 14th processor onwards, mpi_recv did not
receive the data, and the code hangs there forever. Does a deadlock occur
in this situation? How can I figure out this problem?

Thank you so much if anyone can give me some suggestions.

best regards,
Huangwei
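For what it is worth, the data movement described above (collect every rank's
Q on rank 0, then broadcast the assembled array) can also be expressed with
the collective MPI_Gather instead of an explicit send/recv loop. The sketch
below is illustrative only: the block length M, the array names, and the
MPI_DOUBLE_PRECISION datatype are assumptions, not taken from the original
code, and it assumes Q has the same length on every rank.

! Gather each rank's Q into YVAR on rank 0, then broadcast YVAR to all
! ranks. Names and sizes here are hypothetical placeholders.
program gather_bcast_collective
  use mpi
  implicit none
  integer, parameter :: M = 4                ! assumed block length per rank
  integer :: myrank, nprocs, ierr
  double precision :: Q(M)
  double precision, allocatable :: YVAR(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  allocate(YVAR(M*nprocs))
  Q = dble(myrank)                           ! dummy data for the sketch

  ! rank 0 receives all blocks of Q, in rank order, into YVAR
  call MPI_Gather(Q, M, MPI_DOUBLE_PRECISION, YVAR, M, MPI_DOUBLE_PRECISION, &
                  0, MPI_COMM_WORLD, ierr)
  ! then every rank gets the assembled array
  call MPI_Bcast(YVAR, M*nprocs, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

  call MPI_Finalize(ierr)
end program gather_bcast_collective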


Re: [OMPI users] any deadlocks in this sets of MPI_send and MPI_recv ?

2013-09-13 Thread Huangwei
*The code I intended to post is this:*
if(myrank .ne. 0) then
   itag = myrank
   call mpi_send(Q.., 0, itag, .)
else
   do i=1, N-1
      itag = i
      call mpi_recv(QRECS..., i, itag, .)
   enddo
endif

call mpi_bcast(YVAR., 0, ..)

best regards,
Huangwei
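For reference, a self-contained version of this corrected pattern might look
like the sketch below. The block length M, the MPI_DOUBLE_PRECISION datatype,
and the way QRECS is copied into YVAR are illustrative assumptions, not taken
from the original code.

! Every non-zero rank sends its block of Q to rank 0; rank 0 copies the
! blocks into YVAR; YVAR is then broadcast to all ranks. Every rank,
! including the senders, must reach the MPI_Bcast call.
program gather_then_bcast
  use mpi
  implicit none
  integer, parameter :: M = 4                ! assumed block length per rank
  integer :: myrank, nprocs, ierr, i, itag
  integer :: status(MPI_STATUS_SIZE)
  double precision :: Q(M), QRECS(M)
  double precision, allocatable :: YVAR(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  allocate(YVAR(M*nprocs))
  Q = dble(myrank)                           ! dummy data for the sketch

  if (myrank .ne. 0) then
     itag = myrank
     call MPI_Send(Q, M, MPI_DOUBLE_PRECISION, 0, itag, MPI_COMM_WORLD, ierr)
  else
     YVAR(1:M) = Q                           ! rank 0's own block
     do i = 1, nprocs-1
        itag = i
        call MPI_Recv(QRECS, M, MPI_DOUBLE_PRECISION, i, itag, &
                      MPI_COMM_WORLD, status, ierr)
        YVAR(i*M+1:(i+1)*M) = QRECS
     enddo
  endif

  call MPI_Bcast(YVAR, M*nprocs, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

  call MPI_Finalize(ierr)
end program gather_then_bcast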





