Re: [OMPI users] MPI_Irecv, MPI_Wait and MPI_Iprobe

2011-11-20 Thread Lukas Razik
tried that before. Regards and good luck!
Lukas

> From: Lukas Razik
> To: Mudassar Majeed; "us...@open-mpi.org"
> Sent: Sunday, November 20, 2011 3:22 PM
> Subject: Re: [OMPI users] MPI_Irecv, MPI_Wait and MPI_Iprobe
> H

Re: [OMPI users] MPI_Irecv, MPI_Wait and MPI_Iprobe

2011-11-20 Thread Lukas Razik
Hello Mudassar!

> Dear people,
> I have a scenario as shown below, please tell me if it is possible or not
>
> --
> while(!IsDone)
> {
>     // some code here
>     MPI_Irecv( .. );
>     // some code here
>     M
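The quoted loop is cut off by the archive; below is a minimal sketch of one way such a loop can look, assuming the receive is completed with MPI_Wait and that a negative value acts as the stop signal (both assumptions, not taken from the original mail). Run with mpirun -np 2.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* rank 0 feeds rank 1 a few values; -1 acts as the stop signal */
        for (int i = 3; i >= -1; --i)
            MPI_Send(&i, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int buf = 0, done = 0;
        while (!done) {
            /* some code here */
            MPI_Request req;
            MPI_Irecv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
            /* some code here */
            MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete the receive */
            printf("rank 1 got %d\n", buf);
            done = (buf < 0);                    /* placeholder termination condition */
        }
    }

    MPI_Finalize();
    return 0;
}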

Re: [OMPI users] UDP like messaging with MPI

2011-11-19 Thread Lukas Razik
Hi!

> I know about these functions; they have special requirements, like the MPI_Irecv
> call should be made in every process. My processes should not look for
> messages or implicitly receive them.

I understand. But then I think your UDP comparison is wrong - whatever... :)

> But messages shuddering
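Since the quoted requirement is that processes should not pre-post receives, here is a minimal sketch of how MPI_Iprobe can check for a pending message without blocking, issuing the actual receive only once a message is known to be there. Rank numbers, tag and message value are placeholders, not taken from the thread. Run with mpirun -np 2.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int flag = 0;
        MPI_Status st;
        while (!flag) {                          /* do useful work here instead of spinning */
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &st);
        }
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, st.MPI_SOURCE, st.MPI_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank %d\n", msg, st.MPI_SOURCE);
    }

    MPI_Finalize();
    return 0;
}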

Re: [OMPI users] UDP like messaging with MPI

2011-11-19 Thread Lukas Razik
Hello Mudassar,

I think you want "asynchronous communication". Therefore you could use these functions:

http://mpi.deino.net/mpi_functions/MPI_Isend.html
http://mpi.deino.net/mpi_functions/MPI_Irecv.html

and this tutorial:

http://supercomputingblog.com/mpi/mpi-tutorial-5-asynchronous-communication/

Best regards
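A minimal sketch of the asynchronous pattern behind the links above (buffer names, sizes and the ring-style peer choice are placeholders): each rank posts MPI_Irecv and MPI_Isend, can overlap computation with the communication, and only then waits for completion.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int peer = (rank + 1) % size;                 /* exchange with the "next" rank */
    int sendbuf = rank, recvbuf = -1;
    MPI_Request reqs[2];

    MPI_Irecv(&recvbuf, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... computation can overlap with the communication here ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);    /* both operations are complete now */
    printf("rank %d received %d\n", rank, recvbuf);

    MPI_Finalize();
    return 0;
}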

Re: [OMPI users] Problem with openmpi-default-hostfile

2011-11-07 Thread Lukas Razik
Hello Ralph and thanks for your answer!

> Where did you install OMPI? If you check "which mpirun", does it point
> to the same installation where you edited the default hostfile?

It was installed in the default path which is chosen by OFED. And yes, I've edited the right openmpi-default-hostfile

[OMPI users] Problem with openmpi-default-hostfile

2011-11-06 Thread Lukas Razik
Hello everyone! I've built v1.4.3 (which was in OFED-1.5.3.2) and v1.4.4 (from your website). But in both versions I have the following problem: If I write some hosts into '/usr/mpi/gcc/openmpi-1.4.4/etc/openmpi-default-hostfile':

cluster1
cluster2
cluster3
cluster4

and execute 'mpirun -np 4 " then
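A sketch of the setup being described (the program name in the truncated mpirun command is unknown, so ./my_prog below is a placeholder):

# Contents of /usr/mpi/gcc/openmpi-1.4.4/etc/openmpi-default-hostfile
cluster1
cluster2
cluster3
cluster4

# With the default hostfile in place, this should spread the 4 processes over those hosts:
mpirun -np 4 ./my_prog

# The same hosts can also be passed explicitly, which additionally verifies that the
# edited file belongs to the mpirun actually found in $PATH (compare with `which mpirun`):
mpirun -np 4 --hostfile /usr/mpi/gcc/openmpi-1.4.4/etc/openmpi-default-hostfile ./my_prog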