Hi
mpirun has an option for this (check the mpirun man page):
-tag-output, --tag-output
    Tag each line of output to stdout, stderr, and stddiag with
    [jobid, rank] indicating the process jobid and rank that generated the
    output, and the channel which generated
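For example (with a hypothetical 4-process job and an executable ./my_app):

    mpirun --tag-output -np 4 ./my_app

Each output line should then carry a prefix along the lines of
[jobid,rank]<stdout>: so you can see which rank produced it.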
Hi
I am really not a Python expert, but it looks to me as if you were
gathering arrays filled with zeroes:
a = array('i', [0]) * n
Shouldn't this line be
a = array('i', [r])*n
where r is the rank of the process?
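In C terms (a rough, hypothetical sketch, not your actual test), what I would
expect the gather to look like is something like this, with every rank filling
its send buffer with its own rank r instead of zeroes:

/* Hypothetical C analogue: each rank contributes n copies of its own rank. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    const int n = 4;
    int r, size, i;
    int sendbuf[4];
    int *recvbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &r);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (i = 0; i < n; i++)
        sendbuf[i] = r;                       /* [r] * n, not [0] * n */

    if (r == 0)
        recvbuf = malloc(size * n * sizeof(int));

    MPI_Gather(sendbuf, n, MPI_INT, recvbuf, n, MPI_INT, 0, MPI_COMM_WORLD);

    if (r == 0) {
        for (i = 0; i < size * n; i++)
            printf("%d ", recvbuf[i]);        /* expect 0 0 0 0 1 1 1 1 ... */
        printf("\n");
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}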
Jody
On Thu, May 20, 2010 at 12:00 AM, Battalgazi YILDIRIM wrote:
> Hi,
>
>
> I a
Hello,
I assume this question has already been discussed many times, but I cannot
find a solution to my problem on the Internet.
It is about the buffer size limit of MPI_Send and MPI_Recv in a heterogeneous
system (32-bit laptop / 64-bit cluster).
My configuration is: Open MPI 1.4, configured with: --wi
Hello,
I have a general question about the best way to implement an Open MPI
application, i.e. the design of the application.
A machine (I call it the "server") should regularly send tasks to do (byte
buffers of widely varying size) to a cluster containing a lot of processors
(the "clients").
The se
Olivier Riff wrote:
Hello,
I assume this question has already been discussed many times, but I
cannot find a solution to my problem on the Internet.
It is about the buffer size limit of MPI_Send and MPI_Recv in a
heterogeneous system (32-bit laptop / 64-bit cluster).
My configuration is:
Open MPI 1
This probably got fixed in https://svn.open-mpi.org/trac/ompi/ticket/2386
Can you try 1.4.2? The fix should be in there.
Regards
--Nysal
On Thu, May 20, 2010 at 2:02 PM, Olivier Riff wrote:
> Hello,
>
> I assume this question has already been discussed many times, but I cannot
> find on Intern
Hello Terry,
Thanks for your answer.
2010/5/20 Terry Dontje
> Olivier Riff wrote:
>
> Hello,
>
> I assume this question has already been discussed many times, but I cannot
> find a solution to my problem on the Internet.
> It is about the buffer size limit of MPI_Send and MPI_Recv with heterogeneous
2010/5/20 Nysal Jan
> This probably got fixed in https://svn.open-mpi.org/trac/ompi/ticket/2386
> Can you try 1.4.2, the fix should be in there.
>
>
I will test it soon (it takes some time to install the new version on each
node). It would be perfect if it fixes it.
I will tell you the result ASAP.
Hello,
Thanks for the advice, it works with NFS!
But:
1) It doesn't work anymore if I remove --prefix /Network/opt/openmpi-1.4.2 (is
there a way on OS X to avoid passing it, e.g. by declaring it once?)
2) I must use the -static-intel option at link time, otherwise libiomp5.dylib
is not found.
Chri
I replied to this yesterday:
http://www.open-mpi.org/community/lists/users/2010/05/13090.php
On May 20, 2010, at 8:13 AM, Christophe Peyret wrote:
> Hello,
>
> Thank for the advice, it works with NFS !
>
> But :
>
> 1) it doesn't work anymore, if I remove --prefix /Network/opt/openmpi-1
Thank you!
Sang Chul
On May 20, 2010, at 2:39 AM, jody wrote:
> Hi
> mpirun has an option for this (check the mpirun man page):
>
> -tag-output, --tag-output
> Tag each line of output to stdout, stderr, and
> stddiag with [jobid, rank] indicating the process jobid and
> r
Hi Jody,
I think that it is correct; you can test this example on your desktop.
Thanks,
On Thu, May 20, 2010 at 3:18 AM, jody wrote:
> Hi
> I am really no python expert, but it looks to me as if you were
> gathering arrays filled with zeroes:
> a = array('i', [0]) * n
>
> Shouldn't this lin
Can you send us an all-C or all-Fortran example that shows the problem?
We don't have easy access to test through the python bindings. ...ok, I admit
it, it's laziness on my part. :-) But having a pure Open MPI test app would
also remove some possible variables and possible sources of error.
Hi Jose,
On 5/12/2010 10:57 PM, José Ignacio Aliaga Estellés wrote:
I think that I have found a bug in the implementation of the GM collective
routines included in Open MPI. The version of the GM software is 2.0.30
for the PCI64 cards.
I obtain the same problems when I use the 1.4.1 or the 1.4.2
I hope I'm not too late in my reply, and I hope I'm not repeating the
same solution others have given you.
I had a similar error in a code a few months ago. The error was this: I
think I was doing an MPI_Pack/Unpack to send data between nodes. The
problem was that I was allocating space for a buff
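A pattern that avoids that kind of under-allocation (a minimal sketch, not the
original code) is to let MPI_Pack_size compute the required buffer size before
packing:

/* Minimal sketch: size the pack buffer with MPI_Pack_size instead of
 * guessing, then pack a count followed by the data. */
#include <mpi.h>
#include <stdlib.h>

void pack_and_send(MPI_Comm comm, int dest, int tag)
{
    int ndoubles = 100;
    double data[100] = {0};        /* assumed to be filled in elsewhere */
    int s1, s2, bufsize, position = 0;
    char *buffer;

    MPI_Pack_size(1, MPI_INT, comm, &s1);            /* room for the count */
    MPI_Pack_size(ndoubles, MPI_DOUBLE, comm, &s2);  /* room for the data  */
    bufsize = s1 + s2;

    buffer = malloc(bufsize);
    MPI_Pack(&ndoubles, 1, MPI_INT, buffer, bufsize, &position, comm);
    MPI_Pack(data, ndoubles, MPI_DOUBLE, buffer, bufsize, &position, comm);

    MPI_Send(buffer, position, MPI_PACKED, dest, tag, comm);
    free(buffer);
}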
Hi,
you are right, I should have provided C++ and Fortran examples, so I am doing
that now.
Here is "cplusplus.cpp":
#include <mpi.h>
#include <iostream>

using namespace std;

int main()
{
    MPI::Init();
    char command[] = "./a.out";
    MPI::Info info;
    MPI::Intercomm child = MPI::COMM_WORLD.Spawn(command, NULL,
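Since an all-C reproducer was asked for, a rough all-C skeleton of the parent
side might look like this (a sketch that assumes the child program is ./a.out
and that 2 copies are spawned; not necessarily the exact failing test):

/* Rough all-C skeleton of the parent side (assumes the child is ./a.out
 * and 2 copies are spawned; not the exact failing test). */
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm child;

    MPI_Init(&argc, &argv);
    MPI_Comm_spawn("./a.out", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    /* ... the collective over the intercommunicator would go here ... */

    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}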
You're basically talking about implementing some kind of application-specific
protocol. A few tips that may help in your design:
1. Look into MPI_Isend / MPI_Irecv for non-blocking sends and receives. These
may be particularly useful on the server side, so that it can do other stuff
while sen
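For illustration, a minimal sketch of the server side (hypothetical names;
assumes rank 0 is the server, ranks 1..nclients are the clients, and the task
buffers are raw bytes) could look like:

/* Minimal server-side sketch: post one non-blocking send per client, then
 * overlap other work with MPI_Testall until everything has gone out. */
#include <mpi.h>
#include <stdlib.h>

#define TASK_TAG 1

void send_tasks(char **task_buf, int *task_len, int nclients, MPI_Comm comm)
{
    MPI_Request *reqs = malloc(nclients * sizeof(MPI_Request));
    int client, done = 0;

    for (client = 0; client < nclients; client++) {
        MPI_Isend(task_buf[client], task_len[client], MPI_BYTE,
                  client + 1, TASK_TAG, comm, &reqs[client]);
    }

    while (!done) {
        MPI_Testall(nclients, reqs, &done, MPI_STATUSES_IGNORE);
        /* ... do other server work here while the sends progress ... */
    }

    free(reqs);
}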
I have done the test with v1.4.2 and indeed it fixes the problem.
Thanks, Nysal.
Thank you also, Terry, for your help. With the fix I no longer need to use a
huge value of btl_tcp_eager_limit (I keep the default value), which
considerably decreases the memory consumption I had before. Everything w
Thanks for pointing the problem out. I checked in the code, and the problem
is in the MPI layer itself. The following check prevents us from doing
anything, e.g. in ompi/mpi/c/allgather.c:

    if ((MPI_IN_PLACE != sendbuf && 0 == sendcount) ||
        (0 == recvcount)) {
        return MPI_SUCCESS;
    }
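In other words (a hypothetical call, just to illustrate the effect of the
check): on any process that passes recvcount == 0, e.g.

    MPI_Allgather(sendbuf, n, MPI_INT, recvbuf, 0, MPI_INT, comm);

the call returns MPI_SUCCESS immediately and that process never enters the
underlying collective.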
Filed as https://svn.open-mpi.org/trac/ompi/ticket/2415.
Thanks for the bug report!
On May 20, 2010, at 1:33 PM, Edgar Gabriel wrote:
> thanks for pointing the problem out. I checked in the code, the problem
> is the MPI layer itself. The following check prevents us from doing
> anything
>
> -
On 20 May 2010 11:09, Jeff Squyres wrote:
> Can you send us an all-C or all-Fortran example that shows the problem?
>
> We don't have easy access to test through the python bindings. ...ok, I
> admit it, it's laziness on my part. :-)
>
Jeff, you should really learn Python and give a try to mpi
On May 20, 2010, at 2:52 PM, Lisandro Dalcin wrote:
> Jeff, you should really learn Python and give a try to mpi4py. Even if
> you do not consider Python a language for serious, production work
> :-), it would be a VERY productive one for writing tests targeting
> MPI.
Freely admitted laziness on