MPI mandates that the count argument is an "int", which is signed (and 32 bits wide on most modern platforms). Hence the limit: you can't express a count larger than 2^31 - 1 elements, which works out to 2GB for single-byte datatypes.

If you need to send an aggregate amount of data over 2GB, you should be able to make a composite datatype and send multiples of those. E.g., make a datatype that is 2 doubles; if you send 2GB worth of those, then you're actually sending 4GB of doubles. And so on.


On Feb 18, 2009, at 7:20 PM, Justin wrote:

My guess would be that your count argument is overflowing. Is the count a signed 32-bit integer? If so, it will overflow around 2GB. Try printing the size that you are sending and see if you get a large negative number.

Justin

Vittorio wrote:
Hi! I'm running a test to measure the transfer rates and latency of Open MPI over InfiniBand.

Starting from 1 kB, everything was fine until I tried to transfer 2 GB, at which point I received this error:

[tatami:02271] *** An error occurred in MPI_Recv
[tatami:02271] *** on communicator MPI_COMM_WORLD
[tatami:02271] *** MPI_ERR_COUNT: invalid count argument
[tatami:02271] *** MPI_ERRORS_ARE_FATAL (goodbye)
[randori:12166] *** An error occurred in MPI_Send
[randori:12166] *** on communicator MPI_COMM_WORLD
[randori:12166] *** MPI_ERR_COUNT: invalid count argument
[randori:12166] *** MPI_ERRORS_ARE_FATAL (goodbye)


This error appears whether I run the program on a single node or across both.
Is 2 GB the intrinsic limit of MPI_Send/MPI_Recv?

thanks a lot
Vittorio
------------------------------------------------------------------------

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

--
Jeff Squyres
Cisco Systems
