I doubt this comes from MPI_Pack/MPI_Unpack. The difference is 137 seconds over 5 calls, i.e. roughly 27 seconds per call to MPI_Pack just to pack 8 integers. I know the code, and I'm certain there is no way to spend 27 seconds in there.

Can you run your application under valgrind with the callgrind tool? That will give you some basic information about where the time is spent, and give us a better idea of where to look.
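A hedged sketch of how such a run might be launched; the binary name `./myapp` and the rank count are illustrative, not taken from the original mail:

```shell
# Launch each MPI rank under callgrind (binary name ./myapp is illustrative).
# Each process writes its own profile file, callgrind.out.<pid>.
mpirun -np 16 valgrind --tool=callgrind ./myapp

# Summarize one rank's profile; <pid> stands in for the actual process id.
callgrind_annotate callgrind.out.<pid>
```

Note that running every rank under valgrind slows the application considerably, so the absolute times will change; it is the relative distribution of time across functions that matters here.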

  Thanks,
    george.

On Mar 6, 2007, at 11:26 AM, Michael wrote:

I have a section of code where I need to send 8 separate integers via
MPI_BCAST.

Initially I was just putting the 8 integers into an array and then
sending that array.

I just tried using MPI_PACK on those 8 integers and I'm seeing a
massive slowdown in the code. I have a lot of other communication,
and this section is executed only 5 times.  I went from 140 seconds
to 277 seconds on 16 processors using TCP over a dual gigabit Ethernet
setup (I'm the only user working on this system today).
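The two approaches described above can be sketched as follows. This is a minimal illustration, assuming 8 integers broadcast from rank 0; the function and variable names are mine, not from the original code:

```c
/* Sketch of the two approaches: broadcasting the array directly vs.
 * packing with MPI_Pack first. Names here are illustrative only. */
#include <mpi.h>

/* Approach 1: broadcast the contiguous array of 8 ints directly. */
void bcast_plain(int vals[8], MPI_Comm comm)
{
    MPI_Bcast(vals, 8, MPI_INT, 0, comm);
}

/* Approach 2: MPI_Pack into a byte buffer on the root, broadcast the
 * packed bytes, and MPI_Unpack on the other ranks. */
void bcast_packed(int vals[8], MPI_Comm comm)
{
    /* A robust version would size the buffer with MPI_Pack_size;
     * 8 * sizeof(int) is assumed sufficient on a homogeneous cluster. */
    char buf[8 * sizeof(int)];
    int pos = 0, rank;

    MPI_Comm_rank(comm, &rank);
    if (rank == 0)
        MPI_Pack(vals, 8, MPI_INT, buf, sizeof(buf), &pos, comm);

    MPI_Bcast(buf, sizeof(buf), MPI_PACKED, 0, comm);

    pos = 0;
    if (rank != 0)
        MPI_Unpack(buf, sizeof(buf), &pos, vals, 8, MPI_INT, comm);
}
```

Either way, the payload is only about 32 bytes, which is why a per-call cost of tens of seconds points to something other than the pack/unpack itself.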

This was run with OpenMPI 1.1.2 to maintain compatibility with a
major HPC site.

Is there a known problem with MPI_PACK/UNPACK in Open MPI?

Michael

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

"Half of what I say is meaningless; but I say it so that the other half may reach you"
                                  Kahlil Gibran

