Hello,

        I'm having a weird problem while using MPI_Comm_accept (C) or 
MPI::Comm::Accept (the C++ bindings).
        My "server" runs until the call to this function but if there's no 
client 
connecting, it sits there eating all CPU (100%), although if a client connects 
the loop works fine, but when the client disconnects again we are back to the 
same high CPU usage.
        I tried using OpenMPI versions 1.1.2 and 1.2. The machine architectures 
are AMD Opteron and Intel Itanium2 respectively, the former compiled with gcc 
4.1.1 and the latter with gcc 3.2.3.

        The C++ code is here:

        http://compel.bu.edu/~nuno/openmpi/

        along with the logs for orted and the 'server' output.

        I started orted with:

        orted --persistent --seed --scope public  --universe foo

        and the 'server' with

        mpirun --universe foo -np 1 ./server

        The code is a C++ conversion of the basic C example posted on the 
mpi-forum website:

        http://www.mpi-forum.org/docs/mpi-20-html/node106.htm#Node109
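
        For reference, the core of my server boils down to the sketch below, 
trimmed and lightly adapted from that forum example (MAX_DATA and the tag 
handling are just placeholders, not my exact code). The process sits in 
MPI_Comm_accept whenever no client is attached, and that is where the CPU 
goes to 100%:

#include <mpi.h>
#include <stdio.h>

#define MAX_DATA 1024   /* placeholder buffer size, as in the forum example */

int main(int argc, char **argv)
{
    MPI_Comm client;
    MPI_Status status;
    char port_name[MPI_MAX_PORT_NAME];
    double buf[MAX_DATA];
    int again;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("server available at port: %s\n", port_name);

    for (;;) {
        /* blocks here until a client connects -- this is where the
           server sits at 100% CPU when nobody is connected */
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                        &client);

        again = 1;
        while (again) {
            MPI_Recv(buf, MAX_DATA, MPI_DOUBLE, MPI_ANY_SOURCE,
                     MPI_ANY_TAG, client, &status);
            switch (status.MPI_TAG) {
            case 0:   /* client asked the server to shut down */
                MPI_Comm_disconnect(&client);
                MPI_Close_port(port_name);
                MPI_Finalize();
                return 0;
            case 1:   /* client is done; go back to accepting */
                MPI_Comm_disconnect(&client);
                again = 0;
                break;
            default:  /* ordinary data message; nothing to do here */
                break;
            }
        }
    }
}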

        Is there an easy fix for this? I also tried the C version and it has the 
same problem...

                                        Regards,
                                                                                
        Nuno
-- 
http://aeminium.org/slug/
