Hi Kalin,
These warning messages are harmless: some of the IPv6 features are not yet
supported on Windows, but everything still runs over IPv4. If you want to get
rid of the messages, just disable IPv6 support in CMake.
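For example, something along these lines from your CMake build directory
(check the exact name of the IPv6 variable in cmake-gui; the one below is
only illustrative):

  REM run from the CMake build directory of the Open MPI tree
  REM OPAL_WANT_IPV6 is a placeholder name -- use the IPv6 variable listed in cmake-gui
  cmake -DOPAL_WANT_IPV6=OFF .
  REM then rebuild the generated solution in Visual Studio 2008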
Regards,
Shiqing
On 2010-10-14 6:46 PM, Kalin Kanov wrote:
Thank you for the quick response; I am looking forward to Shiqing's reply.
Additionally, I noticed that I get the following warnings whenever I run an
Open MPI application. I am not sure whether this has anything to do with the
error that I am getting for MPI_Comm_accept:
[Lazar:03288] mca_oob_tcp_create_listen: unable to disable v4-mapped addresses
[Lazar:00576] mca_oob_tcp_create_listen: unable to disable v4-mapped addresses
[Lazar:00576] mca_btl_tcp_create_listen: unable to disable v4-mapped addresses
Kalin
On 14.10.2010 08:47, Jeff Squyres wrote:
Just FYI -- the main Windows Open MPI guy (Shiqing) is out for a
little while. He's really the best person to answer your question.
I'm sure he'll reply when he can, but I just wanted to let you know
that there may be some latency in his reply.
On Oct 13, 2010, at 5:09 PM, Kalin Kanov wrote:
Hi there,
I am trying to create a client/server application with Open MPI,
which has been installed on a Windows machine by following the
instructions (with CMake) in the README.WINDOWS file in the Open MPI
distribution (version 1.4.2). I have run other test applications that
compile fine under the Visual Studio 2008 Command Prompt. However, I
get the following errors on the server side when accepting a new
client that is trying to connect:
[Lazar:02716] [[47880,1],0] ORTE_ERROR_LOG: Not found in file
..\..\orte\mca\grpcomm\base\grpcomm_base_allgather.c at line 222
[Lazar:02716] [[47880,1],0] ORTE_ERROR_LOG: Not found in file
..\..\orte\mca\grpcomm\basic\grpcomm_basic_module.c at line 530
[Lazar:02716] [[47880,1],0] ORTE_ERROR_LOG: Not found in file
..\..\ompi\mca\dpm\orte\dpm_orte.c at line 363
[Lazar:2716] *** An error occurred in MPI_Comm_accept
[Lazar:2716] *** on communicator MPI_COMM_WORLD
[Lazar:2716] *** MPI_ERR_INTERN: internal error
[Lazar:2716] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 476 on
node Lazar exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
The server and client code is attached. I have struggled with this
problem for quite a while, so please let me know what the issue
might be. I have looked at the archives and the FAQ, and the only
similar thing that I have found had to do with different versions of
Open MPI being installed, but I only have one version, and I believe
it is the one being used.
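For reference, the attached files follow the standard MPI dynamic-process
pattern; the outline below is only a simplified sketch of that pattern
(names and error handling are illustrative), not the attached code itself:

// server.cpp (simplified sketch of the accept side)
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    char port[MPI_MAX_PORT_NAME];
    MPI_Open_port(MPI_INFO_NULL, port);       // ask the runtime for a port string
    std::printf("server port: %s\n", port);   // this string is handed to the client

    MPI_Comm client;
    // this is the call where the MPI_ERR_INTERN above is reported
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);

    // ... exchange messages with the client over 'client' ...

    MPI_Comm_disconnect(&client);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}

// client.cpp makes the matching call with the same port string:
//     MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);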
Thank you,
Kalin
<server.cpp><client.cpp>
--
--------------------------------------------------------------
Shiqing Fan                     http://www.hlrs.de/people/fan
High Performance Computing      Tel.:  +49 711 685 87234
Center Stuttgart (HLRS)         Fax.:  +49 711 685 65832
Address: Allmandring 30         email: f...@hlrs.de
70569 Stuttgart