Hi Shiqing,

2010/4/3 Shiqing Fan <f...@hlrs.de>:
>
> Hi Andrey,
>
> Thanks for your feedback.
I'm pleased to help make Open MPI better, at least this way.

>
> Problems 1, 2 and 3 have been fixed in trunk and will be available in a
> future release. We'll fix the last two problems as soon as possible.
Glad to hear it. Is there any estimate of when 1.4.2 will be released?

Regards,
  Andrey

>
> On 2010-4-1 2:11 PM, NovA wrote:
>>
>> Dear developers,
>>
>> I'm attempting to use Open MPI 1.4.1 on Windows XP x64. Almost
>> everything is working fine now, but in the process I've faced several
>> problems, and some of them remain...
>>
>> (1) There were problems configuring Open MPI with the latest CMake
>> 2.8.1. Fortunately, this was already described in the mailing-list
>> thread "[OMPI users] Windows CMake build problems ... (cont.)". So I
>> switched to CMake 2.6.4, and VC-2005 built everything flawlessly.
>> Looking forward to the real fix, though.
>>
>> (2) I built a test program without any trouble, but got the following
>> error when trying to run it with mpiexec:
>> ----------
>>> mpiexec -np 1 hello.exe
>> Cannot open configuration file C:/Program
>> Files/openMPI-1.4.1/vc-x64/share/openmpi\mpiexec-wrapper-data.txt
>> Error parsing data file mpiexec: Not found
>> ----------
>> I've managed to work around this by creating empty files
>> "mpiexec-wrapper-data.txt" and "mpiexec.exe-wrapper-data.txt". This is
>> a rough fix; those files should probably contain something useful. In
>> any case, I suppose they ought to be created automatically.
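>>
>> For comparison, the wrapper data files on a Linux build of Open MPI
>> (e.g. "mpicc-wrapper-data.txt") are plain key=value text, and I assume
>> mpiexec is looking for something of the same shape here. A rough
>> sketch only; the key names are taken from those Linux files, and the
>> values below are nothing but my guesses for a VC x64 build:
>> ----------
>> project=Open MPI
>> language=C
>> version=1.4.1
>> compiler_env=CC
>> compiler_flags_env=CFLAGS
>> compiler=cl.exe
>> libs=libmpi.lib libopen-rte.lib libopen-pal.lib
>> includedir=${includedir}
>> libdir=${libdir}
>> ----------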
>>
>> (3) Also, mpiexec could not report any errors:
>>> mpiexec
>> --------------------------------------------------------------------------
>> Sorry!  You were supposed to get help about:
>>     no-options-support
>> But I couldn't open the help file:
>>     C:\Program
>> Files\openMPI-1.4.1\icc-x64\share\openmpi\help-opal-wrapper.txt:
>> No such file or directory.  Sorry!
>> --------------------------------------------------------------------------
>>
>> The workaround is to rename the existing file
>> "help-opal-wrapper.exe.txt" to the expected name. Unfortunately, this
>> leads to another error:
>>> mpiexec
>> --------------------------------------------------------------------------
>> Sorry!  You were supposed to get help about:
>>     no-options-support
>> from the file:
>>     help-opal-wrapper.txt
>> But I couldn't find that topic in the file.  Sorry!
>> --------------------------------------------------------------------------
>>
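>> As far as I understand, these help files are plain-text and INI-style:
>> each topic is a bracketed section name followed by the message body.
>> So I assume appending a section like the sketch below to the renamed
>> file would satisfy the lookup; the wording is just my own placeholder:
>> ----------
>> [no-options-support]
>> This wrapper does not support any additional command-line options.
>> ----------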
>>
>> (4) I've found that MPI programs can't run without "mpirun" or
>> "mpiexec". An MPI program is expected to just start as a single
>> process if mpirun is not used. Instead, this leads to the following
>> error:
>>
>> //////////////////////////////////////////////////////////////////////////////////////
>>
>>> hello.exe
>> [nova:14132] [[INVALID],INVALID] ERROR: Failed to identify the local
>> daemon's URI
>> [nova:14132] [[INVALID],INVALID] ERROR: This is a fatal condition when
>> the binomial router
>> [nova:14132] [[INVALID],INVALID] ERROR: has been selected - either
>> select the unity router
>> [nova:14132] [[INVALID],INVALID] ERROR: or ensure that the local
>> daemon info is provided
>> [nova:14132] [[INVALID],INVALID] ORTE_ERROR_LOG: Fatal in file
>> ..\..\..\_src\orte\mca\ess\base\ess_base_std_app.c at line 151
>> --------------------------------------------------------------------------
>> It looks like orte_init failed for some reason; your parallel process is
>> likely to abort.  There are many reasons that a parallel process can
>> fail during orte_init; some of which are due to configuration or
>> environment problems.  This failure appears to be an internal failure;
>> here's some additional information (which may only be relevant to an
>> Open MPI developer):
>>
>>   orte_routed.init_routes failed
>>   -->  Returned value Fatal (-6) instead of ORTE_SUCCESS
>> --------------------------------------------------------------------------
>> [nova:14132] [[INVALID],INVALID] ORTE_ERROR_LOG: Fatal in file
>> ..\..\..\_src\orte\mca\ess\singleton\ess_singleton_module.c at line
>> 189
>> [nova:14132] [[INVALID],INVALID] ORTE_ERROR_LOG: Fatal in file
>> ..\..\..\_src\orte\runtime\orte_init.c at line 132
>> --------------------------------------------------------------------------
>> It looks like orte_init failed for some reason; your parallel process is
>> likely to abort.  There are many reasons that a parallel process can
>> fail during orte_init; some of which are due to configuration or
>> environment problems.  This failure appears to be an internal failure;
>> here's some additional information (which may only be relevant to an
>> Open MPI developer):
>>
>>   orte_ess_set_name failed
>>   -->  Returned value Fatal (-6) instead of ORTE_SUCCESS
>> --------------------------------------------------------------------------
>> --------------------------------------------------------------------------
>> It looks like MPI_INIT failed for some reason; your parallel process is
>> likely to abort.  There are many reasons that a parallel process can
>> fail during MPI_INIT; some of which are due to configuration or
>> environment
>> problems.  This failure appears to be an internal failure; here's some
>> additional information (which may only be relevant to an Open MPI
>> developer):
>>
>>   ompi_mpi_init: orte_init failed
>>   -->  Returned "Fatal" (-6) instead of "Success" (0)
>> --------------------------------------------------------------------------
>> *** An error occurred in MPI_Init
>> *** before MPI was initialized
>> *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
>> [nova:14132] Abort before MPI_INIT completed successfully; not able to
>> guarantee that all other processes were killed!
>>
>> //////////////////////////////////////////////////////////////////////////////////////
>>
>> It's rather annoying, especially because the error message says
>> nothing useful to the end user.
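>>
>> For reference, the program I'm starting directly is nothing exotic;
>> my hello.exe boils down to roughly this sketch:
>>
>> /////////////////////////////   hello.c   /////////////////////////////////////
>> #include <stdio.h>
>> #include "mpi.h"
>>
>> int main(int argc, char **argv)
>> {
>>     int rank = 0, size = 0;
>>
>>     /* started without mpirun, this should come up as a singleton
>>        with size == 1 */
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>>     printf("Hello from rank %d of %d\n", rank, size);
>>     MPI_Finalize();
>>     return 0;
>> }
>> //////////////////////////////////////////////////////////////////////////////////////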
>>
>>
>> (5) And my last problem concerns code. I ran into it while building
>> PETSc-3.1, but it can also be reproduced with the following minimal
>> test case:
>> /////////////////////////////   test.c   /////////////////////////////////////
>> #include "mpi.h"
>>
>> MPI_Comm c = MPI_COMM_NULL;
>>
>> int main()
>> {
>>     return 0;
>> }
>>
>> //////////////////////////////////////////////////////////////////////////////////////
>>
>> This file compiles fine with the C++ compiler, but the pure C
>> compiler produces the following error:
>> -----------
>>
>>> mpicc test.c
>> Microsoft (R) C/C++ Optimizing Compiler Version 14.00.50727.42 for x64
>> Copyright (C) Microsoft Corporation.  All rights reserved.
>>
>> test.c
>> test.c(3) : error C2099: initializer is not a constant
>> ------------
>>
>> Is this intended behavior for MPI_COMM_NULL? The PETSc developers
>> said that "this would seem like a violation of the standard..."
>>
>> With best regards,
>>   Andrey
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>>
>
>
> --
> --------------------------------------------------------------
> Shiqing Fan                          http://www.hlrs.de/people/fan
> High Performance Computing           Tel.: +49 711 685 87234
>  Center Stuttgart (HLRS)            Fax.: +49 711 685 65832
> Address:Allmandring 30               email: f...@hlrs.de
> 70569 Stuttgart
>
>
