Re: [OMPI users] Linking failure on Windows
So mpicc builds it completely? The only thing I can think of is to look closely at both the compile and link command lines and see what's different. It might be going sideways at the compile step, from something in an include interacting with a preprocessor definition.

Damien

On 01/10/2012 9:57 PM, Gib Bogle wrote:
> Hi Damien,
>
> I've checked and double-checked, and I can't see anything that isn't
> 32-bit. In fact my VS2005 only knows about 32-bit. I just tested copying
> the source code (with the appropriate include directories) to another
> directory and built the executable successfully with mpicc. But I can't
> see anything in the mpicc link (shown with --showme:link) that is not in
> VS. The command line in VS has a lot more stuff in it, to be sure.
>
> Gib
>
> On 2/10/2012 3:55 p.m., Damien Hocking wrote:
>> Gib,
>>
>> If you have OMPI_IMPORTS set, that usually removes those symbol errors.
>> Are you absolutely sure you have everything set to 32-bit in Visual
>> Studio?
>>
>> Damien
>>
>> On 01/10/2012 7:55 PM, Gib Bogle wrote:
>>> I am building the Sundials examples with MS Visual Studio 2005 version 8
>>> (i.e. 32-bit) on Windows 7 64-bit. The OpenMPI version is
>>> OpenMPI_1.6.2-win32. All the parallel examples fail with the same linker
>>> errors. I have added the preprocessor definitions OMPI_IMPORTS,
>>> OPAL_IMPORTS and ORTE_IMPORTS. The libraries being linked are
>>> libmpi.lib, libmpi_cxx.lib, libopen-pal.lib and libopen-rte.lib. Here
>>> are the errors:
>>>
>>> 1>Linking...
>>> 1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: unresolved external symbol _ompi_mpi_op_sum referenced in function _VAllReduce_Parallel
>>> 1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: unresolved external symbol _ompi_mpi_op_max referenced in function _VAllReduce_Parallel
>>> 1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: unresolved external symbol _ompi_mpi_double referenced in function _VAllReduce_Parallel
>>> 1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: unresolved external symbol _ompi_mpi_op_min referenced in function _VAllReduce_Parallel
>>> 1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: unresolved external symbol _ompi_mpi_long referenced in function _N_VNewEmpty_Parallel
>>> 1>E:\Sundials-Win32\examples\cvode\parallel\Release\cvDiurnal_kry_bbd_p.exe : fatal error LNK1120: 5 unresolved externals
>>>
>>> What am I missing?
>>>
>>> Thanks
>>> Gib

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
Re: [OMPI users] Linking failure on Windows
Before I added OMPI_IMPORTS there were 8 errors, so it did help. Here is the link command in VS 2005:

/OUT:"E:\Sundials-Win32\examples\cvode\parallel\Release\cvAdvDiff_non_p.exe"
/VERSION:0.0 /INCREMENTAL:NO /NOLOGO
/LIBPATH:"c:\Program Files (x86)\OpenMPI_v1.6.2-win32\lib"
/MANIFEST /MANIFESTFILE:"cvAdvDiff_non_p.dir\Release\cvAdvDiff_non_p.exe.intermediate.manifest"
/PDB:"E:\Sundials-Win32\examples\cvode\parallel\Release/cvAdvDiff_non_p.pdb"
/SUBSYSTEM:CONSOLE
/IMPLIB:"E:\Sundials-Win32\examples\cvode\parallel\Release\cvAdvDiff_non_p.lib"
/ERRORREPORT:PROMPT
kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib
oleaut32.lib uuid.lib comdlg32.lib advapi32.lib shlwapi.lib ws2_32.lib
..\..\..\src\cvode\Release\sundials_cvode.lib
..\..\..\src\nvec_par\Release\sundials_nvecparallel.lib
libmpi.lib libopen-pal.lib libopen-rte.lib
kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib
shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib

Additional options: /STACK:1000 /machine:X86

Gib

On 2/10/2012 3:55 p.m., Damien Hocking wrote:
> Gib,
>
> If you have OMPI_IMPORTS set, that usually removes those symbol errors.
> Are you absolutely sure you have everything set to 32-bit in Visual
> Studio?
Re: [OMPI users] Linking failure on Windows
I guess it's conceivable that one of these Sundials include files is doing something:

#include    /* prototypes for CVODE fcts. */
#include    /* definition of N_Vector and macros */
#include    /* definition of realtype */
#include    /* definition of EXP */

I am a complete beginner with Sundials, so I have no idea how it might interfere with the preprocessor definitions.

Here is the compile command line from VS:

/O2 /Ob2 /I "E:\sundials-2.5.0\include" /I "E:\Sundials-Win32\include"
/I "c:\Program Files (x86)\OpenMPI_v1.6.2-win32\include"
/D "WIN32" /D "_WINDOWS" /D "NDEBUG" /D "OMPI_IMPORTS"
/D "_CRT_SECURE_NO_WARNINGS" /D "CMAKE_INTDIR=\"Release\"" /D "_MBCS"
/FD /MD /Fo"cvAdvDiff_non_p.dir\Release\\"
/Fd"E:\Sundials-Win32\examples\cvode\parallel\Release/cvAdvDiff_non_p.pdb"
/W3 /c /TC /errorReport:prompt

Gib

On 2/10/2012 5:06 p.m., Damien Hocking wrote:
> So mpicc builds it completely? The only thing I can think of is to look
> closely at both the compile and link command lines and see what's
> different. It might be going sideways at the compile step, from something
> in an include with a preprocessor definition.
>
> Damien
Re: [OMPI users] Linking failure on Windows
There are two imports missing from there: OPAL_IMPORTS and ORTE_IMPORTS. That might be part of it.

Damien

On 01/10/2012 10:20 PM, Gib Bogle wrote:
> I guess it's conceivable that one of these Sundials include files is
> doing something:
>
> #include    /* prototypes for CVODE fcts. */
> #include    /* definition of N_Vector and macros */
> #include    /* definition of realtype */
> #include    /* definition of EXP */
>
> I am a complete beginner with Sundials, so I have no idea how it might
> interfere with the preprocessor definitions.
>
> Here is the compile command line from VS:
>
> /O2 /Ob2 /I "E:\sundials-2.5.0\include" /I "E:\Sundials-Win32\include"
> /I "c:\Program Files (x86)\OpenMPI_v1.6.2-win32\include"
> /D "WIN32" /D "_WINDOWS" /D "NDEBUG" /D "OMPI_IMPORTS"
> /D "_CRT_SECURE_NO_WARNINGS" /D "CMAKE_INTDIR=\"Release\"" /D "_MBCS"
> /FD /MD /Fo"cvAdvDiff_non_p.dir\Release\\"
> /Fd"E:\Sundials-Win32\examples\cvode\parallel\Release/cvAdvDiff_non_p.pdb"
> /W3 /c /TC /errorReport:prompt
>
> Gib
Re: [OMPI users] Linking failure on Windows
They don't make any difference. I had them in, but dropped them when I found that the mpicc build didn't need them.

Gib

From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Damien Hocking [dam...@khubla.com]
Sent: Tuesday, 2 October 2012 7:21 p.m.
To: Open MPI Users
Subject: Re: [OMPI users] Linking failure on Windows

There are two imports missing from there: OPAL_IMPORTS and ORTE_IMPORTS. That might be part of it.

Damien
[OMPI users] question to binding options in openmpi-1.6.2
Hi,

I tried to reproduce the bindings from the following blog
http://blogs.cisco.com/performance/open-mpi-v1-5-processor-affinity-options
on a machine with two dual-core processors and openmpi-1.6.2. I have
ordered the lines and removed the output from "hostname" so that it is
easier to see the bindings.

mpiexec -report-bindings -host sunpc0 -np 4 -bind-to-socket hostname
[sunpc0:05410] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
[sunpc0:05410] MCW rank 1 bound to socket 0[core 0-1]: [B B][. .]
[sunpc0:05410] MCW rank 2 bound to socket 1[core 0-1]: [. .][B B]
[sunpc0:05410] MCW rank 3 bound to socket 1[core 0-1]: [. .][B B]

The output is consistent with the illustration in the above blog.
Now I add one more machine.

mpiexec -report-bindings -host sunpc0,sunpc1 -np 4 \
  -bind-to-socket hostname
[sunpc0:06015] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
[sunpc1:25543] MCW rank 1 bound to socket 0[core 0-1]: [B B][. .]
[sunpc0:06015] MCW rank 2 bound to socket 0[core 0-1]: [B B][. .]
[sunpc1:25543] MCW rank 3 bound to socket 0[core 0-1]: [B B][. .]

I would have expected the same output as before, and not a distribution
of the processes across both nodes. Did I misunderstand the concept, or
is the output correct? When I try "-bysocket" with one machine, I once
more get output consistent with the above blog.

mpiexec -report-bindings -host sunpc0 -np 4 -bysocket \
  -bind-to-socket hostname
[sunpc0:05451] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
[sunpc0:05451] MCW rank 1 bound to socket 1[core 0-1]: [. .][B B]
[sunpc0:05451] MCW rank 2 bound to socket 0[core 0-1]: [B B][. .]
[sunpc0:05451] MCW rank 3 bound to socket 1[core 0-1]: [. .][B B]

However, I once more get unexpected output when I add one more machine,
and not the expected output from above.

mpiexec -report-bindings -host sunpc0,sunpc1 -np 4 -bysocket \
  -bind-to-socket hostname
[sunpc0:06130] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
[sunpc1:25660] MCW rank 1 bound to socket 0[core 0-1]: [B B][. .]
[sunpc0:06130] MCW rank 2 bound to socket 1[core 0-1]: [. .][B B]
[sunpc1:25660] MCW rank 3 bound to socket 1[core 0-1]: [. .][B B]

I would have expected a distribution of the processes across all nodes
only if I had used "-bynode" (as in the following example).

mpiexec -report-bindings -host sunpc0,sunpc1 -np 4 -bynode \
  -bind-to-socket hostname
[sunpc0:06171] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
[sunpc1:25696] MCW rank 1 bound to socket 0[core 0-1]: [B B][. .]
[sunpc0:06171] MCW rank 2 bound to socket 0[core 0-1]: [B B][. .]
[sunpc1:25696] MCW rank 3 bound to socket 0[core 0-1]: [B B][. .]

Option "-npersocket" doesn't work, even if I reduce "-npersocket" to "1".
Why doesn't it find any sockets, although the above commands could find
both sockets?

mpiexec -report-bindings -host sunpc0 -np 2 -npersocket 1 hostname
--------------------------------------------------------------------------
Your job has requested a conflicting number of processes for the
application:

  App: hostname
  number of procs: 2

This is more processes than we can launch under the following additional
directives and conditions:

  number of sockets: 0
  npersocket: 1

Please revise the conflict and try again.
--------------------------------------------------------------------------

By the way, I get the same output if I use Linux instead of Solaris.
I would be grateful if somebody could clarify whether I misunderstood the
binding concept or whether the binding is wrong when I use more than one
machine. Thank you very much for any comments in advance.

Kind regards

Siegmar
[OMPI users] crashes in VASP with openmpi 1.6.x
Hi - I've been trying to run VASP 5.2.12 with ScaLAPACK and openmpi 1.6.x
on a single 32-core (4 x 8 core) Opteron node, purely shared memory.
We've always had occasional hangs with older OpenMPI versions (1.4.3 and
1.5.5) on these machines, but infrequently enough to be usable and not
worth my time to debug.

However, now that I've moved to the 1.6 series (1.6.2, specifically),
we're getting frequent crashes, mostly but maybe not entirely
deterministic. The symptom is a segmentation fault in libmpi.so,
someplace under a call to PZHEEVX, but the traceback prints only routine
names in VASP, despite the fact that I have ScaLAPACK compiled with -g.

ScaLAPACK is v1.8.0, because with v2.0.2 it completely fails to converge.
I've tried a couple of varieties of the Intel compiler (11.1.080 and
12.1.6.631) and a couple of versions of ACML (4.4.0 and 5.2.0). The ACML
version seems not to matter, and the two varieties of ifort give the same
type of behavior, but crash in different places in the run. When I switch
compilers and acml/scalapack libraries I recompile everything, except for
OpenMPI, which is always compiled with ifort 11.1.080.

These crashes do not seem to occur on our 2 x 4 core Xeon + IB QDR nodes.

Has anyone seen anything like this, or has any idea how to get additional
useful information, for example traceback information, so I can figure
out what MPI routine is having problems?

thanks,
Noam
Re: [OMPI users] crashes in VASP with openmpi 1.6.x
For what it's worth, on our cluster I currently compile VASP with
OpenMPI, but we do not include ScaLAPACK because we didn't see a speedup
from including it. So far we haven't seen improvements from using OpenMP
in VASP or MKL, so we're not doing much with OpenMP either. On our shared
memory machine we will probably do more with OpenMP, especially for MKL.

We're relatively new to VASP, though, so we're eager to hear what works
for other people. We're also curious to see how 5.3.x behavior compares
with 5.2.x.

Albert

On Oct 2, 2012, at 8:11 AM, Noam Bernstein wrote:
> Hi - I've been trying to run VASP 5.2.12 with ScaLAPACK and openmpi
> 1.6.x on a single 32-core (4 x 8 core) Opteron node, purely shared
> memory. We've always had occasional hangs with older OpenMPI versions
> (1.4.3 and 1.5.5) on these machines, but infrequently enough to be
> usable and not worth my time to debug.
>
> However, now that I've moved to the 1.6 series (1.6.2, specifically),
> we're getting frequent crashes, mostly but maybe not entirely
> deterministic. The symptom is a segmentation fault in libmpi.so,
> someplace under a call to PZHEEVX, but the traceback prints only
> routine names in VASP, despite the fact that I have ScaLAPACK compiled
> with -g.
>
> ScaLAPACK is v1.8.0, because with v2.0.2 it completely fails to
> converge. I've tried a couple of varieties of the Intel compiler
> (11.1.080 and 12.1.6.631) and a couple of versions of ACML (4.4.0 and
> 5.2.0). The ACML version seems not to matter, and the two varieties of
> ifort give the same type of behavior, but crash in different places in
> the run. When I switch compilers and acml/scalapack libraries I
> recompile everything, except for OpenMPI, which is always compiled with
> ifort 11.1.080.
>
> These crashes do not seem to occur on our 2 x 4 core Xeon + IB QDR
> nodes.
>
> Has anyone seen anything like this, or has any idea how to get
> additional useful information, for example traceback information, so I
> can figure out what MPI routine is having problems?
>
> thanks,
> Noam
Re: [OMPI users] question to binding options in openmpi-1.6.2
On Oct 2, 2012, at 2:44 AM, Siegmar Gross wrote:
> Hi,
>
> I tried to reproduce the bindings from the following blog
> http://blogs.cisco.com/performance/open-mpi-v1-5-processor-affinity-options
> on a machine with two dual-core processors and openmpi-1.6.2. I have
> ordered the lines and removed the output from "hostname" so that it is
> easier to see the bindings.
>
> mpiexec -report-bindings -host sunpc0 -np 4 -bind-to-socket hostname
> [sunpc0:05410] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc0:05410] MCW rank 1 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc0:05410] MCW rank 2 bound to socket 1[core 0-1]: [. .][B B]
> [sunpc0:05410] MCW rank 3 bound to socket 1[core 0-1]: [. .][B B]
>
> The output is consistent with the illustration in the above blog.
> Now I add one more machine.
>
> mpiexec -report-bindings -host sunpc0,sunpc1 -np 4 \
>   -bind-to-socket hostname
> [sunpc0:06015] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc1:25543] MCW rank 1 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc0:06015] MCW rank 2 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc1:25543] MCW rank 3 bound to socket 0[core 0-1]: [B B][. .]
>
> I would have expected the same output as before, and not a distribution
> of the processes across both nodes. Did I misunderstand the concept, or
> is the output correct?

The output is correct. The key is in your -host specification. In the
absence of an allocation or hostfile giving further slot information,
this indicates there is one slot on each host. Oversubscription is
allowed by default, else this would have exited with an error due to
insufficient slots. Instead, what happens is that we map the first proc
to the first node, which "fills" its one-slot allocation. We therefore
move to the next node and "fill" it with rank 1. Since both nodes are now
"oversubscribed", we just balance the remaining procs across the
available nodes.
> When I try "-bysocket" with one machine, I once more get output
> consistent with the above blog.
>
> mpiexec -report-bindings -host sunpc0 -np 4 -bysocket \
>   -bind-to-socket hostname
> [sunpc0:05451] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc0:05451] MCW rank 1 bound to socket 1[core 0-1]: [. .][B B]
> [sunpc0:05451] MCW rank 2 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc0:05451] MCW rank 3 bound to socket 1[core 0-1]: [. .][B B]
>
> However, I once more get unexpected output when I add one more machine,
> and not the expected output from above.
>
> mpiexec -report-bindings -host sunpc0,sunpc1 -np 4 -bysocket \
>   -bind-to-socket hostname
> [sunpc0:06130] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc1:25660] MCW rank 1 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc0:06130] MCW rank 2 bound to socket 1[core 0-1]: [. .][B B]
> [sunpc1:25660] MCW rank 3 bound to socket 1[core 0-1]: [. .][B B]

Same reason as above.

> I would have expected a distribution of the processes across all nodes
> only if I had used "-bynode" (as in the following example).
>
> mpiexec -report-bindings -host sunpc0,sunpc1 -np 4 -bynode \
>   -bind-to-socket hostname
> [sunpc0:06171] MCW rank 0 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc1:25696] MCW rank 1 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc0:06171] MCW rank 2 bound to socket 0[core 0-1]: [B B][. .]
> [sunpc1:25696] MCW rank 3 bound to socket 0[core 0-1]: [B B][. .]
>
> Option "-npersocket" doesn't work, even if I reduce "-npersocket" to
> "1". Why doesn't it find any sockets, although the above commands could
> find both sockets?
>
> mpiexec -report-bindings -host sunpc0 -np 2 -npersocket 1 hostname
> --------------------------------------------------------------------------
> Your job has requested a conflicting number of processes for the
> application:
>
>   App: hostname
>   number of procs: 2
>
> This is more processes than we can launch under the following additional
> directives and conditions:
>
>   number of sockets: 0
>   npersocket: 1
>
> Please revise the conflict and try again.
> --------------------------------------------------------------------------

No idea - will have to look at the code to find the bug.

> By the way, I get the same output if I use Linux instead of Solaris.
> I would be grateful if somebody could clarify whether I misunderstood
> the binding concept or whether the binding is wrong when I use more than
> one machine. Thank you very much for any comments in advance.
>
> Kind regards
>
> Siegmar
Re: [OMPI users] Linking failure on Windows
OK, I give. I think this is a Shiqing question.

Damien

On 02/10/2012 12:25 AM, Gib Bogle wrote:
> They don't make any difference. I had them in, but dropped them when I
> found that the mpicc build didn't need them.
>
> Gib
>
> From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf
> of Damien Hocking [dam...@khubla.com]
> Sent: Tuesday, 2 October 2012 7:21 p.m.
> To: Open MPI Users
> Subject: Re: [OMPI users] Linking failure on Windows
>
> There are two imports missing from there: OPAL_IMPORTS and ORTE_IMPORTS.
> That might be part of it.
>
> Damien
Re: [OMPI users] Linking failure on Windows
Hi Gib,

Actually, I also think defining OMPI_IMPORTS should solve the problem.
And I also double-checked the released binaries and the source code:
those symbols are definitely exported. So I'm now really confused.

Gib, do you know how to generate preprocessor files in VS 2005? It should
be an option under the C/C++ settings of the project. If you can provide
me the preprocessor file of nvector_parallel.c, it would be helpful for
finding the problem.

Regards,
Shiqing

On 2012-10-02 8:25 AM, Gib Bogle wrote:
> They don't make any difference. I had them in, but dropped them when I
> found that the mpicc build didn't need them.
>
> Gib
>
> From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf
> of Damien Hocking [dam...@khubla.com]
> Sent: Tuesday, 2 October 2012 7:21 p.m.
> To: Open MPI Users
> Subject: Re: [OMPI users] Linking failure on Windows
>
> There are two imports missing from there: OPAL_IMPORTS and ORTE_IMPORTS.
> That might be part of it.
>
> Damien

--
---
Shiqing Fan
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19, 70569 Stuttgart
Tel: ++49(0)711-685-87234
Fax: ++49(0)711-685-65832
http://www.hlrs.de/organization/people/shiqing-fan/
email: f...@hlrs.de
Re: [OMPI users] fortran bindings for MPI_Op_commutative
Perfect; many thanks. This is now filed as a CMR and will be included in
1.6.3. Part of this was also necessary for the trunk/v1.7 branch (in the
TKR mpi module implementation).

https://svn.open-mpi.org/trac/ompi/ticket/3337
https://svn.open-mpi.org/trac/ompi/ticket/3338

Many thanks!

On Sep 27, 2012, at 11:06 AM, Ralph Castain wrote:
> Ouch! Thanks - I'll fix that and check for any other missing entries
> (Jeff is on a plane back from Europe today). Don't know when Jeff will
> want to roll a replacement 1.6.3 release, but he can address that when
> he returns to the airwaves.
>
> On Thu, Sep 27, 2012 at 7:45 AM, Ake Sandgren wrote:
>> On Thu, 2012-09-27 at 16:31 +0200, Ake Sandgren wrote:
>>> Hi!
>>>
>>> Building 1.6.1 and 1.6.2, I seem to be missing the actual Fortran
>>> bindings for MPI_Op_commutative and a bunch of other functions.
>>>
>>> My configure is
>>> ./configure --enable-orterun-prefix-by-default --enable-cxx-exceptions
>>>
>>> When looking in libmpi_f77.so there is no mpi_op_commutative_ defined.
>>> mpi_init_ is there (as a weak symbol), as it should be.
>>>
>>> All compilers give me the same result.
>>>
>>> Any ideas why?
>>
>> Ahh, pop_commutative_f.c is missing from the profile/Makefile.am

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
Re: [OMPI users] Linking failure on Windows
Hi Shiqing,

Your post made me realize my mistake! I was thinking only of the
preprocessor definitions for compiling cvAdvDiff_non_p.c, forgetting
about the previously built library sundials_nvecparallel.lib, which is of
course where nvector_parallel.c was compiled. When I rebuild that library
with OMPI_IMPORTS, my problem disappears.

Thanks Shiqing, and sorry Damien!

Gib

On 3/10/2012 4:02 a.m., Shiqing Fan wrote:
> Hi Gib,
>
> Actually, I also think defining OMPI_IMPORTS should solve the problem.
> And I also double-checked the released binaries and the source code:
> those symbols are definitely exported. So I'm now really confused.
>
> Gib, do you know how to generate preprocessor files in VS 2005? It
> should be an option under the C/C++ settings of the project. If you can
> provide me the preprocessor file of nvector_parallel.c, it would be
> helpful for finding the problem.
>
> Regards,
> Shiqing

--
Dr. Gib Bogle
Senior Research Fellow
Auckland Bioengineering Institute
University of Auckland, New Zealand
http://www.bioeng.auckland.ac.nz
g.bo...@auckland.ac.nz
(64-9) 373-7599 Ext. 87030
Re: [OMPI users] Linking failure on Windows
No worries. It's good to see that it compiles.

Damien

On 02/10/2012 2:25 PM, Gib Bogle wrote:
> Hi Shiqing,
>
> Your post made me realize my mistake! I was thinking only of the
> preprocessor definitions for compiling cvAdvDiff_non_p.c, forgetting
> about the previously built library sundials_nvecparallel.lib, which is
> of course where nvector_parallel.c was compiled. When I rebuild that
> library with OMPI_IMPORTS, my problem disappears.
>
> Thanks Shiqing, and sorry Damien!
>
> Gib