No worries. It's good to see that it compiles.
Damien
On 02/10/2012 2:25 PM, Gib Bogle wrote:
Hi Shiqing,
Your post made me realize my mistake! I was thinking only of the
preprocessor definitions for compiling cvAdvDiff_non_p.c, forgetting
about the previously built library sundials_nvecpa
Hi Shiqing,
Your post made me realize my mistake! I was thinking only of the preprocessor
definitions for compiling cvAdvDiff_non_p.c, forgetting about the previously
built library sundials_nvecparallel.lib, which is of course where
nvector_parallel.c was compiled. When I rebuild that librar
Perfect; many thanks.
This is now filed as a CMR and will be included in 1.6.3. Part of this was
also necessary for the trunk/v1.7 branch (in the TKR mpi module implementation).
https://svn.open-mpi.org/trac/ompi/ticket/3337
https://svn.open-mpi.org/trac/ompi/ticket/3338
Many thanks!
Hi Gib,
Actually, I also think defining OMPI_IMPORTS would solve the problem.
And I also double checked the released binaries and the source code,
those symbols are surely exported. So I'm now really confused.
Gib, do you know how to generate preprocessor files in VS 2005? It
should be one o
OK, I give. I think this is a Shiqing question.
Damien
On 02/10/2012 12:25 AM, Gib Bogle wrote:
They don't make any difference. I had them in, but dropped them when I found
that the mpicc build didn't need them.
Gib
From: users-boun...@open-mpi.org [
On Oct 2, 2012, at 2:44 AM, Siegmar Gross
wrote:
> Hi,
>
> I tried to reproduce the bindings from the following blog
> http://blogs.cisco.com/performance/open-mpi-v1-5-processor-affinity-options
> on a machine with two dual-core processors and openmpi-1.6.2. I have
> ordered the lines and remo
For what it's worth, on our cluster I currently do compile VASP with OpenMPI
but we do not include ScaLAPACK because we didn't see a speedup from including
it. So far we haven't seen improvements from using OpenMP in VASP or MKL, so
we're not doing much with OpenMP either.
On our shared memory
Hi - I've been trying to run VASP 5.2.12 with ScaLAPACK and openmpi
1.6.x on a single 32 core (4 x 8 core) Opteron node, purely shared memory.
We've always had occasional hangs with older OpenMPI versions
(1.4.3 and 1.5.5) on these machines, but infrequent enough to be usable
and not worth my tim
Hi,
I tried to reproduce the bindings from the following blog
http://blogs.cisco.com/performance/open-mpi-v1-5-processor-affinity-options
on a machine with two dual-core processors and openmpi-1.6.2. I have
ordered the lines and removed the output from "hostname" so that it
is easier to see the bi
They don't make any difference. I had them in, but dropped them when I found
that the mpicc build didn't need them.
Gib
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of
Damien Hocking [dam...@khubla.com]
Sent: Tuesday, 2 October
There are two imports missing from there, OPAL_IMPORTS and ORTE_IMPORTS.
That might be part of it.
Damien
On 01/10/2012 10:20 PM, Gib Bogle wrote:
I guess it's conceivable that one of these Sundials include files is
doing something:
#include /* prototypes for CVODE
fcts.
I guess it's conceivable that one of these Sundials include files is doing
something:
#include /* prototypes for CVODE fcts. */
#include /* definition of N_Vector and macros */
#include /* definition of realtype */
#include /* definition of EXP */
I am a complete begin
Before I added OMPI_IMPORTS there were 8 errors, so it did help.
Here is the link command in VS 2005:
/OUT:"E:\Sundials-Win32\examples\cvode\parallel\Release\cvAdvDiff_non_p.exe"
/VERSION:0.0 /INCREMENTAL:NO /NOLOGO /LIBPATH:"c:\Program Files
(x86)\OpenMPI_v1.6.2-win32\lib" /MANIFEST
/MANIFES
So mpicc builds it completely? The only thing I can think of is look
closely at both the compile and link command lines and see what's
different. It might be going sideways at the compile from something in
an include with a preprocessor def.
Damien
On 01/10/2012 9:57 PM, Gib Bogle wrote:
H