Hi,
I think this is a broader issue whenever an MPI library is used in conjunction
with threads while running inside a queuing system. First: you can check
whether your actual installation of Open MPI is SGE-aware with:
$ ompi_info | grep grid
MCA ras: gridengine (MCA v2.0, API
On Aug 14, 2014, at 5:52 AM, Christoph Niethammer wrote:
> I just gave gcc 4.9.0 a try and the mpi_f09 module
Wow -- that must be 1 better than the mpi_f08 module!
:-p
> is there but it seems to miss some functions:
>
> mpifort test.f90
> /tmp/ccHCEbXC.o: In function `MAIN__':
> test.f90:(.te
Guys
I changed the line that runs the program in the script to try both options:
/usr/bin/time -f "%E" /opt/openmpi/bin/mpirun -v --bind-to-none -np $NSLOTS
./inverse.exe
/usr/bin/time -f "%E" /opt/openmpi/bin/mpirun -v --bind-to-socket -np $NSLOTS
./inverse.exe
but I got the same results. When I use
One more, Maxime, can you please make sure you've covered everything here:
http://www.open-mpi.org/community/help/
Josh
On Thu, Aug 14, 2014 at 3:18 PM, Joshua Ladd wrote:
> And maybe include your LD_LIBRARY_PATH
>
> Josh
>
>
> On Thu, Aug 14, 2014 at 3:16 PM, Joshua Ladd wrote:
>
>> Can you
And maybe include your LD_LIBRARY_PATH
Josh
On Thu, Aug 14, 2014 at 3:16 PM, Joshua Ladd wrote:
> Can you try to run the example code "ring_c" across nodes?
>
> Josh
>
>
> On Thu, Aug 14, 2014 at 3:14 PM, Maxime Boissonneault <
> maxime.boissonnea...@calculquebec.ca> wrote:
>
>> Yes,
>> Every
Can you try to run the example code "ring_c" across nodes?
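For reference, ring_c just passes a token around every rank and back; a minimal
sketch of that pattern (illustrative only, not the actual examples/ring_c.c
source) is:

/* Minimal ring sketch (not the actual examples/ring_c.c source):
 * rank 0 starts a token that is passed once around all ranks. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, token;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {              /* need at least two ranks to form a ring */
        MPI_Finalize();
        return 0;
    }

    if (rank == 0) {
        token = 42;
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 got the token back: %d\n", token);
    } else {
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}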
Josh
On Thu, Aug 14, 2014 at 3:14 PM, Maxime Boissonneault <
maxime.boissonnea...@calculquebec.ca> wrote:
> Yes,
> Everything has been built with GCC 4.8.x, although x might have changed
> between the OpenMPI 1.8.1 build and the gromac
Yes,
Everything has been built with GCC 4.8.x, although x might have changed
between the OpenMPI 1.8.1 build and the gromacs build. For OpenMPI
1.8.2rc4 however, it was the exact same compiler for everything.
Maxime
On 2014-08-14 14:57, Joshua Ladd wrote:
Hmmm...weird. Seems like maybe a m
Hmmm...weird. Seems like maybe a mismatch between libraries. Did you build
OMPI with the same compiler as you did GROMACS/Charm++?
I'm stealing this suggestion from an old Gromacs forum with essentially the
same symptom:
"Did you compile Open MPI and Gromacs with the same compiler (i.e. both gcc
I just tried Gromacs with two nodes. It crashes, but with a different
error. I get
[gpu-k20-13:142156] *** Process received signal ***
[gpu-k20-13:142156] Signal: Segmentation fault (11)
[gpu-k20-13:142156] Signal code: Address not mapped (1)
[gpu-k20-13:142156] Failing at address: 0x8
[gpu-k20-1
What about between nodes? Since this is coming from the OpenIB BTL, it would
be good to check this.
Do you know what the MPI thread level is set to when used with the Charm++
runtime? Is it MPI_THREAD_MULTIPLE? The OpenIB BTL is not thread safe.
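A quick way to check the thread level actually in effect is MPI_Query_thread;
a minimal sketch (independent of the Charm++ runtime, shown only for
illustration):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int provided;

    /* Ask for the highest level and see what the library actually grants */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* MPI_Query_thread reports the thread support level in effect */
    MPI_Query_thread(&provided);
    printf("provided thread level: %d (MPI_THREAD_MULTIPLE = %d)\n",
           provided, MPI_THREAD_MULTIPLE);

    MPI_Finalize();
    return 0;
}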
Josh
On Thu, Aug 14, 2014 at 2:17 PM, Maxime Boisson
Hi,
I ran gromacs successfully with OpenMPI 1.8.1 and Cuda 6.0.37 on a
single node, with 8 ranks and multiple OpenMP threads.
Maxime
On 2014-08-14 14:15, Joshua Ladd wrote:
Hi, Maxime
Just curious, are you able to run a vanilla MPI program? Can you try
one of the example programs in
Hi, Maxime
Just curious, are you able to run a vanilla MPI program? Can you try one
of the example programs in the "examples" subdirectory? This looks like a
threading issue to me.
Thanks,
Josh
Open MPI Users,
I work on a large climate model called GEOS-5 and we've recently managed to
get it to compile with gfortran 4.9.1 (our usual compilers are Intel and
PGI for performance). In doing so, we asked our admins to install Open MPI
1.8.1 as the MPI stack instead of MVAPICH2 2.0 mainly beca
Hi,
I just did with 1.8.2rc4 and it does the same:
[mboisson@helios-login1 simplearrayhello]$ ./hello
[helios-login1:11739] *** Process received signal ***
[helios-login1:11739] Signal: Segmentation fault (11)
[helios-login1:11739] Signal code: Address not mapped (1)
[helios-login1:11739] Failin
Hi,
You DEFINITELY need to disable OpenMPI's new default binding. Otherwise,
your N threads will run on a single core. --bind-to socket would be my
recommendation for hybrid jobs.
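For what it's worth, a minimal hybrid sketch like the following (illustrative
only, assuming GCC/libgomp and -fopenmp, not the inverse.exe code discussed in
this thread) makes the effect of binding visible, since many OpenMP runtimes
size their default team from the process's CPU affinity mask:

#include <stdio.h>
#include <omp.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, provided;

    /* FUNNELED is enough when only the master thread calls MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        /* With core binding and no OMP_NUM_THREADS set, this will
         * typically report a team of 1 on runtimes that derive the
         * default team size from the affinity mask. */
        #pragma omp master
        printf("rank %d sees %d OpenMP threads\n",
               rank, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}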
Maxime
On 2014-08-14 10:04, Jeff Squyres (jsquyres) wrote:
I don't know much about OpenMP, but do you need t
Hi,
On 14.08.2014 at 15:50, Oscar Mojica wrote:
> I am trying to run a hybrid MPI + OpenMP program on a cluster. I created a
> queue with 14 machines, each one with 16 cores. The program divides the work
> among the 14 processors with MPI, and within each processor a loop is also
> divided in
Can you try the latest 1.8.2 rc tarball? (just released yesterday)
http://www.open-mpi.org/software/ompi/v1.8/
On Aug 14, 2014, at 8:39 AM, Maxime Boissonneault
wrote:
> Hi,
> I compiled Charm++ 6.6.0rc3 using
> ./build charm++ mpi-linux-x86_64 smp --with-production
>
> When compiling
I don't know much about OpenMP, but do you need to disable Open MPI's default
bind-to-core functionality (I'm assuming you're using Open MPI 1.8.x)?
You can try "mpirun --bind-to none ...", which will have Open MPI not bind MPI
processes to cores, which might allow OpenMP to think that it can us
Hello everybody
I am trying to run a hybrid MPI + OpenMP program on a cluster. I created a
queue with 14 machines, each one with 16 cores. The program divides the work
among the 14 processors with MPI, and within each processor a loop is also
divided into 8 threads, for example, using OpenMP. Th
Note that if I do the same build with OpenMPI 1.6.5, it works flawlessly.
Maxime
On 2014-08-14 08:39, Maxime Boissonneault wrote:
Hi,
I compiled Charm++ 6.6.0rc3 using
./build charm++ mpi-linux-x86_64 smp --with-production
When compiling the simple example
mpi-linux-x86_64-smp/tests/charm+
Hi,
I compiled Charm++ 6.6.0rc3 using
./build charm++ mpi-linux-x86_64 smp --with-production
When compiling the simple example
mpi-linux-x86_64-smp/tests/charm++/simplearrayhello/
I get a segmentation fault that traces back to OpenMPI:
[mboisson@helios-login1 simplearrayhello]$ ./hello
[helios-
Hello,
I just gave gcc 4.9.0 a try and the mpi_f09 module is there but it seems to
miss some functions:
mpifort test.f90
/tmp/ccHCEbXC.o: In function `MAIN__':
test.f90:(.text+0x35a): undefined reference to `mpi_win_lock_all_'
test.f90:(.text+0x373): undefined reference to `mpi_win_lock_all_'
te
Hi,
According to http://www.mpich.org/static/docs/v3.1/www3/MPI_Win_attach.html,
the MPI-3 API MPI_Win_attach is supported:
int MPI_Win_attach(MPI_Win win, void *base, MPI_Aint size)
It allows multiple (but non-overlapping) memory regions to be attached to
the same window, after the window
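For illustration, a minimal sketch of that dynamic-window usage (an example of
my own, not taken from the linked MPICH page; it assumes an MPI-3 capable
library):

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Win win;
    MPI_Aint size = 1024 * sizeof(int);
    int *buf1, *buf2;

    MPI_Init(&argc, &argv);

    buf1 = malloc(size);
    buf2 = malloc(size);

    /* Create a window with no memory attached yet ... */
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* ... then attach several non-overlapping regions to the same window */
    MPI_Win_attach(win, buf1, size);
    MPI_Win_attach(win, buf2, size);

    /* RMA operations targeting these regions would go here */

    MPI_Win_detach(win, buf1);
    MPI_Win_detach(win, buf2);
    MPI_Win_free(&win);

    MPI_Finalize();
    free(buf1);
    free(buf2);
    return 0;
}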
You can use hybrid mode.
The following code works for me with OMPI 1.8.2:
#include <stdio.h>
#include <stdlib.h>
#include "shmem.h"
#include "mpi.h"
int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    start_pes(0);
    {
        int version = 0;
        int subversion = 0;
        int num_proc = 0;
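A self-contained sketch along the same lines (illustrative only, assuming the
OpenSHMEM 1.0 API shipped with OMPI 1.8, and not the exact code from the mail
above):

#include <stdio.h>
#include "shmem.h"
#include "mpi.h"

int main(int argc, char *argv[])
{
    int version = 0, subversion = 0;
    int rank = 0, size = 0;

    MPI_Init(&argc, &argv);
    start_pes(0);                       /* initialize OpenSHMEM */

    MPI_Get_version(&version, &subversion);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("MPI rank %d of %d, SHMEM PE %d of %d, MPI %d.%d\n",
           rank, size, shmem_my_pe(), shmem_n_pes(), version, subversion);

    MPI_Finalize();
    return 0;
}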
Hello!
I use Open MPI v1.9a132520.
Can I use hybrid MPI + OpenSHMEM?
Where can I read about it?
I have some problems with a simple program:
#include <stdio.h>
#include "shmem.h"
#include "mpi.h"
int main(int argc, char* argv[])
{
    int proc, nproc;
    int rank, size, len;
    char version[MPI_MAX_LIBRARY_VERSION_STRING]
Hi Jeff,
Works for me!
(With mpi_f08, GCC 4.9.1 absolutely insists on getting the finer details right
on things like MPI_User_function types for MPI_Op_create. So I'll assume the
rest of the type checking is just as good, and be glad I took that minor
detour.)
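For reference, the C equivalent of the callback type in question is
MPI_User_function; a generic sketch (illustrative only, not the mpi_f08 test
code being discussed):

#include <mpi.h>

/* Matches the MPI_User_function prototype:
 * void fn(void *invec, void *inoutvec, int *len, MPI_Datatype *datatype) */
static void my_sum(void *invec, void *inoutvec, int *len, MPI_Datatype *dtype)
{
    int i;
    int *in = (int *)invec, *inout = (int *)inoutvec;
    (void)dtype;                        /* this sketch assumes MPI_INT data */
    for (i = 0; i < *len; i++)
        inout[i] += in[i];
}

int main(int argc, char *argv[])
{
    MPI_Op op;
    int local = 1, global = 0;

    MPI_Init(&argc, &argv);
    MPI_Op_create(my_sum, 1 /* commutative */, &op);
    MPI_Allreduce(&local, &global, 1, MPI_INT, op, MPI_COMM_WORLD);
    MPI_Op_free(&op);
    MPI_Finalize();
    return 0;
}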
Thanks,
Marcus