Typically it is something like 'qsub -W group_list=groupB
myjob.sh'. Ultimately myjob.sh runs with gid groupB on some host in the
cluster. When that script reaches the mpirun command, then mpirun and the
processes started on the same host all run with gid groupB, but any of the
spawned processes
In my case the directories are actually the "tmp" directories created by the
job-scheduling system, but I think a wrapper script could chgrp and setgid
appropriately so that a process running as group 1040 would effectively write
files with group ownership 650. Thanks for the clever idea.
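For illustration, a minimal wrapper along those lines might look like the
sketch below (it assumes the scheduler exports the scratch directory as
$TMPDIR and that the calling user is a member of group 650; both details are
assumptions, not taken from this thread):

  #!/bin/sh
  # Hypothetical wrapper: make files written under the job's tmp directory
  # come out group-owned by 650, regardless of the process's primary gid.
  chgrp 650 "$TMPDIR"   # requires membership in the target group
  chmod g+s "$TMPDIR"   # setgid bit: new files inherit the directory's group
  exec "$@"             # run the real command unchanged

It would then wrap the launch, e.g. ./wrapper.sh mpirun -np 16 ./a.out (the
names here are placeholders).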
-Original Message-
Thank you for all this information.
Your diagnosis is totally right. I actually sent an e-mail yesterday, but
apparently it never got through :<
It IS the MPI application that is failing to link, not Open MPI itself; my
e-mail was not well written; sorry, Brice.
The situation is this: I am trying to
Thanks very much, exactly what I wanted to hear. How big is /tmp?
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of David Turner
Sent: Thursday, November 03, 2011 6:36 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] EXTERNAL: Re: How t
Consider this Fortran program snippet:
program test
  use mpi        ! needed for MPI_COMM_WORLD
  use omp_lib    ! needed for omp_set_num_threads
  implicit none
  integer :: ierr, irank
  ! everybody except rank=0 exits.
  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)
  if (irank /= 0) then
    call mpi_finalize(ierr)
    stop
  endif
  ! rank 0 tries to set number of OpenMP threads to 4
  call omp_set_num_threads(4)
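For reference, a hybrid snippet like the one above would typically be built
and launched along these lines (a gfortran-based Open MPI install is assumed;
the file name and process count are placeholders):

  # build with the MPI wrapper compiler and enable OpenMP
  mpif90 -fopenmp test.f90 -o test
  # launch; rank 0 then sets its own thread count via omp_set_num_threads
  mpirun -np 4 ./test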
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Reuti
Sent: Thursday, April 04, 2013 7:13 AM
To: Open MPI Users
Subject: Re: [OMPI users] Confused on simple MPI/OpenMP program
Hi,
On 04.04.2013 at 04:35, Ed Blosch wrote:
> Consider this Fortran program snippet:
>
> program test
Much appreciated, guys. I am a middleman in a discussion over whether MPI
should be handled by apps people or by systems people, and there was some confusion
when we saw that RHEL6 had an RPM for Open MPI. Your comments make it clear that
there is a pretty strong preference to build Open MPI on the system.
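For context, the from-source route is usually just the standard sequence
shown below (the version number and install prefix are placeholders, not
taken from this thread):

  # unpack, configure, build, and install Open MPI from a release tarball
  tar xjf openmpi-1.6.5.tar.bz2
  cd openmpi-1.6.5
  ./configure --prefix=/opt/openmpi CC=gcc CXX=g++ FC=gfortran
  make -j 8
  make install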
It ran a bit longer but still deadlocked. All matching sends are posted
1:1 with posted recvs, so it is a delivery issue of some kind. I'm running a
debug-compiled version tonight to see what that might turn up. I may try to
rewrite with blocking sends and see if that works. I can also try add
Compile with -traceback and -check all if using Intel. Otherwise find the
right compiler options to check array-bounds accesses and to dump a stack
trace. Then compile a debug build and run it that way. Assuming it fails, you probably
will get good info on the source of the problem. If it doesn't fail th
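For illustration, with the Intel compilers behind the Open MPI wrappers, that
debug build might look like the following (file names and process count are
placeholders):

  # -g -O0 for a debuggable build, -traceback for a stack trace on abort,
  # -check all for runtime checks including array bounds
  mpif90 -g -O0 -traceback -check all myapp.f90 -o myapp
  mpirun -np 4 ./myapp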