Hello!
I have a problem with my hybrid MPI/OpenMP C++ code: it shows no OpenMP
speedup on my local 4-core home computer.
Open MPI downloaded from www.open-mpi.org/ and built from source:
mpirun -V
mpirun (Open MPI) 1.8.1
Ubuntu 14.04
// ===
//main.c
#include
mpirun binds a.out to a single core, so when you run
OMP_NUM_THREADS=2 mpirun -np 1 a.out
the two OpenMP threads end up time sharing that core.
you can confirm that by running
grep Cpus_allowed_list /proc/self/status
mpirun -np 1 grep Cpus_allowed_list /proc/self/status
here is what I get:
[gil
thank you!
mpirun --bind-to none ...
gives what I need:
echo "run 1"; export OMP_NUM_THREADS=1; time mpirun -np 1 --bind-to none a.out
echo "run 2"; export OMP_NUM_THREADS=2; time mpirun -np 1 --bind-to none a.out
run 1
0 0
0 0
real    0m43.593s
user    0m43.282s
sys     0m0
Hi,
I have built openmpi-v1.10.3rc4 on my machines (Solaris 10 Sparc,
Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with gcc-5.1.0
and Sun C 5.13. Unfortunately I have once more a problem with
"--slot-list". This time a small program breaks on my Sparc machine
while it works as expected on L
Hi,
I have built openmpi-v2.x-dev-1468-g6011906 on my machines (Solaris 10
Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with gcc-5.1.0
and Sun C 5.13. Unfortunately I have a problem with "--host" for an MPMD
program. The behaviour differs between machines. Why do I need
two
Hi,
I have built openmpi-dev-4221-gb707d13 on my machines (Solaris 10
Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with
gcc-5.1.0 and Sun C 5.13. Unfortunately I get an error for a small
program.
tyr hello_1 109 ompi_info | grep -e "OPAL repo revision:" -e "C compiler absolute:"
note this is still suboptimal.
for example, if you run a job with two MPI tasks with two OpenMP threads
each on the same node, then the OpenMP runtime will likely bind both
threads 0 on core 0 and both threads 1 on core 1, which once again means
time sharing.
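One common workaround for this (a sketch, not verified on this exact version) is to have mpirun reserve several processing elements per rank with the PE modifier to --map-by, so each rank's OpenMP threads get cores of their own:

```shell
# Give each of the 2 ranks 2 dedicated cores, so the OpenMP
# threads of different ranks do not land on the same core.
export OMP_NUM_THREADS=2
mpirun -np 2 --map-by slot:PE=2 --bind-to core ./a.out
```

The OpenMP runtime then places its threads inside each rank's 2-core mask instead of both ranks fighting over cores 0 and 1.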
Cheers,
Gilles
On 6/
Apparently Solaris 10 lacks support for strnlen. We should add it to our
configure and provide a replacement where needed.
George.
On Wed, Jun 8, 2016 at 4:30 PM, Siegmar Gross <
siegmar.gr...@informatik.hs-fulda.de> wrote:
> Hi,
>
> I have built openmpi-dev-4221-gb707d13 on my machines (Solar
What part of this output indicates this non-communicative configuration?
Please recall, this is using the precompiled Open MPI Windows installation.
When the 'verbose' option is added, I see this sequence of output for the
scheduler and each of the executor processes:
--
[sweet1:06412] mca: bas
Hello everyone,
in my application I use CUDA-aware Open MPI 1.10.2 together with CUDA 7.5. If I
call cudaSetDevice(), cuda-memcheck reports this error for all subsequent MPI
function calls:
========= CUDA-MEMCHECK
========= Program hit CUDA_ERROR_INVALID_VALUE (error 1) due to "invalid argument"
Filed https://github.com/open-mpi/ompi/issues/1771 to track the issue.
> On Jun 8, 2016, at 1:47 AM, George Bosilca wrote:
>
> Apparently Solaris 10 lacks support for strnlen. We should add it to our
> configure and provide a replacement where needed.
>
> George.
>
>
> On Wed, Jun 8, 2016 at
> On Jun 8, 2016, at 4:30 AM, Roth, Christopher wrote:
>
> What part of this output indicates this non-communicative configuration?
--
At least one pair of MPI processes are unable to reach each other for
MPI communications
Well, that obvious error message states the basic problem; I was hoping you
had noticed a detail in the ompi_info output that would point to a reason for
it.
Further test runs with the option '-mca btl tcp,self' (excluding 'sm' from the
mix) and '-mca btl_base_verbose 100', supply some more in
Christopher,
the sm btl does not work with inter-communicators and hence disqualifies
itself.
i guess this is what you interpreted as 'partially working'.
i am surprised you are using a privileged port (260 < 1024); are you
running as an admin?
Open MPI is no longer supported on Windows,