Indeed, mpirun shows slots=1 per node, but I create the allocation with
--ntasks-per-node 24, so I do have all cores of the node allocated.
When I use srun I can get all the cores.
On 09/07/2017 02:12 PM, r...@open-mpi.org wrote:
> My best guess is that SLURM has only allocated 2 slots, and we respec
Maksym,
can you please post your sbatch script?
fwiw, i am unable to reproduce the issue with the latest v2.x from github.
by any chance, would you be able to test the latest openmpi 2.1.2rc3?
Cheers,
Gilles
On 9/8/2017 4:19 PM, Maksym Planeta wrote:
Indeed, mpirun shows slots=1 per node, but I create the allocation with --ntasks-per-node 24, so I do have all cores of the node allocated.
I start an interactive allocation, and I just noticed that the problem
happens when I join this allocation from another shell.
Here is how I join:
srun --pty --x11 --jobid=$(squeue -u $USER -o %A | tail -n 1) bash
And here is how I create the allocation:
srun --pty --nodes 8 --ntasks-per-node 24 ...
by any chance, would you be able to test the latest openmpi 2.1.2rc3 ?
OpenMPI 2.1.0 is the latest on our cluster.
--
Regards,
Maksym Planeta
Thanks, now i can reproduce the issue
Cheers,
Gilles
On 9/8/2017 5:20 PM, Maksym Planeta wrote:
I start an interactive allocation, and I just noticed that the problem
happens when I join this allocation from another shell.
Here is how I join:
srun --pty --x11 --jobid=$(squeue -u $USER -o %A | tail -n 1) bash
Hello,
How can I compile openmpi without the support of open-cl?
The only link I could find is [1], but openmpi doesn't configure this
option.
The reason why I'm trying to build openmpi without open-cl is that it throws
the following errors even with the NVIDIA-installed OpenCL.
./mpicc -I/usr/local/cuda-8.0.61/lib64 -lcuda test_cuda_aware.c -o myapp
For the time being, you can
srun --ntasks-per-node 24 --jobid=...
when joining the allocation.
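Combined with the join command from earlier in the thread, that would look something like:
srun --pty --x11 --ntasks-per-node 24 --jobid=$(squeue -u $USER -o %A | tail -n 1) bash
so the job step created when joining carries 24 slots per node instead of 1.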
This use case looks a bit convoluted to me, so i am not even sure we should
consider there is a bug in Open MPI.
Ralph, any thoughts ?
Cheers,
Gilles
Gilles Gouaillardet wrote:
>Thanks, now i can reproduce the issue
Nilesh,
Can you
configure --without-nvidia ...
And see if it helps ?
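Something along these lines, with the prefix here only an example and the ellipsis standing for whatever other options you already use:
./configure --without-nvidia --prefix=$HOME/opt/openmpi-test
make all install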
Cheers,
Gilles
Nilesh Kokane wrote:
>Hello,
>
>
>How can I compile openmpi without the support of open-cl?
>
>
>The only link I could find is [1], but openmpi doesn't configure this option.
>
>The reason why I'm trying to build openmpi without open-cl is that it throws the following errors even with the NVIDIA-installed OpenCL.
Hi,
I have a system running openmpi programs on archlinux. I had the programs
compiled and running in July, when I was using version 1.10.4 or 1.10.7 of
openmpi, if I remember correctly. Just recently I updated the openmpi
version to 2.1.1 and tried running a previously compiled program, and it ran correctly.
The
On Fri, Sep 8, 2017 at 3:33 PM, Gilles Gouaillardet
wrote:
>
> Nilesh,
>
> Can you
> configure --without-nvidia ...
> And see if it helps ?
No, I need Nvidia cuda support.
//Nilesh Kokane
On Fri, Sep 8, 2017 at 4:08 PM, Nilesh Kokane wrote:
> On Fri, Sep 8, 2017 at 3:33 PM, Gilles Gouaillardet
> wrote:
>>
>> Nilesh,
>>
>> Can you
>> configure --without-nvidia ...
>> And see if it helps ?
>
> No, I need Nvidia cuda support.
Or else, do you have a way to solve these OpenCL errors?
On 08/09/2017 02:38, Llelan D. wrote:
Windows 10 64bit, Cygwin64, openmpi 1.10.7-1 (dev, c, c++, fortran),
x86_64-w64-mingw32-gcc 6.3.0-1 (core, gcc, g++, fortran)
I am compiling the standard "hello_c.c" example with *mpicc* configured
to use the Cygwin-installed MinGW gcc compiler:
$ export
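For context, Open MPI's wrapper compilers pick up the backend compiler from the OMPI_CC environment variable, so the export being set here is presumably something along these lines:
$ export OMPI_CC=x86_64-w64-mingw32-gcc
$ mpicc hello_c.c -o hello_c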
can you
./mpicc -showme -I/usr/local/cuda-8.0.61/lib64 -lcuda test_cuda_aware.c -o myapp
and double check -lcuda is *after* -lopen-pal ?
Cheers,
Gilles
On Fri, Sep 8, 2017 at 7:40 PM, Nilesh Kokane wrote:
> On Fri, Sep 8, 2017 at 4:08 PM, Nilesh Kokane
> wrote:
>> On Fri, Sep 8, 2017 at 3:33
Hi,
I have a successful installation of Nvidia drivers and cuda which is
confirmed by
"nvcc -V" and "nvidia-smi".
after configuring openmpi with
"./configure --with-cuda --prefix=/home/umashankar/softwares/openmpi-2.0.3"
and "make all install",
and after exporting paths, I ended up with an error.
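For reference, the usual path exports for a prefix install like that would be something along these lines (assuming a bash-like shell):
export PATH=/home/umashankar/softwares/openmpi-2.0.3/bin:$PATH
export LD_LIBRARY_PATH=/home/umashankar/softwares/openmpi-2.0.3/lib:$LD_LIBRARY_PATH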
I posted this question last year and we ended up not upgrading to the newer
openmpi. Now I need to change to openmpi 1.10.5 and have the same issue.
Specifically, using 1.4.2, I can run two 12-core jobs on a 24-core node and the
processes would bind to cores and only have 1 process per core, i.e.
It isn’t an issue as there is nothing wrong with OMPI. Your method of joining
the allocation is a problem. What you have done is to create a job step that
has only 1 slot/node. We have no choice but to honor that constraint and run
within it.
What you should be doing is to use salloc to create the allocation.
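A sketch of that approach, reusing the node and task counts from earlier in the thread (./my_app is just a placeholder):
salloc --nodes 8 --ntasks-per-node 24
# inside the shell that salloc starts:
mpirun ./my_app
That way mpirun inherits the full 24 slots per node instead of the 1-slot job step created by joining with a bare srun.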
What you probably want to do is add --cpu-list a,b,c... to each mpirun command,
where each one lists the cores you want to assign to that job.
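For the two 12-core jobs on a 24-core node described in the quoted message, that would look roughly like this (assuming cores are numbered 0-23, with ./job1 and ./job2 as placeholders):
mpirun -np 12 --cpu-list 0,1,2,3,4,5,6,7,8,9,10,11 ./job1
mpirun -np 12 --cpu-list 12,13,14,15,16,17,18,19,20,21,22,23 ./job2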
> On Sep 8, 2017, at 6:46 AM, twu...@goodyear.com wrote:
>
>
> I posted this question last year and we ended up not upgrading to the newer
> openmpi.
Tom --
If you're going to upgrade, can you upgrade to the latest Open MPI (2.1.1)?
I.e., unless you have a reason for wanting to stay back at an already-old
version, you might as well upgrade to the latest latest latest to give you the
longest shelf life.
I mention this because we are immanen
We are currently discussing internally how to proceed with this issue on
our machine. We did a little survey to see the setup of some of the
machines we have access to, which include an IBM, a Bull machine, and
two Cray XC40 machines. To summarize our findings:
1) On the Cray systems, both /t
Joseph,
Thanks for sharing this !
sysv is imho the worst option because if something goes really wrong, Open MPI
might leave some shared memory segments behind when a job crashes. From that
perspective, leaving a big file in /tmp can be seen as the lesser evil.
That being said, there might be o
In my experience, POSIX is much more reliable than Sys5. Sys5 depends on
the value of shmmax, which is often set to a small fraction of node
memory. I've probably seen the error described on
http://verahill.blogspot.com/2012/04/solution-to-nwchem-shmmax-too-small.html
with NWChem a 1000 times bec
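To see whether a node is affected, the limit can be checked directly on Linux (values are in bytes):
cat /proc/sys/kernel/shmmax
# or
sysctl kernel.shmmax kernel.shmall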
On 09/08/2017 8:16 AM, Marco Atzeri wrote:
please reply in the mailing list
Oops! My apologies. I'm not used to a mailing list without the reply-to
set to the mailing list.
Can a version of open mpi be built using x86_64-w64-mingw32-gcc so
that it will work with code compiled with x86_64-w64-
To solve the undefined references to cudaMalloc and cudaFree, you need
to link the CUDA runtime. So you should replace -lcuda by -lcudart.
For the OPENCL undefined references, I don't know where those are coming
from ... could it be that hwloc is compiling OpenCL support but not
adding -lOpenCL?
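Applied to the compile line shown elsewhere in the thread, the suggested change would look roughly like this (note the CUDA library directory goes after -L, not -I; the include path is an assumption):
mpicc test_cuda_aware.c -o myapp -I/usr/local/cuda-8.0.61/include -L/usr/local/cuda-8.0.61/lib64 -lcudart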
On Fri, Sep 8, 2017 at 4:45 PM, Gilles Gouaillardet
wrote:
> can you
> ./mpicc -showme -I/usr/local/cuda-8.0.61/lib64 -lcuda test_cuda_aware.c -o
> myapp
> and double check -lcuda is *after* -lopen-pal ?
gcc -I/usr/local/cuda-8.0.61/lib64 -lcuda test_cuda_aware.c -o myapp
-I/home/kokanen/opt/inc
On 07/09/2017 21:56, Marco Atzeri wrote:
On 07/09/2017 21:12, Llelan D. wrote:
Windows 10 64bit, Cygwin64, openmpi 1.10.7-1 (dev, c, c++, fortran),
GCC 6.3.0-2 (core, gcc, g++, fortran)
However, when I run it using mpiexec:
$ mpiexec -n 4 ./hello_c
$ ^C
Nothing is displayed and I have to ^C it.