Hello,
I have a C++ application (main.cpp) that is launched with multiple
processes via mpirun. The master process calls MATLAB via system('matlab
-nosplash -nodisplay -nojvm -nodesktop -r "interface"'), which executes
a simple script, interface.m, that calls a mexFunction (mexsolve.cpp) from which
I try
Hi Juraj,
Although the MPI infrastructure may technically support forking, it is known
that not all system resources can correctly replicate themselves into a forked
process. For example, forking inside an MPI program with an active CUDA driver
will result in a crash.
Why not compile the MATLAB code down into a native standalone executable?
Additionally:
- When Open MPI migrated to GitHub, we only brought over the relevant open Trac
tickets. As such, many old 1.10 and 1.8 (and earlier) issues were not brought
over.
- Trac is still available in a read-only manner at
https://svn.open-mpi.org/trac/ompi/report.
> On Oct 5, 20
$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
$ yum list installed | grep openmpi
openmpi.x86_64 1.10.0-10.el7 @base
openmpi-devel.x86_64 1.10.0-10.el7 @base
(1) When I run
$ mpirun -H myhosts -np myprocs executable
t
Hi,
Does anyone have any idea about the following error? On that node, there are 15
idle cores.
Regards,
Mahmood
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
Sorry about the incomplete message...
Does anyone have any idea about the following error? On that node, there are 15
idle cores.
$ /share/apps/siesta/openmpi-2.0.1/bin/mpirun --host compute-0-3 -np 2
/share/apps/siesta/siesta-4.0-mpi201/tpar/transiesta < A.fdf
--
I found that if I put "compute-0-3" in a file (say hosts.txt) and pass that
file name via --hostfile, then the error disappears.
It would be interesting to know what the difference between these two is. I
think it is very common to specify the nodes with the --host option.
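For reference, the workaround looks like the following (a sketch based on the commands quoted earlier). One possible explanation, as far as I know: in Open MPI 2.x a bare --host entry contributes a single slot, whereas a hostfile defaults to the detected slot count, so -np 2 oversubscribes the --host form; listing the node once per desired slot avoids that:

```shell
$ cat hosts.txt
compute-0-3
$ /share/apps/siesta/openmpi-2.0.1/bin/mpirun --hostfile hosts.txt -np 2 \
    /share/apps/siesta/siesta-4.0-mpi201/tpar/transiesta < A.fdf

# With --host, one entry per slot:
$ /share/apps/siesta/openmpi-2.0.1/bin/mpirun --host compute-0-3,compute-0-3 -np 2 \
    /share/apps/siesta/siesta-4.0-mpi201/tpar/transiesta < A.fdf
```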
Regards,
Mahmood
On Wed, Oct
We did have some kind of stdout/stderr truncation issue a little while ago, but
I don't remember what version it specifically affected.
I would definitely update to at least Open MPI 1.10.4 (lots of bug fixes since
1.10.0). Better would be to update to Open MPI 2.0.1 -- that's the current
gene
Thank you for the sanity check and recommendations.
I will post my results here when resolved.
Jeff Squyres (jsquyres) wrote:
We did have some kind of stdout/stderr truncation issue a little while ago, but
I don't remember what version it specifically affected.
I would definitely update to at
Hi everyone,
I had a quick question regarding the compatibility of openmpi and mpi4py. I
have openmpi 1.7.3 and mpi4py 1.3.1. I know these are older versions of
each, but I was having some problems running a program that uses mpi4py and
openmpi, and I wanted to make sure it wasn't a compatibility issue.
Hi Sam,
I am not a developer but I am using mpi4py with openmpi-1.10.2. For that
version, most of the functionality works, but I think there are some issues
with the mpi_spawn commands. Are you using the spawn commands?
I have no experience with the versions you are using, but I thought I'd
chime in.
On 5 October 2016 at 22:29, Mahdi, Sam wrote:
> Hi everyone,
>
> I had a quick question regarding the compatibility of openmpi and mpi4py.
> I have openmpi 1.7.3 and mpi4py 1.3.1. I know these are older versions of
> each, but I was having some problems running a program that uses mpi4py and
> o
Juraj,
If I understand correctly, the "master" task calls MPI_Init() and then
fork&execs MATLAB.
In some cases (lack of hardware support), fork cannot even work, but
let's assume it is fine for now.
Then, if I read between the lines, MATLAB calls a mexFunction that calls
MPI_Init().
As far as I a
MATLAB may have its own MPI installed. It definitely does if you have
the Parallel Computing Toolbox. If you have that, it could be causing
problems. If you can, you might consider compiling your MATLAB
application into a standalone executable, then calling that from your own
program. That bypasse