hi -
I have an application that consistently segfaults when I run it with
"mpirun --oversubscribe", and the following message appears AFTER the
application runs. My environment: macOS with Open MPI 3.1.2.
Is this a problem with my application, or with my environment? Any help?
thanks
Gilles,
Upon closer look, the previous errors with the spec file were caused by CRLF
line terminators (it seems the file was prepared on Windows(?)). Once converted
to Unix line endings, everything seems to be fine.
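For reference, the conversion itself is a one-liner; a sketch, assuming the
spec file is named openmpi.spec (the actual file name isn't shown here):

    dos2unix openmpi.spec
    # or, where dos2unix isn't installed, strip the trailing CR with sed:
    sed -i 's/\r$//' openmpi.spec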
Thanks for your spec.
Oliver
On Wed, Nov 4, 2015 at 7:24 PM, Gilles Gouaillardet wrote:
> Olivier,
> attached an updated spec file that works on CentOS 7
>
> I will think it over before committing a permanent fix
>
> Cheers,
>
> Gilles
>
> On 10/31/2015 9:09 PM, Oliver wrote:
>
> hi all
>
> I am trying to rebuild the 1.10 RPM from the src rpm on CentOS 7. The buil
/etc
/etc/openmpi-default-hostfile
/etc/openmpi-mca-params.conf
/etc/openmpi-totalview.tcl
/etc/vtsetup-config.dtd
/etc/vtsetup-config.xml
/usr
/usr/bin
/usr/bin/mpiCC
/usr/bin/mpiCC-vt
/usr/bin/mpic++
....
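For what it's worth, the rebuild step itself is normally just the following
(a sketch; the exact src.rpm file name is an assumption, since it isn't shown
above):

    rpmbuild --rebuild openmpi-1.10.0-1.src.rpm
    # the rebuilt binary RPMs end up under ~/rpmbuild/RPMS/x86_64/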
Best,
Oliver
On Sun, Nov 1, 2015 at 8:20 PM, Gilles Gouaillardet wrote:
> Olivier,
>
l7.x86_64
file /usr/lib64 from install of openmpi-1.10.0-1.x86_64 conflicts with file from package filesystem-3.2-18.el7.x86_64
What am I missing? Is there a fix?
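One way to confirm the packaging bug (a sketch; the RPM file name is assumed
from the error above) is to check whether the package wrongly claims ownership
of the /usr/lib64 directory itself, which the base filesystem package already
owns:

    rpm -qlp openmpi-1.10.0-1.x86_64.rpm | grep -x /usr/lib64
    # any output means the package payload contains the directory itself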
TIA
--
Oliver
I've examined the code many times over and couldn't see why. I am wondering
if this is something else; has anyone else run into a similar problem
before?
TIA
Oliver
re 0-5]: [B B B B B B . . . . . .][. . . . . . . . . . . .][. . . . . . . . . . . .][. . . . . . . . . . . .] (slot list 0-5)
Actually I'm dreaming of
mpirun --bind-to-NUMAnode --bycore ...
or
mpirun --bind-to-NUMAnode --byNUMAnode ...
Is there any workaround except rankfiles for this?
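For what it's worth, newer Open MPI releases (1.7 and later) added NUMA-aware
mapping and binding options along exactly these lines; a sketch, assuming such
a version is available and using a hypothetical executable name:

    mpirun --map-by core --bind-to numa -np 8 ./app   # rank by core, bind to NUMA node
    mpirun --map-by numa --bind-to numa -np 8 ./app   # map by NUMA node as well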
Regards,
Oliver Weihe
e most elegant solution but it works eventually.
Best regards,
Oliver
Jeff Squyres (jsquyres) wrote:
LAM and Open MPI are two different MPI implementations. LAM came before Open MPI; we stopped developing LAM years ago.
lamboot is a LAM-specific command. It has no analogue in Open MPI.
Orter
"lamboot" across multiple machines.
-----
Thanks in advance,
Oliver
Jeff Squyres wrote:
If you're just starting with MPI, is there any chance you can upgrade to Open
MPI instead of LAM/MPI? All of the LAM/MPI developers moved to Open MPI
ostnames ./mpitest
Hello, World. I am 0 of 1
Hello, World. I am 0 of 1
Hello, World. I am 0 of 1
Hello, World. I am 0 of 1
Hello, World. I am 0 of 1
Hello, World. I am 0 of 1
And I don't get why every process has rank 0 and the size is only 1. I
followed many tutorials and I proved it rig
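A common cause of this symptom (an assumption here, since the build and launch
details are cut off above) is mixing MPI implementations: if the program is
compiled with one MPI's mpicc but launched with a different MPI's mpirun, each
process starts as an independent singleton and reports rank 0 of 1. A quick
check and a clean relaunch:

    which mpicc mpirun        # both should come from the same installation
    mpicc mpitest.c -o mpitest
    mpirun -np 6 ./mpitest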
To keep this thread updated:
After I posted to the developers list, the community was able to guide me
to a solution to the problem:
http://www.open-mpi.org/community/lists/devel/2010/04/7698.php
To sum up:
The extended communication times while using shared memory communication
of Open MPI processe
On 4/6/2010 2:53 PM, Jeff Squyres wrote:
>
> Try NetPIPE -- it has both MPI communication benchmarking and TCP
> benchmarking. Then you can see if there is a noticeable difference between
> TCP and MPI (there shouldn't be). There's also a "memcpy" mode in netpipe,
> but it's not quite the sam
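For reference, a sketch of running NetPIPE in both modes (the host name is
hypothetical):

    mpirun -np 2 NPmpi        # MPI mode: two ranks ping-pong messages
    NPtcp                     # TCP mode: start the receiver on nodeA ...
    NPtcp -h nodeA            # ... then the transmitter on nodeB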
On 4/1/2010 12:49 PM, Rainer Keller wrote:
> On Thursday 01 April 2010 12:16:25 pm Oliver Geisler wrote:
>> Does anyone know a benchmark program, I could use for testing?
> There's an abundance of benchmarks (IMB, netpipe, SkaMPI...) and performance
> analysis tools (Scala
> However, reading through your initial description on Tuesday, none of these
> fit: You want to actually measure the kernel time on TCP communication costs.
>
Since the problem also occurs in a single-node configuration and the MCA
option btl = self,sm,tcp is used, I doubt it has to do with TCP communi
Does anyone know a benchmark program I could use for testing?
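Aside from benchmarks, one quick way to isolate the transport (a sketch; the
executable name is hypothetical) is to force a single BTL on the mpirun
command line and compare timings:

    mpirun --mca btl self,tcp -np 4 ./app   # TCP only, even on one node
    mpirun --mca btl self,sm -np 4 ./app    # shared memory only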
I have tried up to kernel 2.6.33.1 on both architectures (Core2 Duo and
i5) with the same results. The "slow" results also appear when distributing
the processes across the 4 cores of one single node.
We use
btl = self,sm,tcp
in
/etc/openmpi/openmpi-mca-params.conf
Distributing several processes to eac
VN head, which compiles but
I can't test it because the current SVN head doesn't work for me at all
at present (for an appfile with fewer than 128 entries).
Sorry to send this here rather than the dev list, but I don't really
have the time to sign up and get involved at the mome
Alternatively, if you point me at the appropriate piece of code, I'll
have a go at making the number a #define or something, and putting some
checks in so it doesn't just crash.
Oliver
The full output with '-d' and the config.log from the build of 1.4.1 are
also attached.
I don't know the exact setup of the network, but I can ask our sysadmin
about anything else that might help.
Thanks in advance,
Oliver Ford
Culham Centre for Fusion Energy
Oxford, UK