mpi-5.0.0/3rd-party/openpmix/src/util/pmix_path.c"
> > #include
> > ^
> >
> > compilation aborted for
> /Users/christophe/Developer/openmpi-5.0.0/3rd-party/openpmix/src/util/pmix_path.c
> (code 4)
> > make[4]: *** [pmix_path
at this is related to some earlier PMIx-related issues that
> can be found via Google, but I wanted to report it.
>
> Thank you!
> Best wishes
> Volker
>
>
> Volker Blum
> Associate Professor, Duke MEMS & Chemistry
> https://aims.pratt.duke.edu
> https://bsky.app/profile/aims
ll), so I'm guessing it's not
functionality but rather performance?
Just curious,
Matt
--
Matt Thompson
“The fact is, this is about us identifying what we do best and
finding more ways of doing less of it better” -- Director of Better Anna
Rampton
any given compiler supported
> enough F2008 support for some / all of the mpi_f08 module. That's why the
> configure tests are... complicated.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ____
> From: users on behalf of Matt Tho
?
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ____
> From: users on behalf of Matt Thompson
> via users
> Sent: Thursday, December 30, 2021 9:56 AM
> To: Open MPI Users
> Cc: Matt Thompson; Christophe Peyret
> Subject: Re: [OMPI us
call mpi_init(ierror)
>
> comm=MPI_COMM_WORLD
>
> call mpi_comm_rank(comm, rank, ierror)
>
> call mpi_comm_size(comm, size, ierror)
>
> print '("rank=",i0,"/",i0)',rank,size
>
> call mpi_finalize(ierror)
>
> end program toto
>
>
> --
>
> *Christophe Peyret*
>
> *ONERA/DAAA/NFLU*
>
> 29 ave de la Division Leclerc
> F92322 Châtillon Cedex
>
>
--
Matt Thompson
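For reference, the truncated test program quoted above can be exercised with something like the following (the file name and rank count are assumptions):

```shell
# Build and run the quoted Fortran test (file name assumed to be toto.f90).
mpifort toto.f90 -o toto
mpirun -np 4 ./toto
```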
> seeing, I figured why bother the admins.
>>
>> Still, it does *seem* like it should work. I might ask NAG support about
>> it.
>>
>> On Wed, Dec 22, 2021 at 6:28 PM Tom Kacvinsky wrote:
>>
>>> On Wed, Dec 22, 2021 at 5:45 PM Tom Kacvinsky
>&g
On Wed, Dec 22, 2021 at 6:28 PM Tom Kacvinsky wrote:
> On Wed, Dec 22, 2021 at 5:45 PM Tom Kacvinsky wrote:
> >
> > On Wed, Dec 22, 2021 at 4:11 PM Matt Thompson wrote:
> > >
> > > All,
> > >
> > > When I build Open MPI with NAG, I have to pass in:
> >
Error: conftest.f90, line 50: Incorrect data type INTEGER (expected
> CHARACTER) for argument BUFFER (no. 1) of FOO
> [NAG Fortran Compiler error termination, 2 errors, 2 warnings]
> configure:69740: $? = 2
>
> So I suspect this makes the Fortran checks unhappy so that the
> con
-lopen-rte -lopen-pal -lm -lz
so that I don't need to remember it every time?
On Fri, Oct 29, 2021 at 10:32 AM Matt Thompson wrote:
> So, an update. Nothing I seem to do with libtool seems to help, but I'm
> trying various things. For example, I tried editing libtool to use:
>
&
On Fri, Oct 29, 2021 at 8:24 AM Matt Thompson wrote:
> Gilles,
>
> I tried both NAG 7.0.7062 and 7.0.7048. Both fail in the same way. And I
> was using the official tarball from the Open MPI website. I downloaded it
> long ago and then kept it around.
>
> And I didn't
build a shared archive.
>
> archive_cmds="\$CC -dynamiclib \$allow_undef ..."
>
> simply manually remove "-dynamiclib" here and see if it helps
>
>
> Cheers,
>
> Gilles
> On Fri, Oct 29, 2021 at 12:30 AM Matt Thompson via users <
> users@lists.open-
as far as I
know.
I do see references to -Bstatic and -Bdynamic in the source code, but
apparently I'm not triggering the configure step to use them?
Anyone else out there encounter this?
NOTE: I did try doing an Intel Fortran + Clang shared build today and that
seemed to work. I think that'
and then simply
>
> mpirun ...
>
> Cheers,
>
> Gilles
>
> On Fri, Mar 19, 2021 at 5:44 AM Matt Thompson via users
> wrote:
> >
> > Prentice,
> >
> > Ooh. The first one seems to work. The second one apparently is not liked
> by zsh and I had to do:
See
>
> https://www.open-mpi.org/faq/?category=sm
>
> for more info.
>
> Prentice
>
> On 3/18/21 12:28 PM, Matt Thompson via users wrote:
>
> All,
>
> This isn't specifically an Open MPI issue, but as that is the MPI stack I
> use on my laptop, I'm ho
I I'm doing,
so at most an "mpirun --oversubscribe -np 12" or something. It'll never go
over my network to anything, etc.
--
Matt Thompson
having trouble after fixing the above, you may need to
> check yama on the host. You can check with "sysctl
> kernel.yama.ptrace_scope"; if it returns a value other than 0 you may
> need to disable it with "sysctl -w kernel.yama.ptrace_scope=0".
>
> Adam
>
>
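The yama check above can be scripted as below; this is a sketch for Linux hosts only, and relaxing ptrace_scope system-wide has security implications, so check with your admins first.

```shell
# Read the current yama ptrace scope (reading needs no -w flag).
sysctl kernel.yama.ptrace_scope
# If it reports anything other than 0 and CMA/debugger attach is failing,
# root can relax it system-wide (weigh the security trade-off first):
sudo sysctl -w kernel.yama.ptrace_scope=0
```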
On Mon, Feb 24, 2020 at 4:57 PM Gabriel, Edgar
wrote:
> I am not an expert for the one-sided code in Open MPI, I wanted to comment
> briefly on the potential MPI -IO related item. As far as I can see, the
> error message
>
>
>
> “Read -1, expected 48, errno = 1”
>
> does not stem from MPI I/O, at
his was built by
sysadmins on a cluster), but this is pretty close to how I build on my
desktop and it has the same issue.
Any ideas from the experts?
--
Matt Thompson
>
>
> _MAC
>
>
>
> *From:* users [mailto:users-boun...@lists.open-mpi.org] *On Behalf Of *Matt
> Thompson
> *Sent:* Tuesday, January 22, 2019 6:04 AM
> *To:* Open MPI Users
> *Subject:* Re: [OMPI users] Help Getting Started with Open MPI and PMIx
> and UCX
>
>
as mpirun hostname (both with sbatch
>> and salloc)
>> - explicitly specify the network to be used for the wire-up. you can
>> for example mpirun --mca oob_tcp_if_include 192.168.0.0/24 if this is
>> the network subnet by which all the nodes (e.g. compute nodes and
>
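Gilles's wire-up suggestion, written out as a command (the subnet is an example; substitute the network that every compute and head node shares):

```shell
# Pin Open MPI's TCP out-of-band wire-up traffic to one shared subnet.
mpirun --mca oob_tcp_if_include 192.168.0.0/24 -np 2 hostname
```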
On Fri, Jan 18, 2019 at 1:13 PM Jeff Squyres (jsquyres) via users <
users@lists.open-mpi.org> wrote:
> On Jan 18, 2019, at 12:43 PM, Matt Thompson wrote:
> >
> > With some help, I managed to build an Open MPI 4.0.0 with:
>
> We can discuss each of these params to
t I
managed to break? Perhaps I added too much to my configure line? Not enough?
Thanks,
Matt
On Thu, Jan 17, 2019 at 11:10 AM Matt Thompson wrote:
> Dear Open MPI Gurus,
>
> A cluster I use recently updated their SLURM to have support for UCX and
> PMIx. These are names I've seen a
user end of the
cluster than the administrator end, so I tend to get lost in the detailed
presentations, etc. I see online.
Thanks,
Matt
--
Matt Thompson
den #ifdef OPENMPI somewhere.
Matt
--
Matt Thompson
___
users mailing list
users@lists.open-mpi.org
PB3.3-MPI/BT'
> make[1]: *** [../bin/bt.D.4] Error 2
> make[1]: Leaving directory `/home/mahmood/Downloads/NPB3.
> 3.1/NPB3.3-MPI/BT'
> make: *** [bt] Error 2
>
>
> There is a good guide about that (https://www.technovelty.org/
> c/relocation-truncated-to-fit-wtf.h
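For what it's worth, "relocation truncated to fit" during an NPB class D link usually means static data outgrew the 2 GB small code model; a commonly suggested remedy (flags assumed, GCC-style syntax) is the medium code model:

```shell
# Rebuild BT with the medium code model so >2GB static arrays are addressable.
mpifort -O2 -mcmodel=medium -c *.f
mpifort -O2 -mcmodel=medium -o ../bin/bt.D.4 *.o
```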
I work to incorporate one-sided MPI
combined with OpenMP/threads into our code. So far, Open MPI is the only
stack we've tried where the code doesn't weirdly die in odd places, so I
might be coming back here with more questions when we try to improve the
performance/encounter problems.
--
M
ps not XPMEM related?
Matt
On Mon, Jun 5, 2017 at 1:00 PM, Nathan Hjelm wrote:
> Can you provide a reproducer for the hang? What kernel version are you
> using? Is xpmem installed?
>
> -Nathan
>
> On Jun 05, 2017, at 10:53 AM, Matt Thompson wrote:
>
> OMPI Users,
>
&
regression that lowered the memory maximum message bandwidth for
large messages on some BTL network transports, such as openib, sm,
and vader.
- The vader BTL is now more efficient in terms of memory usage when
using XPMEM.
Thanks for any help,
Matt
--
Matt Thompson
Man Among Men
Fulcrum of History
;>>>>>>./configure \
> >>>>>>>>>> --prefix=/usr/pppl/pgi/17.3-pkgs/openmpi-1.10.3 \
> >>>>>>>>>> --disable-silent-rules \
> >>>>>>>>>> --enable-shared \
> >>>>>>>>>> --enable-static \
> >>>>>>>>>> --enable-mpi-thread-multiple \
> >>>>>>>>>> --with-pmi=/usr/pppl/slurm/15.08.8 \
> >>>>>>>>>> --with-hwloc \
> >>>>>>>>>> --with-verbs \
> >>>>>>>>>> --with-slurm \
> >>>>>>>>>> --with-psm \
> >>>>>>>>>> CC=pgcc \
> >>>>>>>>>> CFLAGS="-tp x64 -fast" \
> >>>>>>>>>> CXX=pgc++ \
> >>>>>>>>>> CXXFLAGS="-tp x64 -fast" \
> >>>>>>>>>> FC=pgfortran \
> >>>>>>>>>> FCFLAGS="-tp x64 -fast" \
> >>>>>>>>>> 2>&1 | tee configure.log
> >>>>>>>>>>
> >>>>>>>>>>Which leads to this error from libtool during make:
> >>>>>>>>>>
> >>>>>>>>>>pgcc-Error-Unknown switch: -pthread
> >>>>>>>>>>
> >>>>>>>>>>I've searched the archives, which ultimately lead to this
> work
> >>>>>>>>>>around from 2009:
> >>>>>>>>>>
> >>>>>>>>>> https://www.open-mpi.org/community/lists/users/2009/04/8724.php
> >>>>>>>>>> <https://www.open-mpi.org/community/lists/users/2009/04/
> 8724.php>
> >>>>>>>>>>
> >>>>>>>>>>Interestingly, I participated in the discussion that lead to
> that
> >>>>>>>>>>workaround, stating that I had no problem compiling Open MPI
> with
> >>>>>>>>>>PGI v9. I'm assuming the problem now is that I'm specifying
> >>>>>>>>>>--enable-mpi-thread-multiple, which I'm doing because a user
> >>>>>>>>>>requested that feature.
> >>>>>>>>>>
> >>>>>>>>>>It's been exactly 8 years and 2 days since that workaround
> was
> >>>>>>>>>>posted to the list. Please tell me a better way of dealing
> with
> >>>>>>>>>>this issue than writing a 'fakepgf90' script. Any
> suggestions?
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>--
> >>>>>>>>>>Prentice
> >>>>>>>>>>
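The 'fakepgf90' workaround mentioned above amounts to filtering -pthread out of the argument list before the PGI compiler sees it. A minimal sketch of that filtering step (the wrapper and compiler names are illustrative; a real wrapper would exec pgcc with the filtered list instead of echoing it):

```shell
# Drop -pthread, which older PGI compilers reject, and show what would run.
strip_pthread() {
    out=""
    for arg in "$@"; do
        [ "$arg" = "-pthread" ] || out="$out $arg"
    done
    echo "pgcc$out"    # a real wrapper would: exec pgcc $out
}
strip_pthread -c -pthread -O2 conftest.c   # prints: pgcc -c -O2 conftest.c
```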
> >
>
>
--
Matt Thompson
opal_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> opal_shmem_base_select failed
> --> Returned value -1 instead of OPAL_SUCCESS
> -
messages related
> to C++,
> and you should instead focus on the Fortran issue.
>
> Cheers,
>
> Gilles
>
> On Thursday, March 23, 2017, Matt Thompson wrote:
>
>> All, I'm hoping one of you knows what I might be doing wrong here. I'm
>> trying to u
compiler vendor... portland group
so I thought the C++ section would as well. I also tried passing in
--enable-mpi-cxx, but that did nothing.
Is this just a red herring? My real concern is with pgfortran/mpifort, but
I thought I'd start with this. If this is okay, I'll move on and detail
checking for sys/types.h... no
> checking for sys/stat.h... no
> checking for stdlib.h... no
> checking for string.h... no
> checking for memory.h... no
> checking for strings.h... no
> checking for inttypes.h... no
> checking for stdint.h... no
> checking for unistd.h... no
>
> > * @{
> >
> > */
> >
> > #ifdef _EVENT_HAVE_UINT64_T
> >
> > #define ev_uint64_t uint64_t
> >
> > #define ev_int64_t int64_t
> >
> > #elif defined(WIN32)
> >
> > #define ev_uint64_t unsigned __int64
> >
> &
forums as
well (http://www.pgroup.com/userforum/viewtopic.php?t=5413&start=0) since
I'm not sure. But, no matter what, does anyone have thoughts on how to
solve this?
Thanks,
Matt
--
Matt Thompson
On Fri, Aug 19, 2016 at 8:54 PM, Jeff Squyres (jsquyres) wrote:
> On Aug 19, 2016, at 6:32 PM, Matt Thompson wrote:
>
> > > that the comm == MPI_COMM_WORLD evaluates to .TRUE.? I discovered that
> once when I was printing some stuff.
> >
> > That might well be a c
On Fri, Aug 19, 2016 at 2:55 PM, Jeff Squyres (jsquyres) wrote:
> On Aug 19, 2016, at 2:30 PM, Matt Thompson wrote:
> >
> > I'm slowly trying to learn and transition to 'use mpi_f08'. So, I'm
> writing various things and I noticed that this triggers
stuff.
Thanks for helping me learn,
Matt
--
Matt Thompson
running? (e.g., OpenSM, a vendor-specific
subnet manager, etc.)
Mellanox UFM (OpenSM under the covers)
--
Matt Thompson
ompi_info -v ompi full --parsable
ompi_info: Error: unknown option "-v"
Type 'ompi_info --help' for usage.
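The -v syntax from the old help instructions was dropped at some point; assuming a recent ompi_info that supports these flags, an equivalent full dump is:

```shell
# Full, machine-readable configuration dump from a recent ompi_info.
ompi_info --all --parsable
```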
I am asking our machine gurus about the Infiniband network per:
https://www.open-mpi.org/faq/?category=openfabrics#ofa-troubleshoot
--
Matt Thompson
If you don't need oshmem, you could try again with --disable-oshmem added
> to the config line
>
> Howard
>
>
> 2016-01-22 12:15 GMT-07:00 Matt Thompson :
>
>> All,
>>
>> I'm trying to duplicate an issue I had with ESMF long ago (not sure if I
>&g
have built static versions of Open MPI in the past (say
1.8.7 era with Intel Fortran 15), but this is a new OS (RHEL 7 instead of
6) so I can see issues possible.
Anyone seen this before? As I said, the "usual" build way is just fine.
Perhaps I need an extra RPM that isn't installed? I do have libnl-devel
installed.
--
Matt Thompson
h as me?”. Hoping to have it later this year, perhaps in the summer.
>
>
> On Jan 15, 2016, at 7:56 AM, Matt Thompson wrote:
>
> Ralph,
>
> That doesn't help:
>
> (1004) $ mpirun -map-by node -np 8 ./hostenv.x | sort -g -k2
> Process0 of8 is on host borgo0
wrote:
>
>
>
> On Fri, Jan 15, 2016 at 7:53 AM, Matt Thompson wrote:
>
>> All,
>>
>> I'm not too sure if this is an MPI issue, a Fortran issue, or something
>> else but I thought I'd ask the MPI gurus here first since my web search
>> fail
process 0.
So, I guess my question is: can this be done? Is there an option to Open
MPI that might do it? Or is this just something MPI doesn't do? Or is my
Google-fu just too weak to figure out the right search-phrase to find the
answer to this probable FAQ?
Matt
[1] Note, this might be unnecessary, but I got to the point where I wanted
to see if I *could* do it, rather than *should*.
--
Matt Thompson
OpenMP wiki to map all
the combinations of compiler+mpistack.
Or pray the MPI Forum and OpenMP combine and I can just look in a Standard.
:D
Thanks,
Matt
--
Matt Thompson
> provide them for easy testing changes.
> Surely this is application dependent, but for my case it was performing
> really well.
>
>
> 2016-01-06 20:48 GMT+01:00 Erik Schnetter :
>
>> Setting KMP_AFFINITY will probably override anything that OpenMPI
>> sets. Can
bably override anything that OpenMPI
> sets. Can you try without?
>
> -erik
>
> On Wed, Jan 6, 2016 at 2:46 PM, Matt Thompson wrote:
> > Hello Open MPI Gurus,
> >
> > As I explore MPI-OpenMP hybrid codes, I'm trying to figure out how to do
> > things to ge
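Erik's point spelled out: clear Intel's OpenMP affinity variable so that Open MPI's own mapping is the only binding in effect. A sketch (the binary name and layout are assumptions):

```shell
# Let Open MPI own placement; KMP_AFFINITY would otherwise override it.
unset KMP_AFFINITY
mpirun -np 4 --map-by socket:PE=2 --bind-to core ./hybrid.x
```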
iously not right. Any ideas on how to help me learn? The man mpirun page
is a bit formidable in the pinning part, so maybe I've missed an obvious
answer.
Matt
--
Matt Thompson
it would not compile ESMF correctly. I'm going to try and
revisit it next week because I want Intel OpenMPI as shared so I can easily
use Allinea MAP.
I'll try and make a good report for you/ESMF.
--
Matt Thompson
right function, but I was just acking the code.
Thanks,
Matt
On Mon, Dec 21, 2015 at 10:51 AM, Ralph Castain wrote:
> Try adding —cpu-set a,b,c,… where the a,b,c… are the core id’s of your
> second socket. I’m working on a cleaner option as this has come up before.
>
>
> On D
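Ralph's --cpu-set suggestion, made concrete for a hypothetical two-socket node whose second socket holds cores 8-15 (the core IDs are an assumption; check yours with hwloc's lstopo):

```shell
# Keep all 8 ranks on the second socket's cores (IDs assumed: 8-15).
mpirun --cpu-set 8,9,10,11,12,13,14,15 --bind-to core -np 8 ./a.out
```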
ll only have 8 processes per node,
so it shouldn't need to use Socket 0.
--
Matt Thompson
r509i2n17
Process 0 of 2 is on r509i2n17
So that is nice. Now the spin up if I have 8 or so nodes is rather...slow.
But at this point I'll take working over efficient. Quick startup can come
later.
Matt
>
>
> On Sep 24, 2015, at 8:56 AM, Matt Thompson wrote:
>
> Ralph,
>
> :-)
>
> In which case, you need to remove the obstacle. You might check for
> firewall, or check to see if multiple NICs are on the non-maia nodes (this
> can sometimes confuse things, especially if someone put the NICs on the
> same IP subnet)
>
> HTH
> Ralph
>
>
d searching the web, but the only place I've seen
tcp_peer_send_blocking is in a PDF where they say it's an error that can be
seen:
http://www.hpc.mcgill.ca/downloads/checkpointing_workshop/20150326%20-%20McGill%20-%20Checkpointing%20Techniques.pdf
Any ideas for what this error can mean?
--
Matt Thompson
|
> | BP 53 X | Tel 04 76 82 51 35 |
> | 38041 GRENOBLE CEDEX| Fax 04 76 82 52 71 |
> ===
>
>
print("The --task-prolog option is unsupported at . Please " .
> "contact the for assistance.\n");
> exit(1);
> } else {
> push(@command, $_);
> }
> }
> system(@command);
>
>
>
> On Sep 4, 2014, at 1
t be easy. It might just be the difference
> between system("...string...") and system(@argv).
>
> Sent from my phone. No type good.
>
> On Sep 4, 2014, at 8:35 AM, "Matt Thompson" wrote:
>
> Jeff,
>
> I actually misspoke earlier. It turns out
and
maybe you can see how it is affecting Open MPI's argument passage.
Matt
On Thu, Sep 4, 2014 at 8:04 AM, Jeff Squyres (jsquyres)
wrote:
> On Sep 3, 2014, at 9:27 AM, Matt Thompson wrote:
>
> > Just saw this, sorry. Our srun is indeed a shell script. It seems to be
> a w
On Tue, Sep 2, 2014 at 8:38 PM, Jeff Squyres (jsquyres)
wrote:
> Matt: Random thought -- is your "srun" a shell script, perchance? (it
> shouldn't be, but perhaps there's some kind of local override...?)
>
> Ralph's point on the call today is that it doesn't matter *how* this
> problem is happen
you won't see the "hello world" output.
>
> The purpose of this test is that I want to see if OMPI is just totally
> erring out and not even running your job (which is quite unlikely; OMPI
> should be much more noisy when this happens), or whether we're simply not
case
> others have similar issues. Out of curiosity, what OS are you using?
>
>
> On Sep 1, 2014, at 9:00 AM, Matt Thompson wrote:
>
> Ralph,
>
> Okay that seems to have done it here (well, minus the
> usual shmem_mmap_enable_nfs_warning that our syste
ld, and try again?
>
> Much appreciate the help. Everyone's system is slightly different, and I
> think you've uncovered one of those differences.
> Ralph
>
>
>
> On Aug 31, 2014, at 6:25 AM, Matt Thompson wrote:
>
> Ralph,
>
> Sorry it took me a bi
component tcp
On Fri, Aug 29, 2014 at 3:18 PM, Ralph Castain wrote:
> Rats - I also need "-mca plm_base_verbose 5" on there so I can see the cmd
> line being executed. Can you add it?
>
>
> On Aug 29, 2014, at 11:16 AM, Matt Thompson wrote:
>
> Ralph,
>
>
d line?
>
>
> On Aug 29, 2014, at 4:22 AM, Matt Thompson wrote:
>
> Ralph,
>
> For 1.8.2rc4 I get:
>
> (1003) $
> /discover/nobackup/mathomp4/MPI/gcc_4.9.1-openmpi_1.8.2rc4/bin/mpirun
> --leave-session-attached --debug-daemons -np 8 ./helloWorld.182.x
> srun.slu
let's see if any errors get reported.
>
>
> On Aug 28, 2014, at 12:20 PM, Matt Thompson wrote:
>
> Open MPI List,
>
> I recently encountered an odd bug with Open MPI 1.8.1 and GCC 4.9.1 on our
> cluster (reported on this list), and decided to try it with 1.8.2. However,
Open MPI List,
I recently encountered an odd bug with Open MPI 1.8.1 and GCC 4.9.1 on our
cluster (reported on this list), and decided to try it with 1.8.2. However,
we seem to be having an issue with Open MPI 1.8.2 and SLURM. Even weirder,
Open MPI 1.8.2rc4 doesn't show the bug. And the bug is: I
running 96 processes on a single machine, or spread across
> multiple machines?
>
> Note that Open MPI 1.8.x binds each MPI process to a core by default, so
> if you're oversubscribing the machine, it could be fairly disastrous...?
>
>
> On Aug 14, 2014, at 1:29 PM, Matt Thomps
mpirun -np NPROCS ./mpi_reproducer.x NX NY
where NX*NY has to equal NPROCS and it's best to keep them even numbers.
(There might be a few more restrictions and the code will die if you
violate them.)
Thanks,
Matt Thompson
--
Matt Thompson SSAI, Sr Software Test Engr
NASA GSFC, Global Modeling and Assimilation Office
Code 610.1, 8800 Greenbelt Rd, Greenbelt, MD 20771
Phone: 301-614-6712 Fax: 301-614-6246
error occurs as expected. Hope springs eternal, you know?
On Mon, Mar 24, 2014 at 6:48 PM, Jeff Squyres (jsquyres) wrote:
> On Mar 24, 2014, at 6:34 PM, Matt Thompson wrote:
>
> > Sorry for the late reply. The answer is: No, 1.14.1 has not fixed the
> problem (and indeed, that'
If not, I need to tweak this project a bit more to make it
> more like OMPI's build system behavior.
>
> If you can replicate the error, then also try the second attached tarball:
> it's the same project, but bootstrapped with the latest versions of GNU
> Automake (the others ar
/usr/bin/install -c
> libmpi_usempi_ignore_tkr.la '/Users/fortran/MPI/openmpi_1.7.4-pgi_14.3-gcc/lib'
> libtool: install: /usr/bin/install -c
> .libs/libmpi_usempi_ignore_tkr.0.dylib
> /Users/fortran/MPI/openmpi_1.7.4-pgi_14.3-gcc/lib/libmpi_usempi_ignore_tkr.0.dylib
> install: .libs/
That's a strange error. Can you confirm whether
> ompi_build_dir/ompi/mpi/fortran/use-mpi-ignore-tkr/.libs/libmpi_usempi_ignore_tkr.0.dylib
> exists or not?
>
> Can you send all the info listed here:
>
> http://www.open-mpi.org/community/help/
>
>
> On Mar 18, 2014, at 8:
> /Users/fortran/MPI/openmpi_1.7.4-pgi_14.3-gcc-mmacosx/lib/libmpi_usempi_ignore_tkr.0.dylib
> install: .libs/libmpi_usempi_ignore_tkr.0.dylib: No such file or directory
> make[3]: *** [install-libLTLIBRARIES] Error 71
> make[2]: *** [install-am] Error 2
> make[1]: *** [install-recursive] Err