Has anyone found the magic to apply the traditional PLFS
ompi-1.7.x-plfs-prep.patch to the current version of Open MPI? It looks
like it shouldn't take too much effort to update the patch, but it would
be even better to learn that someone else has already made that available!
Andy
--
Hi,
The short answer: Environment module files are probably the best
solution for your problem.
The long answer: see the earlier discussion on this list,
which pretty much addresses your question.
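For example, once a module file exists for each MPI stack, switching
between them is just a matter of (module names hypothetical):

    module avail                 # list the builds that have module files
    module load openmpi/1.10.2
    which mpicc                  # confirm the expected build is now in PATH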
Andy
On 05/23/2016 07:40 AM, Megdich Islem
wrote:
I gleaned from the web that I need to comment out
"opal_event_include=epoll" in /etc/openmpi-mca-params.conf
in order to use Open MPI with PBS Pro.
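The edit in question is just a comment character (the epoll line is
from this thread; anything else in that file varies by site):

    # /etc/openmpi-mca-params.conf
    # opal_event_include=epoll     <- commented out for PBS Pro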
Can we also disable that in other cases, like Slurm, or is this
something specific to PBS Pro?
Andy
--
Andy Riebs
andy.ri...@hpe.com
I’ve never heard of that, and cannot imagine what it has to do with the
resource manager. Can you point to where you heard that one?
FWIW: we don’t ship OMPI with anything in the default mca params file, so
somebody must have put it in there for you.
On Aug 23, 2016, at 4:48 PM, Andy Riebs wrote:
#!/bin/sh
# Reconstructed fragment: bind each rank to a CPU derived from its
# Slurm local ID; the leading lines and the stride value were cut off.
stride=${stride:-1}
if [ -n "$SLURM_LOCALID" ]; then
    bindCPU=$((SLURM_LOCALID * stride))
    exec numactl --membind=0 --physcpubind=$bindCPU "$@"
fi
exec "$@"
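A sketch of how such a wrapper would be launched under Slurm (script
and program names hypothetical):

    srun -n 8 ./numa_bind.sh ./my_app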
--
Andy Riebs
andy.ri...@hpe.com
Hewlett-Packard Enterprise
High Performance Computing Software Engineering
+1 404 648 9024
My opinions are not necessarily those of HPE
If “they explicitly said not to do it”, then we can
avoid the situation.
Ralph
On Oct 27, 2016, at 8:48 AM, Andy Riebs wrote:
Hi All,
We are running Open MPI version 1.10.2, built with support for Slurm version 16.05.0.
When a user specifies "--cpu_bind=none", MPI tries to bind by core
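A quick way to observe the binding each rank actually receives,
independent of any MPI program (a sketch using standard Slurm and /proc):

    srun --cpu_bind=none -n 2 sh -c 'grep Cpus_allowed_list /proc/self/status'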
cmd line? Do these not exist?
On Oct 27, 2016, at 10:14 AM, Andy Riebs <andy.ri...@hpe.com> wrote:
Hi Ralph,
I think I've found the … leave it alone.
Would that make sense? Is there anything else that
could be in that envar which would trip us up?
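One way to see exactly what Slurm puts in the environment in that case
(assuming the envar under discussion is one of the SLURM_CPU_BIND*
variables):

    srun --cpu_bind=none -n 1 env | grep '^SLURM_CPU_BIND'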
--
Andy Riebs
andy.ri...@hpe.com
Hewlett-Packard Enterprise
I'd like to buy a clue, please?
Andy
--
Andy Riebs
andy.ri...@hpe.com
Hewlett-Packard Enterprise
High Performance Computing Software Engineering
+1 404 648 9024
My opinions are not necessarily those of HPE
May the source be with you!
expect?
Cheers,
Gilles
On 5/25/2017 11:02 AM, Andy Riebs wrote:
Hi,
I'm trying to build OMPI on RHEL 7.2 with MOFED on an x86_64 system,
and I'm seeing:

    Open MPI gitclone: test/datatype/test
    Process name: [[30881,1],0]
    Exit code: 255
Any thoughts about where to go from here?
Andy
--
Andy Riebs
Hewlett-Packard Company
High Performance Computing
+1 404 648 9024
My opinions are not necessarily those of HP
You might also add --enable-debug to that configure
line and then put -mca plm_base_verbose on the shmemrun cmd to
get more help
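Concretely, that would look something like this (the prefix path appears
later in this thread; the verbosity level is a typical choice, not
Ralph's exact wording):

    ./configure --prefix=/home/ariebs/mic/mpi-nightly --enable-debug ...
    shmemrun -mca plm_base_verbose 5 -np 2 ./mic.out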
On Apr 10, 2015, at 11:55 AM, Andy Riebs <andy.ri...@hp.com> wrote:
Are you
running shmemrun on the PHI itself? Or is it running on the host
processor, and you are trying to spawn a process onto the Phi?
On Apr 11, 2015, at 7:55 AM, Andy Riebs <andy.ri...@hp.com> wrote:
but I probably need to let the OSHMEM folks comment on it
On Apr 11, 2015, at 9:52 AM, Andy Riebs <andy.ri...@hp.com> wrote:
Everything
is built on the Xeon side, with the icc "-mm
100” to your cmd line so we can see why none of the memheap
components are being selected.
On Apr 12, 2015, at 11:30 AM, Andy Riebs <andy.ri...@hp.com> wrote:
Hi Ralph,
our system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
---
It looks like the LD_LIBRARY_PATH is wrong on the remote
system; it can't find the Intel compiler libraries.
-Nathan Hjelm
HPC-5, LANL
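A sketch of checking and working around that (remote host name
hypothetical; -x is the standard mpirun/shmemrun option for exporting an
environment variable to remote processes):

    ssh mic0 'echo $LD_LIBRARY_PATH'     # what a non-interactive remote shell sees
    shmemrun -x LD_LIBRARY_PATH -np 2 ./mic.out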
On Mon, Apr 13, 2015 at 04:06:21PM -0400, Andy Riebs wrote:
Progress! I can run my trivial program on the local PHI
These might be useful:
    --leave-session-attached
    --mca mca_component_show_load_errors 1
You might also do an ldd on
/home/ariebs/mic/mpi-nightly/bin/orted and see where it is
looking for libimf since it (and not mic.out) is the one
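That check would be (path taken from the message above):

    ldd /home/ariebs/mic/mpi-nightly/bin/orted | grep libimf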
Nick,
You may have more luck looking into the OSHMEM layer of Open MPI;
SHMEM is designed for one-sided communications.
BR,
Andy
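Open MPI ships wrapper tools for its OSHMEM layer, so getting started
looks roughly like this (source and program names hypothetical):

    oshcc -o putget putget.c     # compile against the OSHMEM library
    oshrun -np 2 ./putget        # launch, analogous to mpirun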
On 04/14/2015 02:36 PM, Nick Papior
Andersen wrote:
Dear all,
I am tr
On 4/14/2015 11:20 PM,
Ralph Castain wrote:
Hmmm…certainly looks that way.
I’ll investigate.
On Apr 14, 2015, at 6:06 AM, Andy Riebs wrote:
Hi Ralph,
If I did this right (NEVER a good bet :-) ), it didn't work...
Using last night's master nightly,
openmpi-dev-1515-gc869490.tar.bz2, I built with the same script as
yesterday, but removing the LDFLAGS=-Wl, stuff:
$ ./configure --prefix=/home/a
Ralph Castain
wrote:
Sorry - I had to revert the commit due to a
reported MTT problem. I'll reinsert it after I get home and can
debug the problem this weekend.
On Thu, Apr 16, 2015 at 9:41 AM, Andy Riebs <andy.ri...@hp.com> wrote:
that area had an
impact here.
Are you saying it just works, even without passing
the new param?
On Apr 26, 2015, at 6:39 AM, Andy Riebs <andy.ri...@hp.com> wrote:
The challenge for the MPI experts here (of which I am NOT one!) is
that the problem appears to be in your program; MPI is simply
reporting that your program failed. If you got the program from
someone else, you will need to solicit their help. If you wrote it,
well, it is
ges in the build log -- what am I
missing?
Andy
--
Andy Riebs
andy.ri...@hpe.com
Hewlett-Packard Enterprise
High Performance Computing Software Engineering
+1 404 648 9024
My opinions are not necessarily those of HPE
Josh
On Thu, May 5, 2016 at 7:32 AM, Andy
Riebs <andy.ri...@hpe.com>
wrote:
I've built
1.10.2 with all my favorite configuration options, but I get
messages such as this (one for eac
Nathan Hjelm wrote:
It should work fine with ob1 (the default). Did you determine what was
causing it to fail?
-Nathan
On Thu, May 05, 2016 at 06:04:55PM -0400, Andy Riebs wrote:
For anyone like me who happens to google this in the future, the solution
LNX_OFED_LINUX-3.4-2.1.8.0-redhat7.3-x86_64/knem
--with-libevent=/usr
--with-mxm=/opt/mellanox/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-3.4-2.1.8.0-redhat7.3-x86_64/mxm
--with-platform=contrib/platform/mellanox/optimized
--with-pmi=/opt/local/slurm/default
--with-pmix=/opt/l
that series?
On Oct 27, 2017, at 1:24 PM, Andy Riebs wrote:
We have built a version of Open MPI 3.0.x that works with Slurm (our primary
use case), but it fails when executed without Slurm.
If I srun an MPI "hello world" program, it works just fine. Likewise, if I
salloc a couple o
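For reference, the two working launch modes look roughly like this
(program name hypothetical):

    srun -N 2 -n 4 ./mpi_hello
    salloc -N 2 mpirun -n 4 ./mpi_hello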
Noam,
Start with the FAQ, etc., under "Getting Help/Support" in the
left-column menu at https://www.open-mpi.org/
Andy
*From:* Noam Bernstein
*Sent:* Tuesday, October 09, 2018 2:26PM
*To:* Open Mpi Users
*Cc:*
*Subject:*
The web suggests that OpenMP should work just fine with OpenMPI/MPI --
does this also work with OpenMPI/SHMEM?
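For what it's worth, a hybrid OpenMP + OSHMEM run would be launched
along these lines (program name hypothetical; OMP_NUM_THREADS is the
standard OpenMP control, nothing SHMEM-specific):

    export OMP_NUM_THREADS=4
    oshrun -np 2 ./hybrid_app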
Andy
--
Andy Riebs
andy.ri...@hpe.com
Hewlett-Packard Enterprise
High Performance Computing Software Engineering
+1 404 648 9024
My opinions are not necessarily those of HPE
May the source be with you!
Daniel,
I think you need to have "--with-pmix=" point to a specific directory;
either "/usr" if you installed it in /usr/lib and /usr/include, or the
specific directory, like "--with-pmix=/usr/local/pmix-3.0.2"
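So the configure invocation would be along the lines of (paths from the
suggestion above; remaining options omitted):

    ./configure --with-pmix=/usr/local/pmix-3.0.2 ...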
Andy