Hello,
I've configured OpenMPI 1.8.3 with the following command line:
$ AXFLAGS="-xSSE4.2 -axAVX,CORE-AVX-I,CORE-AVX2"
$ myFLAGS="-O2 ${AXFLAGS}" ;
$ ./configure --prefix=${proot} \
--with-lsf \
--with-cma \
--enable-peruse --enable-branch-probabilities \
--enable-mpi-fortran=all
wrote:
> Hi Michael,
>
> If you do not include --enable-ipv6 in the config line, do you still
> observe the problem?
> Is it possible that one or more interfaces on nodes H1 and H2 do not have
> ipv6 enabled?
>
> Howard
>
>
> 2014-10-06 16:51 GMT-06:00 Micha
Dear OpenMPI list,
As far as I know, when we build OpenMPI itself with GNU or Intel compilers,
we expect the subsequent MPI application binaries to use the same
compiler set and runtimes.
Would it be possible to build OpenMPI with the GNU tool chain but then
subsequently instruct the OpenMPI
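For what it's worth, the usual way to decouple the build compiler from the application compiler is through the wrapper environment variables. A minimal sketch, assuming an Open MPI built with GCC and an Intel compiler installed alongside it (icc/icpc/ifort and the app names are just placeholders):

$ OMPI_CC=icc   mpicc   -O2 -o app_c   app.c
$ OMPI_CXX=icpc mpicxx  -O2 -o app_cxx app.cpp
$ OMPI_FC=ifort mpifort -O2 -o app_f   app.f90

This is generally workable for C/C++ and mpif.h, but the 'use mpi' / 'use mpi_f08' modules are generated by the Fortran compiler that built Open MPI, so mixing Fortran compilers this way is likely to break.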
>
>
>
>
>
>
> Sent via the Samsung Galaxy S8 active, an AT&T 4G LTE smartphone
>
> ---- Original message
> From: Michael Thomadakis
> Date: 9/18/17 3:51 PM (GMT-05:00)
> To: users@lists.open-mpi.org
> Subject: [OMPI users] Question concerning com
you use Fortran bindings (use mpi
> and use mpi_f08), and you'd better keep yourself out of trouble with
> C/C++ and mpif.h
>
> Cheers,
>
> Gilles
>
> On Tue, Sep 19, 2017 at 5:57 AM, Michael Thomadakis
> wrote:
> > Thanks for the note. How about OMP runtimes though?
> >
> > Michael
> >
> > On Mon, Sep 18, 2017 at 3:21 PM, n8tm via users <
> users@lists.open-mpi.org>
Hello OpenMPI
We are seriously considering deploying OpenMPI 1.6.5 for production (and
1.7.2 for testing) on HPC clusters which consist of nodes with *different
types of networking interfaces*.
1) Interface selection
We are using OpenMPI 1.6.5 and were wondering how one would go about
selectin
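In case it helps while the question above is cut off, a hedged sketch of the usual runtime selection knobs in the 1.6 series (the BTL lists, eth0 and ./app are illustrative placeholders):

# Prefer the InfiniBand/openib path where it exists:
$ mpirun --mca btl openib,sm,self -np 64 ./app
# Or force TCP and restrict it to one particular interface:
$ mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include eth0 -np 64 ./app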
sses cannot communicate.
>
> HTH
> Ralph
>
> On Jul 5, 2013, at 2:34 PM, Michael Thomadakis
> wrote:
>
> Hello OpenMPI
>
> > We are seriously considering deploying OpenMPI 1.6.5 for production (and
> > 1.7.2 for testing) on HPC clusters which consist of nodes with *di
Great ... thanks. We will try it out as soon as the common backbone IB is
in place.
cheers
Michael
On Fri, Jul 5, 2013 at 6:10 PM, Ralph Castain wrote:
> As long as the IB interfaces can communicate to each other, you should be
> fine.
>
> On Jul 5, 2013, at 3:26 PM, Michae
Hello OpenMPI,
I am wondering what level of support there is for CUDA and GPUDirect on
OpenMPI 1.6.5 and 1.7.2.
I saw the ./configure --with-cuda=CUDA_DIR option in the FAQ. However, it
seems that the v1.6.5 configure ignored it.
Can you identify GPU memory and send messages from it directl
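As a hedged sketch of how one would try this (the CUDA install path and prefix are assumptions, and per the replies below CUDA support only exists in the 1.7 feature series, not 1.6.5):

$ ./configure --with-cuda=/usr/local/cuda --prefix=${HOME}/openmpi-1.7 ...
$ make -j 8 install
# Sanity check: look for CUDA-related components/parameters in the build
$ ompi_info --all | grep -i cuda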
Hello OpenMPI,
When your stack runs on SandyBridge nodes attached to HCAs over PCIe *gen
3*, do you pay any special attention to the memory buffers according to
which socket/memory controller their physical memory belongs to?
For instance, if the HCA is attached to the PCIe gen3 lanes of socket 1, do
yo
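This is not an answer to what the library does internally, but a hedged sketch of how locality can at least be controlled from the outside (the NUMA node numbers and the ib_bw_test binary are hypothetical):

# Open MPI 1.6.x-style binding: keep each rank's cores (and hence its
# first-touch memory) on a single socket
$ mpirun -np 16 --bysocket --bind-to-socket ./app
# Pin a standalone bandwidth test on the HCA's socket, then on the remote
# one, to measure the inter-socket (QPI) penalty directly
$ numactl --cpunodebind=0 --membind=0 ./ib_bw_test
$ numactl --cpunodebind=1 --membind=1 ./ib_bw_test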
of support. The CUDA code
> is not in the 1.6 series as it was developed after that series went
> "stable". It is in the 1.7 series, although the level of support will
> likely be incrementally increasing as that "feature" series continues to
> evolve.
>
>
> On
does anything special memory mapping to work
around this, and whether the situation has improved with Ivy Bridge (or Haswell).
thanks
Mike
On Mon, Jul 8, 2013 at 9:57 AM, Jeff Squyres (jsquyres)
wrote:
> On Jul 6, 2013, at 4:59 PM, Michael Thomadakis
> wrote:
>
> > When your stack runs
rs] Support for CUDA and GPU-direct with OpenMPI
> 1.6.5 and 1.7.2
>
>
> There was discussion of this on a prior email thread on the OMPI devel
> mailing list:
>
>
> http://www.open-mpi.org/community/lists/devel/2013/05/12354.php
l 8, 2013, at 11:35 AM, Michael Thomadakis
> wrote:
>
> > The issue is that when you read or write PCIe gen3 data to a non-local
> NUMA memory, SandyBridge will use the inter-socket QPI links to get this data
> across to the other socket. I think there is considerable limitation in
the 1.6 series as it was developed after that series went
> "stable". It is in the 1.7 series, although the level of support will
> likely be incrementally increasing as that "feature" series continues to
> evolve.
>
>
>
> On Jul 6, 2013, at 12:06 PM, M
verload the CPU even more because of the additional
> copies.
>
> Brice
>
>
>
> On 08/07/2013 18:27, Michael Thomadakis wrote:
>
> People have mentioned that they experience unexpected slowdowns in
> PCIe gen3 I/O when the pages map to a socket different from the o
> Rolf will have to answer the question on level of support. The CUDA code
> is not in the 1.6 series as it was developed after that series went
> "stable". It is in the 1.7 series, although the level of support will
> likely be incrementally increasing as that "feature"
On old AMD platforms
> (and modern Intels with big GPUs), issues are not that uncommon (we've seen
> up to 40% DMA bandwidth difference there).
>
> Brice
>
>
>
> On 08/07/2013 19:44, Michael Thomadakis wrote:
>
> Hi Brice,
>
> thanks for testing this out.
| Remember that the point of IB and other operating-system bypass devices
| is that the driver is not involved in the fast path of sending /
| receiving. One of the side-effects of that design point is that
| userspace does all the allocation of send / receive buffers.
That's a good point. It was not
e. But I think if you have a compiler for Xeon Phi (Intel Compiler
>> or GCC) and an interconnect for it, you should be able to build an Open
>> MPI
>> that works on Xeon Phi.
>>
>> Cheers,
>> Tom Elken
>>
>> thanks...
>>
>> Michael
>>
>>>have not built
>>>Open MPI for Xeon Phi for your interconnect, but it seems to me
>>>that it
>>>should work.
>>>
>>>
>>>
>>>-Tom
>>>
>>>
>>>
>>>Cheers
Hello OpenMPI,
I was wondering what support is being implemented for the Intel
Phi platforms. That is, would we be able to run MPI code in "symmetric"
fashion, where some ranks run on the cores of the multicore hosts and some
on the cores of the Phis in a multinode cluster environment.
This discussion started getting into an interesting question: ABI
standardization for portability by language. It makes sense to have ABI
standardization for portability of objects across environments. At the same
time it does mean that everyone follows the exact same recipe for low level
implement
Hello OpenMPI
I was wondering if the MPI_Neighbor_x calls have received any special
design and optimizations in OpenMPI 4.1.x+ for these patterns of
communication.
For instance, these could benefit from proximity awareness and intra- vs
inter-node communications. However, even single node com
architecture-aware. The only 2
> components that provide support for neighborhood collectives are basic (for
> the blocking version) and libnbc (for the non-blocking versions).
>
> George.
>
>
> On Wed, Jun 8, 2022 at 1:27 PM Michael Thomadakis via users <
> users@lists.
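A hedged way to see which collective components actually serve these calls on a given install (the test binary name is hypothetical, and the verbose output format may vary between releases):

# List the collective components that were built:
$ ompi_info | grep "MCA coll"
# Watch which component is selected for the (neighborhood) collectives at run time:
$ mpirun -np 2 --mca coll_base_verbose 10 ./neighbor_halo_test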