Hi,
I would suggest using MXM (part of MOFED; it can also be downloaded as a
standalone RPM from http://mellanox.com/products/mxm for use with OFED).
It uses UD (constant memory footprint) and should provide good performance.
The next MXM v2.0 will support RC and DC (reliable UD) as well.
Once mxm is installed from rpm [...]
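A rough sketch of the usual way to enable it (the install prefix and
application name below are assumptions):

    # build Open MPI against MXM (prefix is an assumption)
    ./configure --with-mxm=/opt/mellanox/mxm
    make && make install

    # select the MXM MTL (via the "cm" PML) at run time
    mpirun --mca pml cm --mca mtl mxm -np 64 ./my_app

Because MXM rides on UD, the per-process memory footprint stays roughly
constant as the job grows, which is the main point of the suggestion above.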
Tim, thanks for trying this out ...
Now you should be able to let part of the same Open MPI application run on
the host multi-core side and the other part on the MIC. Intel MPI can do
this using an MPMD command line where the Xeon binaries run on the host
and the MIC binaries run on the MIC card(s).
I guess [...]
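For reference, MPMD launches use a colon-separated command line in both
Intel MPI and Open MPI; a hypothetical mixed host/MIC invocation (hostnames,
rank counts, and binary names are made up) could look like:

    # Xeon binaries on the host, MIC binaries on the coprocessor
    mpirun -np 16 -host host0      ./app.xeon : \
           -np 60 -host host0-mic0 ./app.mic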
Understood - but I was wondering if that was true for OMPI as well.
On Jul 9, 2013, at 11:30 AM, "Daniels, Marcus G" wrote:
> The Intel MPI implementation does this. The performance between the
> accelerators and the host is poor though: about 20 MB/sec in my ping-pong
> test. Intra-MIC communication [...]
The Intel MPI implementation does this. The performance between the
accelerators and the host is poor though: about 20 MB/sec in my ping-pong
test. Intra-MIC communication is about 1 GB/sec, whereas intra-host is about
6 GB/sec. Latency is also higher (i.e. worse) for the intra-MIC communication.
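(Numbers like these usually come from a standard ping-pong benchmark; with
the OSU micro-benchmarks, for example, the host-to-MIC path can be exercised
roughly as below, where the hostnames are assumptions.)

    # one rank on the host, one rank on the MIC card
    mpirun -np 2 -host host0,host0-mic0 ./osu_bw       # bandwidth
    mpirun -np 2 -host host0,host0-mic0 ./osu_latency  # latency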
Hi Tim
Quick question: can the procs on the MIC communicate with procs on (a) the
local host, (b) other hosts, and (c) MICs on other hosts?
The last two would depend on having direct access to one or more network
transports.
On Jul 9, 2013, at 10:18 AM, Tim Carlson wrote:
> On Mon, 8 Jul 2013, Tim Carlson wrote: [...]
On Mon, 8 Jul 2013, Tim Carlson wrote:
Now that I have gone through this process, I'll report that it works with
the caveat that you can't use the Open MPI wrappers for compiling. Recall
that the Phi card does not have either the GNU or Intel compilers
installed. While you could build up a toolchain [...]
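(A sketch of the usual workaround, with an assumed MIC-side Open MPI install
under /opt/ompi-mic and a stand-in source file: ask the host-side wrapper
what flags it would have added, then drive the cross compiler by hand.)

    # show what the wrapper would have done on the host
    mpicc --showme:compile
    mpicc --showme:link

    # then compile directly for the MIC against the MIC-side install
    icc -mmic hello.c -I/opt/ompi-mic/include \
        -L/opt/ompi-mic/lib -lmpi -o hello.mic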
You must be using an older version of Gromacs, because the version I'm
looking at (git master) has nary a reference to the C++ bindings.
Since you say that Gromacs alone compiles fine, I suspect the problem
is that Plumed uses the C++ bindings. The Plumed download site hosted
by Google Docs (yuck) [...]
Oh you are right.
Thanks.
Best
tomek
On Tue, Jul 9, 2013 at 2:44 PM, Jeff Squyres (jsquyres)
wrote:
> If you care, the issue is that it looks like Gromacs is using the MPI C++
> bindings. You therefore need to use the MPI C++ wrapper compiler, mpic++
> (vs. mpicc, which is the MPI C wrapper compiler).
If you care, the issue is that it looks like Gromacs is using the MPI C++
bindings. You therefore need to use the MPI C++ wrapper compiler, mpic++ (vs.
mpicc, which is the MPI C wrapper compiler).
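As a concrete illustration (the target and extra object names are
placeholders; restraint_camshift2.o is the one from your error), the link
step is where the wrapper choice matters, since mpic++ pulls in Open MPI's
C++ bindings library (libmpi_cxx) while mpicc does not:

    # fails with "undefined reference to `ompi_mpi_cxx_op_intercept'"
    mpicc  -o mdrun_plumed restraint_camshift2.o other_objects.o

    # links cleanly: the C++ wrapper adds -lmpi_cxx (and the C++ runtime)
    mpic++ -o mdrun_plumed restraint_camshift2.o other_objects.o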
On Jul 9, 2013, at 9:41 AM, Tomek Wlodarski wrote:
> I used mpicc, but when I switched to mpic++ in the Makefile it compiled
> without errors. [...]
I used mpicc, but when I switched to mpic++ in the Makefile it compiled
without errors.
Thanks a lot!
Best,
tomek
On Tue, Jul 9, 2013 at 2:31 PM, Jeff Squyres (jsquyres)
wrote:
> I don't see all the info requested from that web page, but it looks like OMPI
> built the C++ bindings ok.
>
> Did you use mpic++ to build Gromacs?
I don't see all the info requested from that web page, but it looks like OMPI
built the C++ bindings ok.
Did you use mpic++ to build Gromacs?
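One more quick sanity check, assuming the default lib/ layout of the install
shown in your error paths, is to confirm that the C++ bindings library
actually exports the missing symbol:

    # should print the symbol if the C++ bindings were built
    nm -D /home/users/didymos/openmpi-1.6.3/lib/libmpi_cxx.so | grep ompi_mpi_cxx_op_intercept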
On Jul 9, 2013, at 9:20 AM, Tomek Wlodarski wrote:
> So I am running Open MPI 1.6.3 (config.log attached),
> and I would like to install Gromacs patched with Plumed (scientific
> computing). [...]
So I am running Open MPI 1.6.3 (config.log attached),
and I would like to install Gromacs patched with Plumed (scientific
computing). Both use Open MPI.
Gromacs alone compiles without errors (so Open MPI itself works), but when
it is patched I get the error mentioned before.
I am sending the config file for the patched Gromacs.
If [...]
Please send all the information listed here:
http://www.open-mpi.org/community/help/
On Jul 9, 2013, at 8:36 AM, Tomek Wlodarski wrote:
> Hi,
>
> I am trying to locally compile software which uses openmpi (1.6.3),
> but I got this error:
>
> restraint_camshift2.o:(.toc+0x98): undefined reference to
> `ompi_mpi_cxx_op_intercept' [...]
Hi,
I am trying to locally compile software which uses openmpi (1.6.3),
but I got this error:
restraint_camshift2.o:(.toc+0x98): undefined reference to
`ompi_mpi_cxx_op_intercept'
restraint_camshift2.o: In function `Intracomm':
/home/users/didymos/openmpi-1.6.3/include/openmpi/ompi/mpi/cxx/intrac