Hi,
Just want to verify whether MPI_Sendrecv provides any guarantee as to which
operation (the send or the receive) happens first. I think it does not, does it?
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
Karl,
The Xcode command-line tools provide a g++ command that is effectively
clang++. Since mpic++ invokes g++ without its full path, it might pick up
the wrong g++. Make sure that /opt/local/bin is the first entry in your
$PATH.
Hristo
--
Hristo Iliev, PhD - High Performance Computing Team /
Why do you need an order? If you plan to send and receive on the same buffer,
you should use the MPI construct for that, namely MPI_Sendrecv_replace
(MPI_IN_PLACE applies only to collectives).
George.
On Nov 27, 2013, at 07:20 , Saliya Ekanayake wrote:
> Hi,
>
> Just want to verify if sendrecv provides any guarantees as to which operation
Thanks this went in as r29761.
George.
On Nov 26, 2013, at 20:17 , Pierre Jolivet wrote:
> Hello,
> Just like r29736, I believe that there are some missing tests in
> ompi/mca/coll/libnbc/nbc_iscatterv.c and ompi/mca/coll/libnbc/nbc_igatherv.c
> Thoughts ?
> Pierre
>
> Index: nbc_igatherv.
Hi,
We have an in-house application where we run two solvers in a loosely
coupled manner: The first solver runs a timestep, then the second solver
does work on the same timestep, etc. As the two solvers never execute at
the same time, we would like to run the two solvers in the same allocation
Dear list,
I've gone through several hours of configuring and testing to get a grasp
of the current status for multi-threading support.
I want to use a program with MPI_THREAD_MULTIPLE over the openib BTL. I'm
using openmpi-1.6.5 and SLC6 (RHEL6), for what it's worth.
Upon configuring my own openm
I’m pretty sure it is using the correct g++
$ which g++
/opt/local/bin/g++
$ echo $PATH
/Users/meredithk/tools/openmpi/bin:/opt/local/bin:/opt/local/sbin:/Users/meredithk/tools/bin:/Users/meredithk/OpenFOAM/fireFoam-2.2.x/scripts:/Users/meredithk/OpenFOAM/ThirdParty-2.2.x/platforms/darwinIntel
Openib does not currently support thread multiple - hopefully in 1.9 series
Sent from my iPhone
> On Nov 27, 2013, at 7:14 AM, Daniel Cámpora wrote:
>
> Dear list,
>
> I've gone through several hours of configuring and testing to get a grasp of
> the current status for multi-threading support
Are you wanting to run the solvers on different nodes within the
allocation? Or on different cores across all nodes?
For different nodes, you can just use -host to specify which nodes you want
that specific mpirun to use, or a hostfile should also be fine. The FAQ's
comment was aimed at people who
Okay, in order to try and track down this problem I have done a fresh install
of OpenMPI-1.7.3 on Mac OS 10.9 (Mavericks). I am using the Apple compilers,
and not using anything from macports. The code compiles fine, but when running
the examples for openmpi-1.7.3, the code hangs in MPI::Init(
Been running on Mavericks all summer with no issues. I do not use the C++
interface
though (C++ bindings were removed from the MPI standard in 3.0.) Can you try a C
example and see if that works?
-Nathan Hjelm
Application Readiness, HPC-5, LANL
On Wed, Nov 27, 2013 at 11:43:05AM -0500, Meredith,
None of the C or C++ examples run. For example, the hello_c.c example in the
examples directory included with openmpi. It compiles, but hangs on MPI_Init().
Are your compiler versions different from the ones I’m showing? Apple LLVM
version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Karl
On
Puzzling! I removed my entire MacPorts installation in order to fall back to
the default set of tools provided by Apple via the command-line tools package.
I recompiled a fresh copy of Open MPI with the following configure:
./configure --prefix=/opt/trunk/apple-only --enable-shared --disable-static
--en
So, I decided to try out your configuration flags to see if that’d work. All
Apple tools, no macports.
With openmpi-1.7.3 I did the following:
make clean
./configure --prefix=/opt/trunk/apple-only --enable-shared --disable-static
--enable-debug --disable-io-romio --enable-contrib-no-build=vt,l
Hi Ola, Ralph
I may be wrong, but I'd guess launching the two solvers
in MPMD/MIMD mode would work smoothly with the torque PBS_NODEFILE,
in a *single* Torque job.
If I understood Ola right, that is what he wants.
Say, something like this (for one 32-core node):
#PBS -l nodes=1:ppn=32
...
mpiex
$ /usr/bin/gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr
--with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin13.0.0
Thread model: posix
The other thing that can go wrong is
/opt/trunk/apple-only/bin/ompi_info --param oob tcp --level 9
MCA oob: parameter "oob_tcp_verbose" (current value: "0", data
source: default, level: 9 dev/all, type: int)
Verbose level for the OOB tcp component
MCA oob: parameter "oob_tcp
Nathan,
I got a compile-time error (see below). I use a script from
contrib/platform/lanl/cray_xe6 with gcc-4.7.2. Is there any problem in my
environment?
Thanks,
Keita
CC oob_tcp.lo
oob_tcp.c:353:7: error: expected identifier or '(' before 'else'
oob_tcp.c:358:5: warning: data definiti
See slide 22 of http://www.open-mpi.org/papers/sc-2013/Open-MPI-SC13-BOF.pdf
Jeff
On Wed, Nov 27, 2013 at 8:59 AM, Ralph Castain wrote:
> Openib does not currently support thread multiple - hopefully in 1.9 series
>
> Sent from my iPhone
>
> On Nov 27, 2013, at 7:14 AM, Daniel Cámpora wrote:
>
I'm afraid the two solvers would be in the same comm_world if launched that way
Sent from my iPhone
> On Nov 27, 2013, at 11:58 AM, Gus Correa wrote:
>
> Hi Ola, Ralph
>
> I may be wrong, but I'd guess launching the two solvers
> in MPMD/MIMD mode would work smoothly with the torque PBS_NODEFI
Performance benchmarks are always problematic and a source of contention -
it is very hard to obtain a meaningful comparison between MPI
implementations without taking great care that they are being wholly
optimized for the testbed environment. As you can imagine, all the MPIs
watch each other pret