On Jul 21, 2010, at 2:54 PM CDT, Jed Brown wrote:
> On Wed, 21 Jul 2010 15:20:24 -0400, David Ronis wrote:
>> Hi Jed,
>>
>> Thanks for the reply and suggestion. I tried adding -mca
>> yield_when_idle 1 (and later mpi_yield_when_idle 1 which is what
>> ompi_info reports the variable as) but it s
I can't speak to what OMPI might be doing to your program, but I have a few
suggestions for looking into the Valgrind issues.
Valgrind's "--track-origins=yes" option is usually helpful for figuring out
where the uninitialized values came from. However, if I understand you
correctly and if you
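For reference, a minimal sketch of the kind of bug that option pinpoints (the file and variable names here are illustrative, not from the original report):

/* uninit.c -- compile with "gcc -g uninit.c" and run under
 *   valgrind --track-origins=yes ./a.out
 * Valgrind reports the conditional jump on the uninitialised value and,
 * with --track-origins=yes, points back at the declaration of x. */
#include <stdio.h>

int main(void)
{
    int x;            /* never initialized */
    if (x > 0)        /* "Conditional jump or move depends on uninitialised value(s)" */
        printf("positive\n");
    return 0;
}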
FWIW, we solved this problem with ROMIO in MPICH2 by making the "big global
lock" a recursive mutex. In the past it was implicitly so because of the way
that recursive MPI calls were handled. In current MPICH2 it's explicitly
initialized with type PTHREAD_MUTEX_RECURSIVE instead.
-Dave
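A minimal sketch of the recursive-mutex initialization described above; the lock name is illustrative, not the actual MPICH2 symbol:

#include <pthread.h>

static pthread_mutex_t big_global_lock;   /* illustrative name */

static void init_big_global_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* A recursive mutex lets the owning thread re-acquire the lock,
     * which is what recursive MPI calls (e.g. through ROMIO) need. */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&big_global_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}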
On Ap
On Jun 29, 2011, at 10:56 AM CDT, Jeff Squyres wrote:
> There's probably an alignment gap between the short and char array, and
> possibly an alignment gap between the char array and the double array
> (depending on the value of SHORT_INPUT and your architecture).
>
> So for your displacements,
This has been discussed previously in the MPI Forum:
http://lists.mpi-forum.org/mpi-forum/2010/11/0838.php
I think it resulted in this proposal, but AFAIK it was never pushed forward by
a regular attendee of the Forum:
https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/ReqPPMacro
-Dave
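Coming back to the quoted alignment question: the usual way to avoid guessing at the gaps is to measure the displacements with MPI_Get_address. A minimal sketch, assuming a hypothetical struct with a short array, a char array, and doubles (SHORT_INPUT and the member names are made up for illustration):

#include <mpi.h>
#include <stddef.h>

#define SHORT_INPUT 4   /* hypothetical length, not from the original post */

struct particle {
    short  s[SHORT_INPUT];
    char   name[8];
    double coords[3];
};

/* Build a struct datatype from measured displacements so any alignment
 * gaps between the members are accounted for automatically. */
static MPI_Datatype make_particle_type(void)
{
    struct particle p;
    MPI_Aint base, disps[3];
    int blocklens[3] = { SHORT_INPUT, 8, 3 };
    MPI_Datatype types[3] = { MPI_SHORT, MPI_CHAR, MPI_DOUBLE };
    MPI_Datatype newtype;

    MPI_Get_address(&p, &base);
    MPI_Get_address(&p.s, &disps[0]);
    MPI_Get_address(&p.name, &disps[1]);
    MPI_Get_address(&p.coords, &disps[2]);
    for (int i = 0; i < 3; i++)
        disps[i] -= base;

    MPI_Type_create_struct(3, blocklens, disps, types, &newtype);
    MPI_Type_commit(&newtype);
    return newtype;
}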
On Aug
On May 24, 2012, at 10:22 AM CDT, Jeff Squyres wrote:
> I read it to be: reduce the data in the local group, scatter the results to
> the remote group.
>
> As such, the reduce COUNT is sum(recvcounts), and is used for the reduction
> in the local group. Then use recvcounts to scatter it to the
On May 24, 2012, at 10:57 AM CDT, Lisandro Dalcin wrote:
> On 24 May 2012 12:40, George Bosilca wrote:
>
>> I don't see much difference with the other collective. The generic behavior
>> is that you apply the operation on the local group but the result is moved
>> into the remote group.
>
> W
On May 24, 2012, at 8:13 PM CDT, Jeff Squyres wrote:
> On May 24, 2012, at 11:57 AM, Lisandro Dalcin wrote:
>
>> The standard says this:
>>
>> "Within each group, all processes provide the same recvcounts
>> argument, and provide input vectors of sum_i^n recvcounts[i] elements
>> stored in the
On May 24, 2012, at 10:34 PM CDT, George Bosilca wrote:
> On May 24, 2012, at 23:18, Dave Goodell wrote:
>
>> So I take back my prior "right". Upon further inspection of the text and
>> the MPICH2 code I believe it to be true that the number of the elements in
>
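For the count relationship being debated above, a minimal intracommunicator sketch: the reduction operates on sum(recvcounts) elements and rank i receives recvcounts[i] of the result (in the intercommunicator case the scattered result goes to the remote group, per the quoted text):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* every rank receives one element of the reduced vector */
    int *recvcounts = malloc(size * sizeof(int));
    int total = 0;
    for (int i = 0; i < size; i++) {
        recvcounts[i] = 1;
        total += recvcounts[i];
    }

    /* the input vector has sum(recvcounts) == size elements */
    int *sendbuf = malloc(total * sizeof(int));
    for (int i = 0; i < total; i++)
        sendbuf[i] = rank;

    int recvbuf;
    MPI_Reduce_scatter(sendbuf, &recvbuf, recvcounts, MPI_INT,
                       MPI_SUM, MPI_COMM_WORLD);

    /* each rank now holds the sum over all ranks of element [rank] */
    printf("rank %d got %d\n", rank, recvbuf);

    free(sendbuf);
    free(recvcounts);
    MPI_Finalize();
    return 0;
}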
On Jan 4, 2013, at 2:55 AM CST, Torbjörn Björkman wrote:
> It seems that a very old bug (svn.open-mpi.org/trac/ompi/ticket/1982) is
> playing up when linking fortran code with mpicc on Mac OS X 10.6 and the
> Macports distribution openmpi @1.6.3_0+gcc44. I got it working by reading up
> on this
On Feb 3, 2010, at 6:24 PM, Dorian Krause wrote:
Unless it is also specified that a process must eventually exit with
a call to MPI_Finalize (I couldn't find such a requirement),
progress for RMA access to a passive server which does not
participate actively in any MPI communication is not
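For context, a minimal sketch of the passive-target access pattern the quote is about; whether such an access makes progress when the target makes no MPI calls at all is exactly the open question:

#include <mpi.h>

/* The origin locks the target's window, gets data, and unlocks,
 * while the target itself makes no MPI calls. */
static void fetch_from_passive_target(MPI_Win win, int target,
                                      double *buf, int count)
{
    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Get(buf, count, MPI_DOUBLE, target, 0 /* disp */, count, MPI_DOUBLE, win);
    MPI_Win_unlock(target, win);   /* completes the transfer at the origin */
}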
On Mar 3, 2010, at 11:35 AM, Richard Treumann wrote:
If the application will make MPI calls from multiple threads and
MPI_INIT_THREAD has returned FUNNELED, the application must be
willing to take the steps that ensure there will never be concurrent
calls to MPI from the threads. The threads
On Mar 4, 2010, at 7:36 AM, Richard Treumann wrote:
A call to MPI_Init allows the MPI library to return any level of
thread support it chooses.
This is correct, insofar as the MPI implementation can always choose
any level of thread support.
This MPI 1.1 call does not let the application say
On Mar 4, 2010, at 10:52 AM, Anthony Chan wrote:
- "Yuanyuan ZHANG" wrote:
For an OpenMP/MPI hybrid program, if I only want to make MPI calls
using the main thread, i.e., only in between parallel sections, can
I just
use SINGLE or MPI_Init?
If your MPI calls are NOT within OpenMP direc
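A minimal sketch of the hybrid pattern under discussion, assuming FUNNELED is the level actually wanted: request it explicitly with MPI_Init_thread, check the provided level, and keep all MPI calls on the main thread outside the parallel regions:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* MPI calls will come only from the main thread, between parallel regions */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "insufficient thread support: %d\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* main thread, OK under FUNNELED */

    double local = 0.0;
#pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000; i++)
        local += i * 0.5;                    /* no MPI calls inside the region */

    double global;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f\n", global);

    MPI_Finalize();
    return 0;
}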
On Apr 1, 2014, at 10:26 AM, "Blosch, Edwin L" wrote:
> I am getting some errors building 1.8 on RHEL6. I tried autoreconf as
> suggested, but it failed for the same reason. Is there a minimum version of
> m4 required that is newer than that provided by RHEL6?
Don't run "autoreconf" by hand,
On Apr 1, 2014, at 12:13 PM, Filippo Spiga wrote:
> Dear Ralph, Dear Jeff,
>
> I've just recompiled the latest Open MPI 1.8. I added
> "--enable-mca-no-build=btl-usnic" to configure but the message still appear.
> Here the output of "--mca btl_base_verbose 100" (trunked immediately after
> th
On Apr 2, 2014, at 12:57 PM, Filippo Spiga wrote:
> I still do not understand why this keeps appearing...
>
> srun: cluster configuration lacks support for cpu binding
>
> Any clue?
I don't know what causes that message. Ralph, any thoughts here?
-Dave
On Apr 14, 2014, at 12:15 PM, Djordje Romanic wrote:
> When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
> -
> starting wrf task 0 of 1
> starting wrf task 0 of 1
> starting wrf task
I don't know of any workaround. I've created a ticket to track this, but it
probably won't be very high priority in the short term:
https://svn.open-mpi.org/trac/ompi/ticket/4575
-Dave
On Apr 25, 2014, at 3:27 PM, Jamil Appa wrote:
>
> Hi
>
> The following program deadlocks in mpi_f
On Jun 27, 2014, at 8:53 AM, Brock Palen wrote:
> Is there a way to import/map memory from a process (data acquisition) such
> that an MPI program could 'take' or see that memory?
>
> We have a need to do data acquisition at the rate of 0.7 TB/s and need to do
> some shuffles/computation on these
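One generic mechanism for this on a single node (not necessarily what was recommended later in the thread) is POSIX shared memory: the acquisition process creates a segment and each MPI rank maps it read-only. A sketch, with the segment name and size as hypothetical placeholders:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a POSIX shared-memory segment created by the acquisition process.
 * "/daq_buffer" and len are illustrative; link with -lrt on older glibc. */
static void *map_daq_buffer(size_t len)
{
    int fd = shm_open("/daq_buffer", O_RDONLY, 0);
    if (fd < 0)
        return NULL;

    void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                /* the mapping stays valid after close */
    return (p == MAP_FAILED) ? NULL : p;
}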
Looks like boost::mpi and/or your python "mpi" module might be creating a bogus
argv array and passing it to OMPI's MPI_Init routine. Note that argv is
required by C99 to be terminated with a NULL pointer (that is,
(argv[argc]==NULL) must hold). See http://stackoverflow.com/a/3772826/158513.
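A minimal sketch of building a well-formed argv for an embedded MPI_Init call (the function and program names are illustrative):

#include <mpi.h>

/* C99 5.1.2.2.1 requires argv[argc] == NULL; MPI_Init may rely on that
 * when it scans the arguments. */
static int init_mpi_embedded(void)
{
    static char prog[] = "embedded-app";      /* illustrative name */
    char *argv_storage[] = { prog, NULL };    /* note the NULL terminator */
    int argc = 1;
    char **argv = argv_storage;

    return MPI_Init(&argc, &argv);
}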
On Nov 24, 2014, at 12:06 AM, George Bosilca wrote:
> https://github.com/open-mpi/ompi/pull/285 is a potential answer. I would like
> to hear Dave Goodell comment on this before pushing it upstream.
>
> George.
I'll take a look at it today. My notification settings were m
On Jan 9, 2015, at 7:46 AM, Jeff Squyres (jsquyres) wrote:
> Yes, I know examples 3.8/3.9 are blocking examples.
>
> But it's morally the same as:
>
> MPI_WAITALL(send_requests...)
> MPI_WAITALL(recv_requests...)
>
> Strictly speaking, that can deadlock, too.
>
> In reality, it has far less
Lachlan mentioned that he has "M Series" hardware, which, to the best of my
knowledge, does not officially support usNIC. It may not be possible to even
configure the relevant usNIC adapter policy in UCSM for M Series
modules/chassis.
Using the TCP BTL may be the only realistic option here.
-
Perhaps there's an RPATH issue here? I don't fully understand the structure of
Rmpi, but is there both an app and a library (or two separate libraries) that
are linking against MPI?
I.e., what we want is:
app -----------> ~ross/OMPI
  \                  ^
   \                 |
    --> library -----/
But what we'r
On Jun 5, 2015, at 8:47 PM, Gilles Gouaillardet
wrote:
> i did not use the term "pure" properly.
>
> please read instead "posix_memalign is a function that does not modify any
> user variable"
> that assumption is correct when there is no wrapper, and incorrect in our
> case.
My suggestion i
On Sep 27, 2015, at 1:38 PM, marcin.krotkiewski
wrote:
>
> Hello, everyone
>
> I am struggling a bit with IB performance when sending data from a POSIX
> shared memory region (/dev/shm). The memory is shared among many MPI
> processes within the same compute node. Essentially, I see a bit hec