On 5/24/2012 2:51 AM, Jeff Squyres wrote:
On May 23, 2012, at 6:20 PM, marco atzeri wrote:
~90% of the time we have mismatch problems between the upstream and
Cygwin versions of autoconf/automake/libtool, which are not Cygwin-aware
or up to date.
Ok, fair enough.
I'd be curious if you actually need
Hi Kjell-Arne,
The released installer is configured only with the Intel Fortran compiler,
so it won't work with other Fortran compilers. If you want to stick with
GNU compilers, you could probably try GNU f77 or g95. But anyway, the
MinGW build is only experimental; there might still be runtime issues
Hi Toufik,
I'm not 100% sure about this. Could you provide a small example that I
can test on? Thanks.
Regards,
Shiqing
On 2012-04-17 4:43 PM, toufik hadjazi wrote:
Hi,
does Open MPI (installed on Windows 7) support name publication across
different jobs? If yes, how to make two different jobs
Many thanks for trans-coding to C; this was a major help in debugging the issue.
Thankfully, it turned out to be a simple bug: OMPI's parameter checking for
MPI_ALLGATHERV was using the *local* group size when checking the recvcounts
parameter, where it really should have been using the *remote* group size.
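For anyone who wants to poke at this, here is a minimal sketch of the kind
of program that exercises the corrected check (illustrative only, not the
original test case from this thread): an intercommunicator whose two groups
have different sizes, so recvcounts must be sized to the remote group. Run
with e.g. 3 processes.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int wrank, rsize, i, sendval;
    MPI_Comm local, inter;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Split world into two unequal groups (sizes 1 and 2 with -np 3). */
    int color = (wrank == 0) ? 0 : 1;
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &local);

    /* Bridge the two groups; the remote leader is the other group's
       lowest world rank. */
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD,
                         (color == 0) ? 1 : 0, 42, &inter);

    /* On an intercommunicator, recvcounts has one entry per process in
       the *remote* group -- the size the faulty check got wrong. */
    MPI_Comm_remote_size(inter, &rsize);

    int *recvcounts = malloc(rsize * sizeof(int));
    int *displs     = malloc(rsize * sizeof(int));
    int *recvbuf    = malloc(rsize * sizeof(int));
    for (i = 0; i < rsize; i++) { recvcounts[i] = 1; displs[i] = i; }

    sendval = wrank;
    MPI_Allgatherv(&sendval, 1, MPI_INT,
                   recvbuf, recvcounts, displs, MPI_INT, inter);

    free(recvcounts); free(displs); free(recvbuf);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}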
Hi
When is 1.6.1 expected to go public?
best,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
http://www
It seems like this might be an issue for gatherv and reduce_scatter
as well.
- Jonathan
--
Jonathan Dursi | SciNet, Compute/Calcul Canada | www.SciNetHPC.ca
On May 24, 2012, at 6:53 AM, Jonathan Dursi wrote:
> It seems like this might be an issue for gatherv and reduce_scatter as
> well.
Gah. I spot-checked a few of these before my first commit, but didn't see
these.
So I checked them all, and I found SCATTERV, GATHERV, and REDUCE_SCATTER all
had the issue. Now fixed on the trunk, and will be in 1.6.1.
On May 24, 2012, at 9:28 AM, Jeff Squyres wrote:
> So I checked them all, and I found SCATTERV, GATHERV, and REDUCE_SCATTER all
> had the issue. Now fixed on the trunk, and will be in 1.6.1.
I forgot to mention -- this issue exists waaay back in the Open MPI code base.
I spot-checked Open MPI
This bug could appear in all collectives supporting intercommunicators
where we check the consistency of the receive buffer(s). In addition to
what Jeff already fixed, I fixed it in ALLTOALLV, ALLTOALLW, and GATHER.
george.
On May 24, 2012, at 09:37 , Jeff Squyres wrote:
> On May 24,
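The shape of such a fix, as a sketch (illustrative only, not the actual
Open MPI source): the parameter check has to size its validation loop by
the remote group whenever the communicator is an intercommunicator.

#include <mpi.h>

/* Sketch of the corrected style of check: use the remote group size on
   intercommunicators, the local size on intracommunicators. */
static int check_recvcounts(MPI_Comm comm, const int recvcounts[])
{
    int i, is_inter, size;

    MPI_Comm_test_inter(comm, &is_inter);
    if (is_inter) {
        MPI_Comm_remote_size(comm, &size);  /* intercomm: remote group */
    } else {
        MPI_Comm_size(comm, &size);         /* intracomm: local group */
    }

    for (i = 0; i < size; i++) {
        if (recvcounts[i] < 0) {
            return MPI_ERR_COUNT;           /* negative count is invalid */
        }
    }
    return MPI_SUCCESS;
}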
On May 24, 2012, at 6:35 AM, Ricardo Reis wrote:
> When is 1.6.1 expected to go public?
FWIW, the nightly tarballs of the head of the v1.6 SVN branch are available
here:
http://www.open-mpi.org/nightly/v1.6/
There have only been a few minor fixes applied so far since 1.6 was released:
On 24 May 2012 10:28, Jeff Squyres wrote:
> On May 24, 2012, at 6:53 AM, Jonathan Dursi wrote:
>
>> It seems like this might be an issue for gatherv and reduce_scatter as
>> well.
>
>
> Gah. I spot-checked a few of these before my first commit, but didn't see
> these.
>
> So I checked them
On May 24, 2012, at 11:10 AM, Lisandro Dalcin wrote:
>> So I checked them all, and I found SCATTERV, GATHERV, and REDUCE_SCATTER all
>> had the issue. Now fixed on the trunk, and will be in 1.6.1.
>
> Please be careful with REDUCE_SCATTER[_BLOCK]. My understanding of
> the MPI standard is that
On May 24, 2012, at 10:22 AM CDT, Jeff Squyres wrote:
> I read it to be: reduce the data in the local group, scatter the results to
> the remote group.
>
> As such, the reduce COUNT is sum(recvcounts), and is used for the reduction
> in the local group. Then use recvcounts to scatter it to the remote group.
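Spelling that reading out as a small sketch (note this interpretation is
exactly what the rest of the thread goes on to debate, so treat it as one
reading rather than settled text): the COUNT for the local reduction is
just the sum of the recvcounts entries.

/* Under this reading: e.g. recvcounts = {2, 3, 4} means 2+3+4 = 9
   elements are reduced within the local group, and remote ranks
   0, 1, 2 then receive 2, 3, and 4 of the results respectively. */
static int reduce_count(const int recvcounts[], int n)
{
    int i, total = 0;
    for (i = 0; i < n; i++) {
        total += recvcounts[i];
    }
    return total;
}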
On May 24, 2012, at 11:22 , Jeff Squyres wrote:
> On May 24, 2012, at 11:10 AM, Lisandro Dalcin wrote:
>
>>> So I checked them all, and I found SCATTERV, GATHERV, and REDUCE_SCATTER
>>> all had the issue. Now fixed on the trunk, and will be in 1.6.1.
>>
>> Please be careful with REDUCE_SCATTER
On 24 May 2012 12:40, George Bosilca wrote:
> On May 24, 2012, at 11:22 , Jeff Squyres wrote:
>
>> On May 24, 2012, at 11:10 AM, Lisandro Dalcin wrote:
>>
So I checked them all, and I found SCATTERV, GATHERV, and REDUCE_SCATTER
all had the issue. Now fixed on the trunk, and will be in 1.6.1.
On 05/23/2012 07:29 AM, Barrett, Brian W wrote:
On 5/22/12 10:36 PM, "Orion Poplawski" wrote:
On 05/22/2012 10:34 PM, Orion Poplawski wrote:
On 05/21/2012 06:15 PM, Jeff Squyres wrote:
On May 15, 2012, at 10:37 AM, Orion Poplawski wrote:
$ mpicc -showme:link
-pthread -m64 -L/usr/lib64/open
On May 24, 2012, at 6:07 PM, Orion Poplawski wrote:
>>> I should add the caveat that they are needed when linking statically, but
>>> not when using shared libraries.
>>
>> And therein lies the problem. We have a number of users who build Open
>> MPI statically and even some who build both static and shared
Sorry for taking so long to respond to this. :-(
Patrick -- I just created https://svn.open-mpi.org/trac/ompi/ticket/3109 to
track this issue. Could you attach your patch to that ticket?
On May 23, 2012, at 5:30 AM, Patrick Le Dot wrote:
> David Singleton (anu.edu.au) writes:
>
>>
>>
>> I
On May 24, 2012, at 11:57 AM, Lisandro Dalcin wrote:
> The standard says this:
>
> "Within each group, all processes provide the same recvcounts
> argument, and provide input vectors of sum_i^n recvcounts[i] elements
> stored in the send buffers, where n is the size of the group"
>
> So, I read
On May 24, 2012, at 10:57 AM CDT, Lisandro Dalcin wrote:
> On 24 May 2012 12:40, George Bosilca wrote:
>
>> I don't see much difference with the other collective. The generic behavior
>> is that you apply the operation on the local group but the result is moved
>> into the remote group.
>
> W
On May 24, 2012, at 8:13 PM CDT, Jeff Squyres wrote:
> On May 24, 2012, at 11:57 AM, Lisandro Dalcin wrote:
>
>> The standard says this:
>>
>> "Within each group, all processes provide the same recvcounts
>> argument, and provide input vectors of sum_i^n recvcounts[i] elements
>> stored in the
On May 24, 2012, at 23:18, Dave Goodell wrote:
> On May 24, 2012, at 8:13 PM CDT, Jeff Squyres wrote:
>
>> On May 24, 2012, at 11:57 AM, Lisandro Dalcin wrote:
>>
>>> The standard says this:
>>>
>>> "Within each group, all processes provide the same recvcounts
>>> argument, and provide input v
On May 24, 2012, at 10:34 PM CDT, George Bosilca wrote:
> On May 24, 2012, at 23:18, Dave Goodell wrote:
>
>> So I take back my prior "right". Upon further inspection of the text and
>> the MPICH2 code I believe it to be true that the number of the elements in
>> the recvcounts array must be