On 08/09/2012 03:52 PM, Rolf vandeVaart wrote:
>-----Original Message-----
>From: Jeff Squyres [mailto:jsquy...@cisco.com]
>Sent: Thursday, August 09, 2012 9:45 AM
>To: Open MPI Users
>Cc: Rolf vandeVaart
>Subject: CUDA in v1.7? (was: Compilation of OpenMPI 1.5.4 & 1.6.X fail for PGI
>compiler...)
>
On Aug 9, 2012, at 9:37 AM, ESCOBAR Juan wrote:
> ... but as I'm also interested in testing the Open-MPI/CUDA feature
> (potentially with PGI Accelerator or OpenACC directives),
> I googled around and ended up at the Open MPI 'trunk'.
>
> => will this Open-MPI/CUDA feature only be in the 1.9 series?
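[For readers wondering what the CUDA feature actually buys you: a CUDA-aware
build lets you hand GPU device pointers directly to MPI calls instead of
staging the data through host memory yourself. A minimal sketch of that usage
follows; it is an illustration, not code from this thread, and it assumes a
trunk build configured with --with-cuda and a 1024-element buffer chosen
arbitrarily.]

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank;
        double *d_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* The buffer lives in GPU memory, not host memory. */
        cudaMalloc((void **)&d_buf, 1024 * sizeof(double));

        /* With a CUDA-aware build, the device pointer goes straight
           into the MPI call; the library handles any staging or
           GPU-to-GPU transfer internally. */
        if (rank == 0)
            MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

[Without CUDA support configured in, the same code would have to cudaMemcpy
the buffer to a host array before the MPI_Send and back after the MPI_Recv.]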
On 08/09/2012 01:41 PM, Jeff Squyres wrote:
On Aug 9, 2012, at 5:53 AM, ESCOBAR Juan wrote:
> Perhaps a Boeotian question.
>
> Why are the nightly versions called 1.9.xxx?
> http://www.open-mpi.org/nightly/trunk/
>
> Are they from the 1.6 (or 1.7) series?
Neither. They're from the v1.9 series. :-) Keep in mind that we have active
nightly builds for several different branches.
Hi all,
I encountered a strange problem: when running across nodes, the first
send/recv pair works, but the second recv blocks indefinitely.
After some googling, I found this:
http://www.open-mpi.org/community/lists/users/2012/02/18383.php
I use my laptop as a wireless router, and I NAT all traffic through it.
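[The pattern being described looks roughly like the sketch below; this is a
hypothetical reconstruction, since the poster's actual code, message sizes,
and tags do not appear in the thread.]

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, buf = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* First send/recv pair: reported to complete. */
            MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            /* Second pair: this is where the reported hang appears. */
            MPI_Send(&buf, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Recv(&buf, 1, MPI_INT, 0, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }

[On multi-homed or NATed hosts, the TCP BTL can attempt connections over an
interface the peer cannot reach, which tends to surface as exactly this kind
of hang on a later message. If that is what is happening here, the usual knob
is to restrict the TCP BTL to an interface both nodes can reach, e.g.
mpirun --mca btl_tcp_if_include eth0 -np 2 ./a.out, where eth0 is a
placeholder for the actual interface name. Whether this applies to this
particular setup is an assumption.]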
On 08/08/2012 04:31 PM, Jeff Squyres wrote:
Hello Jeff,
Perhaps a Boeotian question.
Why are the nightly versions called 1.*9*.xxx?
http://www.open-mpi.org/nightly/trunk/
Are they from the 1.6 (or 1.7) series?
A+
Juan
Perfect; fixed in:
https://svn.open-mpi.org/trac/ompi/changeset/269