vasilis gkanis wrote:
I had a similar problem with the Portland Fortran compiler. I knew that this
was not caused by a network problem (I ran the code on a single node with 4
CPUs). After I tested pretty much everything, I decided to change the compiler.
I used the Intel Fortran compiler and everything is running fine.
On Thu, 3 Dec 2009 12:21:50 -0500, Jeff Squyres wrote:
> On Dec 3, 2009, at 10:56 AM, Brock Palen wrote:
>
> > The allocation statement is ok:
> > allocate(vec(vec_size,vec_per_proc*(size-1)))
> >
> > This allocates memory vec(32768, 2350)
It's easier to translate to C rather than trying to rea…
On Dec 3, 2009, at 10:56 AM, Brock Palen wrote:
> The allocation statement is ok:
> allocate(vec(vec_size,vec_per_proc*(size-1)))
>
> This allocates memory vec(32768, 2350)
So this allocates 32768 rows, each with 2350 columns -- all stored contiguously
in memory, in column-major order. Does th…
On Dec 1, 2009, at 8:09 PM, John R. Cary wrote:
> Jeff Squyres wrote:
> (for the web archives)
> Brock and I talked about this .f90 code a bit off list -- he's
> going to investigate with the test author a bit more because both
> of us are a bit confused by the F90 array syntax used.
Jeff, I talke…
Ashley Pittman wrote (to Open MPI Users, 12/03/2009 05:35 AM):
Re: [OMPI users] Program deadlocks, on simple send/recv loop
On Wed, 2009-12-02 at 13:11 -0500, Brock Palen wrote:
> On Dec 1, 2009, at 11:15 AM, Ashley Pittman wrote:
> > On Tue, 2009-12-01 at 10:46 -0500, Brock Palen wrote:
> >> The attached code is an example where openmpi/1.3.2 will lock up, if
> >> run on 48 cores, of IB (4 cores per node),
> >> The co…
John R. Cary wrote:
> Jeff Squyres wrote:
> (for the web archives)
> Brock and I talked about this .f90 code a bit off list -- he's going
> to investigate with the test author a bit more because both of us are
> a bit confused by the F90 array syntax used.
Attached is a simple send/recv code written in (procedural) C++ that…
Jeff Squyres wrote:
(for the web archives)
Brock and I talked about this .f90 code a bit off list -- he's going
to investigate with the test author a bit more because both of us are
a bit confused by the F90 array syntax used.
On Dec 1, 2009, at 10:46 AM, Brock Palen wrote:
> The attached code is an example…
Brock Palen wrote:
The attached code is an example where openmpi/1.3.2 will lock up if
run on 48 cores over IB (4 cores per node).
The code loops over recvs from all processors on rank 0 and sends from
all other ranks; as far as I know this should work, and I can't see
why not.
Note: yes, I know we can do the sam…