Thanks for the reply, Ralph.
Now I think it is clearer to me why it could be so much slower. The reason
would be that the blocking algorithm for reduction has an implementation
very different from the non-blocking one.
Since there are lots of ways to implement it, are there options to tune the
non-blocking algorithm?
One thing you might want to keep in mind is that “non-blocking” doesn’t mean
“asynchronous progress”. The API may not block, but the communications only
progress whenever you actually call down into the library.
So if you are calling a non-blocking collective, and then make additional calls
int
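One practical way to "call down into the library" while computing is to slice
the local work and poll the outstanding request with MPI_Test between slices.
A minimal self-contained sketch (the work inside the loop is only a placeholder):

program test_for_progress
  use mpi
  implicit none
  integer, parameter :: m = 1000
  integer :: ierr, req, k
  logical :: done
  double precision :: buf(m), acc

  call MPI_Init(ierr)
  buf = 1.0d0
  acc = 0.0d0
  call MPI_Iallreduce(MPI_IN_PLACE, buf, m, MPI_DOUBLE_PRECISION, MPI_SUM, &
                      MPI_COMM_WORLD, req, ierr)
  done = .false.
  k = 0
  do while (.not. done)
    acc = acc + dble(k)                                 ! a slice of local work
    k = k + 1
    call MPI_Test(req, done, MPI_STATUS_IGNORE, ierr)   ! lets the library make progress
  end do
  call MPI_Finalize(ierr)
end program test_for_progress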
>Try doing a variable amount of work on every process; I see non-blocking
>as a way to speed up communication when the processes arrive at the call
>individually.
>Please always have this at the back of your mind when doing this.
I tried to simplify the problem in the explanation. The "local_computation"
is
Try doing a variable amount of work on every process; I see non-blocking
as a way to speed up communication when the processes arrive at the call
individually.
Please always have this at the back of your mind when doing this.
Surely non-blocking has overhead, and if the communication time is low, so
will t
Hello!
I have a program that basically is (first implementation):
for i in N:
    local_computation(i)
    mpi_allreduce(in_place, i)
In order to try to mitigate the implicit barrier of the mpi_allreduce, I
tried to start an mpi_Iallreduce. Like this (second implementation):
for i in N:
    local_comput
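A sketch of that restructuring, assuming the local computation of step i+1 does
not need the reduced result of step i (array sizes and the stand-in for
local_computation are illustrative; MPI_Iallreduce needs an MPI-3 library):

program pipelined_allreduce
  use mpi
  implicit none
  integer, parameter :: n_iter = 10, m = 1000
  integer :: ierr, i, req
  double precision :: work(m, n_iter)

  call MPI_Init(ierr)
  work = 1.0d0
  req = MPI_REQUEST_NULL

  do i = 1, n_iter
    work(:, i) = work(:, i) * dble(i)             ! stand-in for local_computation(i)
    call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)   ! finish the previous step's reduction
    call MPI_Iallreduce(MPI_IN_PLACE, work(1, i), m, MPI_DOUBLE_PRECISION, &
                        MPI_SUM, MPI_COMM_WORLD, req, ierr)
  end do
  call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)
  call MPI_Finalize(ierr)
end program pipelined_allreduce

This way the reduction of step i overlaps with the local computation of step
i+1, subject to the progress caveat discussed elsewhere in this thread.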
On Nov 30, 2012, at 2:04 PM, Shane Hart wrote:
> I've attached a small sample program that demonstrates the problem. You can
> toggle working/non-working behaviour by toggling commenting on line 27.
Thanks! I got swamped this week, but I'll try to look at it next week
(although with the Forum
I've attached a small sample program that demonstrates the problem. You can
toggle working/non-working behaviour by toggling commenting on line 27.
I've tried to open a bug report, but the system isn't letting me register for
Trac:
Trac detected an internal error:
KeyError: 'recaptcha_challeng
All,
I have a Fortran code that works quite well with OpenMPI 1.4.3 where I create
a handle using:
call MPI_TYPE_CREATE_F90_INTEGER(9, COMM_INT4, ierror)
and then do a reduction with:
call MPI_ALLREDUCE(send_buffer, buffer, count, COMM_INT4, MPI_SUM, &
                   communicator, ierror)
Howev
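For reference, a self-contained version of that pattern (the decimal range 9
and the buffer contents are just illustrative values):

program f90_integer_reduce
  use mpi
  implicit none
  integer, parameter :: i4 = selected_int_kind(9)
  integer :: ierr, comm_int4
  integer(kind=i4) :: sendval(3), recvval(3)

  call MPI_Init(ierr)
  ! handle matching Fortran integers with at least 9 decimal digits
  call MPI_Type_create_f90_integer(9, comm_int4, ierr)
  sendval = 1_i4
  call MPI_Allreduce(sendval, recvval, 3, comm_int4, MPI_SUM, &
                     MPI_COMM_WORLD, ierr)
  call MPI_Finalize(ierr)
end program f90_integer_reduce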
Hi Jeff,
Thanks for your response. Like you said, the code works for you on a Linux
system, and I am sure that the code works on Linux and even Mac OS X. But if
you use MinGW (basically you have all the GNU things on Windows) to compile,
the code aborts when it reaches MPI_Allreduce.
In my opinion
I am unable to replicate your problem, but admittedly I only have access to
gfortran on Linux. And I am definitely *not* a Fortran expert. :-\
The code seems to run fine for me -- can you send another test program that
actually tests the results of the all reduce? Fortran allocatable stuff al
Dear mpi users and developers,
I am having some trouble with MPI_Allreduce. I am using MinGW (gcc 4.6.2)
with OpenMPI 1.6.1. The C version of MPI_Allreduce works fine, but the
Fortran version fails with an error. Here is the simple Fortran code to
reproduce the error:
program ma
On Jun 27, 2012, at 6:32 PM, Martin Siegert wrote:
> However, there is another issue that may affect the performance of the 1.6.1
> version. I see a LOT of the following messages on stderr:
>
> --
> The OpenFabrics (openib) B
On Wed, Jun 27, 2012 at 02:30:11PM -0400, Jeff Squyres wrote:
> On Jun 27, 2012, at 2:25 PM, Martin Siegert wrote:
>
> >> http://www.open-mpi.org/~jsquyres/unofficial/openmpi-1.6.1ticket3131r26612M.tar.bz2
> >
> > Thanks! I tried this and, indeed, the program (I tested quantum espresso,
> > pw.x,
On Jun 27, 2012, at 2:25 PM, Martin Siegert wrote:
>> http://www.open-mpi.org/~jsquyres/unofficial/openmpi-1.6.1ticket3131r26612M.tar.bz2
>
> Thanks! I tried this and, indeed, the program (I tested quantum espresso,
> pw.x, so far) no longer hangs.
Good! We're doing a bit more definitive testin
Hi Jeff,
On Wed, Jun 20, 2012 at 04:16:12PM -0400, Jeff Squyres wrote:
> On Jun 20, 2012, at 3:36 PM, Martin Siegert wrote:
>
> > by now we know of three programs - dirac, wrf, quantum espresso - that
> > all hang with openmpi-1.4.x (have not yet checked with openmpi-1.6).
> > All of these progra
On Jun 20, 2012, at 3:36 PM, Martin Siegert wrote:
> by now we know of three programs - dirac, wrf, quantum espresso - that
> all hang with openmpi-1.4.x (have not yet checked with openmpi-1.6).
> All of these programs run to completion with the mpiexec commandline
> argument: --mca btl_openib_fla
Hi,
by now we know of three programs - dirac, wrf, quantum espresso - that
all hang with openmpi-1.4.x (have not yet checked with openmpi-1.6).
All of these programs run to completion with the mpiexec commandline
argument: --mca btl_openib_flags 305
We now set this in the global configuration file
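The usual place for such a global setting is the openmpi-mca-params.conf file
under the installation's etc/ directory (the exact path depends on the install
prefix), with one parameter per line, e.g.:

btl_openib_flags = 305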
Hello,
I think that my problem:
http://www.open-mpi.org/community/lists/users/2012/05/19182.php
is similar to yours. Following the advice in the thread that you posted:
http://www.open-mpi.org/community/lists/users/2011/07/16996.php
I have tried to run my program adding:
-mca btl_openib_flags 305
On Tue, Apr 24, 2012 at 04:19:31PM -0400, Brock Palen wrote:
> To throw in my $0.02, though it is worth less.
>
> Were you running this on verbs-based InfiniBand?
Correct.
> We see a problem that we have a workaround for, even with the newest 1.4.5,
> only on IB; we can reproduce it with IMB.
I
To throw in my $0.02, though it is worth less.
Were you running this on verbs-based InfiniBand?
We see a problem that we have a workaround for, even with the newest 1.4.5,
only on IB; we can reproduce it with IMB. You can find an old thread from me
about it. Your problem might not be the same.
Could you repeat your tests with 1.4.5 and/or 1.5.5?
On Apr 23, 2012, at 1:32 PM, Martin Siegert wrote:
> Hi,
>
> I am debugging a program that hangs in MPI_Allreduce (openmpi-1.4.3).
> An strace of one of the processes shows:
>
> Process 10925 attached with 3 threads - interrupt to quit
> [pi
Hi Martin
Not sure this solution will help with your problem,
but a workaround for situations where the count
exceeds the maximum 32-bit positive integer
is to declare a user-defined type,
say with MPI_Type_contiguous or MPI_Type_vector,
large enough to aggregate a bunch of your
original data (
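A minimal sketch of that workaround (block size and names are illustrative).
One caveat: predefined ops such as MPI_SUM are only guaranteed for predefined
datatypes, so this sketch pairs the derived type with a user-defined op:

module big_reduce
  implicit none
  integer, parameter :: blk = 1048576          ! elements per derived-type instance
  integer, parameter :: i8 = selected_int_kind(18)
contains
  ! user-defined MPI_SUM over one or more contiguous blocks of doubles
  subroutine blk_sum(invec, inoutvec, n, dtype)
    integer :: n, dtype
    double precision :: invec(*), inoutvec(*)
    integer(i8) :: i
    do i = 1, int(n, i8) * blk
      inoutvec(i) = inoutvec(i) + invec(i)
    end do
  end subroutine blk_sum
end module big_reduce

program large_count_allreduce
  use mpi
  use big_reduce
  implicit none
  integer :: ierr, blktype, blkop, nblocks
  double precision, allocatable :: buf(:)

  call MPI_Init(ierr)
  nblocks = 4                                  ! count passed to MPI stays small
  allocate(buf(int(nblocks, i8) * blk))
  buf = 1.0d0

  call MPI_Type_contiguous(blk, MPI_DOUBLE_PRECISION, blktype, ierr)
  call MPI_Type_commit(blktype, ierr)
  call MPI_Op_create(blk_sum, .true., blkop, ierr)

  call MPI_Allreduce(MPI_IN_PLACE, buf, nblocks, blktype, blkop, &
                     MPI_COMM_WORLD, ierr)

  call MPI_Op_free(blkop, ierr)
  call MPI_Type_free(blktype, ierr)
  call MPI_Finalize(ierr)
end program large_count_allreduce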
Hi,
I am debugging a program that hangs in MPI_Allreduce (openmpi-1.4.3).
An strace of one of the processes shows:
Process 10925 attached with 3 threads - interrupt to quit
[pid 10927] poll([{fd=17, events=POLLIN}, {fd=16, events=POLLIN}], 2, -1
[pid 10926] select(15, [8 14], [], NULL, NULL
[pi
Hi all,
We have a code built with OpenMPI (v1.4.3) and the Intel v12.0 compiler that
has been tested successfully on 10s - 100s of cores on our cluster. We recently
ran the same code with 1020 cores and received the following runtime error:
> [d6cneh042:28543] *** Process received signal ***
>
Running with rdmacm the problem does seem to resolve itself.
The code is large and complicated, but the problem does appear to arise
regularly when run.
Just FYI, can I collect extra information to help find a fix?
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.ed
This could be related to https://svn.open-mpi.org/trac/ompi/ticket/2714 and/or
https://svn.open-mpi.org/trac/ompi/ticket/2722.
There isn't much info in the ticket, but we've been talking about it a bunch
offline. IBM and Mellanox have had reports of the error, but haven't been able
to reproduc
I have a user whose code performs fine when run on Ethernet. When run on
verbs-based IB the code deadlocks in an MPI_Allreduce() call.
We are using openmpi/1.4.3 with the Intel compilers.
I poked at the running code with padb and I get the following:
Jeff,
it's funny because I do not see my problem with C (when using
long long) but only with Fortran and INTEGER8.
I have rewritten the testcase so that it uses MPI_REDUCE_LOCAL,
which unfortunately does not link with openmpi-1.4.3. Apparently
this is a new feature of openmpi-1.5.
Here's the mo
Try as I might, I cannot reproduce this error. :-(
I only have the intel compiler version 11.x, though -- not 12.
Can you change your test to use MPI_Reduce_local with INTEGER8 and see if the
problem still occurs? (it probably will, but it is a significantly simpler
code path to get down to t
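A minimal test along those lines (MPI_Reduce_local needs Open MPI 1.5 or
newer, as noted elsewhere in the thread; values are illustrative):

program reduce_local_int8
  use mpi
  implicit none
  integer, parameter :: i8 = selected_int_kind(18)
  integer :: ierr
  integer(i8) :: a(4), b(4)

  call MPI_Init(ierr)
  a = 1_i8
  b = 2_i8
  ! combine a into b locally: b(i) = a(i) + b(i)
  call MPI_Reduce_local(a, b, 4, MPI_INTEGER8, MPI_SUM, ierr)
  if (any(b /= 3_i8)) print *, 'MPI_SUM on MPI_INTEGER8 gave a wrong result'
  call MPI_Finalize(ierr)
end program reduce_local_int8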
Please find attached the output of:
configure
make all
make install
ompi_info -all
mpif90 -v mpiallreducetest.f90
ldd a.out
./a.out
System: OpenSuse Linux 11.1 on Core2Duo, i686
Compiler is:
Intel(R) Fortran Compiler XE for applications running on IA-32, Version
12.0.1.107 Build 20101116
(The p
I am unable to reproduce your problem with the 1.5.2rc3 tarball...?
Does your compiler support INTEGER8? Can you send the data requested here:
http://www.open-mpi.org/community/help/
On Mar 1, 2011, at 4:16 PM, Harald Anlauf wrote:
> Hi,
>
> there appears to be a regression in revision 1
Hi,
there appears to be a regression in revision 1.5.2rc3r24441.
The attached program crashes even with 1 PE with:
Default real, digits: 4 24
Real kind,digits: 8 53
Integer kind, bits: 8 64
Default Integer :
On 01/24/2011 11:28 PM, Harald Anlauf wrote:
> Hi,
>
> MPI_Allreduce works for me with MPI_INTEGER8 for all OpenMPI
> versions up to 1.4.3. However, with OpenMPI 1.5.1 I get a
> failure at runtime:
>
> [proton:23642] *** An error occurred in MPI_Allreduce: the reduction
> operation MPI_SUM is n
Hi,
MPI_Allreduce works for me with MPI_INTEGER8 for all OpenMPI
versions up to 1.4.3. However, with OpenMPI 1.5.1 I get a
failure at runtime:
[proton:23642] *** An error occurred in MPI_Allreduce: the reduction
operation MPI_SUM is not defined on the MPI_INTEGER8 datatype
[proton:23642] *** on
On Aug 10, 2010, at 3:59 PM, Gus Correa wrote:
> Thank you for opening a ticket and taking care of this.
Sorry -- I missed your inline questions when I first read this mail...
> > That being said, we didn't previously find any correctness
> > issues with using an alignment of 1.
>
> Does it aff
Hi Jeff
Thank you for opening a ticket and taking care of this.
Jeff Squyres wrote:
> On Jul 28, 2010, at 5:07 PM, Gus Correa wrote:
> > Still, the alignment under Intel may or may not be right.
> > And this may or may not explain the errors that Hugo has got.
> > FYI, the ompi_info from my OpenMPI 1.3.2
On Jul 28, 2010, at 5:07 PM, Gus Correa wrote:
> Still, the alignment under Intel may or may not be right.
> And this may or may not explain the errors that Hugo has got.
>
> FYI, the ompi_info from my OpenMPI 1.3.2 and 1.2.8
> report exactly the same as OpenMPI 1.4.2, namely
> Fort dbl prec size
On Jul 28, 2010, at 12:21 PM, Åke Sandgren wrote:
> > Jeff: Is this correct?
>
> This is wrong, it should be 8 and alignment should be 8 even for Intel.
> And I also see exactly the same thing.
Good catch!
I just fixed this in https://svn.open-mpi.org/trac/ompi/changeset/23580 -- it
looks li
I also get 8 from "call MPI_Type_size(MPI_DOUBLE_PRECISION, size,
mpierr)", but really I don't think this is the issue anymore. I mean I
checked on my school cluster where OpenMPI has also been compiled with
the intel64 compilers and "Fort dbl prec size:" also returns 4 but
unlike on my Mac the cod
Hi All
Martin Siegert wrote:
> On Wed, Jul 28, 2010 at 01:05:52PM -0700, Martin Siegert wrote:
> > On Wed, Jul 28, 2010 at 11:19:43AM -0400, Gus Correa wrote:
> > > Hugo Gagnon wrote:
> > > > Hi Gus,
> > > > Ompi_info --all lists its info regarding fortran right after C. In my
> > > > case:
> > > > Fort real size: 4
On Wed, Jul 28, 2010 at 01:05:52PM -0700, Martin Siegert wrote:
> On Wed, Jul 28, 2010 at 11:19:43AM -0400, Gus Correa wrote:
> > Hugo Gagnon wrote:
> >> Hi Gus,
> >> Ompi_info --all lists its info regarding fortran right after C. In my
> >> case:
> >> Fort real size: 4
> >> Fort
On Wed, Jul 28, 2010 at 11:19:43AM -0400, Gus Correa wrote:
> Hugo Gagnon wrote:
>> Hi Gus,
>> Ompi_info --all lists its info regarding fortran right after C. In my
>> case:
>> Fort real size: 4
>> Fort real4 size: 4
>> Fort real8 size: 8
>> Fort real16 size: 16
On Wed, 2010-07-28 at 11:48 -0400, Gus Correa wrote:
> Hi Hugo, Jeff, list
>
> Hugo: I think David Zhang's suggestion was to use
> MPI_REAL8 not MPI_REAL, instead of MPI_DOUBLE_PRECISION in your
> MPI_Allreduce call.
>
> Still, to me it looks like OpenMPI is making double precision 4-byte
> long
Here they are.
--
Hugo Gagnon
On Wed, 28 Jul 2010 12:01 -0400, "Jeff Squyres"
wrote:
> On Jul 28, 2010, at 11:55 AM, Gus Correa wrote:
>
> > I surely can send you the logs, but they're big.
> > Off the list perhaps?
>
> If they're still big when compressed, sure, send them to me off list.
>
On Jul 28, 2010, at 11:55 AM, Gus Correa wrote:
> I surely can send you the logs, but they're big.
> Off the list perhaps?
If they're still big when compressed, sure, send them to me off list.
But I think I'd be more interested to see Hugo's logs. :-)
--
Jeff Squyres
jsquy...@cisco.com
For co
Hi Jeff
I surely can send you the logs, but they're big.
Off the list perhaps?
Thanks,
Gus
Jeff Squyres wrote:
> On Jul 28, 2010, at 11:19 AM, Gus Correa wrote:
> > > Ompi_info --all lists its info regarding fortran right after C. In my
> Ummm right... I should know that. I wrote ompi_info, aft
Hi Hugo, Jeff, list
Hugo: I think David Zhang's suggestion was to use
MPI_REAL8 not MPI_REAL, instead of MPI_DOUBLE_PRECISION in your
MPI_Allreduce call.
Still, to me it looks like OpenMPI is making double precision 4 bytes
long, which is shorter than I expected it to be (8 bytes),
at least when look
On Jul 28, 2010, at 11:19 AM, Gus Correa wrote:
> > Ompi_info --all lists its info regarding fortran right after C. In my
Ummm right... I should know that. I wrote ompi_info, after all. :-) I
ran "ompi_info -all | grep -i fortran" and didn't see the fortran info, and I
forgot that I put
Hugo Gagnon wrote:
> Hi Gus,
> Ompi_info --all lists its info regarding fortran right after C. In my
> case:
> Fort real size: 4
> Fort real4 size: 4
> Fort real8 size: 8
> Fort real16 size: 16
> Fort dbl prec size: 4
> Does it make any sense to you?
Hi Hugo
No, dbl pre
I meant to write:
call mpi_allreduce(inside, outside, 5, mpi_real, mpi_double_precision,
mpi_comm_world, ierr)
--
Hugo Gagnon
On Wed, 28 Jul 2010 09:33 -0400, "Hugo Gagnon"
wrote:
> And how do I know how big my data buffer is? I ran MPI_TYPE_EXTENT of
I installed with:
./configure --prefix=/opt/openmpi CC=icc CXX=icpc F77=ifort FC=ifort
make all install
I would gladly give you a corefile but I have no idea how to produce one,
I'm just an end user...
--
Hugo Gagnon
On Wed, 28 Jul 2010 08:57 -0400, "Jeff Squyres"
wrote:
> I don't have the i
And how do I know how big my data buffer is? I ran MPI_TYPE_EXTENT of
MPI_DOUBLE_PRECISION and the result was 8. So I changed my program to:
1 program test
2
3 use mpi
4
5 implicit none
6
7
I don't have the intel compilers on my Mac, but I'm unable to replicate this
issue on Linux with the intel compilers v11.0.
Can you get a corefile to see a backtrace where it died in Open MPI's allreduce?
How exactly did you configure your Open MPI, and how exactly did you compile /
run your sa
On Jul 27, 2010, at 4:19 PM, Gus Correa wrote:
> Is there a simple way to check the number of bytes associated to each
> MPI basic type of OpenMPI on a specific machine (or machine+compiler)?
>
> Something that would come out easily, say, from ompi_info?
Not via ompi_info, but the MPI function M
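One way to check it at run time is MPI_Type_size, which reports the number of
bytes of data in a datatype; a small sketch:

program check_type_sizes
  use mpi
  implicit none
  integer :: ierr, sz

  call MPI_Init(ierr)
  call MPI_Type_size(MPI_DOUBLE_PRECISION, sz, ierr)
  print *, 'MPI_DOUBLE_PRECISION: ', sz, ' bytes'
  call MPI_Type_size(MPI_INTEGER8, sz, ierr)
  print *, 'MPI_INTEGER8:         ', sz, ' bytes'
  call MPI_Finalize(ierr)
end program check_type_sizes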
On Jul 27, 2010, at 11:21 AM, Hugo Gagnon wrote:
> I appreciate your replies but my question has to do with the function
> MPI_Allreduce of OpenMPI built on a Mac OSX 10.6 with ifort (intel
> fortran compiler).
The implication I was going for was that if you were using MPI_DOUBLE_PRECISION
with
On Tue, 2010-07-27 at 16:19 -0400, Gus Correa wrote:
> Hi Hugo, David, Jeff, Terry, Anton, list
>
> I suppose maybe we're guessing that somehow on Hugo's iMac
> MPI_DOUBLE_PRECISION may not have as many bytes as dp = kind(1.d0),
> hence the segmentation fault on MPI_Allreduce.
>
> Question:
>
>
I did and it runs now, but the result is wrong: outside is still 1.d0,
2.d0, 3.d0, 4.d0, 5.d0
How can I make sure to compile OpenMPI so that datatypes such as
mpi_double_precision behave as they "should"?
Are there flags during the OpenMPI building process or something?
Thanks,
--
Hugo Gagnon
Hi Hugo, David, Jeff, Terry, Anton, list
I suppose we're guessing that somehow, on Hugo's iMac,
MPI_DOUBLE_PRECISION may not have as many bytes as dp = kind(1.d0),
hence the segmentation fault in MPI_Allreduce.
Question:
Is there a simple way to check the number of bytes associated to eac
Try mpi_real8 for the type in allreduce
On 7/26/10, Hugo Gagnon wrote:
> Hello,
>
> When I compile and run this code snippet:
>
> 1 program test
> 2
> 3 use mpi
> 4
> 5 implicit none
> 6
> 7 integer :: ierr, nproc, myrank
> 8 integer, parameter :: d
I appreciate your replies but my question has to do with the function
MPI_Allreduce of OpenMPI built on a Mac OSX 10.6 with ifort (intel
fortran compiler).
--
Hugo Gagnon
On Tue, 27 Jul 2010 13:23 +0100, "Anton Shterenlikht"
wrote:
> On Tue, Jul 27, 2010 at 08:11:39AM -0400, Jeff Squyres wrot
On Tue, Jul 27, 2010 at 08:11:39AM -0400, Jeff Squyres wrote:
> On Jul 26, 2010, at 11:06 PM, Hugo Gagnon wrote:
>
> > 8 integer, parameter :: dp = kind(1.d0)
> > 9 real(kind=dp) :: inside(5), outside(5)
>
> I'm not a fortran expert -- is kind(1.d0) really double precision? A
On Tue, 2010-07-27 at 08:11 -0400, Jeff Squyres wrote:
> On Jul 26, 2010, at 11:06 PM, Hugo Gagnon wrote:
>
> > 8 integer, parameter :: dp = kind(1.d0)
> > 9 real(kind=dp) :: inside(5), outside(5)
>
> I'm not a fortran expert -- is kind(1.d0) really double precision? Accordin
On Jul 26, 2010, at 11:06 PM, Hugo Gagnon wrote:
> 8 integer, parameter :: dp = kind(1.d0)
> 9 real(kind=dp) :: inside(5), outside(5)
I'm not a fortran expert -- is kind(1.d0) really double precision? According
to http://gcc.gnu.org/onlinedocs/gcc-3.4.6/g77/Kind-Notation.htm
Hello,
When I compile and run this code snippet:
1 program test
2
3 use mpi
4
5 implicit none
6
7 integer :: ierr, nproc, myrank
8 integer, parameter :: dp = kind(1.d0)
9 real(kind=dp) :: inside(5), outside(5)
10
11 call mpi_
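For reference, a complete version of this reproducer, reconstructed from the
snippets elsewhere in the thread (the initialization of inside and the final
print are assumptions):

program test
  use mpi
  implicit none
  integer :: ierr, nproc, myrank, i
  integer, parameter :: dp = kind(1.d0)
  real(kind=dp) :: inside(5), outside(5)

  call mpi_init(ierr)
  call mpi_comm_size(mpi_comm_world, nproc, ierr)
  call mpi_comm_rank(mpi_comm_world, myrank, ierr)
  inside = (/ (real(i, dp), i = 1, 5) /)       ! 1.d0, 2.d0, 3.d0, 4.d0, 5.d0
  call mpi_allreduce(inside, outside, 5, mpi_double_precision, mpi_sum, &
                     mpi_comm_world, ierr)
  if (myrank == 0) print *, outside
  call mpi_finalize(ierr)
end program test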
This is good information for me!
For my users, though, it's over the top. I was looking for simple
reasons why, and I think I have that now. Thanks!
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Mar 13, 2008, at 11:16 AM, George Bosilca wrote:
Collective co
Collective communication was a hot topic for the last [at least one]
decade, and it still is today. Just minimizing the number of messages
is not generally the best approach. The collectives are sensitive to
the network characteristics and to the amount of data in a very
complex way. The be
Brock Palen wrote:
> Yeah, I know what you mean: if you have to do an 'all to all',
> use MPI_Alltoall() and don't roll your own.
> So on paper, alltoall at first glance appears to be n*(n-1) -> n^2 - n
> -> n^2 messages (for large n).
> Allreduce appears to be simpler: n point-to-points followed by a
On Wed, 2008-03-12 at 18:05 -0400, Aurélien Bouteiller wrote:
> If you can avoid them it is better to avoid them. However it is always
> better to use an MPI_Alltoall than to code your own all-to-all with
> point-to-point, and in some algorithms you *need* to do an all-to-all
> communication.
Yeah, I know what you mean: if you have to do an 'all to all',
use MPI_Alltoall() and don't roll your own.
So on paper, alltoall at first glance appears to be n*(n-1) -> n^2 - n
-> n^2 messages (for large n).
Allreduce appears to be simpler: n point-to-points followed by a
bcast(). Which can
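To put rough numbers on that: with n = 1024 processes, a naive all-to-all is
n*(n-1) = 1024*1023, roughly 1.05 million point-to-point messages, whereas a
naive allreduce built as a linear reduce to one rank plus a linear bcast is
2*(n-1) = 2046 messages; tree-based allreduce algorithms additionally cut the
critical path down to O(log n) steps.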
If you can avoid them it is better to avoid them. However it is always
better to use an MPI_Alltoall than to code your own all-to-all with
point-to-point, and in some algorithms you *need* to do an all-to-all
communication. What you should understand by "avoid all to all" is not
avoid MPI_al
I have always been told that calls like MPI_Barrier(), MPI_Allreduce(),
and MPI_Alltoall() should be avoided.
I understand MPI_Alltoall(), as it requires n*(n-1) sends and thus grows
very quickly. MPI_Barrier() is very latency sensitive and
generally is not needed in most cases I have seen
The primary person you need to talk to is turning in her dissertation
within the next few days. So I think she's kinda busy at the
moment... :-)
Sorry for the delay -- I'll take a shot at answers below...
On Aug 14, 2007, at 4:39 PM, smai...@ksu.edu wrote:
Can anyone help on this?
-Tha
Thanks, I understand what you are saying. But my query is regarding the
design of MPI_AllReduce for shared-memory systems. I mean is there any
different logic/design of MPI_AllReduce when OpenMPI is used on
shared-memory systems?
The standard MPI_AllReduce says,
1. Each MPI process sends its value
Can anyone help on this?
-Thanks,
Sarang.
Quoting smai...@ksu.edu:
> Hi,
> I am doing research on parallel techniques for shared-memory
> systems (NUMA). I understand that OpenMPI is intelligent enough to utilize
> shared-memory systems and that it uses processor affinity. Is the OpenMPI
> design of MPI_All
Hi,
I am doing research on parallel techniques for shared-memory
systems (NUMA). I understand that OpenMPI is intelligent enough to utilize
shared-memory systems and that it uses processor affinity. Is the OpenMPI
design of MPI_AllReduce the "same" for shared-memory (NUMA) as well as
distributed systems? Can someon
You are absolutely correct, sir! Thanks for noticing -- we'll get
that fixed up.
On Jan 15, 2007, at 1:44 PM, Bert Wesarg wrote:
> Hello,
> I think the last sentence for the use of MPI_IN_PLACE in the new manual
> page is wrong:
> > Use the variable MPI_IN_PLACE as the value of both sendbuf and
Hello,
I think the last sentence for the use of MPI_IN_PLACE in the new manual
page is wrong:
> Use the variable MPI_IN_PLACE as the value of both sendbuf and recvbuf.
This is the end of line 110 in file
https://svn.open-mpi.org/trac/ompi/browser/trunk/ompi/mpi/man/man3/MPI_Allreduce.3
Beside t
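For the record, the corrected usage is to pass MPI_IN_PLACE only as sendbuf,
with recvbuf acting as both input and output; a minimal sketch:

program in_place_example
  use mpi
  implicit none
  integer :: ierr, rank
  double precision :: buf(5)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  buf = dble(rank)
  ! MPI_IN_PLACE goes in the sendbuf argument only; buf is both input and output
  call MPI_Allreduce(MPI_IN_PLACE, buf, 5, MPI_DOUBLE_PRECISION, MPI_SUM, &
                     MPI_COMM_WORLD, ierr)
  call MPI_Finalize(ierr)
end program in_place_example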
apologies.
-john.
>
> > -Original Message-
> > From: users-boun...@open-mpi.org
> > [mailto:users-boun...@open-mpi.org] On Behalf Of john casu
> > Sent: Thursday, April 06, 2006 6:07 PM
> > To: Open MPI Users
> > Subject: Re: [OMPI users] MPI_Allr
small program?
> -Original Message-
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of john casu
> Sent: Thursday, April 06, 2006 6:07 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_Allreduce error in 1.0.1 and 1.0.2rc1
>
On Thu, 2006-04-06 at 15:48 -0400, George Bosilca wrote:
> The error states that you're trying to use an invalid operation. MPI
> defines which operations can be applied to which predefined datatypes.
> Do you know which operation is used there? And which predefined
> datatype?
>
please forg
The error states that you're trying to use an invalid operation. MPI
defines which operations can be applied to which predefined datatypes.
Do you know which operation is used there? And which predefined
datatype?
Thanks,
george.
On Apr 6, 2006, at 2:41 PM, john casu wrote:
I'm tryi
I'm trying to work with the sppm code from LLNL:
http://www.llnl.gov/asci_benchmarks/asci/limited/ppm/asci_sppm.html
I built openmpi and sppm on an 8-way shared-memory Linux box.
The error I get is:
[ty20:07732] *** An error occurred in MPI_Allreduce
[ty20:07732] *** on communicator MPI_COMM_WOR