When MPI_Bcast and MPI_Reduce are called for the first time, they
are very slow. After that, they run at a normal and stable speed.
Has anybody out there encountered such a problem? If you
need any other information, please let me know and I'll provide it.
Thanks in advance.
We installed a Linux cluster recently. The OS is Ubuntu 8.04 and the
network is InfiniBand. We run a simple MPI program to compute the
value of pi. The program has three stages: MPI_Bcast, computation,
and MPI_Reduce. We measure the elapsed time of the computation and
communication, respectively.
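(Not part of the original report, just a sketch of how one could check whether the
slowness is only one-time setup cost: issue a throwaway collective before starting
the timers, then time the second call. The file name and payload are made up.)

// bcast_warmup.cpp -- sketch: compare first-call vs. warmed-up MPI_Bcast timing
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double x = 3.14159;

    double t0 = MPI_Wtime();
    MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);   // first call: pays any connection/setup cost
    double t1 = MPI_Wtime();
    MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);   // warmed-up call
    double t2 = MPI_Wtime();

    if (rank == 0)
        printf("first MPI_Bcast: %g s, second: %g s\n", t1 - t0, t2 - t1);

    MPI_Finalize();
    return 0;
}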
Why not do something like this:
double **A = new double*[N];
double *A_data = new double[N*N];
for (int i = 0; i < N; i++)
    A[i] = A_data + i*N;
Natarajan CS wrote:
> Hi
> thanks for the quick response. Yes, that is what I meant. I thought
> there was no other way around what I am doing, but it is always good to ask an
> expert rather than assume!
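(Added for illustration, not the poster's code: once the N*N doubles live in one
contiguous block, the whole matrix can be sent in a single call instead of row by
row. N, the ranks, and the tag below are made up.)

// contiguous_send.cpp -- sketch: ship a contiguously allocated 2-D matrix in one MPI_Send
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 4;
    double **A = new double*[N];
    double *A_data = new double[N*N];          // one contiguous block
    for (int i = 0; i < N; i++)
        A[i] = A_data + i*N;                   // row pointers into the block

    if (rank == 0) {
        for (int i = 0; i < N*N; i++) A_data[i] = i;
        MPI_Send(A[0], N*N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   // one message for the whole matrix
    } else if (rank == 1) {
        MPI_Recv(A[0], N*N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received A[%d][%d] = %g\n", N-1, N-1, A[N-1][N-1]);
    }

    delete [] A_data;
    delete [] A;
    MPI_Finalize();
    return 0;
}

(Run with at least two ranks, e.g. mpirun -np 2 ./contiguous_send.)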
Hi Dave
I believe you can turn it "off" by setting
-mca coll ^tuned
This will tell the system to consider all collective modules -except- for
tuned.
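(An illustrative command line, not from the original mail; the executable name and
process count are placeholders:)

mpirun -np 16 -mca coll ^tuned ./a.out

In some shells the caret may need quoting, e.g. -mca coll '^tuned'.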
HTH
Ralph
On Thu, Oct 29, 2009 at 12:13 PM, David Gunter wrote:
> We have a user who's hitting a hang in MPI_Allgather that TotalView shows is
> in a tuned collective operation.
Hi
thanks for the quick response. Yes, that is what I meant. I thought there
was no other way around what I am doing, but it is always good to ask an
expert rather than assume!
Cheers,
C.S.N
On Thu, Oct 29, 2009 at 11:25 AM, Eugene Loh wrote:
> Natarajan CS wrote:
>
> Hello all,
>> Firstly, my apologies for a duplicate post in the LAM/MPI list.
We have a user who's hitting a hang in MPI_Allgather that TotalView shows is
in a tuned collective operation.
How do we disable the use of tuned collectives? We have tried setting
the priority to 0 but maybe that wasn't the correct way:
mpirun -mca coll_tuned_priority 0 ...
Should it
Natarajan CS wrote:
Hello all,
Firstly, my apologies for a duplicate post in the LAM/MPI list. I
have the following simple MPI code. I was wondering if there was a
workaround for sending a dynamically allocated 2-D matrix? Currently I
can send the matrix row by row; however, since the rows are
This also appears to fix a bug I had reported that did not involve
collective calls.
The code is appended. When run on a 64-bit architecture with:
iter.cary$ gcc --version
gcc (GCC) 4.4.0 20090506 (Red Hat 4.4.0-4)
Copyright (C) 2009 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
> >>> It seems that the calls to collective communication are not
> >>> returning for some MPI processes when the number of processes is
> >>> greater than or equal to 5. It's reproducible on two different
> >>> architectures, with two different versions of OpenMPI (1.3.2 and
> >>> 1.3.3). It was working correctly
On 2009-10-29, at 10:21AM, Vincent Loechner wrote:
It seems that the calls to collective communication are not
returning for some MPI processes when the number of processes is
greater than or equal to 5. It's reproducible on two different
architectures, with two different versions of OpenMPI (1.3.2 and 1.3.3).
> > It seems that the calls to collective communication are not
> > returning for some MPI processes when the number of processes is
> > greater than or equal to 5. It's reproducible on two different
> > architectures, with two different versions of OpenMPI (1.3.2 and
> > 1.3.3). It was working correctly
On 2009-10-29, at 9:57AM, Vincent Loechner wrote:
[...]
It seems that the calls to collective communication are not
returning for some MPI processes when the number of processes is
greater than or equal to 5. It's reproducible on two different
architectures, with two different versions of OpenMPI (1.3.2 and 1.3.3).
Hello to the list,
I ran into a problem running a simple program with collective
communications on a 6-core processor (6 local MPI processes).
It seems that the calls to collective communication are not
returning for some MPI processes when the number of processes is
greater than or equal to 5. It's reproducible on two different
architectures, with two different versions of OpenMPI (1.3.2 and 1.3.3).
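(Vincent's test program is not shown in these excerpts; the following is only a
sketch of the kind of minimal reproducer being described, added here for
illustration.)

// collective_test.cpp -- sketch: a simple collective that should return on every rank
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sum = 0;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);  // a hang would show up here

    printf("rank %d of %d: sum of ranks = %d\n", rank, size, sum);
    MPI_Finalize();
    return 0;
}

(Run with mpirun -np 5 ./collective_test or more processes to match the reported threshold.)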
Please see my earlier response. This proposed solution will work, but may be
unstable as it (a) removes all of OMPI's internal variables, some of which
are required; and (b) also removes all the variables that might be needed by
your system. For example, envars directing the use of specific transports.
Could your problem be related to the MCA parameter "contamination" problem,
where the child MPI process inherits MCA environment variables from the parent
process? Perhaps that problem still exists.
Back in 2007 I was implementing a program that solves two large interrelated
systems of equations (more than 200,000,000 equations).
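(Neither poster shows code for this; the sketch below is only meant to make the
"selective" alternative concrete: instead of wiping the child's whole environment,
drop just the OMPI_MCA_* variables suspected of causing the contamination. The
parameter and program names are placeholders.)

// spawn_clean.cpp -- sketch: unset one suspect MCA envar, keep the rest of the environment
#include <stdlib.h>     // unsetenv
#include <unistd.h>     // execlp

int main() {
    // Open MPI passes MCA parameters down via OMPI_MCA_<param> environment variables;
    // remove only the one(s) believed to interfere with the child process.
    unsetenv("OMPI_MCA_coll");                       // placeholder parameter name

    // launch the child with the otherwise untouched environment
    execlp("./child_program", "./child_program", (char *)NULL);
    return 1;                                        // reached only if exec failed
}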