Hello,
I'm studying the parallelized version of a 2D heat equation solver in
order to understand Cartesian topology and the famous "MPI_CART_SHIFT".
Here's my problem at this part of the code:
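For context, a minimal self-contained sketch of a 2D Cartesian topology with
a neighbor lookup via Cart_shift, using the C++ bindings that appear elsewhere
in this thread (hypothetical names, not the poster's code):

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI::Init(argc, argv);
    int dims[2] = {0, 0};                 // 0 = let MPI factor the processes into a grid
    bool periods[2] = {false, false};     // non-periodic in both dimensions
    MPI::Compute_dims(MPI::COMM_WORLD.Get_size(), 2, dims);
    MPI::Cartcomm cart = MPI::COMM_WORLD.Create_cart(2, dims, periods, true);
    int up, down;
    cart.Shift(0, 1, up, down);           // neighbors one step along dimension 0;
                                          // MPI::PROC_NULL at the grid edges
    std::cout << "rank " << cart.Get_rank()
              << ": up=" << up << " down=" << down << std::endl;
    MPI::Finalize();
    return 0;
}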
I got it. You're right, it might not be related to MPI. I need to figure out
the possible reason for it.
Again, thanks for your help.
Linbao
On Thu, Oct 21, 2010 at 12:06 PM, Eugene Loh wrote:
> My main point was that, while what Jeff said about the shortcomings of
> calling timers after Barriers was true, I wanted to come to the defense of
> this timing strategy.
My main point was that, while what Jeff said about the shortcomings of
calling timers after Barriers was true, I wanted to come to the defense of
this timing strategy. Otherwise, I was just agreeing with him that it
seems implausible that commenting out B should influence the timing of
A, but I'm
Hi, Eugene,
You said:
" The bottom line here is that from a causal point of view it would seem
that B should not impact the timings. Presumably, some other variable is
actually responsible here."
Could you explain the second sentence in more detail? Thanks a lot.
Linbao
On Thu, Oct 21,
Thanks a lot.
On Thu, Oct 21, 2010 at 9:21 AM, Jeff Squyres wrote:
> Ah. The original code snippet you sent was:
>
> MPI::COMM_WORLD.Barrier();
> if(rank == master) t1 = clock();
> "code A";
> MPI::COMM_WORLD.Barrier();
> if(rank == master) t2 = clock();
> "code B";
>
> Remember that the time that individual processes exit the barrier is not
> guaranteed to be uniform
Jeff Squyres wrote:
Ah. The original code snippet you sent was:
MPI::COMM_WORLD.Barrier();
if(rank == master) t1 = clock();
"code A";
MPI::COMM_WORLD.Barrier();
if(rank == master) t2 = clock();
"code B";
Remember that the time that individual processes exit the barrier is not guaranteed
to be uniform
When you do a make, can you add V=1 to have the actual compile lines
printed out? That will probably show you the line with
-fno-directives-only in it, which is odd because I think that option is
a gcc-ism and I don't know why it would show up in a Studio build (note my
build doesn't show it).
On 10/20/2010 8:30 PM, Scott Atchley wrote:
We have fixed this bug in the most recent 1.4.x and 1.5.x releases.
Scott
OK, a few more tests. I was using PGI 10.4 as the compiler.
I have now tried Open MPI 1.4.3 with PGI 10.8 and Intel 11.1. I get the
same results in each case: mpirun segfaults
On 10/21/2010 10:18 AM, Jeff Squyres wrote:
Terry --
Can you file relevant ticket(s) for v1.5 on Trac?
Once I have more information and have proven it isn't due to us using
old compilers or a compiler error itself.
--td
On Oct 21, 2010, at 10:10 AM, Terry Dontje wrote:
I've reproduced Siegmar's issue
Ah. The original code snippet you sent was:
MPI::COMM_WORLD.Barrier();
if(rank == master) t1 = clock();
"code A";
MPI::COMM_WORLD.Barrier();
if(rank == master) t2 = clock();
"code B";
Remember that the time that individual processes exit the barrier is not guaranteed
to be uniform (indeed, it rarely is)
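One common way around the non-uniform exit times is to have every rank time
code A itself and let the master take the maximum. A sketch, not code from
this thread; it assumes MPI is initialized and rank/master are set up as in
the snippet above (plus <iostream> for the print):

double t0 = MPI::Wtime();
"code A";
double elapsed = MPI::Wtime() - t0;   // per-rank wall-clock time for code A
double slowest;                       // the slowest rank bounds the true cost
MPI::COMM_WORLD.Reduce(&elapsed, &slowest, 1, MPI::DOUBLE, MPI::MAX, master);
if (rank == master) std::cout << "code A took " << slowest << " s" << std::endl;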
> I wonder if the error below could be due to crap being left over in the
> source tree. Can you do a "make clean"? Note on a new checkout from
> the v1.5 svn branch I was able to build 64-bit with the following
> configure line:
linpc4 openmpi-1.5-Linux.x86_64.32_cc 123 make clean
Making clean i
Terry --
Can you file relevant ticket(s) for v1.5 on Trac?
On Oct 21, 2010, at 10:10 AM, Terry Dontje wrote:
> I've reproduced Siegmar's issue when I have the threads options on but it
> does not show up when they are off. It is actually segv'ing in
> mca_btl_sm_component_close on an access at address 0
I've reproduced Siegmar's issue when I have the threads options on but
it does not show up when they are off. It is actually segv'ing in
mca_btl_sm_component_close on an access at address 0 (obviously not a
good thing). I am going to compile things with debug on and see if I can
track this further.
Thanks a lot for your reply. By commenting out code B, I mean that if I remove
the code B part, code A seems to run faster. I do have a lot of
communication in code B too; it involves 500 procs. I had thought
code B should have no effect on the time spent on code A if I use
MPI_Barrier.
Thanks for your suggestion. I am trying MPI_Wtime to see if there is any
difference.
Linbao
On Thu, Oct 21, 2010 at 1:37 AM, jody wrote:
> Hi
>
> I don't know the reason for the strange behaviour, but anyway,
> to measure time in an MPI application you should use MPI_Wtime(), not
> clock()
>
>
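For reference, the snippet from the thread adapted to MPI_Wtime() as jody
suggests. A sketch only: t1 and t2 become doubles, and MPI_Wtime() returns
wall-clock seconds, whereas clock() counts per-process CPU time:

MPI::COMM_WORLD.Barrier();
if (rank == master) t1 = MPI::Wtime();   // t1, t2 declared as double
"code A";
MPI::COMM_WORLD.Barrier();
if (rank == master) t2 = MPI::Wtime();   // t2 - t1 is wall-clock seconds for code A
"code B";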
On 10/21/2010 06:43 AM, Jeff Squyres (jsquyres) wrote:
Also, I'm not entirely sure what all of the commands you are showing are.
Some of those warnings (e.g., in config.log) are normal.
The 32-bit test failure is not, though. Terry - any idea there?
The test program is failing in MPI_Finalize w
Also, I'm not entirely sure what all of the commands you are showing are.
Some of those warnings (e.g., in config.log) are normal.
The 32-bit test failure is not, though. Terry - any idea there?
Sent from my PDA. No type good.
On Oct 21, 2010, at 6:25 AM, "Terry Dontje" wrote:
> I wonder if
I wonder if the error below could be due to crap being left over in the
source tree. Can you do a "make clean"? Note on a new checkout from
the v1.5 svn branch I was able to build 64-bit with the following
configure line:
../configure FC=f95 F77=f77 CC=cc CXX=CC --without-openib
--without-udapl
On Oct 20, 2010, at 5:51 PM, Storm Zhang wrote:
> I need to measure t2-t1 to see the time spent on code A between these two
> MPI_Barriers. I notice that if I comment out code B, the time seems much less
> than the original time (almost half). How does it happen? What is a possible
> reason for it?
Hi,
I have built Open MPI 1.5 on SunOS Sparc with the Oracle/Sun Studio C
compiler and gcc-4.2.0 in 32- and 64-bit mode. A small test program
works, but I got some warnings and errors building and checking the
installation as you can see below. Perhaps somebody knows how to fix
these things and ha
Hi,
thank you very much for your reply.
> Can you remove the -with-threads and -enable-mpi-threads options from
> the configure line and see if that helps your 32 bit problem any?
I cannot build the package when I remove these options.
linpc4 openmpi-1.5-Linux.x86_64.32_cc 189 head -8 config
Hi
I don't know the reason for the strange behaviour, but anyway,
to measure time in an MPI application you should use MPI_Wtime(), not clock()
regards
jody
On Wed, Oct 20, 2010 at 11:51 PM, Storm Zhang wrote:
> Dear all,
>
> I got confused by my recent C++ MPI program's behavior. I have an