Re: [OMPI users] How to justify the use MPI codes on multicore systems/PCs?

2011-12-12 Thread amjad ali
Thank you all very much for the replies. I would request some reference for what Tim Prince and Andreas have said. Tim said that Open MPI has had effective shared-memory message passing. Does that have anything to do with the --enable-mpi-threads switch when installing Open MPI? regards, AA

[OMPI users] How to justify the use MPI codes on multicore systems/PCs?

2011-12-10 Thread amjad ali
justifying this and comment on/modify the above two justifications. It would be better if you could suggest some reference to a suitable publication that I can quote in this regard. best regards, Amjad Ali

[OMPI users] difference between MTL and BTL

2011-06-04 Thread amjad ali
which one to use MTL or BTL? Thank you. Regards, Amjad Ali

Re: [OMPI users] Check whether non-blocking communication has finished?

2011-02-02 Thread amjad ali
Perhaps it is often more useful to use MPI_WAIT rather than MPI_TEST-type functions, because at the MPI_WAIT call the completion of the communication is taken care of automatically, which may be necessary before going ahead. With MPI_TEST it becomes the responsibility of the programmer to handle th
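
A minimal C sketch (not from the thread itself; buffer size, tag and function names are illustrative) contrasting the two ways of completing a nonblocking receive:

    #include <mpi.h>

    void receive_with_wait(double *buf, int count, int source, MPI_Comm comm)
    {
        MPI_Request req;
        MPI_Irecv(buf, count, MPI_DOUBLE, source, 0, comm, &req);
        /* MPI_Wait blocks until the message is in buf, so completion is
           handled automatically before we go ahead. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        /* buf is now safe to use */
    }

    void receive_with_test(double *buf, int count, int source, MPI_Comm comm)
    {
        MPI_Request req;
        int flag = 0;
        MPI_Irecv(buf, count, MPI_DOUBLE, source, 0, comm, &req);
        /* With MPI_Test, checking for completion is the programmer's job:
           buf must not be used until flag becomes true. */
        while (!flag) {
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
            /* ... do some work that does not need buf ... */
        }
        /* buf is now safe to use */
    }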

Re: [OMPI users] Help on the big picture..

2010-07-22 Thread amjad ali
y model locally at each node, avoiding unnecessary I/O through the network, what do you think?

On Thu, Jul 22, 2010 at 5:27 PM, amjad ali wrote:
> > Hi Cristobal,
> > Note that the pic in http://dl.dropbox.com/u/6380744/clusterLibs.png shows t

Re: [OMPI users] Help on the big picture..

2010-07-22 Thread amjad ali
Hi Cristobal, Note that the pic in http://dl.dropbox.com/u/6380744/clusterLibs.png only shows what ScaLAPACK is based on, i.e., which packages ScaLAPACK uses; hence no OpenMP is there. Also be clear about the difference: "OpenMP" is for shared-memory parallel programming, while "OpenMPI"

Re: [OMPI users] Open MPI, Segmentation fault

2010-07-01 Thread amjad ali
from the start of your program, after a certain activity, say after every 10 lines, use a print statement followed by STOP/EXIT, also printing the processor rank. If you get output from all the processors then it is fine. Move this printing a little further ahead and print again. Repeat this process until you reach the place of the fault.
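
A hypothetical C helper (the function name is my own) sketching this bisection-style debugging: each rank prints that it reached a chosen point and then the program stops, so you can see which ranks got there; move the call forward through the code until some rank stops printing.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void checkpoint_and_stop(const char *label)
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d reached checkpoint: %s\n", rank, label);
        fflush(stdout);                 /* make sure the line actually appears  */
        MPI_Barrier(MPI_COMM_WORLD);    /* ranks that crashed before this point
                                           never get here, so their line is
                                           missing from the output              */
        MPI_Finalize();
        exit(0);
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        /* ... first part of the program's work ... */
        checkpoint_and_stop("after setup");   /* move this call forward until
                                                 some rank stops printing       */
        return 0;   /* not reached */
    }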

Re: [OMPI users] Open MPI, Segmentation fault

2010-06-30 Thread amjad ali
Based on my experience, I would FULLY endorse (100% agree with) David Zhang. It is usually a coding or typo mistake. First, ensure that array sizes and dimensions are correct. In my experience, if Open MPI is compiled with GNU compilers (not with Intel), then it also points out the subroutine ex

Re: [OMPI users] MPI Persistent Communication Question

2010-06-30 Thread amjad ali
Dear E. Loh, Thank you very much for your help. Actually I was doing the same, following your earlier suggestions, and it is now in the program; but the error was still there. At last I found the blunder I had made myself. It was in fact a typo in a variable name. I will let you know about the performan

Re: [OMPI users] MPI Persistent Communication Question

2010-06-30 Thread amjad ali
and it's conceivable that you might have better performance with
>
> CALL MPI_ISEND()
> DO I = 1, N
>    call do_a_little_of_my_work()  ! no MPI progress is being made here
>    CALL MPI_TEST()                ! enough MPI progress is being made here
>                                   ! that the receiver has something t
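
A minimal C sketch of the same pattern (the helper name and chunk count come from the quoted pseudocode; buffer size and tag are illustrative assumptions): post a nonblocking send, then interleave small chunks of computation with MPI_Test calls so the MPI library gets a chance to progress the transfer.

    #include <mpi.h>

    #define N 1000

    void do_a_little_of_my_work(int i) { (void)i; /* placeholder for real work */ }

    void overlapped_send(double *sendbuf, int count, int dest, MPI_Comm comm)
    {
        MPI_Request req;
        int flag = 0;

        MPI_Isend(sendbuf, count, MPI_DOUBLE, dest, /*tag=*/0, comm, &req);

        for (int i = 0; i < N; ++i) {
            do_a_little_of_my_work(i);                 /* no MPI progress here     */
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);  /* lets MPI progress the    */
                                                       /* transfer between chunks  */
        }
        MPI_Wait(&req, MPI_STATUS_IGNORE);             /* ensure it has completed  */
    }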

Re: [OMPI users] openMPI asychronous communication

2010-06-28 Thread amjad ali
I guess that if the receiver wants to ensure that the sender sends data only when the receiver is able/free to receive it, then use MPI barriers.

On Mon, Jun 28, 2010 at 12:53 PM, David Zhang wrote:
> Use MPI_Iprobe. It's a nonblocking probe that allows you to see if a message i
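
A minimal sketch of the MPI_Iprobe approach David Zhang describes (tag and element type are illustrative assumptions): the receiver checks whether a message is pending without blocking, and posts the receive only when one is actually there.

    #include <mpi.h>

    void poll_and_receive(double *recvbuf, int maxcount, int source, MPI_Comm comm)
    {
        int flag = 0;
        MPI_Status status;

        MPI_Iprobe(source, /*tag=*/0, comm, &flag, &status);
        if (flag) {
            int count;
            MPI_Get_count(&status, MPI_DOUBLE, &count);   /* actual message size */
            if (count <= maxcount)
                MPI_Recv(recvbuf, count, MPI_DOUBLE, status.MPI_SOURCE,
                         status.MPI_TAG, comm, MPI_STATUS_IGNORE);
        }
        /* if flag is false, no message has arrived yet; do other work and
           probe again later */
    }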

Re: [OMPI users] MPI Persistent Communication Question

2010-06-28 Thread amjad ali
Dear E. Loh,

> Another is whether you can overlap communications and computation. This does not require persistent channels, but only nonblocking communications (MPI_Isend/MPI_Irecv). Again, there are no MPI guarantees here, so you may have to break your computation up and insert MP

Re: [OMPI users] MPI Persistent Communication Question

2010-06-28 Thread amjad ali
> You would break the MPI_Irecv and MPI_Isend calls up into two parts: MPI_Send_init and MPI_Recv_init in the first part and MPI_Start[all] in the second part. The first part needs to be moved out of the subroutine... at least outside of the loop in sub1() and maybe even outside the 1000

Re: [OMPI users] MPI Persistent Communication Question

2010-06-28 Thread amjad ali
Hi Jeff S. Thank you very much for your reply. I am still somewhat confused. Please guide me. The idea is to do this:

> MPI_Recv_init()
> MPI_Send_init()
> for (i = 0; i < 1000; ++i) {
>     MPI_Startall()
>     /* do whatever */
>     MPI_Waitall()
> }
> for (i = 0;

[OMPI users] MPI Persistent Communication Question

2010-06-28 Thread amjad ali
Hi all, I observed MPI_ISEND & IRECV performing a little better than persistent communication, although I was hoping/expecting the opposite?? What is the best way of using MPI persistent communication in an iterative/repetitive kind of code, with regard to calling MPI_Free()? Should we call MPI
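
A minimal sketch of the persistent-communication lifecycle asked about above (the routine that releases a persistent request is MPI_Request_free; the neighbor ranks, buffer sizes and iteration count below are illustrative assumptions): create the requests once, start and complete them inside the iterative loop, and free them once after the loop.

    #include <mpi.h>

    #define NITER 1000
    #define COUNT 512

    void iterate_with_persistent_comm(int left, int right, MPI_Comm comm)
    {
        double sendbuf[COUNT], recvbuf[COUNT];
        MPI_Request reqs[2];

        /* setup: once, outside the iterative loop */
        MPI_Recv_init(recvbuf, COUNT, MPI_DOUBLE, left,  0, comm, &reqs[0]);
        MPI_Send_init(sendbuf, COUNT, MPI_DOUBLE, right, 0, comm, &reqs[1]);

        for (int iter = 0; iter < NITER; ++iter) {
            /* ... refill sendbuf here ... */
            MPI_Startall(2, reqs);                      /* (re)start both transfers */
            /* ... computation that does not touch the buffers ... */
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* complete this iteration  */
            /* recvbuf is now safe to use */
        }

        /* free the persistent requests once, after the loop, not every iteration */
        MPI_Request_free(&reqs[0]);
        MPI_Request_free(&reqs[1]);
    }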

Re: [OMPI users] Open MPI performance on Amazon Cloud

2010-03-12 Thread amjad ali
Dear Hammad, Can you please do a RUN with a "sufficiently larger problem instance"? Then see what happens!! "Sufficient" may mean 10 times larger than the one you used. best regards, Amjad Ali.

On Fri, Mar 12, 2010 at 3:10 AM, Hammad Siddiqi wrote:
> Dear All,
> Is this the c

Re: [OMPI users] MPI Processes and Auto Vectorization

2009-12-01 Thread amjad ali
Hi, thanks T. Prince. Your statement, "I'll just mention that we are well into the era of 3 levels of programming parallelization: vectorization, threaded parallel (e.g. OpenMP), and process parallel (e.g. MPI)." is really valuable new learning for me. Now I can perceive it better. Can you please expl

[OMPI users] MPI Processes and Auto Vectorization

2009-12-01 Thread amjad ali
Hi, Suppose we run a parallel MPI code with 64 processes on a cluster, say of 16 nodes. The cluster nodes have multicore CPUs, say 4 cores on each node. Now each of the 64 cores on the cluster is running a process. The program is SPMD, meaning all processes have the same workload. Now if we had done auto-vectoriz

[OMPI users] Array Declaration different approaches

2009-11-14 Thread amjad ali
s (maybe 10, 20 or 30) in the header while making a call to a subroutine. Which way is faster and more efficient than the other? Thank you for your kind attention. with best regards, Amjad Ali.

[OMPI users] MPI Derived datatype + Persistent communication

2009-11-11 Thread amjad ali
benefit/efficiency? It would be better if anybody could point me to some tutorial/example code on this. Thank you for your attention. With best regards, Amjad Ali.
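
Since the post asks for example code, here is a hedged sketch (not from the thread) combining a derived datatype with a persistent send; the strided layout (every 4th double from an array of at least 400 doubles) and the destination/tag are purely illustrative.

    #include <mpi.h>

    void send_strided_persistently(double *array, int dest, MPI_Comm comm, int niter)
    {
        MPI_Datatype strided;
        MPI_Request req;

        /* 100 blocks of 1 double, spaced 4 doubles apart */
        MPI_Type_vector(100, 1, 4, MPI_DOUBLE, &strided);
        MPI_Type_commit(&strided);

        /* one persistent send of one "strided" element, created once */
        MPI_Send_init(array, 1, strided, dest, /*tag=*/0, comm, &req);

        for (int it = 0; it < niter; ++it) {
            /* ... update array ... */
            MPI_Start(&req);                    /* restart the same transfer   */
            MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete it this iteration  */
        }

        MPI_Request_free(&req);
        MPI_Type_free(&strided);
    }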

[OMPI users] Coding help requested

2009-11-10 Thread amjad ali
/portion in the recv_array. Is this scheme fine and correct? I am in search of an efficient one. I request your help. With best regards, Amjad Ali.

Re: [OMPI users] Programming Help needed

2009-11-06 Thread amjad ali
; T. Rosmond

> On Fri, 2009-11-06 at 17:44 -0500, amjad ali wrote:
> > Hi all,
> >
> > I need/request some help from those who have some experience in debugging/profiling/tuning parallel scientific codes, especially for PDEs/CFD.

[OMPI users] Programming Help needed

2009-11-06 Thread amjad ali
Hi all, I need/request some help from those who have some experience in debugging/profiling/tuning parallel scientific codes, especially for PDEs/CFD. I have parallelized a Fortran CFD code to run on an Ethernet-based Linux cluster. Regarding MPI communication, what I do is this: Suppose that the gri

[OMPI users] Timers

2009-09-11 Thread amjad ali
Hi all, I want to get the elapsed time from start to end of my parallel program (Open MPI based). It should always give the same time for the same problem, irrespective of whether the nodes are running some other programs or are running only that program. How to do this? Regards.
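
A minimal sketch using MPI_Wtime with barriers to time the whole run. Note that MPI_Wtime measures elapsed wall-clock time, so the result will still vary if the nodes are also running other programs; it is not a CPU-time measure.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        double t0, t1;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);   /* line all ranks up before starting */
        t0 = MPI_Wtime();

        /* ... the actual parallel work goes here ... */

        MPI_Barrier(MPI_COMM_WORLD);   /* wait for the slowest rank */
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("elapsed wall-clock time: %f seconds\n", t1 - t0);

        MPI_Finalize();
        return 0;
    }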

Re: [OMPI users] Running problem

2009-09-02 Thread amjad ali
ils on your
> installation, your system and your shell.
>
> Greetings, Jakob

My installation is ROCKS-5 (64-bit), 4 nodes with Xeon 3085 CPUs, bash shell. Compilers are GNU, 64-bit. Next??

> amjad ali wrote:
> > Hi all,
> > A simple program at my 4-node ROCKS cluster

[OMPI users] Running problem

2009-09-01 Thread amjad ali
different location than the ./sphere. Open MPI is installed with GNU compilers. Best Regards, Amjad Ali

Re: [OMPI users] programming qsn??

2009-08-14 Thread amjad ali
Hello Mr. Eugene Loh, THANK YOU VERY MUCH, IT WORKED. I used both ISEND and IRECV and then a combined call to WAITALL with MPI_STATUSES_IGNORE. with best regards, Amjad Ali.

On Fri, Aug 14, 2009 at 6:42 AM, Eugene Loh wrote:
> amjad ali wrote:
> > Please tell me that if have mult
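
A minimal C sketch of the pattern described above: one MPI_Irecv and one MPI_Isend per neighboring process, completed by a single MPI_Waitall with MPI_STATUSES_IGNORE. The neighbour arrays, buffer layout and the cap on the number of neighbours are illustrative assumptions.

    #include <mpi.h>

    void exchange_with_neighbours(int nneigh, const int *neighbour,
                                  double **sendbuf, double **recvbuf,
                                  const int *count, MPI_Comm comm)
    {
        MPI_Request reqs[2 * 64];   /* assumes nneigh <= 64 */

        /* post all receives first */
        for (int n = 0; n < nneigh; ++n)
            MPI_Irecv(recvbuf[n], count[n], MPI_DOUBLE, neighbour[n],
                      /*tag=*/0, comm, &reqs[n]);

        /* then all sends */
        for (int n = 0; n < nneigh; ++n)
            MPI_Isend(sendbuf[n], count[n], MPI_DOUBLE, neighbour[n],
                      /*tag=*/0, comm, &reqs[nneigh + n]);

        /* one combined completion call; the statuses are not needed here */
        MPI_Waitall(2 * nneigh, reqs, MPI_STATUSES_IGNORE);
    }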

Re: [OMPI users] programming qsn??

2009-08-13 Thread amjad ali
On Fri, Aug 14, 2009 at 1:32 AM, Eugene Loh wrote:
> amjad ali wrote:
> > I am parallelizing a 2D CFD code in FORTRAN+OPENMPI. Suppose that the grid (all triangles) is partitioned among 8 processes using METIS. Each process has a different number of neighboring processes. Suppo

[OMPI users] programming qsn??

2009-08-13 Thread amjad ali
KIND ATTENTION. With best regards, Amjad Ali.