I am a beginner in MPI.
I ran an example code using OpenMPI and it seems to work.
And then I tried a parallel example in PETSc tutorials folder (ex5).
mpirun -np 4 ex5
It runs, but the results are not as accurate as when I just run ex5 by itself.
Is that normal?
After that, I want to send this job to a supercomputer.
Luis Vitorio Cargnini wrote:
OK, after all the considerations, I'll try Boost today, run some
experiments, and see whether I can use it or whether I'll still avoid it.
But, as I think Raimond said, the problem is being dependent on a
rich, incredible, amazing toolset that still implements only MPI-1.
What Python version are you using?
I would use the 'ctypes' module (available in the stdlib of recent Python
versions) to open the MPI shared library, and then call MPI_Init() from Python
code... Of course, I'm assuming your Fortran code can handle the case of MPI
already being initialized (by checking with MPI_Initialized()).
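Something along these lines should work (a rough sketch; "libmpi.so" is just a
placeholder for whatever shared library your MPI installation actually provides):

    import ctypes

    # Load the MPI shared library; the name "libmpi.so" is an assumption and
    # depends on the installation.  RTLD_GLOBAL is usually needed so that code
    # loaded later (e.g. the Fortran extension) can resolve the MPI symbols.
    libmpi = ctypes.CDLL("libmpi.so", mode=ctypes.RTLD_GLOBAL)

    # Only initialize MPI if nobody has done it yet.
    flag = ctypes.c_int(0)
    libmpi.MPI_Initialized(ctypes.byref(flag))
    if not flag.value:
        libmpi.MPI_Init(None, None)   # NULL argc/argv is allowed since MPI-2

    # ... hand control to the Fortran code here; it should detect, via
    # MPI_Initialized(), that MPI is already up ...

    libmpi.MPI_Finalize()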
On Thu, 2009-07-09 at 23:40 -0500, Yin Feng wrote:
> I am a beginner in MPI.
>
> I ran an example code using OpenMPI and it seems to work.
> And then I tried a parallel example in PETSc tutorials folder (ex5).
>
> mpirun -np 4 ex5
> It runs, but the results are not as accurate as when I just run ex5 by itself.
Hello,
Two questions from an old MPI user who's still learning.
Let's assume mpirun has spawned N copies of program A and M (/= N) copies of
program B. This execution mode makes the A and B processes belong to the same
MPI_COMM_WORLD, and the ranks are ordered consecutively: 0 to N-1 for the A
processes and N to N+M-1 for the B processes.
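Concretely, what we have in mind is something like the following rough sketch
(using mpi4py purely for illustration, with N as an assumed number of A copies):

    from mpi4py import MPI   # mpi4py used only for illustration here

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    N = 4   # assumed number of copies of program A launched by mpirun

    # Ranks 0..N-1 would be the A processes, ranks N.. the B processes.
    color = 0 if rank < N else 1

    # Split the shared MPI_COMM_WORLD so each program gets its own
    # sub-communicator for its internal collectives.
    subcomm = comm.Split(color=color, key=rank)
    print("world rank %d: program %s, local rank %d of %d"
          % (rank, "A" if color == 0 else "B",
             subcomm.Get_rank(), subcomm.Get_size()))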
Dear OpenMPI experts,
We are seeing poor scaling of a certain code that uses OpenMPI
non-blocking point-to-point routines, and we would love to hear any
suggestions on how to improve the situation.
Details:
We have a small 24-node cluster (Monk) with Infiniband and dual quad-core
AMD Opteron processors.
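To illustrate what we mean by non-blocking point-to-point, a generic exchange of
that kind looks roughly like this (a simplified mpi4py sketch, not the actual code):

    import numpy as np
    from mpi4py import MPI   # illustrative only; the real code is not shown here

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    sendbuf = np.full(1000, rank, dtype='d')
    recvbuf = np.empty(1000, dtype='d')

    # Post the receive first, then the send, and wait for both so the
    # communication can (in principle) overlap with other work.
    reqs = [comm.Irecv(recvbuf, source=left, tag=0),
            comm.Isend(sendbuf, dest=right, tag=0)]
    MPI.Request.Waitall(reqs)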
I have my code running on a supercomputer.
First, I request an allocation and then just run my code using mpirun.
The supercomputer will assign 4 nodes, but they are different for each
request. So, I don't know which machines I will use before the job runs.
Do you know how to figure out which machines are used in this situation?
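One possible way to see at run time which hosts the job actually landed on is
to have each rank report its processor name (a rough sketch using mpi4py,
purely for illustration):

    from mpi4py import MPI   # illustrative sketch only

    comm = MPI.COMM_WORLD
    name = MPI.Get_processor_name()   # hostname of the node running this rank

    # Collect every rank's host name on rank 0 and print the unique set once.
    names = comm.gather(name, root=0)
    if comm.Get_rank() == 0:
        print("job is running on:", sorted(set(names)))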
On Fri, 2009-07-10 at 14:35 -0500, Yin Feng wrote:
> I have my code running on a supercomputer.
> First, I request an allocation and then just run my code using mpirun.
> The supercomputer will assign 4 nodes, but they are different for each
> request. So, I don't know which machines I will use bef