On Sun, 2009-07-12 at 19:49 -0500, Yin Feng wrote:
> Can you give me a further explanation of why the results are different
> when the code runs on multiple processors versus a single processor?
Floating point numbers are problematic for a number of reasons; they
are only *approximations* of real numbers because they have finite
precision. In particular, floating-point addition is not associative,
so when a parallel run splits a sum across processes and combines the
partial results in a different order than the serial run, the rounding
errors accumulate differently and the final values can differ slightly.
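
To see the non-associativity concretely, here is a minimal C sketch
(mine, not from the thread) where the same two values combined in two
different groupings give two different answers:

#include <stdio.h>

int main(void)
{
    float big = 1.0e8f;    /* ulp of float at 1e8 is 8, so adding 1.0f is lost */
    float small = 1.0f;

    float left  = (big + small) - big;  /* small is rounded away first */
    float right = (big - big) + small;  /* cancellation first, small survives */

    printf("left  = %f\n", left);   /* prints 0.000000 */
    printf("right = %f\n", right);  /* prints 1.000000 */
    return 0;
}

Both expressions are algebraically identical, yet the first prints 0
and the second prints 1; a parallel reduction that regroups a long sum
does the same kind of thing on a larger scale.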
Can you give me a further explanation of why the results are different
when the code runs on multiple processors versus a single processor?
Thank you!
On Fri, Jul 10, 2009 at 4:20 AM, Ashley Pittman wrote:
> On Thu, 2009-07-09 at 23:40 -0500, Yin Feng wrote:
>> I am a beginner in MPI.
>>
>> I ran an example code using OpenMPI and it seems to work.
I have my code running on a supercomputer.
First, I request an allocation and then just run my code using mpirun.
The supercomputer assigns 4 nodes, but they are different each time I
make a request, so I don't know which machines I will use before the
job runs.
Do you know how to figure this out in such a situation?
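
One common way to see this at run time (an illustrative sketch, not
something posted in the thread; the program name below is made up) is
to have every rank report the host it landed on using
MPI_Get_processor_name:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);

    /* One line per process: which node did the scheduler give us? */
    printf("rank %d is running on %s\n", rank, name);

    MPI_Finalize();
    return 0;
}

Launched as "mpirun -np 4 ./whereami" inside the allocation, it prints
the actual node names, so you can discover the machines after the fact
even though you cannot choose them beforehand.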
On Thu, 2009-07-09 at 23:40 -0500, Yin Feng wrote:
> I am a beginner in MPI.
>
> I ran an example code using OpenMPI and it seems to work.
> And then I tried a parallel example in the PETSc tutorials folder (ex5):
>
> mpirun -np 4 ex5
>
> It runs, but the results are not as accurate as when just running ex5
> on a single processor.
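
As a minimal illustration of where such differences come from (a
hypothetical sketch, not the actual ex5 source), each rank below sums
its own slice of a series and MPI_Reduce combines the per-rank partial
sums; the combination order is up to the MPI implementation, so the
total can differ in its last digits from the serial, index-order sum:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1000000;
    double partial = 0.0;

    /* Each rank sums a strided slice of the harmonic series. */
    for (int i = rank + 1; i <= N; i += size)
        partial += 1.0 / i;

    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum with %d rank(s) = %.17g\n", size, total);

    MPI_Finalize();
    return 0;
}

Running it with -np 1 and then -np 4 typically gives totals that agree
to near machine precision but differ in the trailing digits, which is
the same kind of "less accurate" behaviour seen with ex5.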