On Feb 28, 2008, at 5:32 PM, Chembeti, Ramesh (S&T-Student) wrote:


Dear All,

I am a graduate student working on molecular dynamics simulation. My professor/adviser is planning to buy Linux-based clusters. But before that he wanted me to parallelize a serial molecular dynamics code and test it on an Intel Core 2 Duo machine with Fedora 8 on it. I have parallelized my code in Fortran 77 using MPI. I have installed Open MPI and am compiling the code using mpif77 -g -o code code.f

I would make sure to always use some level of compiler optimization:

mpif77 -O2 -o code code.f
at least; go higher (-O3, -fastsse) if it still gives the right results. Look up your compiler docs.

and am running it using
mpirun -np 2 ./code. I have a couple of questions to ask you:
1. Is it possible to use a dual-core or any multi-core machine for parallel computations?

Yes. A core is really another CPU; dual core is just two CPUs packed (with some changes) into a single socket, so to MPI it is the same as a dual-CPU machine. We use dual-socket dual-core nodes (mpirun -np 4 app) all the time.

2. Is that the right procedure to run a parallel job as explained above? (using mpif77 -g -o code code.f and running it using
mpirun -np 2 ./code)

Yes, this is correct. Once you have more than one node you will need to somehow tell mpirun to use host x and host y, but right now it just assumes 'localhost', which is correct.

Check out: http://www.open-mpi.org/faq/?category=running
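For the multi-node case, one common way to tell mpirun which machines to use is a hostfile; a minimal sketch (host names here are hypothetical) might look like:

```
node01 slots=2
node02 slots=2
```

and then something like mpirun -np 4 --hostfile myhosts ./code. The FAQ link above covers the details for your Open MPI version.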

3. How do I know my code is being run on both processors? (I am a chemical engineering student and new to computational aspects.)

Run 'top'. You should see two processes, one for each CPU, at 100%. There should be a system summary at the top that gives you a percentage for the entire machine; make sure idle is 0%.
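Another quick sanity check (a minimal sketch, not from your actual code) is to have each MPI process print its own rank; if you see one line per process, MPI really did start that many processes:

```
      program hello
      implicit none
      include 'mpif.h'
      integer ierr, rank, nprocs
c     start MPI and find out who we are
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      print *, 'Hello from rank', rank, 'of', nprocs
      call MPI_FINALIZE(ierr)
      end
```

Compile with mpif77 -o hello hello.f and run with mpirun -np 2 ./hello; with -np 2 you should get two lines, ranks 0 and 1.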

4. If what I have done is wrong, can anyone please explain to me how to do it?

Nope, it looks like a good start. Always check out the man pages:

man mpirun

If you have cluster staff on campus, it is best not to spend your time being an admin; have some Unix SAs run the cluster while you focus on your science. But that's my opinion (and observation).


Here are my CPU details:
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Core(TM)2 Duo CPU     E6750  @ 2.66GHz
stepping        : 11
cpu MHz         : 2000.000
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr lahf_lm
bogomips        : 5322.87
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Core(TM)2 Duo CPU     E6750  @ 2.66GHz
stepping        : 11
cpu MHz         : 2000.000
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr lahf_lm
bogomips        : 5319.97
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:


Thank you
Ramesh


_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985

