Hello,
I have a Pentium D computer with Solaris 10 installed.
I installed OpenMPI, successfully compiled my Fortran
program, but when running
mpirun -np 2 progexe
I receive
[0,1,0]: uDAPL on host SERVSOLARIS was unable to find
any NICs.
Another transport will be used instead, although this
may result in lower performance.
It means that your OMPI was compiled to support uDAPL (a transport for
InfiniBand networks) but that your computer does not have such a card
installed. Because you don't, it will fall back to Ethernet. But
because you are just running on a single machine, you will use the
fastest form of communication available: shared memory.
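On a single node you can also tell Open MPI explicitly which transports to use, which silences the uDAPL probe entirely. A minimal sketch, assuming the component names from the Open MPI 1.x series:

```shell
# Restrict Open MPI to the self (loopback) and sm (shared-memory) BTLs,
# so no network transport such as uDAPL is ever probed.
mpirun --mca btl self,sm -np 2 progexe
```

The `--mca btl` parameter just limits which byte-transfer-layer components are considered; it does not change anything else about how the job runs.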
The problem is that my executable file runs on the
Pentium D in 80 seconds on two cores and in 25 seconds
on one core.
And on another Sun SMP machine with 20 processors it
runs perfectly (the problem is perfectly scalable).
Victor Marian
Laboratory of Machine Elements and Tribology
Victor,
Just on a hunch, look in your BIOS to see if Hyperthreading is turned
on. If so, turn it off. We have seen some unusual behavior on some of
our machines unless this is disabled.
I am interested in your progress as I have just begun working with
OpenMPI as well. I have used mpich for...
I can't check the BIOS right now (the computer is not
at home), but I think the Pentium D, which is
dual-core, doesn't support hyper-threading.
The program I made relies on an MPI library (it is
not a benchmarking program). I think you are right,
maybe I should run a benchmarking program.
Hey Victor!
I just ran the old classic cpi.c just to verify that OpenMPI was
working. Now I need to grab some actual benchmarking code. I may try
the NAS Parallel Benchmarks from here...
http://www.nas.nasa.gov/Resources/Software/npb.html
They were pretty easy to build and run under mpich.
Maybe the "dumb question" of the week, but here goes...
I am trying to compile a piece of code (NPB) under OpenMPI and I am
having a problem with specifying the right library. Possibly something I
need to define in a LD_LIBRARY_PATH statement?
Using Gnu mpich, the line looked like this...
FM
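For the LD_LIBRARY_PATH question, a common setup is to put the Open MPI install on both PATH and LD_LIBRARY_PATH and then let the wrapper compiler supply the library flags itself. The install prefix below is an assumption; substitute wherever OpenMPI actually lives on your system:

```shell
# Hypothetical install prefix -- adjust to your actual OpenMPI location.
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH

# With the wrapper on PATH, no explicit -L or -l flags are needed:
mpif90 -O2 -o cg.A.1 cg.f
```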
Just remove the -L and -l arguments -- OMPI's "mpif90" (and other
wrapper compilers) will do all that magic for you.
Many -L/-l arguments in MPI application Makefiles are throwbacks to
older versions of MPICH wrapper compilers that didn't always work
properly. Those days are long gone; most...
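If you want to see exactly what the wrapper adds on your behalf, it can print the underlying command line without compiling anything (these options exist in Open MPI's wrappers; treat the exact output as installation-dependent):

```shell
# Show the full command the wrapper would invoke:
mpif90 --showme

# Show only the link-time flags it injects:
mpif90 --showme:link
```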
Not a dumb question at all. :-)
I think the problem is your -L flag. Our mpif90 wrapper compiler should
already know where to find the MPI library, which lives wherever you
installed OpenMPI. Your flag is trying to override our settings and I
believe is causing confusion.
So just eliminate it.
Perfect! Thanks Jeff!
The NAS Parallel Benchmark on a dual core AMD machine now returns this...
[jpummil@localhost bin]$ mpirun -np 1 cg.A.1
NAS Parallel Benchmarks 3.2 -- CG Benchmark
CG Benchmark Completed.
Class = A
Size = 14000
Iterations...
I am working on an MD simulation algorithm on a shared-memory system
with 4 dual-core AMD 875 opteron processors. I started with MPICH
(1.2.6) and then shifted to OpenMPI and I found very good improvement
with OpenMPI. I would also be interested in seeing any other
benchmarks with similar comparisons.