Hi Don,
First I ran the program I am working on. It
is perfectly scalable, and on 20 processors it ran in
27 seconds (on two processors, in 300 seconds).
Then I had the curiosity to run it on a Pentium D. It
ran in 30 seconds on a single core. On two cores it
ran in 37 seconds (I think som
Victor,
Obviously there are many variables involved in getting the best
performance out of a machine, and understanding the two environments you
are comparing would be necessary, as well as the job itself. I would not be able
to get my hands on another E10K for validation or for projecting possible
gains
Hi Don,
But as far as I can see, you must pay for these debuggers.
Victor
--- Don Kerr wrote:
> Victor,
>
> You are right Prism will not work with Open MPI
> which Sun's ClusterTools
> 7 is based on. But Prism was not available for CT 6
> either. Totalview
> and Allinea's dd
Victor,
You are right, Prism will not work with Open MPI, which Sun's ClusterTools
7 is based on. But Prism was not available for CT 6 either. TotalView
and Allinea's DDT, I believe, have both been tested to work with Open MPI.
-DON
Victor Marian wrote:
I can't turn it off right now to look
Hi Don,
Seeing your mail, I suppose you are working at Sun. We
have a Sun 1 Server at our university, and my
program runs almost as fast on 16 UltraSPARC II
processors as on a Pentium D. The program is perfectly
scalable. I am a little bit disappointed. Our SPARC
IIs run at 400 MHz, and the Pent
Additionally, Solaris comes with the IB drivers, and since the libs are
there, OMPI thinks the transport is available. You can suppress this message with
--mca btl_base_warn_component_unused 0
or specifically call out the BTLs you wish to use, for example
--mca btl self,sm,tcp
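For concreteness, here is roughly what those two options look like on a full
command line (a sketch only; "progexe" and "-np 2" are taken from the original
post and would be replaced by your own executable and process count):

  mpirun --mca btl_base_warn_component_unused 0 -np 2 progexe
  mpirun --mca btl self,sm,tcp -np 2 progexe

The first form just silences the warning; the second never loads the uDAPL BTL
at all, using only the self (loopback), shared-memory, and TCP transports.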
Brock Palen wrote:
It
I am working on an MD simulation algorithm on a shared-memory system
with 4 dual-core AMD Opteron 875 processors. I started with MPICH
(1.2.6) and then shifted to Open MPI, and I found a very good improvement
with Open MPI. I would also be interested in knowing of any other
benchmarks with similar comparis
Hey Victor!
I just ran the old classic cpi.c to verify that Open MPI was
working. Now I need to grab some actual benchmarking code. I may try
the NAS Parallel Benchmarks from here...
http://www.nas.nasa.gov/Resources/Software/npb.html
They were pretty easy to build and run under mpich.
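For anyone else wanting to try them, the MPI flavor of the NPB suite builds and
runs roughly like this under Open MPI (a sketch only; the directory name assumes
NPB 3.2, and the benchmark, class, and process count are arbitrary examples):

  cd NPB3.2/NPB3.2-MPI
  cp config/make.def.template config/make.def   # point MPIF77/MPICC at Open MPI's wrapper compilers
  make CG CLASS=A NPROCS=4
  mpirun -np 4 bin/cg.A.4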
I can't turn it off right now to look in the BIOS (the
computer is not at home), but I think the Pentium D,
which is dual-core, doesn't support Hyper-Threading.
The program I made relies on an MPI library (it is
not a benchmarking program). I think you are right,
maybe I should run a benchmarking p
Victor,
Just on a hunch, look in your BIOS to see if Hyperthreading is turned
on. If so, turn it off. We have seen some unusual behavior on some of
our machines unless this is disabled.
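(A quick sanity check that doesn't require a reboot: on Solaris 10 the psrinfo
utility reports what the OS sees, and with -pv it groups the virtual processors
under each physical chip, so a dual-core Pentium D with Hyper-Threading off
should show one chip with two virtual processors:

  psrinfo -pv

This is only a rough check; the BIOS setting itself still has to be confirmed
in the BIOS.)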
I am interested in your progress as I have just begun working with
OpenMPI as well. I have used mpich for
The problem is that my executable file runs on the
Pentium D in 80 seconds on two cores and in 25 seconds
on one core.
And on another Sun SMP machine with 20 processors it
runs perfectly (the problem is perfectly scalable).
Victor Marian
Laboratory of Machine Elements and Tribology
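For numbers like these, it may be worth timing the same binary at both process
counts back to back, so nothing else changes between runs; a minimal sketch,
using the executable name from earlier in the thread:

  time mpirun -np 1 progexe
  time mpirun -np 2 progexe

If the two-core run is still much slower, the difference comes from the parallel
run itself (communication, process placement, Hyper-Threading) rather than from
the program's own scalability.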
It means that your OMPI was compiled to support uDAPL (a type of
InfiniBand network) but that your computer does not have such a card
installed. Because you don't, it will fall back to Ethernet. But
because you are just running on a single machine, you will use the
fastest form of communi
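(If you want to make that explicit, a single-machine run can be limited to just
the loopback and shared-memory transports; this is only a sketch using the
executable name from the original post:

  mpirun --mca btl self,sm -np 2 progexe

With no tcp in the list, Open MPI never opens the uDAPL or TCP BTLs at all for
this run, so the warning goes away too.)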
Hello,
I have a Pentium D computer with Solaris 10 installed.
I installed Open MPI, successfully compiled my Fortran
program, but when giving
mpirun -np 2 progexe
I receive
[0,1,0]: uDAPL on host SERVSOLARIS was unable to find
any NICs.
Another transport will be used instead, although this
may resu