open-mpi.org/nightly/v1.3/) to verify if this has been fixed
--Nysal
On Thu, 2009-02-19 at 16:09 -0600, Jeff Pummill wrote:
I built a fresh version of lammps v29Jan09 against Open MPI 1.3 which in
turn was built with Gnu compilers v4.2.4 on an Ubuntu 8.04 x86_64 box.
This Open MPI build was able to generate usable binaries such as XHPL
and NPB, but the lammps binary it generated was not usable.
I tried it with a co
MPI is very unlikely to work with InfiniBand right now.
Brian
On Mon, Mar 10, 2008 at 6:24 AM, Michael <mk...@ieee.org> wrote:
Quick answer, till you get a complete answer, Yes, OpenMPI has long
supported most of the MPI-2 features.
Michael
On Mar 7, 2008, a
Just a quick question...
Does Open MPI 1.2.5 support most or all of the MPI-2 directives and
features?
I have a user who specified MVAPICH2, as he needs some features like
extra task spawning, but I am trying to standardize on Open MPI compiled
against InfiniBand for my primary software stack.
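For reference, the "extra task spawning" here is MPI-2 dynamic process
creation (MPI_Comm_spawn), which Open MPI implements. A minimal sketch,
assuming a separate child binary named "worker" (the name is only
illustrative, not from the thread):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm children;
        int errcodes[4];

        MPI_Init(&argc, &argv);
        /* Launch 4 additional tasks running the (hypothetical) "worker"
           binary; the result is an intercommunicator to the children. */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children, errcodes);
        MPI_Comm_disconnect(&children);
        MPI_Finalize();
        return 0;
    }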
Is it possible that this could be a problem with /usr/lib64 as opposed
to /usr/lib?
Just a thought...
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Hsieh, Pei-Ying (MED US) wrote:
Hi, Edgar and Galen,
Thanks for the quick reply!
What puzzles me is that, on 32
Brock,
The only thing that came to mind was that possibly on the second dump,
the I/O was substantial enough to overload the OSSes (Lustre's I/O
servers), resulting in a process or task hang? Can you tell if your
Lustre environment is getting overwhelmed when the Open MPI / FLASH
combinatio
I'm guessing he means the ASC FLASH code which simulates star explosions...
Brock?
Jeff F. Pummill
University of Arkansas
Doug Reeder wrote:
Brock,
Do you mean flash memory, like a USB memory stick? What kind of file
system is on the memory? Is there some filesystem limit you are
bumping up against?
Krishna,
When you log in to the remote system, use ssh -X or ssh -Y, which will
forward the xterm display back through the SSH connection.
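For example (the user and host names are just placeholders):

    ssh -X user@head-node.example.edu
    xterm &

With -X (or -Y for trusted forwarding), the X11 display is tunneled over
SSH, so an xterm started on the remote machine appears on your local screen.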
Jeff Pummill
University of Arkansas
Krishna Chaitanya wrote:
Hi,
I have been tracing the interactions between PERUSE
and the MPI library, on one
hosts -np 4
--byslot ./cg.C.4
It appears that this does avoid oversubscribing any particular core, as I
am not exceeding my core count by running just the two jobs requiring 4
cores each.
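For reference, a sketch of the two placement policies, assuming the
hostfile from the (truncated) command above is named "hosts":

    mpirun --hostfile hosts --byslot -np 4 ./cg.C.4   (fill each node's slots first)
    mpirun --hostfile hosts --bynode -np 4 ./cg.C.4   (round-robin, one rank per node)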
Thanks,
Jeff Pummill
George Bosilca wrote:
The cleaner way to define such an environment is by usin
takes care of this detail for me?
Thanks!
Jeff Pummill
SLURM was really easy to build and install, plus it's a project of LLNL
and I love stuff that the Nat'l Labs architect.
The SLURM message board is also very active and quick to respond to
questions and problems.
Jeff F. Pummill
Bill Johnstone wrote:
Hello All.
We are starting to need re
Jeff,
Count us in at the UofA. My initial impressions of Open MPI are very
good and I would be open to contributing to this effort as time allows.
Thanks!
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
http://hpc.uark.edu
27 PM, Jeff Pummill wrote:
I have successfully compiled Open MPI 1.2.3 against Intel 8.1 compiler
suite and old (3 years) mvapi stack using the following configure:
configure --prefix=/nfsutil/openmpi-1.2.3
--with-mvapi=/usr/local/topspin/ CC=icc CXX=icpc F77=ifort FC=ifort
Do I need to assig
the command line submission
to ensure that it is using the IB network instead of TCP? Or
possibly disable the Gig-E with ^tcp to see if it still runs successfully?
I just want to be sure that Open MPI is actually USING the IB network
and mvapi.
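A hedged sketch of both checks ("./app" stands in for the real binary):

    mpirun --mca btl mvapi,sm,self -np 4 ./app    (allow only mvapi, shared memory, and self)
    mpirun --mca btl ^tcp -np 4 ./app             (exclude the TCP BTL entirely)

If the job still runs with ^tcp, the traffic is not going over the Gig-E.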
Thanks!
Jeff Pummill
mvapi
MCA btl: mvapi (MCA v1.0, API v1.0.1, Component v1.2.3)
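Output like the line above typically comes from ompi_info; to list the
compiled-in BTL components:

    ompi_info | grep btl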
I have a post-doc who will test some application code in the next day
or so. Maybe the old stuff worked just fine!
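Before the application tests, a minimal smoke test over the IB fabric
might look like this (a stock MPI hello-world, not code from the thread):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d checking in\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Compile it with the wrapper (mpicc hello.c -o hello) and run with
--mca btl mvapi,sm,self to confirm the IB path works.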
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701
Jeff Pummill
approximately 3 years old.
Would it be reasonable to expect OpenMPI 1.2.3 to build and run in such
an environment?
Thanks!
Jeff Pummill
University of Arkansas
http://www.open-mpi.org/faq/?category=slurm
Hope this helps.
Tim
On Wednesday 27 June 2007 14:21, Jeff Pummill wrote:
Hey Jeff,
Finally got my test nodes back and was looking at the info you sent. On
the SLURM page, it states the following:
*Open MPI* <http://www.open-mpi.org/> relies upon SLURM to allocate
can run the same script without modification no matter how many
cpus/nodes you get from SLURM.
It's on the long-term plan to get "srun -n X my_mpi_application"
model to work; it just hasn't bubbled up high enough in the priority
stack yet... :-\
On Jun 20, 2007, at 1:59 PM, Jeff Pummill wrote:
Just started working with the OpenMPI / SLURM combo this morning. I can
successfully launch this job from the command line and it runs to
completion, but when launching from SLURM it hangs.
It appears to just sit with no load apparent on the compute nodes even
though SLURM indicates it is running.
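For what it's worth, the usual pattern with Open MPI under SLURM is to
get the allocation first and let mpirun read the SLURM environment, e.g.
(the node count and binary name are placeholders):

    salloc -N 2 mpirun ./my_mpi_application

Inside the allocation, mpirun needs no -np or hostfile; it picks both up
from SLURM.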
Thanks guys!
Setting F77=gfortran did the trick.
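Presumably the working configure line was the original one (quoted below
in this thread) with F77 switched over:

    ./configure --prefix=/share/apps CC=gcc CXX=g++ F77=gfortran FC=gfortran \
        CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64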
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
http://hpc.uark.edu
"A supercomputer is a device for turning compute-bound
problems into I/O-bound problems." -Seymour Cray
Greetings all,
I downloaded and configured v1.2.2 this morning on an Opteron cluster
using the following configure directives...
./configure --prefix=/share/apps CC=gcc CXX=g++ F77=g77 FC=gfortran
CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64
Compilation seemed to go OK and there IS an
What were your timings, Jeff, and what processor exactly do you have?
Mine is a Pentium D at 2.8GHz.
Victor
--- Jeff Pummill wrote:
Victor,
Build the FT benchmark and build it as a class B problem. This will run
in the 1-2 minute range instead of 2-4 seconds.
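With the NPB 3.2 build conventions, that would be something like this
(the 2-process count is just an example):

    make ft CLASS=B NPROCS=2
    mpirun -np 2 bin/ft.B.2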
Victor
--- Jeff Pummill wrote:
Perfect! Thanks Jeff!
The NAS Parallel Benchmark on a dual core AMD
machine now returns this...
[jpummil@localhost bin]$ mpirun -np 1 cg.A.1
NAS Parallel Benchmarks 3.2 -- CG Benchmark
CG Benchmark Completed.
Class =
application Makefiles are throwbacks to
older versions of MPICH wrapper compilers that didn't always work
properly. Those days are long gone; most (all?) MPI wrapper
compilers do not need you to specify -L/-l these days.
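With Open MPI you can see exactly what the wrapper adds, which makes it
easy to delete the stale -L/-l flags from the Makefile:

    mpicc --showme:compile
    mpicc --showme:link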
On Jun 10, 2007, at 3:08 PM, Jeff Pummill wrote:
Maybe the "dumb question" of the week, but here goes...
I am trying to compile a piece of code (NPB) under OpenMPI and I am
having a problem with specifying the right library. Possibly something I
need to define in an LD_LIBRARY_PATH statement?
Using Gnu mpich, the line looked like this...
FM
the Sun ClusterTools.
Victor
--- Jeff Pummill wrote:
Victor,
Just on a hunch, look in your BIOS to see if Hyperthreading is turned
on. If so, turn it off. We have seen some unusual behavior on some of
our machines unless this is disabled.
I am interested in your progress as I have just begun working with
OpenMPI as well. I have used mpich for