On Friday 03 September 2010, Alexander Kalinin wrote:
> Hello!
>
> I have a problem to run mpi program. My command line is:
> $ mpirun -np 1 ./ksurf
>
> But I got an error:
> [0,0,0] mca_oob_tcp_init: invalid address '' returned for selected oob
> interfaces.
> [0,0,0] ORTE_ERROR_LOG: Error in file
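A guess, in case it helps: this error seems to mean the OOB layer could not find
a usable TCP interface/address. Depending on the Open MPI version, pointing it
at a specific interface sometimes gets around it (eth0 here is only an example;
older releases spell the parameter oob_tcp_include instead):

$ mpirun --mca oob_tcp_if_include eth0 -np 1 ./ksurf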
On Wednesday 25 July 2007, Jeff Squyres wrote:
> On Jul 25, 2007, at 7:45 AM, Biagio Cosenza wrote:
> > Jeff, I did what you suggested
> >
> > However no noticeable changes seem to happen. Same peaks and same
> > latency times.
>
> Ok. This suggests that Nagle may not be the issue here.
My guess
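For anyone who wants to rule Nagle in or out at the application level, a minimal
C sketch of turning it off on an already-connected socket (generic sockets code,
not anything Open MPI-specific) looks like this:

#include <netinet/in.h>   /* IPPROTO_TCP */
#include <netinet/tcp.h>  /* TCP_NODELAY */
#include <sys/socket.h>   /* setsockopt */
#include <stdio.h>

/* Disable Nagle (i.e. enable TCP_NODELAY) on an existing socket fd.
   Returns 0 on success, -1 on failure. */
static int disable_nagle(int fd)
{
    int one = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) != 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;
}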
On Monday 22 October 2007, Bill Johnstone wrote:
> Hello All.
>
> We are starting to need resource/scheduling management for our small
> cluster, and I was wondering if any of you could provide comments on
> what you think about Torque vs. SLURM? On the basis of the appearance
> of active developm
Could you guys please trim your e-mails. No one wants to scroll past 100-200 KB
of old context just to find the new part (not to mention the wasted storage for
everyone).
/Peter
On Wednesday 23 September 2009, Rahul Nabar wrote:
> On Tue, Aug 18, 2009 at 5:28 PM, Gerry Creager wrote:
> > Most of that bandwidth is in marketing... Sorry, but it's not a high
> > performance switch.
>
> Well, how does one figure out what exactly is a "high performance
> switch"?
IMHO 1G Eth
On Tuesday 29 September 2009, Rahul Nabar wrote:
> On Tue, Sep 29, 2009 at 10:40 AM, Eugene Loh wrote:
> > to know. It sounds like you want to be able to watch some % utilization
> > of a hardware interface as the program is running. I *think* these tools
> > (the ones on the FAQ, including MPE,
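If all that is needed is a rough, live view of NIC utilization while the job
runs, sysstat's sar on each node already gives per-interface RX/TX rates at
one-second samples:

$ sar -n DEV 1

or one can simply diff the byte counters in /proc/net/dev by hand.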
On Wednesday 30 September 2009, vighn...@aero.iitb.ac.in wrote:
...
> during
> configuration with the Intel 9.0 compiler, the installation gives the
> following error.
>
> [root@test_node openmpi-1.3.3]# make all install
...
> make[3]: Entering directory `/tmp/openmpi-1.3.3/orte'
> test -z "/share/apps/mpi/op
On Friday 30 October 2009, Konstantinos Angelopoulos wrote:
> good part of the day,
>
> I am trying to run a parallel program (that used to run on a cluster) on my
> dual-core PC. Could Open MPI simulate the distribution of the parallel
> jobs to my 2 processors
If your program is an MPI program
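In short, no simulation should be needed: mpirun started locally with two ranks
will run two processes that the OS schedules across both cores (program name
below is just a placeholder):

$ mpirun -np 2 ./your_mpi_program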
On Wednesday 06 January 2010, Tim Miller wrote:
> Hi All,
>
> I am trying to compile OpenMPI 1.4 with PGI 9.0-3 and am getting the
> following error in configure:
>
> checking for functional offsetof macro... no
> configure: WARNING: Your compiler does not support offsetof macro
> configure: error:
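For reference, offsetof is a standard C macro from <stddef.h>; the configure
check compiles a tiny program roughly of this shape (a sketch of the idea, not
the literal configure test):

#include <stddef.h>

struct probe {
    int  a;
    char b;
};

int main(void)
{
    /* offsetof(type, member) must evaluate to a constant byte offset */
    return (int) offsetof(struct probe, b);
}

If the PGI installation cannot compile something like that on its own, the
problem is likely in the compiler setup rather than in Open MPI.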
On Thursday 11 March 2010, Matthew MacManes wrote:
> Can anybody tell me if this is an error associated with openmpi, versus an
> issue with the program I am running (MRBAYES,
> https://sourceforge.net/projects/mrbayes/)
>
> We are trying to run a large simulated dataset using 1,000,000 bases
> div
On Monday 05 April 2010, Steve Swanekamp (L3-Titan Contractor) wrote:
> When I try to run the configure utility I get the message that the C++
> compiler cannot compile simple C programs. Any ideas?
(at least some) Intel compilers need the gcc-c++ distribution package. Have
you tested icpc with
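A quick standalone check (file name made up) is to compile a trivial program
with icpc outside of configure:

$ echo 'int main() { return 0; }' > conftest.cpp
$ icpc conftest.cpp -o conftest

If that fails, installing the gcc-c++ package (which provides the C++ headers
icpc relies on) is usually the fix.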
On Friday 11 June 2010, asmae.elbahlo...@mpsa.com wrote:
> Hello
> I have a problem with paraFoam: when I type paraFoam in the terminal, it
> launches nothing, but in the terminal I get:
This is the Open MPI mailing list, not OpenFOAM. I suggest you contact the
team behind OpenFOAM.
I also sugge
On Friday 09 July 2010, Andreas Schäfer wrote:
> Hi,
>
> I'm evaluating Open MPI 1.4.2 on one of our BladeCenters and I'm
> getting via InfiniBand about 1550 MB/s and via shared memory about
> 1770 MB/s for the PingPong benchmark in Intel's MPI benchmark. (That
> benchmark is just an example, I'm seeing
On Friday 09 July 2010, Andreas Schäfer wrote:
> Thanks, those were good suggestions.
>
> > On 11:53 Fri 09 Jul, Peter Kjellstrom wrote:
> > On an E5520 (nehalem) node I get ~5 GB/s ping-pong for >64K sizes.
>
> I just tried a Core i7 system which maxes at 6550 MB/s
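For anyone wanting to reproduce this kind of number with the 1.4-era
shared-memory BTL, a two-rank PingPong on a single node can be run along these
lines (the path to the IMB binary is whatever your build produced):

$ mpirun -np 2 --mca btl self,sm ./IMB-MPI1 PingPong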
Problem description:
Elements from all ranks are gathered correctly except for the
element belonging to the root/target rank of the gather operation
when using certain custom MPI datatypes (see the reproducer code).
The bug can be toggled by commenting/uncommenting line 11 in the .F90-file.
Even thou
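The reproducer itself is an .F90 file that is not quoted here. Purely for
orientation, the general call pattern under discussion (a user-defined datatype
handed to MPI_Gather) looks like this minimal C sketch, which does not itself
trigger the bug:

/* Generic sketch only -- NOT the original .F90 reproducer. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* A toy derived datatype: two contiguous doubles treated as one element. */
    MPI_Datatype pair;
    MPI_Type_contiguous(2, MPI_DOUBLE, &pair);
    MPI_Type_commit(&pair);

    double sendbuf[2] = { rank + 0.1, rank + 0.2 };
    double *recvbuf = NULL;
    if (rank == 0)
        recvbuf = malloc(2 * size * sizeof(double));

    /* Root gathers one "pair" element from every rank, itself included. */
    MPI_Gather(sendbuf, 1, pair, recvbuf, 1, pair, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("from rank %d: %.1f %.1f\n", i, recvbuf[2*i], recvbuf[2*i+1]);
        free(recvbuf);
    }

    MPI_Type_free(&pair);
    MPI_Finalize();
    return 0;
}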
On Monday 26 January 2009, Jeff Squyres wrote:
> The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
> to bring a question to the Open MPI user and developer communities: is
> anyone interested in having a single MPI job span HCAs or RNICs from
> multiple vendors? (pardon the cros
On Tuesday 27 January 2009, Jeff Squyres wrote:
> It is worth clarifying a point in this discussion that I neglected to
> mention in my initial post: although Open MPI may not work *by
> default* with heterogeneous HCAs/RNICs, it is quite possible/likely
> that if you manually configure Open MPI to
On Thursday 29 January 2009, Ralph Castain wrote:
> It is quite likely that you have IPoIB on your system. In that case,
> the TCP BTL will pickup that interface and use it.
...
Sub 3us latency rules out IPoIB for sure. The test below ran on native IB or
some other very low latency path.
> > # O
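One way to make the comparison explicit is to pin the run to a particular
transport and rerun the latency test, e.g. (assuming the openib BTL is built):

$ mpirun --mca btl openib,sm,self ...   # native IB only
$ mpirun --mca btl tcp,sm,self ...      # TCP, which here would mean IPoIB

If the sub-3us number only shows up in the first case, that settles it.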
On Tuesday 07 April 2009, Bernhard Knapp wrote:
> Hi
>
> I am trying to get a parallel job of the gromacs software started. MPI
> seems to boot fine but unfortunately it seems not to be able to open a
> specified file although it is definitely in the directory where the job
> is started.
Do all the
On Tuesday 07 April 2009, Eugene Loh wrote:
> Iain Bason wrote:
> > But maybe Steve should try 1.3.2 instead? Does that have your
> > improvements in it?
>
> 1.3.2 has the single-queue implementation and automatic sizing of the sm
> mmap file, both intended to fix problems at large np. At np=2, y
On Thursday 07 May 2009, nee...@crlindia.com wrote:
> Thanks Pasha for sharing the IB roadmaps with us. But I am more interested
> in finding out latency figures, since they often matter more than bit rate.
>
> Could someone give rough, if not exact, latency figures being targeted in
> the IB world?
The (
On Tuesday 19 May 2009, Roman Martonak wrote:
...
> openmpi-1.3.2 time per one MD step is 3.66 s
> ELAPSED TIME : 0 HOURS 1 MINUTES 25.90 SECONDS
> = ALL TO ALL COMM 102033. BYTES 4221. =
> = ALL TO ALL COMM 7.802 MB/S
On Tuesday 19 May 2009, Roman Martonak wrote:
> On Tue, May 19, 2009 at 3:29 PM, Peter Kjellstrom wrote:
> > On Tuesday 19 May 2009, Roman Martonak wrote:
> > ...
> >> openmpi-1.3.2 time per one MD step is 3.66 s
> >> ELAPSED TIME :
On Tuesday 19 May 2009, Peter Kjellstrom wrote:
> On Tuesday 19 May 2009, Roman Martonak wrote:
> > On Tue, May 19, 2009 at 3:29 PM, Peter Kjellstrom wrote:
> > > On Tuesday 19 May 2009, Roman Martonak wrote:
> > > ...
> > >
> > >> openmpi-1.3.2
On Wednesday 20 May 2009, Rolf Vandevaart wrote:
...
> If I am understanding what is happening, it looks like the original
> MPI_Alltoall made use of three algorithms. (You can look in
> coll_tuned_decision_fixed.c)
>
> If message size < 200 or communicator size > 12
>bruck
> else if message s
On Wednesday 20 May 2009, Pavel Shamis (Pasha) wrote:
> > With the file Pavel has provided things have changed to the following.
> > (maybe someone can confirm)
> >
> > If message size < 8192
> > bruck
> > else
> > pairwise
> > end
>
> You are right here. Target of my conf file is disable basic_lin
On Wednesday 20 May 2009, Pavel Shamis (Pasha) wrote:
> > Disabling basic_linear seems like a good idea but your config file sets
> > the cut-off at 128 Bytes for 64-ranks (the field you set to 8192 seems to
> > result in a message size of that value divided by the number of ranks).
> >
> > In my t
On Wednesday 20 May 2009, Roman Martonak wrote:
> I tried to run with the first dynamic rules file that Pavel proposed
> and it works, the time per one MD step on 48 cores decreased from 2.8
> s to 1.8 s as expected. It was clearly the basic linear algorithm that
> was causing the problem. I will c
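For the record, a dynamic rules file like Pavel's is typically hooked in via two
MCA parameters (the file name here is just an example):

$ mpirun --mca coll_tuned_use_dynamic_rules 1 \
         --mca coll_tuned_dynamic_rules_filename ./alltoall.rules ...

With that in place the tuned collective component reads its alltoall cutoffs
from the file instead of using the built-in fixed decision.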
On Saturday 25 November 2006 15:31, shap...@isp.nsc.ru wrote:
> Hello,
> I can't figure out whether there is a way with Open MPI to bind all
> threads on a given node to a specified subset of CPUs.
> For example, on a multi-socket multi-core machine, I want to use
> only a single core on each CPU.
> Thank
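With Open MPI releases from 1.3 on, one way to express "one core per socket" is
a rankfile (hostname and layout below are made-up examples):

rank 0=node01 slot=0:0
rank 1=node01 slot=1:0

$ mpirun -np 2 -rf ./myrankfile ./app

where slot=socket:core pins each rank to core 0 of its own socket.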
On Tuesday 16 January 2007 15:37, Brian W. Barrett wrote:
> Open MPI will not run on PA-RISC processors.
HP-UX runs on IA-64 too.
/Peter
On Thursday 18 January 2007 09:52, Robin Humble wrote:
...
> is ~10Gbit the best I can expect from 4x DDR IB with MPI?
> some docs @HP suggest up to 16Gbit (data rate) should be possible, and
> I've heard that 13 or 14 has been achieved before. but those might be
> verbs numbers, or maybe horsepowe
On Thursday 18 January 2007 13:08, Scott Atchley wrote:
...
> The best uni-directional performance I have heard of for PCIe 8x IB
> DDR is ~1,400 MB/s (11.2 Gb/s)
This is on par with what I have seen.
> with Lustre, which is about 55% of the
> theoretical 20 Gb/s advertised speed.
I think this
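(For reference: 4x DDR signals at 20 Gb/s but, after 8b/10b encoding, carries
16 Gb/s of data, i.e. 2000 MB/s. So ~1,400 MB/s is roughly 70% of the data
rate; the 55% figure above is measured against the 20 Gb/s signalling rate, and
PCI Express x8 (gen1) overhead likely accounts for much of the remaining gap.)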
Hello
We have been busy this week comparing five different MPI implementations on a
small test cluster. Several notable differences have been observed, but I will
limit myself to one particular test in this e-mail (64-rank Intel MPI
Benchmark alltoall on 8 dual-socket quad-core nodes).
Let's start with the
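For orientation, a 64-rank IMB alltoall run of the kind described is typically
launched along these lines (the hostfile name is a placeholder; each
implementation had its own equivalent launcher):

$ mpirun -np 64 -hostfile ./nodes ./IMB-MPI1 -npmin 64 alltoall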
On Monday 23 April 2007, Bert Wesarg wrote:
> Hello all,
>
> Please give a short description of the machine and send the
> cpu-topology.tar.bz2 to the list.
Short description is in the filename; kernel is 2.6.18-8.1.1.el5.
/Peter
cpu-topology-dualOpteron2216HE-rhel5_64.tar.bz2
Description: application