On Jul 27, 2007, at 12:23 PM, Adams, Samuel D Contr AFRL/HEDR wrote:
I set up ompi before I configured Torque. Do I need to recompile ompi
with appropriate torque configure options to get better integration?
If libtorque wasn't present on the machine at configure time, then yes, you
need to reconfigure and rebuild with Torque support.
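Something along these lines should do it (a sketch only; the --with-tm path
is illustrative and should point at wherever your Torque headers and
libraries actually live):

  ./configure --with-tm=/usr/local/torque --prefix=/opt/openmpi
  make all install

Once rebuilt, an mpirun started from inside a Torque job should pick up the
allocated nodes automatically, with no -hostfile needed.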
I think the problem is that we use MPI_STATUS_IGNORE in the C++
bindings but don't check for it properly in mtl_mx_iprobe.
Can you try applying this diff to ompi and having the user try again?
We will also push this into the 1.2 branch.
- Galen
Index: ompi/mca/mtl/mx/mtl_mx_probe.c
===================================================================
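To test the fix, applying the diff from the top of the Open MPI source tree
and rebuilding should be enough; the patch file name below is just
illustrative:

  cd openmpi-source/
  patch -p0 < mtl_mx_probe.diff
  make all install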
Good point, this may be affecting overall performance for openib+gm.
But I didn't see any performance improvement for gm+tcp over just
using gm (and there's definitely no memory bandwidth limitation
there).
I wouldn't expect you to see any benefit with GM+TCP; the overhead
costs of TCP are so
Alex,
For OpenIB + GM you are probably going to be limited by the memory bus.
Take the InfiniBand NIC: it peaks at, say, 900 MBytes/sec, while the Myrinet
2-G will peak at, say, 250 MBytes/sec.
Unless you are doing direct DMAs from pre-registered host memory, you
will not see 900 + 250 MBytes/sec b
What does ifconfig report on both nodes?
- Galen
On Feb 1, 2007, at 2:50 PM, Alex Tumanov wrote:
Hi,
I have kept doing my own investigation and recompiled Open MPI with
only the bare-bones functionality and no support for any interconnects
other than Ethernet:
# rpmbuild --rebuild --define=
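A quicker way to get much the same effect without rebuilding is to restrict
Open MPI to the TCP and self BTLs at runtime; the executable name here is
only a placeholder:

  mpirun -np 2 -mca btl tcp,self ./a.out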
ah, disregard..
On Jan 19, 2007, at 1:35 AM, Barry Evans wrote:
It's gigabit attached; pathscale is there simply to indicate that ompi
was compiled with ekopath.
- Barry
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-bounces@open-mpi.org] On Behalf Of
Are you using
-mca pml cm
for pathscale or are you using openib?
- Galen
On Jan 18, 2007, at 4:42 PM, Barry Evans wrote:
Hi,
We tried running with 32 and 16, had some success, but after a
reboot of the cluster it seems that any DLPOLY run attempted falls over,
either interactively or
The problem is that, when running HPL, he sees failed residuals. When
running HPL under MPICH-GM, he does not.
I have tried running HPCC (HPL plus other benchmarks) using OMPI with
GM on 32-bit Xeons and 64-bit Opterons. I do not see any failed
residuals. I am trying to get access to a couple of
at allows
the windows to open? If I knew that, it would be the fix to my
problem.
Dave
Galen Shipman wrote:
I think this might be as simple as adding "-d" to the mpirun
command line
If I run:
mpirun -np 2 -d -mca pls_rsh_agent "ssh -X" xterm -e gdb ./mpi-r
(xorg) or with the version of Linux, so I am also seeking help
from the person who maintains cAos Linux. If it matters, the
machine uses Myrinet for the interconnects.
Thanks!
Dave
Galen Shipman wrote:
what does your command line look like?
- Galen
On Nov 29, 2006, at 7:53 PM, Dave Grote wr
Looking VERY briefly at the GAMMA API here:
http://www.disi.unige.it/project/gamma/gamma_api.html
It looks like one could create a GAMMA BTL with a minimal amount of
trouble.
I would encourage your group to do this!
There is quite a bit of information regarding the BTL interface, and
for GA
what does your command line look like?
- Galen
On Nov 29, 2006, at 7:53 PM, Dave Grote wrote:
I cannot get X11 forwarding to work using mpirun. I've tried all of
the
standard methods, such as setting pls_rsh_agent = ssh -X, using xhost,
and a few other things, but nothing works in general.
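For reference, the two usual ways of pointing the 1.x series at an
X-forwarding ssh are roughly the following (the xterm command line is just
illustrative):

  # one-off, on the mpirun command line:
  mpirun -np 4 -mca pls_rsh_agent "ssh -X" xterm -e ./a.out

  # or persistently, in ~/.openmpi/mca-params.conf:
  pls_rsh_agent = ssh -X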
We have found a potential issue with BPROC that may affect Open MPI.
Open MPI by default uses PTYs for I/O forwarding; if PTYs aren't
set up on the compute nodes, Open MPI will revert to using pipes.
Recently (today) we found a potential issue with PTYs and BPROC. A
simple reader/writer usin
what 'pml' is. Or
what ones are available, which one is used by default, or how to
switch between them. Is there a paper someplace that describes this?
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Nov 26, 2006, at 11:10 AM, Galen Shipman wrote:
Oh, just noticed you are using GM; PML CM is only available for MX..
sorry..
Galen
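As an aside, ompi_info is the quickest way to see which PML components a
particular build actually contains (and therefore whether cm is even an
option); the output format varies by version:

  ompi_info | grep pml        # lists the compiled-in PML components, e.g. ob1, cm
  ompi_info --param pml all   # dumps every PML component's parameters

Selecting one at runtime is then just mpirun -mca pml <name>, as shown
elsewhere in this thread.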
On Nov 26, 2006, at 9:08 AM, Galen Shipman wrote:
I would suggest trying Open MPI 1.2b1 and PML CM. You can select
PML CM at runtime via:
mpirun -mca pml cm
Have you tried this?
- Galen
On Nov 21
I would suggest trying Open MPI 1.2b1 and PML CM. You can select PML
CM at runtime via:
mpirun -mca pml cm
Have you tried this?
- Galen
On Nov 21, 2006, at 12:28 PM, Scott Atchley wrote:
On Nov 21, 2006, at 1:27 PM, Brock Palen wrote:
I had sent a message two weeks ago about this probl
Brian,
Are you compiling on a 64-bit platform that has both 64- and 32-bit gm
libraries? If so, you probably have a libgm.la that is mucking things
up. Try modifying your configure line as follows:
./configure --with-gm=/opt/gm --with-tm=/usr/pbs --disable-shared
--enable-static CC=pgcc CXX=p
This was my oversight; I am getting to it now and should have something
in just a bit.
- Galen
I can live with that, certainly. Fortunately, there's a couple months
until I have a real /need/ for this.
--
Hi Troy,
I have added max_btls to the openib component on the trunk, try:
mpirun --mca
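The full parameter name can be confirmed against a trunk build with
ompi_info, for example:

  ompi_info --param btl openib | grep max_btls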
It looks like we never added similar logic to the Open IB transport.
I'll pass your request on to the developer of our Open IB transport.
Given our timeframe for releasing Open MPI 1.0.2, it's doubtful any
change will make that release. But it should definitely be possible
to add such functionali
Hi Scott,
What is happening is that on creation of the Queue Pair the max inline
data is reported as 0. Open MPI 1.0.1 did not check this and assumed
that data smaller than some threshold could be sent inline :-(. The
Open MPI trunk does check the max inline data QP attribute prior to
using it.
On Feb 9, 2006, at 3:03 PM, Jean-Christophe Hugly wrote:
On Thu, 2006-02-09 at 14:05 -0700, Ron Brightwell wrote:
[...]
From an adoption perspective, though, the ability to shine in
micro-benchmarks is important, even if it means using an ad-hoc
tuning.
There is some justification for it af
When do you plan on having the small-msg rdma option available ?
I would expect this in the very near future, we will be discussing
schedules next week.
Thanks,
Galen
J-C
--
Jean-Christophe Hugly
PANTA
I would recommend reading the following tech report; it should shed
some light on how these things work:
http://www.cs.unm.edu/research/search_technical_reports_by_keyword/?string=infiniband
1 - It does not seem that mvapich does RDMA for small messages. It will
do RDMA for any message
Sorry, more questions to answer:
On the other hand I am not sure it could even work at all, as whenever I
tried at run-time to limit the list to just one transport (be it tcp or
openib, btw), mpi apps would not start.
you need to specify both the transport and self, such as:
mpirun -mca btl s
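A couple of concrete examples, with the executable name purely a placeholder:

  mpirun -np 2 -mca btl openib,self ./a.out
  mpirun -np 2 -mca btl tcp,self ./a.out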
Hi Jean,
You probably are not seeing overhead costs so much as you are seeing
the difference between using send/recv for small messages, which Open
MPI uses, and RDMA for small messages. If you are comparing against
another implementation that uses RDMA for small messages then yes, you
will
Hi Jean,
I will be looking at this a bit later today; I don't want you to think
we are ignoring you ;-)
Thanks,
Galen
On Jan 18, 2006, at 2:41 PM, Jean-Christophe Hugly wrote:
More info.
Environment of remotely exec'ed progs (by running mpirun ... env):
===