"btl_openib_verbose 1" is attached. My
application appears to run to completion, but I can't tell if it's just
running on TCP and not using the IB hardware.
I would appreciate any suggestions on how to proceed to fix this error.
Thanks,
Allen
--
Allen Barnett
Transpire, Inc
E-Mail: al...@transpireinc.com
that you add the following
> to the mpirun line "-mca btl openib,sm,self". I believe with that
> specification the code will abort and not run to completion.
>
> What version of the OFED stack are you using? I wonder if srq is
> supported on your system or not?
>
to completion using the IB
hardware.
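(For reference, a minimal mpirun line with that setting would look
something like the following; the process count and executable name are
only placeholders, not my actual command:

  mpirun -np 4 -mca btl openib,sm,self ./my_mpi_app

With the openib/sm/self BTLs forced like this, the job should abort rather
than silently fall back to TCP.)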
I guess my question now is: What do these numbers mean? Presumably the
size (or counts?) of buffers to allocate? Are there limits or a way to
tune these values?
Thanks,
Allen
On Mon, 2010-08-02 at 12:49 -0400, Allen Barnett wrote:
> Hi Terry:
> It is
ber of available credits reaches 16, send an explicit
> credit message to the sender
> - Defaulting to ((256 * 2) - 1) / 16 = 31; this many buffers are
> reserved for explicit credit messages
>
> --td
> Allen Barnett wrote:
> > Hi: In response to my own question, b
I just wanted to say "Thank You!" to the OpenMPI developers for the
OPAL_PREFIX option :-) This has proved very helpful in getting my
customers up and running with the least amount of effort on their part.
I really appreciate it.
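For anyone else relocating an install: a minimal sketch of how OPAL_PREFIX
gets used (the install path and program name below are made-up
placeholders) is simply:

  export OPAL_PREFIX=/path/to/relocated/openmpi
  export PATH="$OPAL_PREFIX/bin:$PATH"
  export LD_LIBRARY_PATH="$OPAL_PREFIX/lib:$LD_LIBRARY_PATH"
  mpirun -np 2 ./my_app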
Thanks,
Allen
--
Allen Barnett
Transpire, Inc.
GCC 4 (gcc4 (GCC) 4.1.1
20070105 (Red Hat 4.1.1-53)), valgrind 3.2.3.
Thanks,
Allen
--
Allen Barnett
Transpire, Inc.
e-mail: al...@transpireinc.com
Ph: 518-887-2930
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
int main ( int argc, char* argv[] )
{
int rank, size, c;
MPI_Comm* comms;
iced that job rank 0 with PID 27394 on node exited on
> > signal 11 (Segmentation fault).
> >
> >
> > Maybe I am not doing the X forwarding properly, but has anyone ever
> > encountered the same problem? It works fine on one PC, and I read
> > the mailing list, but I just don't know if my problem is similar to
> > theirs. I even tried changing the DISPLAY env
> >
> >
> > This is what I want to do:
> >
> > my mpirun should run on 2 machines (A and B), and I should be able
> > to view the output (on my PC).
> > Are there any specific commands to use?
> >
--
Allen Barnett
Transpire, Inc.
e-mail: al...@transpireinc.com
Ph: 518-887-2930
==15487== Address 0x7ff0003b4 is on thread 1's stack
After wait: 0x7ff0003b0: 2
Also, if I run this program with the shared memory BTL active, valgrind
reports several "conditional jump or move depends on uninitialized
value" warnings in the SM BTL, and about 24k lost bytes at the end (mostly from
allocations in MPI_Init).
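(For context, the kind of run I mean is roughly the following; the process
count and program name are placeholders, not my exact command:

  mpirun -np 2 valgrind --leak-check=full ./mpi_test

i.e., mpirun launches one valgrind instance per rank, each wrapping the
test program.)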
Thanks,
Allen
--
Allen Barnett
Transpire, Inc
E-Mail: al...@transpireinc.com
Skype: allenbarnett
I semantic
> checks, in this case, it just warns the users that they are accessing
> the receive buffer before the receive has finished, which is not allowed
> according to the MPI standard.
>
> For a non-blocking receive, the communication only completes after
> MPI_Wait i
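In other words, a minimal correct pattern looks like the sketch below; the
buffer, count and tag are assumed values, not taken from the real code:

  MPI_Request req;
  int buf[100];
  MPI_Irecv( buf, 100, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &req );
  /* Reading buf here is what triggers the warning described above. */
  MPI_Wait( &req, MPI_STATUS_IGNORE );
  /* Only after MPI_Wait returns is it legal to read buf. */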
port_lid: 1
port_lmc: 0x00
I'd appreciate any tips for debugging this.
Thanks,
Allen
--
Allen Barnett
Transpire, Inc
E-Mail: al...@transpireinc.com
Skype: allenbarnett
Ph: 518-887-2930
ompinfo.gz
Description: GNU Zip compressed data
's on by default in
> 1.3.3
>
> Lenny.
>
> On Thu, Aug 13, 2009 at 5:12 AM, Allen Barnett
> wrote:
> Hi:
> I recently tried to build my MPI application against OpenMPI
> 1.3.3. It
> worked fine with OMPI 1.2.9, but with OMPI 1.
: changed from
OFED 1.2 to 1.3)
The output from ompi_info is attached.
I would appreciate any help debugging this.
Thanks,
Allen
--
Allen Barnett
E-Mail: al...@transpireinc.com
Skype: allenbarnett
Ph: 518-887-2930
ompi_info.txt.bz2
Description: application/bzip
--
Allen Barnett
E-Mail: al...@transpireinc.com
Skype: allenbarnett
Ph: 518-887-2930
ize and the rank is
helpful if, like me, you run several jobs out of the same directory with
different numbers of processors. Say "listing.32.01" for rank 1, -np 32.
(And, as always, padding numbers with zeros makes "ls" behave more
sanely.)
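A minimal sketch of how such a name can be built (variable and file names
here are just illustrative):

  int rank, size;
  char fname[64];
  MPI_Comm_rank( MPI_COMM_WORLD, &rank );
  MPI_Comm_size( MPI_COMM_WORLD, &size );
  /* e.g. "listing.32.01" for rank 1 of a 32-process run */
  snprintf( fname, sizeof fname, "listing.%02d.%02d", size, rank );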
Thanks,
Allen
--
Allen Barnett
onstructed immediately before calling
system() like this:
std::stringstream ss;
ss << "partitioner_program " << COMM_WORLD_SIZE;
system( ss.str().c_str() );
Could this behavior be related to this admonition?
Also, would MPI_COMM_SPAWN suffer from the same difficulties?
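For concreteness, a rough sketch of what the MPI_COMM_SPAWN variant might
look like (the argument handling and names are my assumptions, not working
code from the application):

  char size_arg[16];
  char *spawn_argv[2];
  MPI_Comm children;
  int size;
  MPI_Comm_size( MPI_COMM_WORLD, &size );
  snprintf( size_arg, sizeof size_arg, "%d", size );
  spawn_argv[0] = size_arg;
  spawn_argv[1] = NULL;
  MPI_Comm_spawn( "partitioner_program", spawn_argv, 1, MPI_INFO_NULL,
                  0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE );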
Thanks,
On Tue, 2009-06-02 at 12:27 -0400, Jeff Squyres wrote:
> On Jun 2, 2009, at 11:37 AM, Allen Barnett wrote:
>
> > std::stringstream ss;
> > ss << "partitioner_program " << COMM_WORLD_SIZE;
> > system( ss.str().c_str() );
> >
>
> You'
OK. I appreciate the suggestion and will definitely try it out.
Thanks,
Allen
On Fri, 2009-06-05 at 10:14 -0400, Jeff Squyres wrote:
> On Jun 2, 2009, at 3:26 PM, Allen Barnett wrote:
> > I
> > guess what I'm asking is if I will have to make my partitioner an
> >
nning RHEL 4, called c.lan). My test program runs fine between b.lan
and c.lan.
I feel like I must be making an incredibly obvious mistake.
Thanks,
Allen
--
Allen Barnett
Transpire, Inc.
E-Mail: al...@transpireinc.com
Ph: 518-887-2930
but I can't really dictate where a user will install our software. Has
anyone succeeded in building a version of OpenMPI which can be
relocated?
Thanks,
Allen
--
Allen Barnett
Transpire, Inc.
E-Mail: al...@transpireinc.com
Ph: 518-887-2930
s helps us prioritize the work.
>
> Thanks!
>
>
> On Dec 13, 2006, at 10:37 AM, Allen Barnett wrote:
>
> > There was a thread back in November started by Patrick Jessee about
> > relocating an installation after it was built (the subject was:
> > remo