tstanding. For
example, the revised mpi.isend does not take a request number; the
function works out one and returns it. And in general the calls do more
than simply call the corresponding C function.
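For what it's worth, that mirrors the C-level convention: MPI_Isend itself
fills in the request handle, so a wrapper only has to record and return it.
A minimal C sketch (names illustrative, not Rmpi's actual internals):

    #include <mpi.h>

    MPI_Request send_async(const double *buf, int count, int dest, int tag,
                           MPI_Comm comm)
    {
        MPI_Request req;
        MPI_Isend(buf, count, MPI_DOUBLE, dest, tag, comm, &req);
        return req;   /* the caller never chooses a request number */
    }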
Ross Boylan
>
> Hao
>
>
> Ross Boylan wrote:
> > I changed the c
l now.
Ross
On 4/10/2014 1:06 PM, Ross Boylan wrote:
On 4/10/2014 11:48 AM, Ross Boylan wrote:
On 4/9/2014 5:26 PM, Ross Boylan wrote:
On Fri, 2014-04-04 at 22:40 -0400, George Bosilca wrote:
Ross,
I’m not familiar with the R implementation you are using, but bear with me and
I will explain how you can ask Open MPI about the list of all pending requests
on a process. Disclosure: This
might have
> some not-yet-completed requests pending…
>
> George.
>
>
> On Apr 4, 2014, at 22:20, Ross Boylan wrote:
>
> > On 4/4/2014 6:01 PM, Ralph Castain wrote:
> >> It sounds like you don't have a balance between sends and recvs somewhere
>
to Rmpi are not properly
tracking all completed messages, resulting in it thinking there are
outstanding messages (and passing a positive count to the C-level
MPI_Waitall with associated garbagey arrays). But I haven't isolated
the problem.
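The invariant that would avoid the garbage-array failure, sketched in C (an
assumption about the fix, not Rmpi's actual code): keep the count and the
request array in one place, and rely on completed requests being overwritten
with MPI_REQUEST_NULL:

    #include <mpi.h>

    #define MAXREQ 1024
    static MPI_Request reqs[MAXREQ];   /* every request still being tracked */
    static int nreq = 0;

    void track(MPI_Request r) { reqs[nreq++] = r; }

    void wait_all_outstanding(void)
    {
        /* MPI_Test/MPI_Wait replace completed requests with MPI_REQUEST_NULL,
           and MPI_Waitall skips null entries, so the count and the array
           cannot disagree even if some entries already completed. */
        MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);
        nreq = 0;
    }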
Ross
On Apr 4, 2014, at 5:20 PM, Ross Boy
During shutdown of my application the processes issue a waitall, since
they have done some Isends. A couple of them never return from that call.
Could this be the result of some of the processes already being shutdown
(the processes with the problem were late in the shutdown sequence)? If
so
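If premature shutdown is the cause, one conservative ordering (a sketch,
assuming the hang comes from Isends whose matching receives were never
posted) is to complete all sends and synchronize before any rank finalizes:

    #include <mpi.h>

    void shutdown_cleanly(MPI_Request *reqs, int nreq, MPI_Comm comm)
    {
        MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE); /* all sends matched */
        MPI_Barrier(comm);   /* no rank proceeds until everyone got here */
        MPI_Finalize();
    }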
[Main part is at the bottom]
On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
> Ross-
>
> On 09:08 Wed 26 Mar, Ross Boylan wrote:
On Wed, 2014-03-26 at 10:27 +0000, Jeff Squyres (jsquyres) wrote:
> On Mar 26, 2014, at 1:31 AM, Andreas Schäfer wrote:
>
> >> Even when "idle", MPI processes use all the CPU. I thought I remembered
> >> someone saying that they will be low priority, and so not pose much of
> >> an obstacle to oth
es? In general I try to limit to the
number of physical cores.
Thanks.
Ross Boylan
tp://rbigdata.github.io/packages.html
> >
> > I wasn't sure whether this was really on topic for the list, so I sent
> > it privately. Sorry for the extra noise if you've already eliminated
> > pdbR as a possibility.
> >
> > -- bennet
> >
> >
sending over tcp (yet) but maybe I'm running into something
similar.
I had thought the MPI stuff was handled in a separate layer or thread that
would magically do all the work of moving messages around; the fact that
top shows all the CPU going to the R processes suggests that's not the
case.
Running OMPI 1.7.4.
Thanks for any help.
Ross Boylan
On 3/21/2014 10:17 AM, Ross Boylan wrote:
On 3/21/2014 10:02 AM, Jeff Squyres (jsquyres) wrote:
So just to be clear, the C interface for MPI_Testsome is:
int MPI_Testsome(int incount, MPI_Request requests[],
                 int *outcount, int indices[],
                 MPI_Status statuses[])
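For comparison, a minimal well-typed call from plain C (variable and
function names here are illustrative only):

    #include <mpi.h>

    void poll_requests(MPI_Request reqs[], int n)
    {
        int outcount;
        int indices[n];            /* C99 VLA, one slot per request */
        MPI_Status statuses[n];

        MPI_Testsome(n, reqs, &outcount, indices, statuses);
        /* outcount is 0..n, or MPI_UNDEFINED when no entry of reqs is an
           active request; indices[0..outcount-1] are 0-based positions
           into reqs. */
    }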
en on 64 bit machines) and MPI uses 64, even though both have
type int. R defines
#define INTEGER(x) ((int *) DATAPTR(x))
What should the integer size be for MPI on 64 bit architectures,
specifically linux gcc (Debian 4.4.5-8) 4.4.5?
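For what it's worth, on LP64 platforms such as Debian amd64, a C int stays
32 bits and MPI_INT describes exactly a C int, so the buffer behind R's
INTEGER() should be directly usable for MPI_INT data; a quick check:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int mpi_int_size;
        MPI_Init(&argc, &argv);
        MPI_Type_size(MPI_INT, &mpi_int_size);
        /* both values are 4 on LP64 Linux; only pointers are 64-bit */
        printf("sizeof(int)=%zu  MPI_Type_size(MPI_INT)=%d\n",
               sizeof(int), mpi_int_size);
        MPI_Finalize();
        return 0;
    }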
Ross
On Mar 21, 2014, at 12:01 PM, Ross Boylan wrot
The allocation of indices is a cheat: the first location is used for the
outcount, and the following locations get the actual indices.
status is a pointer to an array of MPI status objects.
The indices should be small integers, shouldn't they? I'm also getting
some large values back.
Ross
errhandler(MPI_Testsome(countn, request,
                        &INTEGER(indices)[0],   /* slot 0 receives outcount */
                        &INTEGER(indices)[1],   /* slots 1.. receive the indices */
                        status));
UNPROTECT(1);
return indices;
}
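One hedged guess about the large values: when no entry of request is an
active handle, MPI_Testsome stores MPI_UNDEFINED (a sentinel, not a count)
into outcount, which with the cheat above lands in indices[0] and looks like
garbage on the R side. A guard placed right after the call would make that
visible:

    if (INTEGER(indices)[0] == MPI_UNDEFINED)
        INTEGER(indices)[0] = 0;   /* sentinel meant "no active requests" */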
SEXP is an R structure.
OMPI 1.7.4.
Ross Boylan
the original config
argument to be searched?
Probably doesn't matter too much since I don't want to chance moving the
files after make install
Ross
On Mar 14, 2014, at 5:14 PM, Ross Boylan wrote:
I used this script to launch mpi:
R_PROFILE_USER=~/KHC/sunbelt/R
I exported are not used;
presumably it's the same for LD_LIBRARY_PATH.
I found this surprising.
RTFM disclosed the --prefix argument to orterun, and that seems to do
the trick.
Am I missing anything?
Ross Boylan
On Thu, 2014-03-13 at 13:13 -0700, Ross Boylan wrote:
> I might just switch to mpi.send, though the fact that something is
> going wrong makes me nervous.
I tried using mpi.send, but it fails also. The failure behavior is
peculiar.
After I launch the processes I can send a messa
rong makes me nervous.
Obviously given the involvement of R it's not clear the problem lies
with the MPI layer, but that seems at least a possibility.
Ross
On Wed, 2014-03-12 at 10:52 -0400, Bennet Fauber wrote:
> My experience with Rmpi and OpenMPI is that it doesn't seem to do well
> with the dlopen or dynamic loading. I recently installed R 3.0.3, and
> Rmpi, which failed when built against our standard OpenMPI but
> succeeded using the following
> > http://www.open-mpi.org/faq/?category=running#mpirun-prefix.
> >
> > Note the --prefix option that is described in the 3rd FAQ item I cited --
> > that can be a bit easier, too.
> >
> >
> >
> > On Mar 12, 2014, at 2:51 AM, Ross Boylan wrote:
ome/ross/install/lib/libmpi.so.1.3.0
R 17634 ross mem REG 254,2 106626 152046481
/home/ross/Rlib-3.0.1/Rmpi/libs/Rmpi.so
So libmpi, libopen-pal, and libopen-rte all are opened in two versions and two
locations.
Thanks.
Ross Boylan
uld happen if I try to transmit something big? At least in my
case it was probably under 4G, which might be some kind of boundary
(though it's a 64 bit system).
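If there really is a boundary near there, the usual suspect is that MPI
element counts are C ints, so a single send cannot describe more than
2^31 - 1 elements; a chunking sketch (names illustrative):

    #include <mpi.h>
    #include <stddef.h>

    void send_big(const double *buf, size_t n, int dest, int tag, MPI_Comm comm)
    {
        const size_t CHUNK = (size_t)1 << 30;  /* well under INT_MAX elements */
        size_t off = 0;
        while (off < n) {
            size_t left = n - off;
            int c = (int)(left < CHUNK ? left : CHUNK);
            MPI_Send(buf + off, c, MPI_DOUBLE, dest, tag, comm);
            off += (size_t)c;
        }
    }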
Ross
On Feb 6, 2014, at 1:23 PM, Ross Boylan wrote:
On 2/6/2014 3:24 AM, Jeff Squyres (jsquyres) wrote:
Have you tried
http://www.open-mpi.org/faq/?category=sysadmin#new-openmpi-version)
seems to say compatibility is broader.
Also, the documents don't seem to address on-the-wire compatibility;
that is, if nodes on are different versions, can they work together
reliably?
Thanks.
Ross
On Feb 5, 2014, at
On 1/31/2014 1:08 PM, Ross Boylan wrote:
I am getting the following error, amidst many successful message sends:
[n10][[50048,1],1][../../../../../../ompi/mca/btl/tcp/btl_tcp_frag.c:118:mca_btl_tcp_frag_send]
mca_btl_tcp_frag_send: writev error (0x7f6155970038, 578659815)
Bad address(1)
Any ideas about what is going on or what
ndependent jobs at once. The cluster is running Debian
Lenny, which ships OMPI 1.2.7rc2.
Thanks for any help you can offer.
Ross Boylan
Let total time on my slot 0 process be S+C+B+I
= serial computations + communication + busy wait + idle
Is there a way to find out S?
S+C would probably also be useful, since I assume C is low.
The problem is that I = 0, roughly, and B is big. Since B is big, the
usual process timing methods don'
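One way to estimate S anyway (a sketch, assuming the MPI calls can be
wrapped): charge all time spent inside MPI, including the busy wait B, to a
separate accumulator, and subtract it from wall time:

    #include <mpi.h>

    static double t_mpi = 0.0;   /* accumulates C + B */

    void timed_recv(void *buf, int count, MPI_Datatype type, int src, int tag,
                    MPI_Comm comm, MPI_Status *st)
    {
        double t0 = MPI_Wtime();
        MPI_Recv(buf, count, type, src, tag, comm, st);
        t_mpi += MPI_Wtime() - t0;
    }
    /* S is then roughly total wall time minus t_mpi, given I = 0 */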
I'm using Rmpi (a pretty thin wrapper around MPI for R) on Debian Lenny
(amd64). My set up has a central calculator and a bunch of slaves to
which work is distributed.
The slaves wait like this:
mpi.send(as.double(0), doubleType, root, requestCode, comm=comm)  # empty message tagged requestCode, sent to root
request <- request + 1                                            # running count of requests issued
that we
> are free to change/eliminate it at any time - in fact, you won't find
> that envar in the 1.3.x series at all.
Will it work in the 1.2 series?
Ross
>
>
> On Apr 20, 2009, at 3:53 PM, Ross Boylan wrote:
>
> > How do I determine my rank in a shell script
How do I determine my rank in a shell script under OpenMPI 1.2?
The only thing I've found that looks promising is the environment
variable OMPI_MCA_ns_nds_vpid, and earlier discussion on this list said
that was for "internal use only".
I'm on Debian Lenny, which just released with openmpi 1.2.7~rc2