On May 13, 2007, at 6:23 AM, Bert Wesarg wrote:
Even better: is there a patch available to fix this in the 1.2.1
tarball, so that
I can set the full path again with CC?
The patch is quite trivial, but requires a rebuild of the build
system
(autoheader, autoconf, automake, ...); see here:
htt
I fixed the OOB. I also mucked some things up with it interface-wise
that I need to undo :). Anyway, I'll have a look at fixing up the
TCP component in the next day or two.
Brian
On May 10, 2007, at 6:07 PM, Jeff Squyres wrote:
Brian --
Didn't you add something to fix exactly this problem?
On 5/14/07, Brian Barrett wrote:
2) Use MPI_TYPE_CREATE_STRUCT with ADDRESS_KIND arguments
Ah, I knew there was such a routine for addresses (MPI_GET_ADDRESS), but
I didn't realise that there was another routine,
MPI_TYPE_CREATE_STRUCT, that takes MPI_ADDRESS_KIND displacements.
Thank you!
Michal.
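For later readers of this thread, here is a minimal sketch of the
pattern Brian suggests, written in C (in Fortran the fix has the same
shape: declare the displacements INTEGER(KIND=MPI_ADDRESS_KIND) and
call MPI_TYPE_CREATE_STRUCT instead of MPI_TYPE_STRUCT). The struct
layout and names below are made up for illustration:

#include <mpi.h>

struct particle { int id; double pos[3]; };  /* hypothetical type */

/* Describe struct particle to MPI using address-sized displacements
   (MPI_Aint in C, INTEGER(KIND=MPI_ADDRESS_KIND) in Fortran). */
MPI_Datatype make_particle_type(void)
{
    struct particle p;
    int          blocklens[2] = { 1, 3 };
    MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
    MPI_Aint     disps[2], base;
    MPI_Datatype newtype;

    MPI_Get_address(&p,        &base);
    MPI_Get_address(&p.id,     &disps[0]);
    MPI_Get_address(&p.pos[0], &disps[1]);
    disps[0] -= base;  /* displacements relative to the struct start */
    disps[1] -= base;

    /* Unlike the older MPI_Type_struct, MPI_Type_create_struct takes
       MPI_Aint displacements, so 64-bit addresses fit. */
    MPI_Type_create_struct(2, blocklens, disps, types, &newtype);
    MPI_Type_commit(&newtype);
    return newtype;
}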
On May 14, 2007, at 10:21 AM, Nym wrote:
I am trying to use MPI_TYPE_STRUCT in a 64 bit Fortran 90 program. I'm
using the Intel Fortran Compiler 9.1.040 (and C/C++ compilers
9.1.045).
If I try to call MPI_TYPE_STRUCT with the array of displacements that
are of type INTEGER(KIND=MPI_ADDRESS_KIND), then I get a compilation error.
On Mon, May 14, 2007 at 11:59:18PM +0530, Jayanta Roy wrote:
> if(myrank = 0 || myrank == 1)
> if(myrank = 2 || myrank == 3)
Just to make clear we're not talking about a typo: do you mean
assignment or comparison? The single '=' in each line assigns rather
than compares.
For comparisons, it's better to put the constant value on the left,
so that an accidental assignment like "if (2 = myrank)" fails to
compile.
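To spell that out with a small C example (my illustration, not
Jayanta's code):

#include <mpi.h>

void demo(void)
{
    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Assignment: compiles, but sets myrank to the value of
       (0 || myrank == 1) instead of comparing.              */
    if (myrank = 0 || myrank == 1)
        ;

    /* Comparison with the constant on the left: a typo such as
       "0 = myrank" is now a compile-time error.               */
    if (0 == myrank || 1 == myrank)
        ;
}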
Hello Jay,
On Monday 14 May 2007 20:29, Jayanta Roy wrote:
> In my 4-node cluster I want to run two MPI_Reduce operations on two
> communicators (one using Node1 and Node2, the other using Node3 and Node4).
> To create the communicators I used ...
> MPI_Comm MPI_COMM_G1, MPI_COMM_G2;
> MPI_Group g0, g1, g2;
> MPI_Comm_group(MPI_COMM_WORLD,&g0);
Hi,
In my 4-node cluster I want to run two MPI_Reduce operations on two
communicators (one using Node1 and Node2, the other using Node3 and Node4).
To create the communicators I used ...
MPI_Comm MPI_COMM_G1, MPI_COMM_G2;
MPI_Group g0, g1, g2;
MPI_Comm_group(MPI_COMM_WORLD,&g0);
MPI_Group_incl(g0,g_size,&r_array[0],&g1);
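A hedged sketch of the complete sequence, in case it helps (the
explicit rank lists {0,1} and {2,3} are my assumption about the
intended node split; substitute g_size and r_array as appropriate):

#include <mpi.h>

/* Split MPI_COMM_WORLD (4 ranks assumed) into two 2-rank
   communicators and run an independent MPI_Reduce in each. */
void two_group_reduce(void)
{
    int ranks_a[2] = { 0, 1 };   /* Node1, Node2 (assumed mapping) */
    int ranks_b[2] = { 2, 3 };   /* Node3, Node4 (assumed mapping) */
    int val = 1, sum;
    MPI_Group g0, g1, g2;
    MPI_Comm MPI_COMM_G1, MPI_COMM_G2;

    MPI_Comm_group(MPI_COMM_WORLD, &g0);
    MPI_Group_incl(g0, 2, ranks_a, &g1);
    MPI_Group_incl(g0, 2, ranks_b, &g2);

    /* MPI_Comm_create is collective over MPI_COMM_WORLD; ranks that
       are not in the given group get MPI_COMM_NULL back. */
    MPI_Comm_create(MPI_COMM_WORLD, g1, &MPI_COMM_G1);
    MPI_Comm_create(MPI_COMM_WORLD, g2, &MPI_COMM_G2);

    if (MPI_COMM_G1 != MPI_COMM_NULL)
        MPI_Reduce(&val, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_G1);
    if (MPI_COMM_G2 != MPI_COMM_NULL)
        MPI_Reduce(&val, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_G2);

    MPI_Group_free(&g0); MPI_Group_free(&g1); MPI_Group_free(&g2);
}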
Hi,
I am trying to use MPI_TYPE_STRUCT in a 64 bit Fortran 90 program. I'm
using the Intel Fortran Compiler 9.1.040 (and C/C++ compilers
9.1.045).
If I try to call MPI_TYPE_STRUCT with the array of displacements that
are of type INTEGER(KIND=MPI_ADDRESS_KIND), then I get a compilation
error:
fo
Also, there is a profiler called "Vampir", which I think was bought by
Intel. It creates a very detailed profile of MPI applications and MPI
communication. It is very useful. I think it is a library; you have to
compile your program with the Vampir option to be able to use it. It
also has a graphical interface.
Hello,
On Monday 14 May 2007 14:59, Jeff Squyres wrote:
> It doesn't give you stats about the underlying transport, though
> (E.g., TCP-level stats). For that, you would need to use PERUSE.
> Rainer -- can you comment on how much info the tcp BTL reports via
> PERUSE?
>
> On May 13, 2007, at 5:14
Hi,
here's the result of my examination. The stack size limit set by
Gridengine is the culprit. Somehow, the h_vmem limit I gave to my
Gridengine job translated into setting the stack size limit to this
value (ulimit -s). I've edited /etc/security/limits.conf on all my
nodes, adding a hard stack limit.
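For reference, a hard stack entry in limits.conf looks roughly like
this (the value is just an illustration; pick whatever your codes
need):

# /etc/security/limits.conf: <domain> <type> <item> <value>
*    hard    stack    unlimited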
Have a look at mpiP: http://mpip.sf.net/
It will give you simple stats on what MPI functions were invoked.
Quite handy.
It doesn't give you stats about the underlying transport, though
(E.g., TCP-level stats). For that, you would need to use PERUSE.
Rainer -- can you comment on how much info the tcp BTL reports via PERUSE?