Sorry for the problem - the issue is a bug in the handling of the pernode
option in 1.4.2. This has been fixed and awaits release in 1.4.3.
On Jun 21, 2010, at 5:27 PM, Riccardo Murri wrote:
> Hello,
>
> I'm using OpenMPI 1.4.2 on a Rocks 5.2 cluster. I compiled it on my
> own to have a threa
On Mon, 23 Nov 2009 10:39:28 -0800, George Bosilca wrote:
> In the case of Open MPI we use pointers, which are different from int
> in most cases
I just want to comment that Open MPI's opaque (to the user) pointers are
significantly better than int because they offer type safety. That is,
the compiler can reject a call that passes a handle of the wrong type.
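To illustrate the point, here is a small sketch (my own, not from the thread): with Open MPI's pointer-typed handles, passing a communicator where a datatype is expected is caught at compile time, whereas with purely int-based handles the same mistake compiles silently and fails only at run time.

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int size = 0;
    // MPI_Type_size expects an MPI_Datatype. Under Open MPI's pointer-typed
    // handles the commented-out line below is a compile error; with int
    // handles it would compile and misbehave at run time.
    // MPI_Type_size(MPI_COMM_WORLD, &size);   // rejected by the compiler
    MPI_Type_size(MPI_INT, &size);             // correct call

    MPI_Finalize();
    return 0;
}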
problem. I
hope this description may give you an idea.
Thanks,
Iris Lohmann
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Iris Pernille Lohmann
Sent: 04 November 2009 10:20
To: Open MPI Users
Subject: Re: [OMPI users] segmentation fault: Address not mapped
Hi Jeff,
Thanks for your reply.
There are no core files associated with the crash.
Many thanks for all this information. Unfortunately, it's not enough
to know what's going on. :-(
Do you know for sure that the application is correct? E.g., is it
possible that a bad buffer is being passed to MPI_Isend? I note that
it is fairly odd to fail in MPI_Isend itself because t
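For comparison, here is a minimal correct MPI_Isend/MPI_Irecv exchange (my own sketch, not the poster's code). The key rule it illustrates is that both buffers must remain valid and untouched until the matching wait completes; violating that, or passing an address that was never valid, is a common source of "Address not mapped" crashes inside the library.

#include <mpi.h>
#include <cstdio>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {
        int sendval = rank, recvval = -1;
        MPI_Request reqs[2];
        int peer = 1 - rank;

        // The buffers live until MPI_Waitall returns and are not modified
        // in between; freeing or reusing them earlier is a classic cause
        // of segfaults deep inside the MPI library.
        MPI_Isend(&sendval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&recvval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        std::printf("rank %d received %d\n", rank, recvval);
    }

    MPI_Finalize();
    return 0;
}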
On Jul 7, 2009, at 8:08 AM, Catalin David wrote:
Thank you very much for the help and assistance :)
Using -isystem /users/cluster/cdavid/local/include the program now
runs fine (loads the correct mpi.h).
This is very fishy.
If mpic++ is in /users/cluster/cdavid/local/bin, and that directory
Thank you very much for the help and assistance :)
Using -isystem /users/cluster/cdavid/local/include the program now
runs fine (loads the correct mpi.h).
Thank you again,
Catalin
On Tue, Jul 7, 2009 at 12:29 PM, Catalin David wrote:
> #include <mpi.h>
> #include <stdio.h>
> int main(int argc, char *argv[])
#include <mpi.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
    printf("%d %d %d\n", OMPI_MAJOR_VERSION,
           OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
    return 0;
}
returns:
test.cpp: In function ‘int main(int, char**)’:
test.cpp:11: error: ‘OMPI_MAJOR_VERSION’ was not declared in this scope
test.cpp
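As a side note (my own sketch, not part of the thread, and assuming the OPEN_MPI guard macro that Open MPI's mpi.h provides): since the OMPI_*_VERSION macros only exist in Open MPI's header, guarding the printout makes a wrong mpi.h in the include path fail loudly instead of producing the "not declared in this scope" error above.

#include <mpi.h>
#include <cstdio>

int main()
{
#if defined(OPEN_MPI)
    // These macros come from Open MPI's mpi.h.
    std::printf("Open MPI %d.%d.%d\n",
                OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
#else
    std::printf("this mpi.h is not Open MPI's -- check the include path\n");
#endif
    return 0;
}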
Catalin David wrote:
Hello, all!
Just installed Valgrind (since this seems like a memory issue) and got
this interesting output (when running the test program):
==4616== Syscall param sched_setaffinity(mask) points to unaddressable byte(s)
==4616==    at 0x43656BD: syscall (in /lib/tls/libc-2.3.2.so)
This is the error you get when an invalid communicator handle is passed
to an MPI function; the handle is dereferenced, so you may or may not get a
SEGV from it depending on the value you pass.
The 0x44a0 address is an offset from 0x4400, the value of
MPI_COMM_WORLD in mpich2; my guess would
Hello, all!
Just installed Valgrind (since this seems like a memory issue) and got
this interesting output (when running the test program):
==4616== Syscall param sched_setaffinity(mask) points to unaddressable byte(s)
==4616==    at 0x43656BD: syscall (in /lib/tls/libc-2.3.2.so)
==4616==    by 0
On Mon, Jul 6, 2009 at 3:26 PM, jody wrote:
> Hi
> Are you also sure that you have the same version of Open-MPI
> on every machine of your cluster, and that it is the mpicxx of this
> version that is called when you run your program?
> I ask because you mentioned that there was an old version of Open-MPI
Hi
Are you also sure that you have the same version of Open-MPI
on every machine of your cluster, and that it is the mpicxx of this
version that is called when you run your program?
I ask because you mentioned that there was an old version of Open-MPI
present... did you remove this?
Jody
On Mon,
On Mon, Jul 6, 2009 at 2:14 PM, Dorian Krause wrote:
> Hi,
>
>>
>> //Initialize step
>> MPI_Init(&argc,&argv);
>> //Here it breaks!!! Memory allocation issue!
>> MPI_Comm_size(MPI_COMM_WORLD, &pool);
>> std::cout<<"I'm here"<<std::endl;
>> MPI_Comm_rank(MPI_COMM_WORLD, &myid);
>>
>> When trying to debug via gdb
Hi,
//Initialize step
MPI_Init(&argc,&argv);
//Here it breaks!!! Memory allocation issue!
MPI_Comm_size(MPI_COMM_WORLD, &pool);
std::cout<<"I'm here"<<std::endl;
and your PATH is also okay? (I see that you use plain mpicxx in the
build) ...
Moreover, I wanted to see if the installation is actually
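For anyone reproducing this, a complete, self-contained version of that init sequence (my reconstruction; "pool" and "myid" are the variable names from the snippet) makes a convenient sanity test: if this already crashes in MPI_Comm_size, the problem lies in the installation or in an mpi.h/library mismatch rather than in the application code.

#include <mpi.h>
#include <iostream>

int main(int argc, char *argv[])
{
    // Initialize step
    MPI_Init(&argc, &argv);

    int pool = 0, myid = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &pool);   // the call that crashed for the poster
    std::cout << "I'm here" << std::endl;
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    std::cout << "rank " << myid << " of " << pool << std::endl;

    MPI_Finalize();
    return 0;
}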
On Aug 1, 2008, at 6:07 PM, James Philbin wrote:
I'm just using TCP so this isn't a problem for me. Any ideas what
could be causing this segfault?
This is not really enough information to diagnose what your problem
is. Can you please send all the information listed here:
http://www.op
Hi,
I'm just using TCP so this isn't a problem for me. Any ideas what
could be causing this segfault?
James
On Jul 30, 2008, at 8:31 AM, James Philbin wrote:
OK, to answer my own question, I recompiled OpenMPI appending
'--with-memory-manager=none' to configure and now things seem to run
fine. I'm not sure how this might affect performance, but at least
it's working now.
If you're not using OpenFabr
Hi,
OK, to answer my own question, I recompiled OpenMPI appending
'--with-memory-manager=none' to configure and now things seem to run
fine. I'm not sure how this might affect performance, but at least
it's working now. Maybe this can be put in the FAQ?
James
On Wed, Jul 30, 2008 at 2:02 AM, Jam