[OMPI users] Segmentation fault in MPI_Init when passing pointers allocated in main()

I don't think that's true in the case of argv, as that is a pointer... but either way, this isn't an OMPI problem.

On Nov 12, 2013, at 9:09 AM, Matthieu Brucher wrote:

I understand why he did this, it's only the main argc/argv values that
are changed, not the actual system values (my mistake as well, I
overlooked his code, not paying attention to the details!).
Still, keeping different names would be best for code reviews and code
understanding.
The fact that th…

On Nov 12, 2013, at 8:56 AM, Matthieu Brucher wrote:

> It seems that argv[argc] should always be NULL according to the
> standard.
That is definitely true.
> So OMPI failure is not actually a bug!
I think that is true as well, though I suppose we could try to catch it
(doubtful - what if it…
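
(Illustration only, not code from the thread: hosted C and C++ implementations guarantee that the argv handed to main() satisfies argv[argc] == NULL, which is why an MPI library can walk argv without being told its length.)

#include "mpi.h"
#include <cstdio>

int main(int argc, char **argv)
{
    // The language standard requires argv[argc] to be a null pointer for the
    // argv that main() receives, so walking the array up to NULL is safe.
    std::printf("argv[argc] is %s\n", argv[argc] == NULL ? "NULL" : "not NULL");

    MPI_Init(&argc, &argv);   // fine: this argv is NULL-terminated
    MPI_Finalize();
    return 0;
}
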
It seems that argv[argc] should always be NULL according to the
standard. So OMPI failure is not actually a bug!
Cheers,

2013/11/12 Matthieu Brucher:

Interestingly enough, in ompi_mpi_init, opal_argv_join is called without the array length, so I suppose that in the usual argc/argv couple, you have an additional value in argv which may be NULL. So try allocating 3 additional values, the last being NULL, and it may work.
Cheers,
Matthieu
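
(Illustration only, not code from the thread, reusing the argc_new/argv_new names from the original post; the owned/storage bookkeeping is my addition. The idea: copy the arguments into a heap-allocated array that ends with a NULL entry before handing it to MPI_Init, and free the copies, not main()'s argv, afterwards.)

#include "mpi.h"
#include <cstring>
#include <vector>

int main(int argc, char **argv)
{
    // Copy the command line into a new array with room for an extra argument
    // and a NULL terminator, since the argv walk stops only at a NULL entry.
    int argc_new = argc;
    char **argv_new = new char*[ argc + 2 ];
    for( int i = 0 ; i < argc ; i++ )
    {
        argv_new[i] = new char[ std::strlen( argv[i] ) + 1 ];
        std::strcpy( argv_new[i], argv[i] );
    }
    // An extra argument could be appended at argv_new[argc_new++] here.
    argv_new[argc_new] = NULL;   // the terminating entry the walk relies on

    // Bookkeeping added for this sketch: remember our allocations, since
    // MPI_Init may adjust argc_new/argv_new.
    std::vector<char*> owned( argv_new, argv_new + argc_new );
    char **storage = argv_new;

    MPI_Init( &argc_new, &argv_new );
    // ... do work ...
    MPI_Finalize();

    // Free the copies we made -- not the argv that main() received.
    for( char *p : owned ) delete [] p;
    delete [] storage;
    return 0;
}

However many spare slots are allocated (the suggestion above is three extra values), what matters for the crash being discussed is that the array ends with a NULL entry.
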
I tried the following code without CUDA, the error is still there:
#include "mpi.h"
#include <…>
#include <…>
#include <…>
int main(int argc, char **argv)
{
    // override command line arguments to make sure cudaengine gets the correct one
    char **argv_new = new char*[ argc + 2 ];
    for( int i = 0 ; …

Hi,
Are you sure this is the correct code? This seems strange and not a good idea:
MPI_Init(&argc,&argv);
// do something...
for( int i = 0 ; i < argc ; i++ ) delete [] argv[i];
delete [] argv;
Did you mean argc_new and argv_new instead?
Do you have the same error without CUDA?

Hi,
I tried to augment the command line argument list by allocating my own list
of strings and passing them to MPI_Init, yet I got a segmentation fault for
both OpenMPI 1.6.3 and 1.7.2, while the code works fine with MPICH2. The
code is:
#include "mpi.h"
#include "cuda_runtime.h"
#include <…>
…