#ifdef __cplusplus
#undef __cplusplus
#include <mpi.h>
#define __cplusplus
#else
#include <mpi.h>
#endif
in c-code.h, which seems to work but isn't exactly smooth. Is there
another way around this, or has linking C MPI code with C++ never come
up before?
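(One cleaner alternative I've been wondering about, assuming Open MPI's
mpi.h really does honor its OMPI_SKIP_MPICXX guard on the versions in
question, and that MPICH's analogous MPICH_SKIP_MPICXX works the same
way, would be to skip the C++ bindings explicitly instead of fiddling
with __cplusplus:

/* c-code.h: ask mpi.h not to pull in the C++ bindings (mpicxx.h),
 * so __cplusplus stays intact for the rest of the translation unit. */
#define OMPI_SKIP_MPICXX 1   /* Open MPI */
#define MPICH_SKIP_MPICXX 1  /* MPICH, kept for portability */
#include <mpi.h>

I haven't verified this against every MPI implementation, though.)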
Thanks,
/Patrik Jonsson
Hi everyone,
Thanks for the suggestions.
On Thu, Sep 2, 2010 at 6:41 AM, Jeff Squyres wrote:
> On Aug 31, 2010, at 5:39 PM, Patrik Jonsson wrote:
>
>> It seems a bit presumptuous of mpi.h to include mpicxx.h just
>> because __cplusplus is defined, since that makes it
Hi all,
I'm seeing performance issues I don't understand in my multithreaded
MPI code, and I was hoping someone could shed some light on this.
The code structure is as follows: A computational domain is decomposed
into MPI tasks. Each MPI task has a "master thread" that receives
messages from the
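In case the structure matters for diagnosing this, here is a
stripped-down sketch of that layout; the names, tags, and buffer sizes
are illustrative, not from the real code. One master thread blocks in
MPI_Recv while the main thread stands in for the N workers, so it needs
MPI_THREAD_MULTIPLE:

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define TAG_DATA 1   /* workers send with this tag */
#define TAG_STOP 0   /* tells the master to shut down */

static void *master_thread(void *arg)
{
    MPI_Comm comm = *(MPI_Comm *)arg;
    double buf[128];
    MPI_Status status;

    for (;;) {
        /* Receive from any rank, any tag; count is an upper bound. */
        MPI_Recv(buf, 128, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 comm, &status);
        if (status.MPI_TAG == TAG_STOP)
            break;
        /* ... hand buf off to the domain computation ... */
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "need MPI_THREAD_MULTIPLE\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Comm comm = MPI_COMM_WORLD;
    pthread_t master;
    pthread_create(&master, NULL, master_thread, &comm);

    /* The N worker threads would MPI_Isend(..., TAG_DATA, ...) to the
     * master threads on other ranks here. For the sketch, just tell
     * our own master to stop. */
    double dummy = 0.0;
    MPI_Send(&dummy, 1, MPI_DOUBLE, rank, TAG_STOP, comm);

    pthread_join(master, NULL);
    MPI_Finalize();
    return 0;
}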
Replying to my own post, I'd like to add some info:
After making the master thread put more of a premium on receiving the
missing messages, the problem went away. Both tasks now appear to keep
up with the messages sent from the other. However, after about a minute
and ~1.5e6 messages exchanged, both
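To be concrete about what "putting a premium on receiving" means here,
the change amounts to something like this hypothetical helper
(drain_inbox is my name for it, not the actual code): the master now
drains every message currently pending before doing anything else,
instead of handling at most one per loop iteration:

#include <mpi.h>

static void drain_inbox(MPI_Comm comm, double *buf, int buflen)
{
    int flag = 1;
    MPI_Status status;

    while (flag) {
        /* Non-blocking check: is anything pending from any rank? */
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &status);
        if (flag) {
            MPI_Recv(buf, buflen, MPI_DOUBLE, status.MPI_SOURCE,
                     status.MPI_TAG, comm, MPI_STATUS_IGNORE);
            /* ... hand the message off to the computation ... */
        }
    }
    /* Only once the inbox is empty does the master move on to
     * lower-priority work (local computation, sends, etc.). */
}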
Hi Yiannis,
On Fri, Dec 9, 2011 at 10:21 AM, Yiannis Papadopoulos
wrote:
> Patrik Jonsson wrote:
>>
>> Hi all,
>>
>> I'm seeing performance issues I don't understand in my multithreaded
>> MPI code, and I was hoping someone could shed some light on this
Hi all,
This question was buried in an earlier post, and I got no replies,
so I'll try reposting it with a more enticing subject.
I have a multithreaded Open MPI code where each task has N+1 threads:
the N worker threads post nonblocking sends, which are received by the
one remaining thread on the other tasks.
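In case it clarifies the setup, the sending side looks roughly like
this sketch (send_chunk, the tag, and the batching threshold are
illustrative, not the actual code): each worker posts MPI_Isend and
periodically reaps completed requests so they can't pile up without
bound:

#include <mpi.h>

static void send_chunk(MPI_Comm comm, int dest,
                       const double *data, int n,
                       MPI_Request *reqs, int *nreqs, int maxreqs)
{
    /* Post the nonblocking send; completion is reaped later. */
    MPI_Isend(data, n, MPI_DOUBLE, dest, 1 /* data tag */, comm,
              &reqs[(*nreqs)++]);

    /* Once too many sends are in flight, wait for all of them so the
     * request array (and internal send buffers) stay bounded. */
    if (*nreqs == maxreqs) {
        MPI_Waitall(maxreqs, reqs, MPI_STATUSES_IGNORE);
        *nreqs = 0;
    }
}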
Hi,
I'm trying to track down a spurious segmentation fault that I'm
getting with my MPI application. I tried using valgrind, and after
suppressing the 25,000 errors in PMPI_Init_thread and associated
Init/Finalize functions, I'm left with an uninitialized write in
PMPI_Isend (which I saw is not un
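For reference, the suppressions I'm using look like the following (the
rule names are mine; the format is valgrind's standard suppression
syntax, and if I understand correctly Open MPI also ships a ready-made
file, share/openmpi/openmpi-valgrind.supp, usable the same way):

{
   ompi-init-noise
   Memcheck:Cond
   ...
   fun:PMPI_Init_thread
}
{
   ompi-finalize-noise
   Memcheck:Addr8
   ...
   fun:PMPI_Finalize
}

invoked as, e.g.:

mpirun -np 2 valgrind --suppressions=my-mpi.supp ./my_app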
On Wed, Mar 14, 2012 at 3:43 PM, Jeffrey Squyres wrote:
> On Mar 14, 2012, at 9:38 AM, Patrik Jonsson wrote:
>
>> I'm trying to track down a spurious segmentation fault that I'm
>> getting with my MPI application. I tried using valgrind, and after
>>