Ralph Castain wrote:
> On Mar 4, 2010, at 7:51 AM, Prentice Bisbal wrote:
>
>>
>> Ralph Castain wrote:
>>> On Mar 4, 2010, at 7:27 AM, Prentice Bisbal wrote:
>>>
Ralph Castain wrote:
> On Mar 3, 2010, at 12:16 PM, Prentice Bisbal wrote:
>
>> Eugene Loh wrote:
>>> Prentice Bisbal wrote:
"Addepalli, Srirangam V" writes:
> It works after creating a new pe, and even from the command prompt
> without using SGE.
You shouldn't need anything special -- I don't. (It's common to run,
say, one process per core for benchmarking.) Running
mpirun -tag-output -np 14 -npernode 7 hostname
On Mar 4, 2010, at 10:52 AM, Anthony Chan wrote:
- "Yuanyuan ZHANG" wrote:
For an OpenMP/MPI hybrid program, if I only want to make MPI calls
using the main thread, i.e., only in between parallel sections, can I
just use SINGLE or MPI_Init?
If your MPI calls are NOT within OpenMP directives, MPI does not even
know you are using threads.
- "Yuanyuan ZHANG" wrote:
> For an OpenMP/MPI hybrid program, if I only want to make MPI calls
> using the main thread, i.e., only in between parallel sections, can I just
> use SINGLE or MPI_Init?
If your MPI calls are NOT within OpenMP directives, MPI does not even
know you are using threads.
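In other words, plain MPI_Init leaves the thread level entirely up to the
library, while MPI_Init_thread lets the application request a level and then
check what was actually provided. A minimal sketch, assuming the hybrid code
only ever makes MPI calls from the main thread (MPI_THREAD_FUNNELED):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request FUNNELED: the process may be multithreaded, but only the
     * thread that called MPI_Init_thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    /* The library replies with the level it can actually guarantee. */
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI only provides thread level %d\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... OpenMP parallel sections (no MPI calls inside), MPI calls
     * from the main thread in between ... */

    MPI_Finalize();
    return 0;
}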
On Mar 4, 2010, at 7:36 AM, Richard Treumann wrote:
A call to MPI_Init allows the MPI library to return any level of
thread support it chooses.
This is correct, insofar as the MPI implementation can always choose
any level of thread support.
This MPI 1.1 call does not let the application say what it wants and
does not let the implementation reply with what it can guarantee.
There is some overhead involved when activating the current C/R functionality
in Open MPI due to the wrapping of the internal point-to-point stack. The
wrapper (CRCP framework) tracks the signature of each message (not the buffer,
so constant time for any size MPI message) so that when we need t
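To see why tracking only the signature is constant time per message,
consider a purely hypothetical record of the kind such a wrapper might keep;
this is an illustration of the idea, not Open MPI's actual CRCP data
structure:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-message bookkeeping: a fixed-size "signature" of the
 * envelope is recorded, never the payload, so the cost is the same for a
 * 4-byte message and a 4 GB message. NOT Open MPI's real CRCP structure. */
typedef struct msg_signature {
    int      src_rank;   /* sending rank                         */
    int      dst_rank;   /* receiving rank                       */
    int      tag;        /* MPI tag                              */
    int      comm_id;    /* identifier for the communicator      */
    size_t   count;      /* element count (a size, not the data) */
    uint64_t sequence;   /* ordering among tracked messages      */
} msg_signature_t;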
On Mar 4, 2010, at 8:17 AM, Fernando Lemos wrote:
> On Wed, Mar 3, 2010 at 10:24 PM, Fernando Lemos wrote:
>
>> Is there anything I can do to provide more information about this bug?
>> E.g. try to compile the code in the SVN trunk? I also have kept the
>> snapshots intact, I can tar them up and upload them somewhere in case
>> you guys need it.
Thanks Shiqing. I'll check out a trunk copy and try that.
Damien
On 04/03/2010 7:29 AM, Shiqing Fan wrote:
Hi Damien,
Sorry for the late reply; I was trying to dig into the code and got some
information.
First of all, in your example, it's not correct to define the MPI_Info
as a pointer; it will cause an initialization violation at run time.
A call to MPI_Init allows the MPI library to return any level of thread
support it chooses. This MPI 1.1 call does not let the application say what
it wants and does not let the implementation reply with what it can
guarantee.
If you are using only one MPI implementation and your code will never
On Mar 4, 2010, at 7:51 AM, Prentice Bisbal wrote:
>
>
> Ralph Castain wrote:
>> On Mar 4, 2010, at 7:27 AM, Prentice Bisbal wrote:
>>
>>>
>>> Ralph Castain wrote:
On Mar 3, 2010, at 12:16 PM, Prentice Bisbal wrote:
> Eugene Loh wrote:
>> Prentice Bisbal wrote:
>>> Eugene Loh wrote:
Ralph Castain wrote:
> On Mar 4, 2010, at 7:27 AM, Prentice Bisbal wrote:
>
>>
>> Ralph Castain wrote:
>>> On Mar 3, 2010, at 12:16 PM, Prentice Bisbal wrote:
>>>
Eugene Loh wrote:
> Prentice Bisbal wrote:
>> Eugene Loh wrote:
>>
>>> Prentice Bisbal wrote:
>>>
I
On Mar 4, 2010, at 7:27 AM, Prentice Bisbal wrote:
>
>
> Ralph Castain wrote:
>> On Mar 3, 2010, at 12:16 PM, Prentice Bisbal wrote:
>>
>>> Eugene Loh wrote:
Prentice Bisbal wrote:
> Eugene Loh wrote:
>
>> Prentice Bisbal wrote:
>>
>>> Is there a limit on how many MPI processes can run on a single host?
On Thursday 04 March 2010 01:32:39 Yuanyuan ZHANG wrote:
> Hi guys,
>
> Thanks for your help, but unfortunately I am still not clear.
>
> > You are right Dave, FUNNELED allows the application to have multiple
> > threads, but only the main thread calls MPI.
>
> My understanding is that even if I u
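Concretely, the FUNNELED pattern being described looks like this: extra
threads exist only inside the compute sections, and MPI is called only by
the main thread in between them. A rough sketch, assuming an OpenMP-capable
compiler and with the real work replaced by a toy reduction:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    double local = 0.0, global = 0.0;

    /* (check of "provided" omitted for brevity; see the earlier sketch) */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Parallel section: many threads, but no MPI calls in here. */
    #pragma omp parallel reduction(+:local)
    {
        local += omp_get_thread_num() + 1.0;   /* stand-in for real work */
    }

    /* Between parallel sections only the main thread is executing, so
     * making MPI calls here is exactly what FUNNELED permits. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global);

    MPI_Finalize();
    return 0;
}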
Hi Damien,
Sorry for the late reply; I was trying to dig into the code and got some
information.
First of all, in your example, it's not correct to define the MPI_Info
as a pointer; it will cause an initialization violation at run time.
The message "LOCAL DAEMON SPAWN IS CURRENTLY UNSUPPORTED"
Ralph Castain wrote:
> On Mar 3, 2010, at 12:16 PM, Prentice Bisbal wrote:
>
>> Eugene Loh wrote:
>>> Prentice Bisbal wrote:
Eugene Loh wrote:
> Prentice Bisbal wrote:
>
>> Is there a limit on how many MPI processes can run on a single host?
>>
>>> Depending on which OM
Hi,
I have made a new discovery about this problem:
it seems that the size of the array that can be sent from a 32-bit to a
64-bit machine is proportional to the parameter "btl_tcp_eager_limit".
When I set it to 200,000,000 (2e08 bytes, about 190 MB), I can send an
array of up to 2e07 doubles (152 MB).
I didn't find much
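For reproduction purposes, a minimal sketch of the kind of transfer
involved: rank 0 sends an array of doubles to rank 1, N is the only knob,
and btl_tcp_eager_limit itself is an MCA parameter set outside the program
(for example on the mpirun command line with --mca btl_tcp_eager_limit
<bytes>):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 20000000   /* 2e07 doubles, roughly 152 MB */

int main(int argc, char **argv)
{
    int rank;
    double *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    buf = malloc(N * sizeof(double));
    if (buf == NULL)
        MPI_Abort(MPI_COMM_WORLD, 1);

    if (rank == 0) {
        for (int i = 0; i < N; i++)
            buf[i] = (double)i;
        MPI_Send(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d doubles\n", N);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}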
On Wed, Mar 3, 2010 at 10:24 PM, Fernando Lemos wrote:
> Is there anything I can do to provide more information about this bug?
> E.g. try to compile the code in the SVN trunk? I also have kept the
> snapshots intact, I can tar them up and upload them somewhere in case
> you guys need it. I can a