On Wed, Apr 24, 2013 at 05:01:43PM +0400, Derbunovich Andrei wrote:
> Thank you to everybody for suggestions and comments.
>
> I have used a relatively small number of nodes (4400). It looks like the
> main issue was that I didn't disable dynamic component opening in my
> Open MPI build while kee[...]
> [...] I didn't check the suggestion about using the debruijn routed
> component yet.
>
> -Andrei
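For anyone hitting the same startup problem: the build-time change Andrei describes is normally made at configure time, and the routing suggestion is a run-time MCA setting. A minimal sketch, assuming a 1.6/1.7-era Open MPI source tree (the debruijn routed component may not exist in every release, and the application name is illustrative):

    ./configure --disable-dlopen [your usual options]
    make all install

    # run-time selection of the suggested routing component, if available:
    mpirun -np 4400 --mca routed debruijn ./app

--disable-dlopen links all components directly into the Open MPI libraries, so thousands of ranks do not each have to dlopen dozens of component .so files from a shared filesystem during startup.
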
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: Tuesday, April 23, 2013 10:07 PM
To: Open MPI Users
Subject: Re: [OMPI users] OpenMPI at scale on Cray XK7
On Apr 23, 2013, at 8:45 PM, Mike Clark wrote:
> Hi,
>
> Just to follow up on this. We have managed to get OpenMPI to run at large
> scale; to do so we had to use aprun instead of using openmpi's mpirun
> command.

In general, using direct launch will be faster than going through mpirun. However, [...]

Hi,

Just to follow up on this. We have managed to get OpenMPI to run at large
scale; to do so we had to use aprun instead of using openmpi's mpirun
command.

While this has allowed us to now run at the full scale of Titan, we have
found a huge drop in MPI_Alltoall performance when running at 18K [...]

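For readers comparing the two launch modes on a Cray XK7, a sketch of what they look like; the rank counts follow the 18560-node, two-processes-per-node configuration described above, and the executable name is illustrative:

    # launch through Open MPI's own mpirun
    mpirun -np 37120 -npernode 2 ./app

    # direct launch through Cray ALPS, bypassing mpirun
    aprun -n 37120 -N 2 ./app

With direct launch, ALPS starts the MPI processes itself and Open MPI's runtime daemons are never spawned, which is why it tends to start up faster at this scale.
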
On Tue, Apr 23, 2013 at 12:21:49PM +0400, wrote:
> Hi,
>
> Nathan, could you please advise what is the expected startup time for an
> OpenMPI job at such scale (128K ranks)? I'm interested in
> 1) time from mpirun start to completion of MPI_Init()

It takes less than [...]

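One way to approximate the number being asked about is to time the launch of a program that exits as soon as MPI_Init() returns. A minimal sketch (not from the thread; the file name is illustrative):

    /* init_time.c: quits right after MPI_Init() completes, so
     * "time mpirun ... ./init_time" (or "time aprun ...") approximates
     * the interval from launcher start to MPI_Init() completion,
     * plus the cost of MPI_Finalize() and teardown. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("rank 0: MPI_Init complete\n");

        MPI_Finalize();
        return 0;
    }

The wall time reported by "time" slightly overstates the MPI_Init figure, since it also includes finalize and process teardown.
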
From: Nathan Hjelm
Sent: Tuesday, April 23, 2013 2:47 AM
To: Open MPI Users
Subject: Re: [OMPI users] OpenMPI at scale on Cray XK7

On Mon, Apr 22, 2013 at 03:17:16PM -0700, Mike Clark wrote:
> Hi,
>
> I am trying to run OpenMPI on the Cray XK7 system at Oak Ridge National Lab
> (Titan), and am running into an issue whereby MPI_Init seems to hang
> indefinitely, but this issue only arises at large scale, e.g., when running
> on 18560 compute nodes (with two MPI processes per node). The applicati[...]