Hi,
Nathan, could you please advise what the expected startup time is for an Open MPI
job at such scale (128K ranks)? I'm interested in
1) time from mpirun start to completion of MPI_Init()
2) time from MPI_Init() start to completion of MPI_Init()
From my experience, for a 52800-rank job
1) took around 2
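
For anyone who wants to collect these numbers themselves, a minimal sketch of
one way to measure them is below. It is only an illustration: JOB_LAUNCH_EPOCH
is a made-up variable that a wrapper script would have to export (e.g. the
output of date +%s) right before invoking mpirun; it is not something Open MPI
sets.

/* Sketch: measure (2) the time spent inside MPI_Init(), and, if the launcher
 * exported a hypothetical JOB_LAUNCH_EPOCH (seconds since the epoch, taken
 * just before mpirun), estimate (1) mpirun start to MPI_Init() completion. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static double now_sec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(int argc, char **argv)
{
    double t_before = now_sec();              /* just before MPI_Init() */
    MPI_Init(&argc, &argv);
    double t_after = now_sec();               /* just after MPI_Init()  */

    double init_time = t_after - t_before;
    double max_init = 0.0;
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The slowest rank determines the perceived startup time. */
    MPI_Reduce(&init_time, &max_init, 1, MPI_DOUBLE, MPI_MAX, 0,
               MPI_COMM_WORLD);

    if (rank == 0) {
        printf("max MPI_Init() time: %.3f s\n", max_init);
        const char *launch = getenv("JOB_LAUNCH_EPOCH");  /* hypothetical */
        if (launch != NULL)
            printf("mpirun start to MPI_Init() done: ~%.0f s\n",
                   t_after - atof(launch));
    }

    MPI_Finalize();
    return 0;
}

Launched along the lines of JOB_LAUNCH_EPOCH=$(date +%s) mpirun -np ... ./a.out,
the MPI_MAX reduction matters because at 128K ranks it is the slowest rank that
decides when the job actually starts doing work.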
Hi,
On 23.04.2013 at 03:39, Manee wrote:
> When I copy my Open MPI installation directory to another computer (the
> runtime files) and point PATH and LD_LIBRARY_PATH to this installed folder
> (so that mpirun points to the copied folder's bin), it does not seem to run
> (it's not supposed to run
Hi Jacky,
I'm a regular reader of this list but seldom a poster. In this case, however,
I might actually be qualified to answer some questions or provide some
insight, given that I'm not sure how many other folks here use Boost.Thread.
The first question is really what sort of threading model you want
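
To make the threading-model question concrete, here is a small sketch (my own
illustration, not anything from the original poster's code) of asking MPI for
a given threading level before any Boost.Thread or pthread threads are
created. The important part is checking the provided level, since the library
may legitimately grant less than was requested.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* MPI_THREAD_MULTIPLE: any thread may call MPI at any time.
     * Weaker levels, in order: SINGLE < FUNNELED < SERIALIZED < MULTIPLE. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            fprintf(stderr, "warning: requested MPI_THREAD_MULTIPLE, got "
                    "level %d; keep MPI calls on one thread\n", provided);
    }

    /* ... spawn worker threads here, respecting the provided level ... */

    MPI_Finalize();
    return 0;
}

Older Open MPI builds may also need thread-multiple support enabled at
configure time, which is another reason to check the provided level rather
than assume the request was honoured.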
On Tue, Apr 23, 2013 at 12:21:49PM +0400, wrote:
> Hi,
>
> Nathan, could you please advise what the expected startup time is for an Open MPI
> job at such scale (128K ranks)? I'm interested in
> 1) time from mpirun start to completion of MPI_Init()
It takes less than
On Apr 23, 2013, at 10:09 AM, Nathan Hjelm wrote:
> On Tue, Apr 23, 2013 at 12:21:49PM +0400, wrote:
>> Hi,
>>
>> Nathan, could you please advise what the expected startup time is for an Open MPI
>> job at such scale (128K ranks)? I'm interested in
>> 1) time from
On Tue, Apr 23, 2013 at 10:17:46AM -0700, Ralph Castain wrote:
>
> On Apr 23, 2013, at 10:09 AM, Nathan Hjelm wrote:
>
> > On Tue, Apr 23, 2013 at 12:21:49PM +0400, wrote:
> >> Hi,
> >>
> >> Nathan, could you please advise what the expected startup time is for
On Apr 23, 2013, at 10:45 AM, Nathan Hjelm wrote:
> On Tue, Apr 23, 2013 at 10:17:46AM -0700, Ralph Castain wrote:
>>
>> On Apr 23, 2013, at 10:09 AM, Nathan Hjelm wrote:
>>
>>> On Tue, Apr 23, 2013 at 12:21:49PM +0400, wrote:
Hi,
Nathan,
Hi,
Just to follow up on this. We have managed to get Open MPI to run at large
scale; to do so, we had to use aprun instead of Open MPI's mpirun
command.
While this has now allowed us to run at the full scale of Titan, we have
found a huge drop in MPI_Alltoall performance when running at 18K
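
In case it helps anyone reproduce the comparison between launchers, a rough
alltoall microbenchmark along these lines can isolate the collective itself;
the buffer size and iteration count are arbitrary placeholders, not values
tied to the Titan runs.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NELEMS 1024   /* doubles exchanged with each peer (placeholder) */
#define NITERS 20     /* timed iterations (placeholder) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *sendbuf = malloc((size_t)size * NELEMS * sizeof(double));
    double *recvbuf = malloc((size_t)size * NELEMS * sizeof(double));
    for (long i = 0; i < (long)size * NELEMS; i++)
        sendbuf[i] = (double)rank;

    /* Warm-up exchange so connection setup is not counted. */
    MPI_Alltoall(sendbuf, NELEMS, MPI_DOUBLE, recvbuf, NELEMS, MPI_DOUBLE,
                 MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int it = 0; it < NITERS; it++)
        MPI_Alltoall(sendbuf, NELEMS, MPI_DOUBLE, recvbuf, NELEMS,
                     MPI_DOUBLE, MPI_COMM_WORLD);
    double local = (MPI_Wtime() - t0) / NITERS;

    /* Report the slowest rank's time; that is what the application waits for. */
    double slowest;
    MPI_Reduce(&local, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("MPI_Alltoall, %d ranks, %d doubles/peer: %.6f s per call\n",
               size, NELEMS, slowest);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}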