See case #1, above.
> I mention this because your first post mentioned that you're seeing the same
> job run 4 times. This implied to me that you are running into case #2. If I
> misunderstood your problem, then ignore me and forgive the noise.
>
>
>
> On Jun 8, 201
of md.log output should I post?
>> after or before the input description?
>>
>> thanks for everything,
>> and sorry
>>
>> From: Carsten Kutzner
>> To: Open MPI Users
>> Sent: Sunday, June 6, 2010 9:51:26
>> Subject: Re: [OMPI users] Gr
Hi,
which version of Gromacs is this? Could you post the first lines of
the md.log output file?
Carsten
On Jun 5, 2010, at 10:23 PM, lauren wrote:
> sorry for my English..
>
> I want to know how I can run Gromacs in parallel!
> Because when I used
>
> mdrun &
> mpiexec -np 4 mdrun_mpi -v -d
Hi Glen,
what setup did you use for the benchmarks? I mean,
what type of Ethernet switch, which network cards, which
Linux kernel? I am asking because it seems odd to me that
the 4-CPU Open MPI job takes longer than the 2-CPU job,
while the 8-CPU job is faster again. Maybe the netw
lse). Anyway it should be fixed in the next
> nightly build/tarball.
>
> G
> On Fri, 6 Jan 2006, Carsten Kutzner wrote:
>
> > On Fri, 6 Jan 2006, Graham E Fagg wrote:
> >
> >>> Looks like the problem is somewhere in the tuned collectives?
> >>> Unfortunately I need a logfile with exactly those :(
On Fri, 6 Jan 2006, Graham E Fagg wrote:
> > Looks like the problem is somewhere in the tuned collectives?
> > Unfortunately I need a logfile with exactly those :(
> >
> > Carsten
>
> I hope not. Carsten, can you send me your configure line (not the whole
> log) and any other things you set in y
[11] func:./cpilog.x(main+0x43) [0x804f325]
[12] func:/lib/i686/libc.so.6(__libc_start_main+0xc7) [0x401eed17]
[13] func:./cpilog.x(free+0x49) [0x804f221]
...
30 additional processes aborted (not shown)
3 processes killed (possibly by Open MPI)
Looks like the problem is somewhere in the tuned collectives?
> > 128  4096  0.305030  0.121716  0.176640  0.496375 !
> > 128  8192  0.546405  0.108007  0.415272  0.899858 !
> > 128 16384  0.604844  0.056576  0.558657  0.843943 !
> > 128 32768  1.235298  0.097969  1.094720  1.451241 !
> > 128 65536  2.
On Tue, 3 Jan 2006, Anthony Chan wrote:
> MPE/MPE2 logging (or clog/clog2) does not impose any limitation on the
> number of processes. Could you explain what difficulty or error
> message you encountered when using >32 processes?
Either my program quits without writing the logfile (and without
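For context, the clog/clog2 workflow Anthony refers to typically looks like the
minimal sketch below. This is an illustration only, not code from this thread;
the state name, color, logfile name, and link line are assumptions.

/* Minimal MPE logging sketch (illustrative, not the original cpilog.x code).
 * Build e.g. with: mpicc prog.c -lmpe   (the exact link line varies by
 * MPE installation). */
#include <mpi.h>
#include <mpe.h>

int main(int argc, char *argv[])
{
    int rank, ev_start, ev_end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPE_Init_log();                          /* start event collection        */
    ev_start = MPE_Log_get_event_number();   /* grab two unused event numbers */
    ev_end   = MPE_Log_get_event_number();
    if (rank == 0)                           /* describe the state once       */
        MPE_Describe_state(ev_start, ev_end, "compute", "red");

    MPE_Log_event(ev_start, 0, NULL);        /* beginning of the timed phase  */
    /* ... the computation/communication to be profiled goes here ...         */
    MPE_Log_event(ev_end, 0, NULL);          /* end of the timed phase        */

    MPE_Finish_log("mylog");                 /* ranks merge their local logs
                                                into one clog2 file           */
    MPI_Finalize();
    return 0;
}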
nodes you need to reproduce it on or
> if you even have physical access (and opportunity) but popping in another
> decent 16-port switch for a test run might be interesting.
>
> just my .02 euros,
> Peter
>
> On Tuesday 03 January 2006 18:45, Carsten Kutzner wrote:
> > On Tue, 3
On Tue, 3 Jan 2006, Graham E Fagg wrote:
> Do you have any tools such as Vampir (or its Intel equivalent) available
> to get a timeline graph? (even Jumpshot of one of the bad cases such as
> the 128/32 for 256 floats below would help).
Hi Graham,
I have attached an slog file of an all-to-all
Hi Graham,
sorry for the long delay, I was on Christmas holidays. I wish you a Happy New
Year!
On Fri, 23 Dec 2005, Graham E Fagg wrote:
>
> > I have also tried the tuned alltoalls and they are really great!! Only for
> > very few message sizes in the case of 4 CPUs on a node one of my alltoalls
> >
On Tue, 20 Dec 2005, George Bosilca wrote:
> On Dec 20, 2005, at 3:19 AM, Carsten Kutzner wrote:
>
> >> I don't see how you deduce that adding barriers increases the
> >> congestion? It increases the latency for the all-to-all, but for me
> >
> > When I do
ng communication show no congestion for up
to 16 nodes. The problem arises in the 32 CPU case. It should not be due
to the switch, since it has 48 ports and a 96 Gbit/s backplane.
Does all this mean the congestion problem cannot be solved for
Gbit Ethernet?
Carsten
------------
Hello,
I am desperately trying to get better all-to-all performance on Gbit
Ethernet (flow control is enabled). I have been playing around with
several all-to-all schemes and been able to reduce congestion by
communicating in an ordered fashion.
E.g. the simplest scheme looks like
for (i=0; i
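The quoted loop is cut off in the archive. Below is a minimal sketch of such an
ordered scheme, assuming a contiguous float buffer with one block of
'blocksize' elements per destination rank; the function and variable names are
illustrative, not from the original post.

/* Ordered all-to-all built on MPI_Sendrecv: in step i, rank r sends its
 * block for rank (r+i)%nprocs and receives the block from rank
 * (r-i+nprocs)%nprocs, so every node exchanges data with exactly one
 * partner at a time instead of all nodes flooding the switch at once. */
#include <stddef.h>
#include <mpi.h>

static void ordered_alltoall(float *sendbuf, float *recvbuf,
                             int blocksize, MPI_Comm comm)
{
    int rank, nprocs, i;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    for (i = 0; i < nprocs; i++) {
        int dst = (rank + i) % nprocs;            /* partner I send to      */
        int src = (rank - i + nprocs) % nprocs;   /* partner I receive from */
        MPI_Sendrecv(sendbuf + (size_t)dst * blocksize, blocksize, MPI_FLOAT,
                     dst, 0,
                     recvbuf + (size_t)src * blocksize, blocksize, MPI_FLOAT,
                     src, 0, comm, MPI_STATUS_IGNORE);
    }
}

The fixed send/receive ordering gives each rank a single communication partner
per step, which is one way to keep a Gbit switch from being overrun by many
simultaneous streams, i.e. the congestion problem described above.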