Hi, here is the output with "-mca rmaps_base_verbose 10 -mca ess_base_verbose 5".
Please see the attached file.
(See attached file: output.txt)
Regards,
Tetsuya Mishima
> Hmm...try adding "-mca rmaps_base_verbose 10 -mca ess_base_verbose 5" to
> your cmd line and let's see what it thinks it found.
Hmm...try adding "-mca rmaps_base_verbose 10 -mca ess_base_verbose 5" to your
cmd line and let's see what it thinks it found.
On Dec 18, 2013, at 6:55 PM, tmish...@jcity.maeda.co.jp wrote:
> Hi, I report one more problem with openmpi-1.7.4rc1,
> which is more serious. [...]
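Pieced together from the flags quoted above, the suggested debug run would look
roughly like this (process count and executable name are assumed for
illustration):

  mpirun -np 32 -bind-to numa -mca rmaps_base_verbose 10 -mca ess_base_verbose 5 ./myapp

rmaps_base_verbose turns up logging in the process mapper and ess_base_verbose
in the environment/startup layer, which together show what topology and slot
count mpirun thinks it found.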
Hi, I report one more problem with openmpi-1.7.4rc1,
which is more serious.

For our 32-core nodes (AMD Magny-Cours based), which have
8 NUMA nodes, "-bind-to numa" does not work. Without
this option, it works. For your information, at the
bottom of this mail I added the lstopo information
of the node. [...]
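(The topology dump being referred to would come from hwloc's lstopo tool, e.g.:

  lstopo

run on one of the 32-core nodes; on a Magny-Cours machine it should show the
8 NUMA nodes with 4 cores each.)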
Yes, it's very strange. But I don't think there's any chance that
I have < 8 actual cores on the node. I guess that you can replicate
it with SLURM; please try it again.

I changed to use node10 and node11, then I got the warning against
node11.

Furthermore, just as information for you, I tried [...]
Very strange - I can't seem to replicate it. Is there any chance that you have
< 8 actual cores on node12?
On Dec 18, 2013, at 4:53 PM, tmish...@jcity.maeda.co.jp wrote:
> Hi Ralph, sorry for confusing you. [...]
Hi Ralph, sorry for confusing you.

At that time, I cut and pasted the part of "cat $PBS_NODEFILE".
I guess I didn't paste the last line by mistake.
I retried the test, and the output below is exactly what I got:
[mishima@manage ~]$ qsub -I -l nodes=node11:ppn=8+node12:ppn=8
qsub: [...]
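For reference, with nodes=node11:ppn=8+node12:ppn=8 Torque writes each hostname
into $PBS_NODEFILE once per requested processor, so the complete file should be
16 lines (prompt hostname assumed):

  [mishima@node11 ~]$ cat $PBS_NODEFILE
  node11        (8 lines)
  node12        (8 lines)

Dropping the final node12 line in a paste makes it look like only 7 slots were
allocated there, which is exactly the situation the warning below complains
about.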
This was, in fact, a primary point of discussion at last week's OMPI
developer's conference. Bottom line is that we are only a little further along
than we used to be, but are focusing on improving it. You'll find good thread
support for some transports (some of the MTLs and at least the TCP BTL). [...]
I removed the debug in #2 - thanks for reporting it.

For #1, it actually looks to me like this is correct. If you look at your
allocation, there are only 7 slots being allocated on node12, yet you have
asked for 8 cpus to be assigned (2 procs with 2 cpus/proc). So the warning is
in fact correct.
On Dec 18, 2013, at 7:04 PM, tmish...@jcity.maeda.co.jp wrote:
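To make the slot arithmetic concrete: with -cpus-per-proc 2, every process
reserves 2 slots on its node, so a node contributing only 7 slots can satisfy
at most 3 such processes (6 slots), and any mapping that needs more cpus on a
node than it has slots triggers this warning. A hypothetical invocation of the
shape under discussion (process count and executable assumed):

  mpirun -np 8 -cpus-per-proc 2 ./myprog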
> 3) I use the PGI compiler. It cannot accept the compiler switch
> "-Wno-variadic-macros", which is included in the configure script:
>
> btl_usnic_CFLAGS="-Wno-variadic-macros"
Yoinks. I'll fix (that flag is only intended for our private copy of v1.6 --
[...]
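If anyone wants to confirm the failure mode, a quick check of whether a
compiler accepts the switch (assuming pgcc is the PGI C compiler in question;
the file name is illustrative):

  echo 'int main(void){ return 0; }' > conftest.c
  pgcc -Wno-variadic-macros -c conftest.c

gcc accepts the flag silently, while PGI should reject it as an unknown switch,
which is why hard-coding btl_usnic_CFLAGS breaks PGI builds.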
Hi Ralph, I found that openmpi-1.7.4rc1 was already uploaded, so I'd like
to report 3 issues, mainly regarding -cpus-per-proc.

1) When I use 2 nodes (node11, node12), which have 8 cores each (= 2 sockets x
4 cores/socket), it starts to produce the error again as shown below. At least,
openmpi-1.7.4a1 [...]
I was wondering if the FAQ entry below is considered current opinion or perhaps
a little stale. Is multi-threading still considered to be 'lightly tested'?
Are there known open bugs?
Thank you,
Ed
7. Is Open MPI thread safe?
Support for MPI_THREAD_MULTIPLE (i.e., multiple threads executing [...]
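For anyone who wants to test thread support directly, here is a minimal sketch
of requesting the full level and checking what the library actually grants
(plain standard MPI; nothing Open MPI specific is assumed):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int provided;

      /* Ask for the highest thread level; 'provided' reports what we got. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

      if (provided < MPI_THREAD_MULTIPLE)
          printf("MPI_THREAD_MULTIPLE unavailable, provided level = %d\n",
                 provided);

      MPI_Finalize();
      return 0;
  }

A build without thread support will typically report a lower level such as
MPI_THREAD_SINGLE here; in the 1.7 series the multiple-threads level has to be
enabled at configure time (--enable-mpi-thread-multiple, if I remember the
flag correctly).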
Hi Jeff,

I ran the test with processor binding enabled, using both openmpi-1.7.3
and 1.7.4rc1, but I got the same results as with no binding.

In addition, the core mapping of 1.7.4rc1 seems to be strange, although it
has no relation to the tcp slowdown.
Regards,
Tetsuya Mishima
[mishima@node08 OMB-3.1.1]$ mpirun -V
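(A more direct way to inspect the mapping than timing runs: Open MPI's
-report-bindings option prints where every rank is bound, e.g.

  [mishima@node08 OMB-3.1.1]$ mpirun -np 2 -report-bindings ./osu_bw

the benchmark binary is a guess based on the OMB-3.1.1 directory in the
prompt.)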
Ah, got it! Thanks
-- Sid
On 18 December 2013 07:44, Jeff Squyres (jsquyres) wrote:
> On Dec 14, 2013, at 8:02 AM, Siddhartha Jana wrote:
>
> > Is there a preferred method/tool among developers of MPI-library for
> > checking the count of the packets transmitted by the network card during
> > two-sided [...]
Hi,
expanding on Noam's problem a bit ...
On Wed, Dec 18, 2013 at 10:19:25AM -0500, Noam Bernstein wrote:
> Thanks to all who answered my question. The culprit was an interaction
> between 1.7.3 not supporting mpi_paffinity_alone (which we were using
> previously) and the new kernel. Switching [...]
On Wed, 2013-12-18 at 11:47 -0500, Noam Bernstein wrote:
> Yes - I never characterized it fully, but we attached with gdb to every
> single running vasp process, and all were stuck in the same
> call to MPI_Allreduce() every time. It's only happening on rather large
> jobs, so it's not the easiest [...]
My program uses MPI together with OpenMP, and it is a sample program that
takes a lot of memory. I don't know how much RAM an MPI program normally
consumes, and I want to know whether MPI consumes a lot of memory when it is
used together with OpenMP, or whether I am doing something wrong. To measure
the RAM usage of my program I used the file /proc/id_p [...]
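For what it's worth, on Linux the per-process memory numbers live under /proc;
since the exact path is cut off above, here is a minimal sketch assuming
/proc/self/status, whose VmRSS field is the resident memory of the current
process:

  #include <stdio.h>
  #include <string.h>

  /* Print the VmRSS (resident set size) line of the current process. */
  int main(void)
  {
      FILE *f = fopen("/proc/self/status", "r");
      char line[256];

      if (f == NULL)
          return 1;

      while (fgets(line, sizeof line, f) != NULL)
          if (strncmp(line, "VmRSS:", 6) == 0)
              fputs(line, stdout);

      fclose(f);
      return 0;
  }

Keep in mind that each MPI rank is a separate process with its own VmRSS,
while OpenMP threads share one address space, so hybrid totals must be summed
across ranks only.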
On Dec 18, 2013, at 10:32 AM, Dave Love wrote:
> Noam Bernstein writes:
>
>> We specifically switched to 1.7.3 because of a bug in 1.6.4 (lock up in some
>> collective communication), but now I'm wondering whether I should just test
>> 1.6.5.
>
> What bug, exactly? As you mentioned vasp, is it specifically affecting
> that? [...]
hwloc-ps (and lstopo --top) are better at showing process binding but they lack
a nice pseudographical interface with dynamic refresh.
htop uses hwloc internally iirc, so there's hope we'll have everything needed
in htop one day ;)
Brice
Dave Love wrote:

> John Hearns writes:
>
> > 'Htop' is [...]
John Hearns writes:
> 'Htop' is a very good tool for looking at where processes are running.
I'd have thought hwloc-ps is the tool for that.
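(For concreteness, the invocations being compared would be something like:

  hwloc-ps          # list processes that are bound to part of the machine
  lstopo --top      # draw the topology with bound processes overlaid

both ship with hwloc.)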
Noam Bernstein writes:
> We specifically switched to 1.7.3 because of a bug in 1.6.4 (lock up in some
> collective communication), but now I'm wondering whether I should just test
> 1.6.5.
What bug, exactly? As you mentioned vasp, is it specifically affecting
that?
We have seen apparent deadlocks [...]
Thanks to all who answered my question. The culprit was an interaction between
1.7.3 not supporting mpi_paffinity_alone (which we were using previously) and
the new kernel. Switching to --bind-to core (actually the environment variable
OMPI_MCA_hwloc_base_binding_policy=core) fixed the problem.
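The two spellings are equivalent; one lives on the command line, the other in
the environment. A sketch, with process count and executable assumed:

  mpirun -np 16 -bind-to core ./vasp

  export OMPI_MCA_hwloc_base_binding_policy=core
  mpirun -np 16 ./vasp

The environment-variable form is handy when the mpirun invocation is buried
inside a batch script you'd rather not edit.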
On Dec 14, 2013, at 8:02 AM, Siddhartha Jana wrote:
> Is there a preferred method/tool among developers of MPI-library for checking
> the count of the packets transmitted by the network card during two-sided
> communication?
>
> Is the use of
> iptables -I INPUT -i eth0
> iptables -I OUTPUT -o [...]
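A lighter-weight alternative to firewall counters, for what it's worth: the
kernel already keeps per-interface statistics, so sampling them before and
after the run avoids touching iptables at all:

  ip -s link show eth0        # or: cat /proc/net/dev

The difference between two samples gives the packets/bytes moved during the
run (plus any unrelated background traffic on that interface).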
Can you re-run these tests with processor binding enabled?
On Dec 16, 2013, at 6:36 PM, tmish...@jcity.maeda.co.jp wrote:
>
> Hi,
>
> I usually use an infiniband network, where openmpi-1.7.3 and 1.6.5 work fine.
>
> The other day, I had a chance to use a tcp network (1GbE) and I noticed that
> [...]