Thank you. Anyway, your email contains a good amount of info.
Saliya
On Wed, Feb 26, 2014 at 7:48 PM, Ralph Castain wrote:
I did one "chapter" of it on Jeff's blog and probably should complete it.
Definitely need to update the FAQ for the new options.
Sadly, outside of that and the mpirun man page, there isn't much available yet.
I'm woefully far behind on it.
On Feb 26, 2014, at 4:47 PM, Saliya Ekanayake wrote:
Thank you Ralph, this is very insightful and I think it will help me better
understand the performance of our application.
If I may ask, is there a document describing these affinity options? I've
been looking at the tuning FAQ and Jeff's blog posts.
Thank you,
Saliya
On Wed, Feb 26, 2014 at 7:34 PM, Ralph Castain wrote:
On Feb 26, 2014, at 4:29 PM, Saliya Ekanayake wrote:
Yes, that would be the best solution if you have 4 cores in each socket.
I see, so if I understand correctly, the best scenario for threads would be
to bind 2 procs to sockets with --map-by socket:pe=4 and use 4 threads in
each proc.
Also, since you mentioned binding threads to get memory locality, I guess
this has to be done at the application level and is not an option in OMP
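To make the launch line being discussed concrete, here is a minimal sketch, assuming a node with 2 sockets of 4 cores each and a hypothetical binary named ./app (the executable name and thread-count mechanism are illustrative, not from the thread):

```shell
# Assumes a 2-socket x 4-core node and a hypothetical binary ./app.
# Launch 2 ranks, one per socket, each bound to 4 cores (pe=4);
# each rank then runs 4 threads on its own socket's cores.
export OMP_NUM_THREADS=4    # thread count for OpenMP-style runtimes
mpirun -np 2 --map-by socket:pe=4 ./app
```

Adding --report-bindings to the mpirun line prints where each rank was bound, which is a quick way to verify the mapping took effect.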
Sorry, had to run some errands.
On Feb 26, 2014, at 1:03 PM, Saliya Ekanayake wrote:
In that scenario, you would need to either run with --bind-to none or bind
the proc to all 8 cores so its threads can use both sockets.
Is it possible to bind to cores of multiple sockets? Say I have a machine
with 2 sockets, each with 4 cores; if I run 8 threads with 1 proc, can I
utilize all 8 cores for the 8 threads?
Thank you for the speedy replies
Saliya
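A minimal sketch of the two ways to let one rank's threads span both sockets in this scenario, again assuming a 2-socket x 4-core node and a hypothetical binary ./app:

```shell
# Assumes a 2-socket x 4-core node and a hypothetical binary ./app.
# Option 1: one rank, unbound, so its 8 threads are free to use all 8 cores.
mpirun -np 1 --bind-to none ./app

# Option 2: keep binding, but widen the single rank's binding to 8 cores.
mpirun -np 1 --map-by slot:pe=8 ./app
```

Option 2 keeps the proc pinned to a known set of cores (useful for memory locality), while option 1 leaves placement entirely to the OS scheduler.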
On Wed, Feb 26, 2014 at 3:21 PM, Ralph Castain wrote:
On Feb 26, 2014, at 12:17 PM, Saliya Ekanayake wrote:
I have a follow-up question on this. In our application we have parallel for
loops similar to OMP parallel for. I noticed that in order to gain speedup
with threads I have to set --bind-to none; otherwise multiple threads will
bind to the same core, giving no increase in performance.
Thank you Ralph, I'll check this.
On Wed, Feb 26, 2014 at 10:04 AM, Ralph Castain wrote:
It means that OMPI didn't get built against libnuma, and so we can't ensure
that memory is being bound local to the proc binding. Check to see if numactl
and numactl-devel are installed, or you can turn off the warning using "-mca
hwloc_base_mem_bind_failure_action silent"
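Putting the suggested MCA parameter into a full launch line, with ./app standing in for the user's actual MPI Java invocation (the rank count is illustrative):

```shell
# Suppress the memory-binding warning while keeping process binding;
# ./app stands in for the actual MPI Java launch line from the report.
mpirun -np 8 --bind-to core \
    -mca hwloc_base_mem_bind_failure_action silent ./app
```

Note this only silences the warning; installing numactl and numactl-devel and rebuilding Open MPI against libnuma is what actually enables local memory binding.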
On Feb 25, 2014, Saliya Ekanayake wrote:
Hi,
I tried to run an MPI Java program with --bind-to core. I receive the
following warning and wonder how to fix this.
WARNING: a request was made to bind a process. While the system
supports binding the process itself, at least one node does NOT
support binding memory to the process location.