> ... if it helps. With threads, you don't really want to bind to a core,
> but binding to a socket should help. Try adding --bind-to-socket to your
> mpirun cmd line (you can't do this if you run it as a singleton - you
> have to use mpirun).
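For reference, a hypothetical invocation along the lines of that advice,
assuming two MPI ranks per node and the binary name used later in this thread:

mpirun -np 2 --bind-to-socket ./my_hybrid_app ...

Adding --report-bindings (if the installed Open MPI supports it) prints where
each rank ends up bound; newer Open MPI releases (1.7 and later) spell the
option "--bind-to socket" instead.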
>
>
> On Oct 25, 2011, at 2:45 AM, 吕慧伟 wrote:
> ... to account?
>
>
> On Oct 24, 2011, at 8:47 PM, 吕慧伟 wrote:
>
> No. There's a difference between "mpirun -np 1 ./my_hybrid_app..."
> and "mpirun -np 2 ./...".
>
> Running "mpirun -np 1 ./my_hybrid_app..." improves performance as I add
> more threads, but running "mpirun -np 2 ./..." makes it worse.
No. There's a difference between "mpirun -np 1 ./my_hybrid_app..." and "mpirun
-np 2 ./...".
Running "mpirun -np 1 ./my_hybrid_app..." improves performance as I add more
threads, but running "mpirun -np 2 ./..." makes it worse.
--
Huiwei Lv
On Tue, Oct 25, 2011 at 12:00 AM, ... wrote:
Dear List,
I have a hybrid MPI/Pthreads program named "my_hybrid_app"; the program is
memory-intensive and takes advantage of multi-threading to improve memory
throughput. I run "my_hybrid_app" on two machines, which have the same hardware
configuration but different OS and GCC versions. The problem is: when I ...
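For context, a minimal sketch of the kind of hybrid MPI/Pthreads structure
described above; the thread count, array size, and streaming worker are
assumptions, not the poster's actual code:

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 4            /* assumption: threads per MPI process */
#define SLICE (1 << 20)          /* assumption: doubles touched per thread */

/* Each thread streams through its own slice of a large array,
   i.e. memory-bandwidth-bound work. */
static void *worker(void *arg)
{
    double *slice = (double *)arg;
    for (int i = 0; i < SLICE; i++)
        slice[i] = slice[i] * 1.0001 + 1.0;
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    /* Only the main thread calls MPI, so FUNNELED is sufficient. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *data = calloc((size_t)NUM_THREADS * SLICE, sizeof(double));
    pthread_t tid[NUM_THREADS];

    for (int t = 0; t < NUM_THREADS; t++)
        pthread_create(&tid[t], NULL, worker, data + (size_t)t * SLICE);
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(tid[t], NULL);

    printf("rank %d done\n", rank);
    free(data);
    MPI_Finalize();
    return 0;
}

Built with something like "mpicc -O2 -pthread hybrid.c -o my_hybrid_app".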
> >> ... I'm not entirely sure that XRC is supported on OMPI 1.4, but I'm
> >> sure it is on later versions of the 1.4 series (1.4.3).
> >>
> >> BTW, I do know that the command line is extremely user friendly
> >> and completely intuitive... :-)
> >> I'll have a ...
Dear all,
I have encountered a problem concerning running large jobs on an SMP cluster
with Open MPI 1.4.
The application needs all-to-all communication: each process sends messages to
all other processes via MPI_Isend. It runs fine with 256 processes; the problem
occurs when the process count is >= 512.
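For reference, a minimal sketch of the communication pattern described above,
with each rank posting a nonblocking receive from and a nonblocking send to
every other rank; the message size and tag are assumptions and may differ from
the actual application:

#include <mpi.h>
#include <stdlib.h>

#define MSG_COUNT 1024           /* assumption: doubles per message */

int main(int argc, char **argv)
{
    int rank, size, nreq = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *sendbuf = calloc((size_t)size * MSG_COUNT, sizeof(double));
    double *recvbuf = calloc((size_t)size * MSG_COUNT, sizeof(double));
    MPI_Request *reqs = malloc(2 * (size_t)size * sizeof(MPI_Request));

    /* Post a receive from and a send to every other rank. */
    for (int p = 0; p < size; p++) {
        if (p == rank) continue;
        MPI_Irecv(recvbuf + (size_t)p * MSG_COUNT, MSG_COUNT, MPI_DOUBLE,
                  p, 0, MPI_COMM_WORLD, &reqs[nreq++]);
        MPI_Isend(sendbuf + (size_t)p * MSG_COUNT, MSG_COUNT, MPI_DOUBLE,
                  p, 0, MPI_COMM_WORLD, &reqs[nreq++]);
    }
    MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);

    free(sendbuf); free(recvbuf); free(reqs);
    MPI_Finalize();
    return 0;
}

With N ranks this keeps roughly 2*(N-1) requests outstanding per process before
the MPI_Waitall, so per-process resource usage grows with the job size, which is
one reason such a pattern can behave differently at 512 processes than at 256.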