Hi everyone, I've been having a pretty odd issue with Slurm and
Open MPI over the last few days. I just set up a heterogeneous cluster with
Slurm consisting of P4 32-bit machines and a few new i7 64-bit
machines, all running the latest version of Ubuntu Linux. I compiled
the latest Open MPI 1.3.3 with the
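For a mixed 32-bit/64-bit setup like this, a small check program makes it easy to confirm which architecture each rank actually lands on. The following is only a minimal sketch; using sizeof(void*) as the 32/64-bit indicator is just a convenient choice, not something taken from the post above.

// arch_check.cpp -- each rank reports its host and pointer size, which makes it
// easy to see whether 32-bit and 64-bit nodes are mixed in the same job.
// Build with: mpic++ arch_check.cpp -o arch_check
// Run e.g.:   mpirun -np 8 ./arch_check   (or via salloc/srun under Slurm)
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char name[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(name, &len);

    // sizeof(void*) is 4 on the 32-bit P4 nodes and 8 on the 64-bit i7 nodes.
    std::printf("rank %d of %d on %s: %zu-byte pointers\n",
                rank, size, name, sizeof(void*));

    MPI_Finalize();
    return 0;
}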
On Fri, 28 Aug 2009 10:16 -0700, "Eugene Loh" wrote:
Big topic and actually the subject of much recent discussion. Here are
a few comments:
1) "Optimally" depends on what you're doing. A big issue is making
sure each MPI process gets as much memory bandwidth (and cache and other
shared resources) as possible. This would argue that processes
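A quick way to see how ranks are actually placed with respect to those shared resources is to have each rank print its CPU affinity. The sketch below is Linux-only and is just one possible way to check this, not something prescribed in the discussion above.

// affinity_check.cpp -- each MPI rank prints the set of cores it is allowed to
// run on, which shows whether ranks are spread across sockets or packed together.
// Linux-specific: uses sched_getaffinity(). Build with: mpic++ affinity_check.cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cpu_set_t mask;
    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
        std::perror("sched_getaffinity");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    // List every core index present in this rank's affinity mask.
    std::printf("rank %d may run on cores:", rank);
    for (int cpu = 0; cpu < CPU_SETSIZE; ++cpu)
        if (CPU_ISSET(cpu, &mask))
            std::printf(" %d", cpu);
    std::printf("\n");

    MPI_Finalize();
    return 0;
}

Output from different ranks may interleave, but it is usually enough to tell whether the launcher pinned processes at all and how they are spread out.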
Hello all.
I apologize if this has been addressed in the FAQ or on the mailing
list, but I spent a fair amount of time searching both and found no
direct answers.
I use Open MPI, currently version 1.3.2, on an 8-way quad-core AMD
Opteron machine, so 32 cores in total. The computer runs a modern
Well, from what I know, Boost.MPI contains only MPI-1 functions
(but refer to the Boost mailing list for support:
http://lists.boost.org/mailman/listinfo.cgi/boost-users);
so intercommunicators are not managed by the Boost.MPI library, and you
have to use the standard MPI functions.
So, by now, I th
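As an illustration of mixing the two, a spawned child could obtain its intercommunicator with plain MPI calls, merge it into an ordinary intracommunicator, and only then hand it to Boost.MPI. This is only a sketch under those assumptions; the merge ordering and the string payload are placeholders.

// child.cpp -- sketch of the spawned side: the intercommunicator is handled with
// plain MPI calls, then merged and attached to a boost::mpi::communicator so that
// Boost serialization can still be used for the actual messages.
#include <boost/mpi.hpp>
#include <boost/serialization/string.hpp>
#include <iostream>
#include <string>

namespace mpi = boost::mpi;

int main(int argc, char** argv) {
    mpi::environment env(argc, argv);   // calls MPI_Init / MPI_Finalize

    // Plain MPI: get the intercommunicator connecting us to the parent job.
    MPI_Comm parent = MPI_COMM_NULL;
    MPI_Comm_get_parent(&parent);
    if (parent == MPI_COMM_NULL) {
        std::cerr << "not started via MPI_Comm_spawn\n";
        return 1;
    }

    // Plain MPI: merge parents and children into one ordinary intracommunicator
    // (high = 1 here, so the parent ranks come first in the merged ordering).
    MPI_Comm merged = MPI_COMM_NULL;
    MPI_Intercomm_merge(parent, /*high=*/1, &merged);

    // Attach the merged communicator to Boost.MPI without transferring ownership.
    mpi::communicator all(merged, mpi::comm_attach);

    // From here on Boost.MPI works as usual; receiving a std::string from rank 0
    // is just a placeholder for whatever serializable type the parent sends.
    std::string msg;
    all.recv(0, /*tag=*/0, msg);
    std::cout << "child rank " << all.rank() << " got: " << msg << "\n";

    MPI_Comm_free(&merged);
    return 0;
}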
Greetings all,
I wanted to send some complex user-defined types between MPI processes
and found out that Boost.MPI is quite easy to use for my requirement. So
far it has worked well, and I received my object model in every process
without problems.
Now I am going to spawn processes (using MPI_Comm_
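Spawning normally goes through MPI_Comm_spawn on the parent side; combined with a Boost-serializable type it could look roughly like the sketch below. The child executable name, the process count, the tag, and the merge ordering are assumptions for illustration, and the spawned program has to receive the matching type on its side.

// parent.cpp -- sketch: spawn children with plain MPI_Comm_spawn, merge the
// resulting intercommunicator, then send a user-defined (Boost-serializable)
// object to each child through Boost.MPI.
#include <boost/mpi.hpp>
#include <boost/serialization/string.hpp>
#include <string>

namespace mpi = boost::mpi;

// A user-defined type made transferable through Boost.MPI via Boost.Serialization.
struct Job {
    int id;
    std::string description;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & id;
        ar & description;
    }
};

int main(int argc, char** argv) {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    // Plain MPI: spawn two copies of "./child" ("./child" and the count of 2 are
    // placeholders; this call is collective over the parent communicator).
    MPI_Comm intercomm = MPI_COMM_NULL;
    MPI_Comm_spawn(const_cast<char*>("./child"), MPI_ARGV_NULL, 2,
                   MPI_INFO_NULL, /*root=*/0, MPI_COMM_WORLD,
                   &intercomm, MPI_ERRCODES_IGNORE);

    // Plain MPI: merge the two groups into one intracommunicator. With high = 0
    // here (and high = 1 in the children), parent ranks come first.
    MPI_Comm merged = MPI_COMM_NULL;
    MPI_Intercomm_merge(intercomm, /*high=*/0, &merged);
    mpi::communicator all(merged, mpi::comm_attach);

    // Back in Boost.MPI land: rank 0 sends a Job to every spawned child.
    if (all.rank() == 0) {
        Job job = {42, "example payload"};
        for (int r = world.size(); r < all.size(); ++r)
            all.send(r, /*tag=*/0, job);
    }

    MPI_Comm_free(&merged);
    return 0;
}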