On Oct 21, 2013, at 12:25 PM, Patrick Begou wrote:
> kareline (front-end) is a R720XD and the nodes are C6100 sleds from DELL.
> Everything is running Rocks-Cluster (based on RHEL6).
Are these AMD- or Intel-based systems? (I don't follow the model/series of
non-Cisco servers, sorry...)
> The …
Jeff Squyres (jsquyres) wrote:
Can you manually install a recent version of hwloc
(http://www.open-mpi.org/projects/hwloc/) on kareline, and run lstopo on it?
Send the output here.
What kind of machine is kareline?
On Oct 21, 2013, at 11:09 AM, Patrick Begou wrote:
> kareline (front-end) is a R720XD…
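(For reference, a hedged sketch of the steps Jeff is asking for; the version placeholder and install prefix are illustrative choices, not from the thread:

  tar xf hwloc-x.y.z.tar.gz && cd hwloc-x.y.z
  ./configure --prefix=$HOME/hwloc && make && make install
  $HOME/hwloc/bin/lstopo

lstopo prints the socket/core/cache topology that hwloc detects on kareline, which is what the question is after.)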
Can you manually install a recent version of hwloc
(http://www.open-mpi.org/projects/hwloc/) on kareline, and run lstopo on it?
Send the output here.
What kind of machine is kareline?
On Oct 21, 2013, at 11:09 AM, Patrick Begou wrote:
> Thanks Ralph for this answer. Maybe I wasn't very clear (my English is not so good...)
Thanks Ralph for this answer. Maybe I wasn't very clear (my English is not so
good...)
I do not want binding-to-core to be the default. For hybrid codes (OpenMP +
MPI) I need binding to the socket. But at the moment I am unable to request the
--bind-to-core option:
[begou@kareline OARTEST]$ …
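(A hedged aside: in the 1.7 series the affinity flags were reworked, so binding is requested through a generic option rather than the 1.6-era --bind-to-core/--bind-to-socket switches. Assuming that syntax, a hybrid run bound to sockets would look like:

  mpirun --map-by socket --bind-to socket -np 4 ./hybrid_app

where ./hybrid_app stands in for the actual OpenMP + MPI binary.)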
Hi,
On 15:58 Mon 21 Oct, MM wrote:
> Would you suggest modifying the loop to do an MPI_Isend after x iterations
> (for the clients) and an MPI_Irecv on the root?
Sounds good. Don't forget to call MPI_Cancel for all pending status-update
communications (MPI_Isend and MPI_Irecv).
Best
-Andreas
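(To make Andreas's suggestion concrete, here is a minimal C sketch, not from the thread: a client posts non-blocking progress updates to rank 0 and cancels a still-pending one before finalizing. PROGRESS_TAG, the update interval, and the iteration count are illustrative; rank 0 is assumed to run a matching receive loop, sketched further down.

  #include <mpi.h>

  #define PROGRESS_TAG 42   /* illustrative; must match the root's tag */

  int main(int argc, char **argv)
  {
      int rank, iter, done;
      int progress = 0;
      MPI_Request update_req = MPI_REQUEST_NULL;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      for (iter = 0; iter < 1000; ++iter) {
          /* ... the real per-iteration work goes here ... */

          if (rank != 0 && iter % 100 == 0) {
              /* Reuse the buffer only once the previous send completed. */
              done = 1;
              if (update_req != MPI_REQUEST_NULL)
                  MPI_Test(&update_req, &done, MPI_STATUS_IGNORE);
              if (done) {
                  progress = iter;
                  MPI_Isend(&progress, 1, MPI_INT, 0, PROGRESS_TAG,
                            MPI_COMM_WORLD, &update_req);
              }
          }
      }

      /* Andreas's point: cancel an update that is still pending. */
      if (update_req != MPI_REQUEST_NULL) {
          MPI_Cancel(&update_req);
          MPI_Wait(&update_req, MPI_STATUS_IGNORE);
      }

      MPI_Finalize();
      return 0;
  }
)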
On 21 October 2013 15:19, Andreas Schäfer wrote:
> Hi,
>
> the solution depends on the details of your code. Will all clients
> send their progress updates simultaneously? Are you planning for few
> or many nodes?
>
> For few nodes and non-simultaneous updates you could loop on the root
> while receiving from MPI_ANY. Clients could send out their updates via
> MPI_Isend…
We never set binding "on" by default, and there is no configure option that
will do so. Never has been, to my knowledge.
If you truly want it to bind by default, then you need to add that directive to
your default MCA param file:
/etc/openmpi-mca-params.conf
On Oct 21, 2013, at 3:17 AM, Patrick Begou wrote: …
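(Concretely, and matching the parameter that ompi_info reports further down in this thread, the directive in /etc/openmpi-mca-params.conf would presumably be a single line such as:

  hwloc_base_binding_policy = core

with "socket" instead of "core" for the hybrid OpenMP + MPI case; treat the exact value names as something to verify against ompi_info -a.)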
Hi,
the solution depends on the details of your code. Will all clients
send their progress updates simultaneously? Are you planning for few
or many nodes?
For few nodes and non-simultaneous updates you could loop on the root
while receiving from MPI_ANY. Clients could send out their updates via
MPI_Isend…
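(A minimal C sketch of the root-side loop Andreas describes, with illustrative names and tag, not from the thread: rank 0 keeps one MPI_Irecv posted with MPI_ANY_SOURCE and polls it with MPI_Test between chunks of its own work.

  #include <mpi.h>
  #include <stdio.h>

  #define PROGRESS_TAG 42   /* illustrative; must match the clients' tag */

  /* Call on rank 0, between MPI_Init and MPI_Finalize. */
  void root_poll_updates(void)
  {
      int update, step, flag;
      MPI_Request req;
      MPI_Status status;

      /* Keep one receive posted for an update from any client. */
      MPI_Irecv(&update, 1, MPI_INT, MPI_ANY_SOURCE, PROGRESS_TAG,
                MPI_COMM_WORLD, &req);

      for (step = 0; step < 1000; ++step) {
          /* ... rank 0's own share of the work ... */

          flag = 0;
          MPI_Test(&req, &flag, &status);
          if (flag) {
              printf("rank %d reports progress %d\n",
                     status.MPI_SOURCE, update);
              /* Re-post so the next update can be received. */
              MPI_Irecv(&update, 1, MPI_INT, MPI_ANY_SOURCE, PROGRESS_TAG,
                        MPI_COMM_WORLD, &req);
          }
      }

      /* The last receive is usually still pending: cancel it. */
      MPI_Cancel(&req);
      MPI_Wait(&req, MPI_STATUS_IGNORE);
  }
)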
Hello,
I have an n-variable function optimization task that I programmed with a
scatter: each MPI process evaluates my function in part of the space, then
a reduce gets the maximum at the root process. Most wall time is spent in
the function evaluations done inside every MPI process.
I would lik…
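(A minimal C sketch of the pattern MM describes; the objective function, sizes, and names are illustrative, not from the thread: scatter the sampled space, evaluate locally, and reduce the per-rank maxima with MPI_MAX.

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Illustrative objective; the real one is application-specific. */
  static double f(double x) { return -(x - 3.0) * (x - 3.0); }

  int main(int argc, char **argv)
  {
      int rank, size, i;
      const int chunk = 1000;           /* evaluation points per rank */
      double *points = NULL, *mine;
      double local_max, global_max, v;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (rank == 0) {                  /* root samples the whole space */
          points = malloc((size_t)size * chunk * sizeof(double));
          for (i = 0; i < size * chunk; ++i)
              points[i] = 10.0 * i / (size * chunk);
      }

      mine = malloc(chunk * sizeof(double));
      MPI_Scatter(points, chunk, MPI_DOUBLE,
                  mine, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

      /* Dominant cost: the local function evaluations. */
      local_max = f(mine[0]);
      for (i = 1; i < chunk; ++i) {
          v = f(mine[i]);
          if (v > local_max) local_max = v;
      }

      MPI_Reduce(&local_max, &global_max, 1, MPI_DOUBLE,
                 MPI_MAX, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("maximum found: %f\n", global_max);

      free(mine);
      free(points);
      MPI_Finalize();
      return 0;
  }
)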
I am compiling Open MPI 1.7.3 and 1.7.2 with GCC 4.8.1, but I'm unable to
activate a binding policy at compile time.
ompi_info -a shows:
MCA hwloc: parameter "hwloc_base_binding_policy" (current value: "", data
source: default, level: 9 dev/all, type: string)
Policy f…
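(The empty current value is consistent with Ralph's answer above: the policy is chosen at run time, not at configure time. As a hedged illustration, any MCA parameter can also be set per run on the command line or via the environment:

  mpirun --mca hwloc_base_binding_policy core -np 4 ./app
  export OMPI_MCA_hwloc_base_binding_policy=core

where ./app is a placeholder for the real binary.)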
Hi Dave,
Is it MPI_ALLTOALL or MPI_ALLTOALLV that runs slower? If it is the latter,
the reason could be that the default implementation of MPI_ALLTOALLV in
1.6.5 is different from that in 1.5.4. To switch back to the previous one,
use:
--mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_alltoal…
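(Presumably the truncated parameter is coll_tuned_alltoallv_algorithm, an assumption to verify with ompi_info. The full command would then look something like:

  mpirun --mca coll_tuned_use_dynamic_rules 1 \
         --mca coll_tuned_alltoallv_algorithm 1 -np 16 ./app

where ./app and the algorithm number are placeholders; ompi_info -a lists the valid algorithm values.)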