Hi,
I am currently configuring a GPU cluster. The cluster has 8 K20 GPUs per
node on two sockets, 4 PCIe buses (2 K20 per bus, 4 K20 per socket), with
a single QDR InfiniBand card on each node. We have the latest NVIDIA
drivers and CUDA 6.0.
I am wondering if someone could tell me if all the de
Hmmm...okay, good news and bad news :-)
Good news: this works fine on 1.8, so I'd suggest updating to that release
series (either 1.8.1 or the nightly 1.8.2)
Bad news: if one proc is going to exit without calling Finalize, they all need
to do so else you will hang in Finalize. The problem is th
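For illustration, a minimal sketch of my own (not code from the thread): under
that rule, the "exit without Finalize" pattern only works if every rank takes
the same path, e.g.:

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* ... application work ... */

    /* Every rank exits without calling MPI_Finalize. Per the note above,
     * this is only safe if *all* ranks do it; if some ranks called
     * MPI_Finalize instead, they could hang there waiting for the rest. */
    exit(0);
}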
I'll check to see - should be working
On May 23, 2014, at 8:07 AM, Iván Cores González wrote:
>> I assume you mean have them exit without calling MPI_Finalize ...
>
> Yes, that's my idea: exit some processes while the others continue. I am
> trying to use the "orte_allowed_exit_without_sync"
Very useful!
Thank you Ralph.
Albert
On Fri 23 May 2014 15:26:58 BST, Ralph Castain wrote:
On May 23, 2014, at 7:14 AM, Albert Solernou
wrote:
Well,
the problem is that I don't know how to do any of these things
Ah! You might want to read this:
http://www.open-mpi.org/faq/?category=tuni
> I assume you mean have them exit without calling MPI_Finalize ...
Yes, that's my idea: exit some processes while the others continue. I am trying
to use the "orte_allowed_exit_without_sync" flag in the following code (note
that the code is different):
int main( int argc, char *argv[] )
{
M
On May 23, 2014, at 7:21 AM, Iván Cores González wrote:
> Hi Ralph,
> Thanks for your response.
> I see your point. I tried to change the algorithm, but some processes finish
> while the others are still calling MPI functions. I can't avoid this
> behaviour.
> The ideal behavior is the processe
On May 23, 2014, at 7:14 AM, Albert Solernou
wrote:
> Well,
> the problem is that I don't know how to do any of these things
Ah! You might want to read this:
http://www.open-mpi.org/faq/?category=tuning#mca-params
> , so more explicitly:
> - does OpenMPI accept any environment variable that
Hi Ralph,
Thanks for your response.
I see your point. I tried to change the algorithm, but some processes finish
while the others are still calling MPI functions. I can't avoid this behaviour.
The ideal behavior is that the processes go to sleep (or don't sit at 100% CPU
load) when MPI_Finalize is
Well,
the problem is that I don't know how to do any of these things, so more
explicitly:
- does OpenMPI accept any environment variable that prevents binding,
like MV2_ENABLE_AFFINITY does in MVAPICH2?
- what is the default MCA param file? Is it a runtime file or a
configuration file? How do
On May 23, 2014, at 6:58 AM, Albert Solernou
wrote:
> Hi,
> thanks a lot for your quick answers, and I see my error: it is "--bind-to
> none" instead of "--bind-to-none".
>
> However, I need to be able to run "mpirun -np 2" without any binding argument
> and get a "--bind-to none" behaviour.
Hi,
thanks a lot for your quick answers, and I see my error: it is
"--bind-to none" instead of "--bind-to-none".
However, I need to be able to run "mpirun -np 2" without any binding
argument and get a "--bind-to none" behaviour. I don't know if I can
export an environment variable to do that,
Sorry, I assumed you were working with a group of machines (different
computers with their own resources, connected through a network). I am not
sure if this would work in your situation, but you can still give it a
try: if you keep process 0 waiting to receive data, it may consume
less CPU tim
In my codes, I am using MPI_Send and MPI_Recv functions to notify P0 that
every other process has finished its own calculations. Maybe you can
also use the same method and keep P0 waiting until it receives some data
from the other processes?
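For what it's worth, here is a minimal sketch of that notification pattern (my
own illustration; DONE_TAG is just a hypothetical tag value):

#include <mpi.h>

#define DONE_TAG 42   /* hypothetical tag for the "I'm finished" message */

int main(int argc, char *argv[])
{
    int myid, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (myid == 0) {
        /* P0 blocks in MPI_Recv until every other rank reports in.
         * Depending on how aggressively the progress engine polls,
         * this may (or may not) use less CPU than finishing early. */
        int done, src;
        for (src = 1; src < nprocs; src++)
            MPI_Recv(&done, 1, MPI_INT, src, DONE_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        /* ... the long calculation ... */
        int done = 1;
        MPI_Send(&done, 1, MPI_INT, 0, DONE_TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}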
On Fri, May 23, 2014 at 4:39 PM, Ralph Castain wrote
Hmmm...that is a bit of a problem. I've added a note to see if we can turn down
the aggressiveness of the MPI layer once we hit finalize, but that won't solve
your immediate problem.
Our usual suggestion is that you have each proc call finalize before going on
to do other things. This avoids th
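A minimal sketch of that suggestion (my own illustration, not Ralph's code):
each rank finishes its MPI work, calls MPI_Finalize at the same logical point,
and only then goes on with its purely local work.

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* ... all MPI communication happens here ... */

    MPI_Finalize();

    /* Any remaining, purely local work happens after Finalize, so no rank
     * is left spinning inside the MPI layer while the others finish.
     * do_local_postprocessing() is just a hypothetical placeholder. */
    /* do_local_postprocessing(); */

    return 0;
}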
Note that the lama mapper described in those slides may not work as it hasn't
been maintained in a while. However, you can use the map-by and bind-to options
to do the same things.
If you want to disable binding, you can do so by adding "--bind-to none" to the
cmd line, or via the MCA param "hw
Albert,
Actually, doing affinity correctly for hybrid got easier in OpenMPI 1.7 and
newer. In the past you had to make a lot of assumptions (stride by node, etc.);
now you can define a layout:
http://blogs.cisco.com/performance/eurompi13-cisco-slides-open-mpi-process-affinity-user-interface/
Broc
Hi,
after compiling and installing OpenMPI 1.8.1, I find that OpenMPI is
pinning processes onto cores. Although this may be
desirable in some cases, it is a complete disaster when running hybrid
OpenMP-MPI applications. Therefore, I want to disable this behaviour,
but don't know how.
I confi
Hi all,
I have a performance problem with the following code.
int main( int argc, char *argv[] )
{
    MPI_Init(&argc, &argv);
    int myid;
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    // Imagine some important job here, but P0 ends first.
    if (myid != 0)
    {
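As I read the thread, the intent of the cut-off example is roughly the
following sketch of my own, with the "important job" stood in for by a sleep:
every rank except P0 keeps working, so P0 reaches MPI_Finalize long before the
others.

#include <unistd.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* Stand-in for the "important job": every rank except P0 works on. */
    if (myid != 0)
        sleep(30);

    /* While the other ranks are still busy, P0 waits here in
     * MPI_Finalize -- the 100% load behaviour discussed in this thread. */
    MPI_Finalize();
    return 0;
}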
Here is the output of ifconfig:
-bash-3.2$ ssh compute-0-15 /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 78:E7:D1:61:C6:F4
inet addr:10.1.255.239 Bcast:10.1.255.255 Mask:255.255.0.0
inet6 addr: fe80::7ae7:d1ff:fe61:c6f4/64 Scope:Link
UP BROADCAST RUNNING MULTI