Re: [OMPI users] Open MPI and SLURM_CPUS_PER_TASK

2011-12-01 Thread Igor Geier
Hi Ralph, thanks a lot; including it in the next release would be great. Best regards, Igor On Wed, 30 Nov 2011 14:30:25 -0700 Ralph Castain wrote: > Hi Igor > > As I recall, this eventually traced back to a change in slurm at some point. > I believe the latest interpretatio

[OMPI users] Open MPI and SLURM_CPUS_PER_TASK

2011-11-28 Thread Igor Geier
lots_max = 0; -node->slots = slots[i] / cpus_per_task; +/* Don't divide by cpus_per_task */ +node->slots = slots[i]; opal_list_append(nodelist, &node->super); } free(slots); Are there situations where this might n
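One way to see what the SLURM RAS component derives from an allocation is to inspect the SLURM environment and compare it with the process placement mpirun produces with and without the patch above. A minimal sketch; the allocation sizes below are assumptions for illustration, not taken from the original post:

  # Request an allocation with cpus-per-task > 1 (numbers are illustrative only)
  salloc -N 2 --ntasks-per-node=4 --cpus-per-task=2
  env | grep -E 'SLURM_(TASKS_PER_NODE|CPUS_PER_TASK|JOB_CPUS_PER_NODE)'
  # Count how many processes mpirun places on each node
  mpirun hostname | sort | uniq -c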

Re: [OMPI users] Cannot launch slots on more than 2 remote machines

2011-03-28 Thread Igor
all size of the cluster, add "-mca routed direct" to your > command line. This will tell mpirun to talk directly to each daemon. However, > note that your job may still fail as the procs won't be able to open sockets > to their peers to send MPI messages, if you use
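Spelled out, the suggestion quoted above might look like the following; the hostfile name, process count, and binary are placeholders:

  # Have mpirun talk to every daemon directly instead of routing through the daemon tree
  mpirun -mca routed direct -np 16 --hostfile myhosts ./my_app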

[OMPI users] Cannot launch slots on more than 2 remote machines

2011-03-28 Thread Igor
LD_LIBRARY_PATH :) Adding "/usr/lib/openmpi/lib" to the otherwise empty LD_LIBRARY_PATH produces the same results. Can someone suggest a possible solution, or at least a direction in which I should continue my troubleshooting? -- Thank you all for your time, Igor
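Two standard ways to rule out library-path problems on the remote machines are to forward LD_LIBRARY_PATH explicitly or to let mpirun derive it from the install prefix. A sketch only; the prefix follows the path mentioned above, while the host names, process count, and binary are assumptions:

  # Forward LD_LIBRARY_PATH to the launched processes explicitly
  export LD_LIBRARY_PATH=/usr/lib/openmpi/lib:$LD_LIBRARY_PATH
  mpirun -x LD_LIBRARY_PATH -np 4 --host node1,node2,node3 ./my_app
  # Or let mpirun set PATH/LD_LIBRARY_PATH on the remote nodes from the install prefix
  mpirun --prefix /usr/lib/openmpi -np 4 --host node1,node2,node3 ./my_app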

Re: [OMPI users] Tuned collectives: How to choose them dynamically? (-mca coll_tuned_dynamic_rules_filename dyn_rules)

2009-07-23 Thread Igor Kozin
similarly for other collectives. Best, Igor 2009/7/23 Gus Correa : > Dear OpenMPI experts > > I would like to experiment with the OpenMPI tuned collectives, > hoping to improve the performance of some programs we run > in production mode. > > However, I could not find any do
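For anyone searching the archives later, enabling the rules file named in the subject generally involves two MCA parameters of the tuned component; the rules-file path, process count, and application below are placeholders:

  # Show the tunable parameters of the "tuned" collective component
  ompi_info --param coll tuned
  # Enable dynamic rule selection and point it at the rules file
  mpirun --mca coll_tuned_use_dynamic_rules 1 \
         --mca coll_tuned_dynamic_rules_filename ./dyn_rules -np 8 ./my_app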

Re: [OMPI users] Lower performance on a Gigabit node compared to infiniband node

2009-03-12 Thread Igor Kozin
Hi Sangamesh, I'd look into making sure that the node you are using is not running anything else at the same time. Make sure you allocate a whole node and that it is clean of previous jobs. Best, INK

Re: [OMPI users] Lower performance on a Gigabit node compared to infiniband node

2009-03-10 Thread Igor Kozin
Hi Sangamesh, As far as I can tell there should be no difference if you run CPMD on a single node whether with or without ib. One easy thing that you could do is to repeat your runs on the infiniband node(s) with and without infiniband using --mca btl ^tcp and --mca btl ^openib respectively. But si
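Written out, the two comparison runs suggested above would look roughly like this; the process count and CPMD input file are assumptions:

  # InfiniBand only: exclude the TCP BTL
  mpirun --mca btl ^tcp -np 8 ./cpmd.x input.inp
  # Gigabit only: exclude the openib BTL
  mpirun --mca btl ^openib -np 8 ./cpmd.x input.inp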

Re: [OMPI users] Asynchronous behaviour of MPI Collectives

2009-01-23 Thread Igor Kozin
procs * 8 procs/node = 8 GB/node plus you need to double because of buffering. I was told by Mellanox (our cards are ConnectX cards) that they introduced XRC in OFED 1.3 in addition to the Shared Receive Queue, which should reduce the memory footprint, but I have not tested this yet. HTH, Igor 2009/1/23 Gab

Re: [OMPI users] Asynchronous behaviour of MPI Collectives

2009-01-23 Thread Igor Kozin
What is your message size and the number of cores per node? Is there any difference using different algorithms? 2009/1/23 Gabriele Fatigati > Hi Jeff, > I would like to understand why, if I run over 512 procs or more, my > code stops in an MPI collective, even with a small send buffer. All > proce

Re: [OMPI users] problem with alltoall with ppn=8

2008-08-16 Thread Kozin, I (Igor)
> - per the "sm" thread, you might want to try with just IB (and not > shared memory), just to see if that helps (I don't expect that it > will, but every situation is different). Try running "mpirun --mca > btl openib ..." (vs. "--mca btl ^tcp"). Unfortunately you were right- it did not help. Sm

[OMPI users] problem with alltoall with ppn=8

2008-08-15 Thread Kozin, I (Igor)
this task never completes… Thanks in advance. Sorry for the long post. Igor PS I’m following the discussion on the slow sm btl but am not sure whether this particular problem is related or not. BTW the Open MPI build I’m using is for the Intel compiler. PPS MVAPICH and MVAPICH2 behave much better but not

Re: [OMPI users] TCP Latency

2008-07-29 Thread Kozin, I (Igor)
IMB PingPong). Of course commercial MPI libraries offer low latency too, e.g. Scali MPI. Best, Igor > > -- > > Dresden University of Technology > Center for Information Services > and High Performance Computing (ZIH) > D-01062 Dresden > Germany > > e-mail: andy.g
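A typical way to measure the TCP latency being discussed is an IMB PingPong run between two hosts forced onto the TCP BTL; the host names and benchmark path are assumptions:

  # Two ranks on two different hosts, restricted to the TCP and self BTLs
  mpirun -np 2 --host nodeA,nodeB --mca btl tcp,self ./IMB-MPI1 PingPong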

[OMPI users] opal_init: Segmentation Fault

2007-07-17 Thread Igor Miskovski
Hello, when I try to install Open MPI on Linux SUSE 10.2 on an AMD X2 dual-core processor, I get the following message: make[3]: Entering directory `/home/igor/openmpi-1.2.3/opal/libltdl' if /bin/sh ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -D LT_CONFIG_H=''