Bonjour John,
 Thanks for your feedback, but my investigations so far have not helped:
the memlock limit on the compute nodes is actually set to unlimited.
This most probably means that even if btl_openib hits some memory allocation
limit, the message I got is inaccurate, since the memlock resource is indeed
already unlimited.
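
 For completeness, here is how I checked it. The command below is only a sketch
(the core count is a placeholder): the idea is to run the check through the same
mpirun/PBS path as the real job, so that the limits reported are the ones the MPI
ranks actually inherit, and not those of an interactive login shell.

   mpirun -np 2 sh -c 'hostname; ulimit -l; grep "locked memory" /proc/self/limits'

 This is the check that reports unlimited everywhere here.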

 Then, the btl allocation mechanism is most probably stopped because the
memlock resource itself gets exhausted (despite the unlimited setting), for
example because it is attempting to create too many buffers. I tried to
explore this assumption by decreasing:
- btl_ofud_rd_num down to 32 or even 16
- btl_openib_cq_size down to 256 or even 64
but to no avail.
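
 For the record, those runs used command lines of the following form (the
executable name and core count are placeholders; only the MCA settings matter here):

   mpirun -np 4096 \
          --mca btl_ofud_rd_num 16 \
          --mca btl_openib_cq_size 64 \
          ./my_app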

 So, I am asking for help about which other parameter could lead to (locked?)
memory exhaustion, knowing that the current memlock wall shows up under the
following conditions:
- it appears when I run with 4096 or 8192 cores (with 2048 cores, everything is fine)
- there are 4 GB of RAM available per core
- each core communicates with no more than 8 neighbours, and these neighbours
stay the same for the whole lifetime of the job.

 Does this trigger any ideas for anyone?
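
 In case it helps frame the question, this is how I have been browsing the
openib parameters for other candidates; the grep pattern is only a guess at
names related to receive queues and free lists:

   ompi_info --param btl openib | grep -i -e queue -e free_list -e rd_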


 Thanks in advance,           Best,    Gilbert.


On 20 Nov 2010, at 19:27, John Hearns wrote:

      On 20 November 2010 16:31, Gilbert Grosdidier <gro...@mail.cern.ch> wrote:
            Bonjour,


      Bonjour Gilbert.

      I manage ICE clusters also.

      Please could you have a look at /etc/init.d/pbs on the compute blades?



      Do you have something like:

         if [ "${PBS_START_MOM}" -gt 0 ] ; then
           if check_prog "mom" ; then
             echo "PBS mom already running."
           else
             check_maxsys
             site_mom_startup
             if [ -f /etc/sgi-release -o -f /etc/sgi-compute-node-release ] ; then
                 MEMLOCKLIM=`ulimit -l`
                 NOFILESLIM=`ulimit -n`
                 STACKLIM=`ulimit -s`
                 ulimit -l unlimited
                 ulimit -n 16384
                 ulimit -s unlimited
             fi


--
*---------------------------------------------------------------------*
  Gilbert Grosdidier                 gilbert.grosdid...@in2p3.fr
  LAL / IN2P3 / CNRS                 Phone : +33 1 6446 8909
  Faculté des Sciences, Bat. 200     Fax   : +33 1 6446 8546
  B.P. 34, F-91898 Orsay Cedex (FRANCE)
*---------------------------------------------------------------------*




