Hi,
I found this thread from before Christmas, and I wondered what the
status of this problem is. We have been experiencing the same problems
since our upgrade to Scientific Linux 6.4, kernel
2.6.32-431.1.2.el6.x86_64, and OpenMPI 1.6.5.
Users have reported severe slowdowns in all kinds of applications [...]

On 2/27/14 16:47, Dave Love wrote:
Bernd Dammann writes:
[...]

On 2/27/14 14:06, Noam Bernstein wrote:
On Feb 27, 2014, at 2:36 AM, Patrick Begou wrote:
Bernd Dammann wrote:
Using the workaround '--bind-to-core' only makes sense for jobs that
allocate full nodes, but the majority of our jobs don't do that.
Why?
We still [...]
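
For anyone following along, here is a minimal sketch of the binding
options under discussion, as they look in Open MPI 1.6.x (the rank
count and the application name ./my_app are placeholders):

    # Bind each MPI rank to one core (the workaround discussed above);
    # --report-bindings prints the resulting binding so you can verify it.
    mpirun --bind-to-core --report-bindings -np 8 ./my_app

    # On nodes shared between several jobs, socket-level binding is a
    # coarser alternative that still limits process migration:
    mpirun --bind-to-socket --bysocket -np 8 ./my_app

Adding '--report-bindings' to either line is a cheap way to see whether
ranks from different jobs end up pinned to the same cores.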

On 3/2/14 00:44, Tru Huynh wrote:
On Fri, Feb 28, 2014 at 08:49:45AM +0100, Bernd Dammann wrote:
Maybe I should add that we moved from SL 6.1 and OMPI 1.4.x to SL
6.4 with the above kernel and OMPI 1.6.5, which means a major
upgrade of our cluster.
After the upgrade, users reported those [...]

Hi David,

On 03/02/2022 00:03, David Perozzi wrote:
Hello,
I'm trying to run a code implemented with OpenMPI and OpenMP (for
threading) on a large cluster that uses LSF for job scheduling and
dispatch. The problem with LSF is that it is not very straightforward
to allocate and bind the [...]
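
As a starting point, here is a rough sketch of one way to request
cores per rank via LSF's affinity resource strings and match that on
the Open MPI side. It assumes LSF 9.1 or newer (for 'affinity[...]')
and Open MPI 1.8 or newer (for '--map-by'); the job name, rank/thread
counts, and ./my_hybrid_app are placeholders:

    #!/bin/bash
    # Hypothetical hybrid MPI+OpenMP submission: 4 ranks, 6 cores each.
    #BSUB -J hybrid_test
    #BSUB -n 4                      # 4 MPI ranks (LSF slots)
    #BSUB -R "affinity[core(6)]"    # bind 6 cores to each slot
    #BSUB -R "span[ptile=2]"        # place 2 ranks per node

    export OMP_NUM_THREADS=6        # one OpenMP thread per bound core

    # Give each rank a 6-core processing element and bind to cores,
    # so its OpenMP threads stay inside the rank's own allocation.
    mpirun --map-by slot:PE=6 --bind-to core ./my_hybrid_app

Adding '--report-bindings' to the mpirun line shows whether each rank
really received six cores and whether that matches what LSF granted.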