How is the file read? From stdin? Or do they open it directly?
If the latter, then it sure sounds like a CGNS issue to me - looks like they
are slurping in the entire file and then forgetting to free the memory when they
close it. I can’t think of any solution short of getting them to look at the problem.
Thanks!
On Jun 17, 2015, at 3:08 PM, Rolf vandeVaart wrote:
> There is no short-term plan but we are always looking at ways to improve
> things so this could be looked at some time in the future.
>
> Rolf
>
There is no short-term plan but we are always looking at ways to improve things
so this could be looked at some time in the future.
Rolf
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Fei Mao
Sent: Wednesday, June 17, 2015 1:48 PM
To: Open MPI Users
Subject: Re: [OMPI users] CUDA-aware …
Hi,
the message in question comes from MXM, and it is only a warning (silenced in
later releases of MXM).
To select specific device in MXM, please pass:
mpirun -x MXM_IB_PORTS=mlx4_0:2 ...
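For example, a complete launch line might look like this (the rank count and
application name are placeholders):

mpirun -x MXM_IB_PORTS=mlx4_0:2 -np 4 ./my_app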
On Wed, Jun 17, 2015 at 9:38 PM, Na Zhang wrote:
> Hi all,
>
> I am trying to launch MPI jobs (with version openmpi-1.6.5) …
Hi all,
I am trying to launch MPI jobs (with version openmpi-1.6.5) on a node with
multiple InfiniBand HCA cards (please see the ibstat info below). I just want to
use the only active port, mlx4_0 port 2, so I issued

mpirun -mca btl_openib_if_include "mlx4_0:2" -np ...

I thought this command would …
Hi Rolf,
Thank you very much for clarifying the problem. Is there any plan to support
GPU RDMA for reduction in the future?
On Jun 17, 2015, at 1:38 PM, Rolf vandeVaart wrote:
> Hi Fei:
>
> The reduction support for CUDA-aware in Open MPI is rather simple. The GPU
> buffers are copied into temporary host buffers …
Hi Fei:
The reduction support for CUDA-aware in Open MPI is rather simple. The GPU
buffers are copied into temporary host buffers and then the reduction is done
with the host buffers. At the completion of the host reduction, the data is
copied back into the GPU buffers. So, there is no use of GPU RDMA in the
reduction path.
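For illustration, here is a minimal sketch of that staging pattern (not Open
MPI's internal code; it assumes a float buffer reduced with MPI_SUM, and omits
error checking):

/* Host-staged reduction: copy device data to host, reduce on the host
   buffers, then copy the result back to the device. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

void staged_allreduce(float *d_buf, int n, MPI_Comm comm)
{
    float *h_in  = malloc(n * sizeof(float));   /* temporary host buffers */
    float *h_out = malloc(n * sizeof(float));

    cudaMemcpy(h_in, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
    MPI_Allreduce(h_in, h_out, n, MPI_FLOAT, MPI_SUM, comm);  /* host-side reduction */
    cudaMemcpy(d_buf, h_out, n * sizeof(float), cudaMemcpyHostToDevice);

    free(h_in);
    free(h_out);
}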
Hi there,
I am doing benchmarks on a GPU cluster with two CPU sockets and four K80 GPUs
per node. Two K80s are attached to CPU socket 0, the other two to socket 1. An
IB ConnectX-3 (FDR) card is also under socket 1. We are using Linux’s OFED, so
I know there is no way to do GPU RDMA for inter-node communication …
Hi,
While checking for memory issues related to CGNS 2.5.5 on a test machine, the
following output is displayed when just opening and closing a CGNS file.
Can anybody please help me on this?
This machine is CentOS 7 (minimal installation) with GCC 4.8.3. The gcc
compiler is used. …
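For reference, a minimal reproducer of the open/close sequence might look like
this (the file name is a placeholder; it assumes the CGNS 2.5.5 headers and
library, and can be run under a leak checker such as valgrind to observe the
reported behavior):

/* Just open and close a CGNS file; nothing else. */
#include <cgnslib.h>
#include <stdio.h>

int main(void)
{
    int fn;  /* CGNS file index */
    if (cg_open("test.cgns", CG_MODE_READ, &fn)) {
        printf("cg_open failed: %s\n", cg_get_error());
        return 1;
    }
    cg_close(fn);
    return 0;
}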