Hi,
where is the command ofed_info located? I searched from / but didn't find
it.
Subhra.
On Tue, Apr 21, 2015 at 10:43 PM, Mike Dubman
wrote:
> cool, progress!
>
> >> [1429676565.124664] sys.c:719  MXM  WARN  Conflicting CPU
> >> frequencies detected, using: 2601.00
>
> means that cpu gove
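For context: that warning shows up when cores report different clock frequencies, typically because the CPU frequency governor lets them scale independently. A quick way to inspect and pin the governor on a Linux node (a sketch assuming sysfs cpufreq is exposed; many clusters manage this through cpupower or the batch system instead):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor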
Hi Howard,
> Could you double check that on the Linux box you are using an ompi install
> which has Java support?
Yes, I have a script file that I call with the Open MPI version that I want
to build, so that I can't forget to use an empty directory and to remove the
last installation before installin
/usr/bin/ofed_info
So, the OFED on your system is not Mellanox OFED 2.4.x but something else.
Try: # rpm -qi libibverbs
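More generally, to identify which stack is installed (assuming an RPM-based distro and that the OFED utilities are on the default path):

ofed_info -s            # short version string, e.g. MLNX_OFED_LINUX-2.4-... on Mellanox OFED
rpm -qa | grep -i ofed  # list any OFED-related packages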
On Thu, Apr 23, 2015 at 7:47 AM, Subhra Mazumdar
wrote:
> Hi,
>
> where is the command ofed_info located? I searched from / but didn't find
> it.
>
> Subhra.
>
> On Tue, Apr 21, 201
Hi Jack,
Are you using a system at LANL? Maybe I could try to reproduce the problem
on the system you are using. The system call stuff adds a certain bit of
zest to the problem. Does the app make Fortran system calls to do the
copying and pasting?
Howard
On Apr 22, 2015 4:24 PM, "Galloway, Jack
Can you send your full Fortran test program?
> On Apr 22, 2015, at 6:24 PM, Galloway, Jack D wrote:
>
> I have an MPI program that is fairly straightforward, essentially
> "initialize, 2 sends from master to slaves, 2 receives on slaves, do a bunch
> of system calls for copying/pasting then ru
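For readers following the thread, the pattern being described is roughly the shape below. This is a minimal C sketch under stated assumptions (the actual program is Fortran, and all file names and parameters here are hypothetical), with system() standing in for the Fortran system calls:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int params[2] = {42, 7};  /* hypothetical job parameters */
    if (rank == 0) {
        /* master: 2 sends to each slave */
        for (int dest = 1; dest < nprocs; dest++) {
            MPI_Send(&params[0], 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
            MPI_Send(&params[1], 1, MPI_INT, dest, 1, MPI_COMM_WORLD);
        }
    } else {
        /* slave: 2 receives, then shell out to copy files around */
        MPI_Recv(&params[0], 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&params[1], 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        char cmd[256];
        snprintf(cmd, sizeof(cmd), "cp input_%d.dat work_%d/", rank, rank);
        if (system(cmd) != 0)  /* system() forks and execs a shell */
            fprintf(stderr, "rank %d: copy command failed\n", rank);
    }
    MPI_Finalize();
    return 0;
}

The system() call is the interesting part: it forks the MPI process, which is exactly what makes openib nervous later in this thread.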
On Apr 22, 2015, at 1:57 PM, Jerome Vienne wrote:
>
> While looking at performance and control variables provided by the MPI_T
> interface, I was surprised by the impressive number of control variables
> (1,087 if I am right, with 1.8.4), but I was also disappointed to see that I
> was able to
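For anyone who wants to reproduce that count, the control variables can be enumerated with the standard MPI_T calls. A minimal C sketch (MPI_T has no Fortran binding; error checking omitted):

#include <stdio.h>
#include <mpi.h>

int main(void) {
    int provided, ncvars;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&ncvars);
    printf("found %d control variables\n", ncvars);
    for (int i = 0; i < ncvars; i++) {
        char name[256], desc[1024];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;
        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        printf("%s : %s\n", name, desc);
    }
    MPI_T_finalize();
    return 0;
}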
I am using a “homecooked” cluster at LANL, ~500 cores. There are a whole bunch
of Fortran system calls doing the copying and pasting. The full code is
attached here; it is a bunch of if-then statements for user options. Thanks for
the help.
--Jack Galloway
Hi all
I installed Open MPI (version 1.6.5) on Ubuntu 14.04. I teach parallel
programming in an undergraduate course.
I want to use rsh instead of ssh (the default).
I changed the file "openmpi-mca-params.conf" and put there
plm_rsh_agent = rsh .
The MPI application works, but a message appears for each pro
Use “orte_rsh_agent = rsh” instead
> On Apr 23, 2015, at 10:48 AM, rebona...@upf.br wrote:
>
> Hi all
>
> I installed Open MPI (version 1.6.5) on Ubuntu 14.04. I teach parallel
> programming in an undergraduate course.
> I want to use rsh instead of ssh (the default).
> I changed the file "openmpi-mca-params
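For a 1.6.x install the change looks like this, either system-wide in $prefix/etc/openmpi-mca-params.conf:

orte_rsh_agent = rsh

or per run on the command line (./a.out is a placeholder program):

mpirun --mca orte_rsh_agent rsh -np 4 ./a.out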
Jeff
this is kind of a LANL thing. Jack and I are working offline. Any
suggestions about openib and fork/exec may be useful, however... and don't
say no to fork/exec, at least not if you dream of MPI in the data center.
On Apr 23, 2015 10:49 AM, "Galloway, Jack D" wrote:
> I am using a “homecooke
Disable the memory manager / don't use leave pinned. Then you can fork/exec
without fear (because only MPI will have registered memory -- it'll never leave
user buffers registered after MPI communications finish).
> On Apr 23, 2015, at 9:25 PM, Howard Pritchard wrote:
>
> Jeff
>
> this is k
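Concretely, Jeff's suggestion translates to something like the following at run time (a sketch; whether the parameter is honored depends on how your Open MPI was built, which ompi_info --param mpi all will show):

mpirun --mca mpi_leave_pinned 0 -np 4 ./a.out

Building with the configure flag --without-memory-manager removes the memory manager entirely, at the cost of leave-pinned performance on large messages.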
I changed my downloaded MOFED version to match the one installed on the
node, and now the error goes away and it runs fine. But I still have a
question: I get the exact same performance in all 3 of the cases below:
1) mpirun --allow-run-as-root --mca mtl mxm -mca mtl_mxm_np 0 -x
MXM_TLS=self,shm,rc,u
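One sanity check that MXM is actually selected in all three runs (a suggestion, not from the thread):

ompi_info | grep -i mxm               # is the MXM MTL built in?
mpirun --mca mtl_base_verbose 10 ...  # selection output should mention mxm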