Hi again.
I am using /etc/modprobe.d/mofed.conf, otherwise I get:
WARNING: Deprecated config file /etc/modprobe.conf, all config files belong
into /etc/modprobe.d/
But I am still getting the memory errors after making the changes and rebooting:
$ cat /etc/modprobe.d/mofed.conf
options mlx4_core log_num_mtt=24 log_mtts_per_seg=1
You can also set these parameters in /etc/modprobe.conf:
options mlx4_core log_num_mtt=24 log_mtts_per_seg=1
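A quick way to confirm the running module actually picked up the new values after the reboot is to read them back from sysfs (a sketch; assumes the mlx4_core module is loaded):
$ cat /sys/module/mlx4_core/parameters/log_num_mtt
$ cat /sys/module/mlx4_core/parameters/log_mtts_per_seg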
-- YK
On 11/30/2012 2:12 AM, Yevgeny Kliteynik wrote:
> On 11/30/2012 12:47 AM, Joseph Farran wrote:
>> I'll assume: /etc/modprobe.d/mlx4_en.conf
>
> Add these to /etc/modprobe.d/mofed.conf:
Greetings, ladies and gentlemen,
There is one alternative approach, and this is a pseudo-cloud based MPI. The
idea is that the MPI node list is adjusted via the cloud, similar to the way
Xgrid's Bonjour used to do it for Xgrid.
In this case, it is applying an MPI notion to the OpenCL codelets. There
are o
On 11/30/2012 12:47 AM, Joseph Farran wrote:
> I'll assume: /etc/modprobe.d/mlx4_en.conf
Add these to /etc/modprobe.d/mofed.conf:
options mlx4_core log_num_mtt=24
options mlx4_core log_mtts_per_seg=1
And then restart the driver.
You need to do it on all the machines.
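On an MLNX_OFED install, restarting the driver typically means the openibd init script; a sketch, assuming that service name:
# /etc/init.d/openibd restart   # reloads mlx4_core with the new options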
-- YK
>
> On 11/29/2012 0
Hi YK:
Yes, I have those installed but they are newer versions:
# rpm -qa | grep rdma
librdmacm-1.0.15-1.x86_64
librdmacm-utils-1.0.15-1.x86_64
librdmacm-devel-1.0.15-1.x86_64
# locate librdmacm.la
#
Here are the RPMs the Mellanox build created for kernel:
2.6.32-279.14.1.el6.x86_64
# ls *rdm
Joseph,
On 11/29/2012 11:50 PM, Joseph Farran wrote:
> make[2]: Entering directory
> `/data/apps/sources/openmpi-1.6.3/ompi/mca/mtl/mxm'
> CC mtl_mxm.lo
> CC mtl_mxm_cancel.lo
> CC mtl_mxm_component.lo
> CC mtl_mxm_endpoint.lo
> CC mtl_mxm_probe.lo
> CC mtl_mxm_recv.lo
> CC mtl_mxm_send.lo
> CCLD
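For context, the mxm MTL being compiled above is enabled at Open MPI configure time; a minimal sketch, assuming MXM is installed under its default prefix:
$ ./configure --with-mxm=/opt/mellanox/mxm
$ make all install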
I'll assume: /etc/modprobe.d/mlx4_en.conf
On 11/29/2012 02:34 PM, Joseph Farran wrote:
Where do I change those Mellanox settings?
On 11/29/2012 02:23 PM, Jeff Squyres wrote:
See http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem.
On Nov 29, 2012, at 5:21 PM, Joseph Farran wrote:
> Hi All.
>
> In compiling a simple Hello World program with Open MPI 1.6.3 and running it
> with mpirun, I am getting:
>
> $ ulimit -l unlimited
> $ mpirun -np 2 hello
> --
Hi All.
In compiling a simple Hello World program with Open MPI 1.6.3 and running it
with mpirun, I am getting:
$ ulimit -l unlimited
$ mpirun -np 2 hello
--
WARNING: It appears that your OpenFabrics subsystem is configured to only allow registering part of your physical memory.
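Per that FAQ entry, the registrable memory works out to (2^log_num_mtt) x (2^log_mtts_per_seg) x page_size, so the values suggested above give 2^24 x 2^1 x 4096 bytes = 128 GiB on a standard 4 KiB-page system. A one-liner to check the arithmetic:
$ echo $(( (1 << 24) * (1 << 1) * 4096 / (1024 * 1024 * 1024) ))   # prints 128 (GiB)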
On 11/28/2012 10:53 AM, Mike Dubman wrote:
You need mxm-1.1.3a5e745-1.x86_64-rhel6u3.rpm
On Wed, Nov 28, 2012 at 7:44 PM, Joseph Farran <jfar...@uci.edu> wrote:
mxm-1.1.3a5e745-1.x86_64-rhel6u3.rpm
After installing MLNX_OFED_LINUX-1.5.3-3.1.0-rhel6.3-x86_64, removing the old
mx
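For reference, installing the prebuilt package named above is plain rpm usage:
# rpm -ivh mxm-1.1.3a5e745-1.x86_64-rhel6u3.rpm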
All,
I have a Fortran code that works quite well with OpenMPI 1.4.3 where I create
a handle using:
call MPI_TYPE_CREATE_F90_INTEGER(9, COMM_INT4, ierror)
and then do a reduction with:
call MPI_ALLREDUCE(send_buffer, buffer, count, COMM_INT4, MPI_SUM, &
     communicator, ierror)
Howev
Ah, thanks - I was curious as Greenplum is about to release the full port of
Hadoop to OpenMPI so it can be run anywhere and support MPI as well. I'm at
least a little familiar with this one, but didn't realize it had been
distributed.
On Nov 29, 2012, at 8:00 AM, Howard Pritchard wrote:
> H
No problem! Glad to help.
I added you to the ticket about not being able to turn off the C++ compiler
checks (https://svn.open-mpi.org/trac/ompi/ticket/2999), in case that ever gets
fixed. It's somewhat of a low priority.
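For anyone searching later: configure does accept --disable-mpi-cxx to skip building the C++ bindings, but, as that ticket describes, it still probes for a working C++ compiler; a sketch:
$ ./configure --disable-mpi-cxx ...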
On Nov 29, 2012, at 11:17 AM, Ray Sheppard wrote:
> Thanks Jeff,
> O
Thanks Jeff,
Of course you were right. I had thought the lost function was
something internal to y'alls build. It is pretty scary that they have
been building and porting for weeks (while I was running around SC and
the holidays) and it takes an old fortran guy to notice they don't have
a wo
Hi Ralph,
mrmpi is an MPI-based MapReduce implementation developed at Sandia Labs.
Howard
On 11/28/2012 09:20 PM, Ralph Castain wrote:
On Nov 28, 2012, at 12:21 PM, Mariana Vargas Magana
wrote:
Hi Open MPI users,
I am now trying to install mrmpi on a cluster to use it with Open MPI; I install
On Nov 28, 2012, at 11:20 PM, Ralph Castain wrote:
>> libibverbs: Warning: no userspace device-specific driver found for
>> /sys/class/infiniband_verbs/uverbs0
>
> Looks like OMPI was built with Infiniband support, but we aren't finding the
> required support libraries wherever the process is r
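Two quick checks on the compute nodes can narrow this down (assumes the standard OFED userspace diagnostics are installed):
$ ibv_devinfo            # should list the HCA rather than erroring out
$ ls /etc/libibverbs.d   # per-device driver configs, e.g. mlx4.driver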
I receive the following error while running an application.
Does this indicate a hardware issue?
[compute-01-01.private.dns.zone][[60090,1],10][btl_tcp_frag.c:216:mca_btl_tcp_frag_recv]
mca_btl_tcp_frag_recv: readv failed: Connection timed out (110)
[compute-01-01.private.dns.zone][[60090,1],13][btl
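This readv timeout often points at TCP connectivity between the nodes (a firewall, or the TCP BTL choosing the wrong interface) rather than bad hardware. One common experiment is to pin the TCP BTL to a known-good interface; the interface name below is a placeholder:
$ mpirun --mca btl_tcp_if_include eth0 -np 2 ./app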