On 06.09.2012, at 13:21, Schmidt U. wrote:
>>
> If h_vmem is defined in the script, what is the point of an additional vf
> option in the script? By default h_vmem has a higher value than vf, so it
> must fit first for the job to run.
If you want to avoid swapping, both should have the same value.
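For illustration, a minimal Grid Engine submit-script sketch that requests both resources with the same value, as suggested above (the 4G figure, parallel environment name, and application name are assumptions, not from the thread):

    #!/bin/sh
    #$ -l h_vmem=4G        # hard per-slot virtual memory limit
    #$ -l vf=4G            # virtual_free reservation used by the scheduler
    #$ -pe orte 8          # PE name is site-specific; "orte" is an assumption
    mpirun -np $NSLOTS ./my_mpi_app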
I built open-mpi 1.6.1 using the open-mx libraries.
This worked previously, but now I get the following
error. Here is my system:
kernel: 2.6.32-279.5.1.el6.x86_64
open-mx: 1.5.2
BTW, open-mx worked previously with open-mpi, and the current
version works with mpich2
$ mpiexec -np 8 -machinefile
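As a reference point, a hedged sketch of how Open MPI 1.6.x is typically built against Open-MX (which exposes an MX-compatible API) and told to use it at run time; the install prefix, machine file, and application name are assumptions:

    # build Open MPI against the Open-MX libraries
    ./configure --prefix=$HOME/openmpi-1.6.1 --with-mx=/opt/open-mx
    make all install

    # select the MX transport explicitly at run time
    mpiexec -np 8 -machinefile hosts --mca pml cm --mca mtl mx ./app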
Thanks Jeff. I will definitely do the failure analysis. But just
wanted to confirm this isn't something special in OMPI itself, e.g.,
missing some configuration settings, etc.
On Thu, Sep 6, 2012 at 5:01 AM, Jeff Squyres wrote:
> If you run into a segv in this code, it almost certainly means tha
Dear mpi users and developers,
I am having some trouble with MPI_Allreduce. I am using MinGW (gcc 4.6.2)
with OpenMPI 1.6.1. The C version of MPI_Allreduce works fine, but the
Fortran version fails with an error. Here is a simple Fortran program that
reproduces the error:
program ma
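For reference, a working C analogue of such a reproducer is sketched below (a minimal example assuming an integer MPI_SUM reduction; it is not the poster's code):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, sum;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* every rank contributes its rank id; all ranks receive the total */
        MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("rank %d: sum = %d\n", rank, sum);
        MPI_Finalize();
        return 0;
    }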
Hi Siegmar,
Glad to hear that it's working for you.
The warning message is because the loopback adapter is excluded by
default, but this adapter is actually not installed on Windows.
One solution might be installing the loopback adapter on Windows. It is
very easy and takes only a few minutes.
Or it m
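The alternative in the truncated sentence above is not recoverable from this excerpt. Purely as an illustration, the interface-exclusion list mentioned here is controlled by Open MPI's btl_tcp_if_exclude MCA parameter, which can be overridden on the command line (the value and process count are assumptions):

    mpirun -np 2 --mca btl_tcp_if_exclude 127.0.0.1/8 app.exe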
John --
This cartesian stuff always makes my head hurt. :-)
You seem to have hit on a bona-fide bug. I have fixed the issue in our SVN
trunk and will get the fix moved over to the v1.6 and v1.7 branches.
Thanks for the report!
On Aug 29, 2012, at 5:32 AM, Craske, John wrote:
> Hello,
>
On Sep 4, 2012, at 3:09 PM, mariana Vargas wrote:
> I'm new to this. I have some codes that use MPI for Python, and I
> just installed (openmpi, mrmpi, mpi4py) in my home (from a cluster
> account) without apparent errors and I tried to perform this simple
> test in python and I get the fo
Your question is somewhat outside the scope of this list. Perhaps people may
chime in with some suggestions, but that's more of a threading question than an
MPI question.
Be warned that you need to call MPI_Init_thread (not MPI_Init) with
MPI_THREAD_MULTIPLE in order to get true multi-threaded
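A minimal sketch of that initialization pattern, for reference (the error handling and program body are assumptions):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* request full multi-threaded support instead of plain MPI_Init */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not provided (got %d)\n", provided);
        }
        /* MPI calls from several threads are only safe if the check above passed */
        MPI_Finalize();
        return 0;
    }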
If you run into a segv in this code, it almost certainly means that you have
heap corruption somewhere. FWIW, that has *always* been what it meant when
I've run into segv's in any code under opal/mca/memory/linux/. Meaning: my
user code did something wrong, it created heap corruption, and t
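One common way to do that failure analysis is to run the job under a memory checker; a hedged sketch, with the process count and binary name as placeholders:

    # memcheck (valgrind's default tool) reports the invalid reads/writes
    # that typically cause this kind of heap corruption
    mpirun -np 2 valgrind ./my_app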
On 9/3/2012 4:14 AM, Randolph Pullen wrote:
> No RoCE, Just native IB with TCP over the top.
Sorry, I'm confused - it is still not clear what a "Melanox III HCA 10G card" is.
Could you run "ibstat" and post the results?
What is the expected BW on your cards?
Could you run "ib_write_bw" between two machin
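For reference, ib_write_bw (from the OFED perftest package) is run pairwise between two nodes; a minimal sketch, with the host name as a placeholder:

    # on the first node (server side)
    ib_write_bw
    # on the second node, pointing at the first
    ib_write_bw server-node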
Hi Shiqing,
I have solved the problem with the double quotes in OPENMPI_HOME but
there is still something wrong.
set OPENMPI_HOME="c:\Program Files (x86)\openmpi-1.6.1"
mpicc init_finalize.c
Cannot open configuration file "c:\Program Files
(x86)\openmpi-1.6.1"/share/openmpi\mpicc-wrapper-data.t
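The error message shows the double quotes being carried into the path. With cmd.exe, everything after the equals sign, quotes included, becomes part of the variable's value, so one possible workaround (a sketch, not confirmed in the thread) is to set it without quotes:

    set OPENMPI_HOME=c:\Program Files (x86)\openmpi-1.6.1
    mpicc init_finalize.c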
Hi,
While debugging a mysterious crash of a code, I was able to trace it down
to a SIGSEGV in OMPI 1.6 and 1.6.1. The offending code is in
opal/mca/memory/linux/malloc.c. Please see the following gdb log.
(gdb) c
Continuing.
Program received signal SIGSEGV, Segmentation fault.
opal_memory_ptmalloc2