Thanks for reporting this.
I have committed the fixes to the v1.0 and v1.1 branches; they will show
up in all of the snapshots for tomorrow.
From: users-boun...@open-mpi.org On Behalf Of Bernard Knaepen
Ok -- let me know what you find. I just checked and the code *looks*
right to me, but that doesn't mean that there isn't some deeper
implication that I'm missing.
> -----Original Message-----
> From: users-boun...@open-mpi.org On Behalf Of Michael Kluskens
Hi George,
I have no firewall running on that system. We have an external
firewall that sits outside our departmental network, so communication between
the nodes in the cluster is unrestricted. I do not have any problems
connecting to darwin (the head node) from fisher or any other node.
My test codes compile fine, but I'm fairly certain the logical argument is
being handled incorrectly. When I merge two communicators, one with
high=.false. and the other with high=.true., the latter group should go into
the higher ranks and the former should contain rank 0.
I'll work it over again tomorrow.
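For reference, a minimal sketch of the semantics being described, assuming at
least two processes and an illustrative split of MPI_COMM_WORLD into two
groups (the split, the tag value, and the printf are not from the original
test code):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm half, inter, merged;
    int world_rank, world_size, merged_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Split MPI_COMM_WORLD into a "low" half and a "high" half. */
    int high = (world_rank >= world_size / 2);
    MPI_Comm_split(MPI_COMM_WORLD, high, world_rank, &half);

    /* Build an intercommunicator between the two halves; the remote
       leader is rank 0 of the other half, expressed in MPI_COMM_WORLD. */
    int remote_leader = high ? 0 : world_size / 2;
    MPI_Intercomm_create(half, 0, MPI_COMM_WORLD, remote_leader, 99, &inter);

    /* The group passing high = true should end up in the upper ranks of
       the merged intracommunicator; the group passing high = false
       should contain rank 0. */
    MPI_Intercomm_merge(inter, high, &merged);
    MPI_Comm_rank(merged, &merged_rank);
    printf("world rank %d (high=%d) -> merged rank %d\n",
           world_rank, high, merged_rank);

    MPI_Comm_free(&merged);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}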
Hi:
I am trying to run some performance tests and I want to adjust some of the
tcp and sm btl parameters. However, I have had difficulty finding any good
documentation on them. I am assuming my best bet is to look through the
code, but I thought I would see if there is anything I missed.
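One way to explore them, assuming a standard installation (the parameter
named below is just an example of the naming pattern, not a recommendation):
running "ompi_info --param btl tcp" and "ompi_info --param btl sm" lists the
parameters each of those BTL components registers, along with their current
values and a one-line description, and any of them can be changed for a
single run with something like "mpirun --mca btl_tcp_eager_limit 65536 ...",
so tuning does not require digging through the source.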
Hi,
are there any known problems with Open MPI (SVN rev. 9792) and Parallel
NetCDF (version 1.0.1)?
I'm unable to make this combination work.
The tests distributed with the source of pnetcdf fail when building with
Open MPI.
It looks as if the problem is not really due to Open MPI, but to the bindings.
The problems are related to the bindings.
I ran nm against the libraries, and it clearly shows that, among all the
different aliases used, the ones used by the compiler are not there.
For example, for the FORTRAN function MPI_COMM_RANK, this is what
is defined in the libraries:
.MPI_Comm_rank
.P
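As a hedged aside (the library name below is only an example): Fortran
compilers typically decorate external symbols in one of a few ways, e.g.
mpi_comm_rank, mpi_comm_rank_, mpi_comm_rank__ or MPI_COMM_RANK, and the
Open MPI Fortran bindings only provide the variant(s) that its configure run
detected. Comparing "nm libmpi.a | grep -i mpi_comm_rank" against the symbols
in an object file produced by the Fortran compiler that pnetcdf is using
should show quickly whether the two naming conventions agree.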
Dear Brian,
Yes, I would be interested to test again when the patch is pushed into
the nightly snapshots.
Thanks,
Bernard.
On 5/2/06, Brian Barrett wrote:
On Apr 28, 2006, at 1:39 PM, Bernard Knaepen wrote:
> I am trying to install/run open-mpi on a MacBook Pro running Mac OS X
> 10.4.6, *with* Fortran support.
Do you have a firewall on the node called darwin? It looks like fisher
is unable to create a TCP connection to darwin, and a firewall
seems to be one of the most common causes of that...
Thanks,
george.
On May 2, 2006, at 5:19 AM, Ali Soleimani wrote:
Hello all,
I recently got OpenMPI 1.0.2 (rev 9571) compiled and running on a small
EM64T-based cluster.
On Apr 28, 2006, at 1:39 PM, Bernard Knaepen wrote:
I am trying to install/run open-mpi on a MacBook Pro running Mac OS X
10.4.6, *with* Fortran support.
I am using the Intel Fortran Compiler 9.1 (Professional Edition).
Compilation/installation went fine, except that the ifort compiler was
not recognized.
Thanks for the email. There was a problem with the setup of the
Intel C++ compiler. I had to add the following line to the icpc.cfg
file:
-gcc-version=400
Doug
On May 2, 2006, at 12:24 AM, Jeff Squyres (jsquyres) wrote:
Can you send the config.log file as well? (please compress)
That file contains a bunch of data we need to see to verify the problem.
On May 1, 2006, at 7:16 PM, Jeffrey Fox wrote:
I got openmpi-1.0.2 to compile on a (small) G5 cluster. The C and C++
compilers work fine so far, but the mpif77 and mpif90 scripts
send the wrong flags to the f77 and f90 compilers.
Side note: I got the Absoft compilers to work using "./conf
Hello all,
I recently got OpenMPI 1.0.2 (rev 9571) compiled and running on a
small EM64T-based cluster. Everything works fine when running on a single
host, or when running simple commands or test scripts on multiple hosts. But
when I try to run a major program (cosmomc), I get the following
Can you send the config.log file as well? (please compress)
That file contains a bunch of data we need to see to verify the problem.
From a quick glance at your config.out, we typically see this kind of
output when the C++ compiler is not installed properly or is otherwise
unable to compile C++ code.
Can you run this program through a debugger and see if you can produce a
backtrace where the error is occurring? (the OS X error message suggests
putting a breakpoint in "szone_debug" to track it down)
It looks like it's trying to malloc a massive amount of memory, which
shouldn't be happening.
Can you run "mpif77 --showme"? This will show the underlying command
that mpif77 is issuing. We can verify that it is linking against the
right Open MPI libraries, etc.
> -----Original Message-----
> From: users-boun...@open-mpi.org On Behalf Of Brignone,