Ok, I am investigating -- I think I know what the problem is, but the
guy who did the bulk of the F90 work in OMPI is out traveling for a few
days (making these fixes take a little while).
I'm having trouble running apps with multiple inputs using the xgrid backend to
mpirun. I can't find any options to send files to the nodes, as I would be able
to do via the plain xgrid command-line options. In addition, the output files
don't show up. E.g. when I run LAMMPS locally, I get a dump
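For comparison, here is roughly what I mean by staging files with the plain
Xgrid client (the hostname, paths, and program name below are just
placeholders; check man xgrid for the exact options on your system):

  # submit the job, shipping the local input directory over to the agents
  xgrid -h controller.local -job submit -in ./inputs ./my_app input.dat
  # later, pull stdout and any files the job wrote back to this machine
  xgrid -job results -id <job-id> -out ./outputs

whereas with the xgrid starter I just run something like

  mpirun -np 4 ./my_app input.dat

and I see no equivalent of -in/-out there.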
On Apr 26, 2006, at 2:49 PM, sdamjad wrote:
Brian
I changed lstubs in the gcc compiler. I am enclosing a tar file that has
the output of
config.log, config.out, make.out,
and makeinstall.out
Are you trying to report a problem? From your logs, everything
looked ok.
Brian
--
Brian Barrett
Open
I ran another test and the problem does not occur with
--with-mpi-f90-size=medium.
Michael
On Apr 26, 2006, at 11:50 AM, Michael Kluskens wrote:
Open MPI 1.2a1r9704
Summary: configure with --with-mpi-f90-size=large and then make.
/bin/sh: line 1: ./scripts/mpi_allgather_f90.f90.sh: No such
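For reference, the two builds differ only in that one flag; the configure
lines were along these lines (compiler variables shown for g95, as in the
original report, everything else as usual):

  # this combination builds for me
  ./configure FC=g95 F77=g95 --with-mpi-f90-size=medium && make all
  # this one dies in the generated F90 wrapper scripts
  ./configure FC=g95 F77=g95 --with-mpi-f90-size=large && make all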
Correction on this: the problem only occurs (with Open MPI 1.2) when
I don't use mpirun to launch my process.
I know this seems strange to most MPI users, but it turns out that when using
Open MPI and only needing one process (because I spawn everything else
I need), I had found it quicker just to la
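To make the difference concrete, this is roughly what I mean (the program
name is just an illustration):

  # the usual way: mpirun starts the single master process
  mpirun -np 1 ./master
  # what I actually do: start it as an MPI singleton; it then calls
  # MPI_Comm_spawn itself to create the worker processes it needs
  ./master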
One thing to look at is how much bandwidth the models require compared to
the CPU load. You can redline gigabit Ethernet with a 1 GHz PIII and a
64-bit PCI bus. Opterons on a decent motherboard will definitely keep a
gigabit line chock full. With dual-core you get the advantage of very
fast process
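(Back-of-the-envelope, using the usual peak numbers: gigabit Ethernet tops
out around 125 MB/s, while a 64-bit/66 MHz PCI bus is good for roughly
500 MB/s, so on that class of hardware the wire, not the bus, is the limit.)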
Brian
I changed lstubs in the gcc compiler. I am enclosing a tar file that has the output of
config.log, config.out, make.out,
and makeinstall.out
ask.tar
Description: Unix tar archive
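(Roughly how logs like these are captured, for anyone wanting equivalent
output from their own build; the exact configure arguments are recorded in
config.log itself:

  ./configure 2>&1 | tee config.out      # configure also leaves config.log behind
  make all 2>&1 | tee make.out
  make install 2>&1 | tee makeinstall.out
  tar cf ask.tar config.log config.out make.out makeinstall.out )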
You might want to take this question over to the Beowulf list -- they
talk a lot more about cluster configurations than we do -- and/or the
mm5 and wien2k support lists (since they know the details of those
applications -- if you're going to have a cluster for a specific set of
applications, it can
Hi,
I want to build an HPC cluster for running the mm5 and wien2k
scientific applications for my physics college. Both of them
use MPI.
Interconnection between nodes: GigEth (Cisco 24 port GigEth)
It seems I have two choices for nodes:
* 32 dual-core Opteron processors (1 GB RAM for each node)
* 6
Open MPI 1.2a1r9704
Summary: configure with --with-mpi-f90-size=large and then make.
/bin/sh: line 1: ./scripts/mpi_allgather_f90.f90.sh: No such file or
directory
I doubt this one is system specific
---
my details:
Building OpenMPI 1.2a1r9704 with g95 (Apr 23 2006) on OS X 10.4.6 using
./c
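A minimal sequence that shows the failure for me, modulo whatever other
configure flags one uses (the f90-size flag is the relevant part; the
tee/grep lines are just how I dig the error out of the build output):

  ./configure FC=g95 F77=g95 --with-mpi-f90-size=large
  make all 2>&1 | tee make.out
  grep -n 'mpi_allgather_f90.f90.sh' make.out   # shows where the missing script is invoked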
Blast! I could have sworn that I posted the Wednesday slides already;
I'll go do that right now.
I have pinged Mellanox and Myricom for their Thursday slides; they both
indicated that they needed to get some final approvals before they
posted.
Just wondering if/when the slides from Wednesday and Thursday of the
"Open MPI Developer's Workshop" will be posted.
Thanks
-DON
On Apr 24, 2006, at 12:32 PM, sdamjad wrote:
Brian
sorry, I am enclosing my config.log tar file here.
I cannot get to the make step,
hence cannot include it.
It looks like you are trying to use the IBM XLF compiler for your
Fortran compiler on OS X 10.4. There are some special things you hav
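A minimal sketch of the kind of configure line involved, assuming xlf/xlf95
are the drivers you have installed (use full paths if they are not in your
PATH; everything else about your configure line can stay the same):

  ./configure CC=gcc CXX=g++ F77=xlf FC=xlf95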