Hi. I recently encountered this error and cannot really understand
what it means. I googled and could not find any relevant
information. Could somebody tell me what might cause this error?
Our systems: Rocks 4.3 x86_64, openmpi-1.2.5, scalapack-1.8.0,
Barcelona, Gigabit interconnections.
The first job seemed to dominate and use most of the 400% CPU.
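To see where each rank actually lands on those dual quad-core nodes,
a small placement check could look like the sketch below. This is not
code from this thread; the file name placement_check.c is made up, and
it assumes Linux/glibc so that sched_getcpu() is available. Build it
with mpicc and launch it with mpirun next to the jobs in question:

/* placement_check.c (hypothetical name): print host and current core per rank. */
#define _GNU_SOURCE
#include <sched.h>     /* sched_getcpu(), glibc/Linux only */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    /* Report which core this rank is running on right now. */
    printf("rank %d of %d on %s, core %d\n", rank, size, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}

If two 4-process jobs report overlapping cores, that could explain one
job appearing to dominate the node.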
Thank you.
On Mon, Feb 25, 2008 at 11:36 PM, Steven Truong wrote:
> Dear all. We just finished installing the first batch of nodes with
> the following configuration:
> Machines: dual quad-core AMD 2350 + 16 GB of RAM
>
Dear all. We just finished installing the first batch of nodes with
the following configuration:
Machines: dual quad-core AMD 2350 + 16 GB of RAM
OS + Apps: Rocks 4.3 + Torque (2.1.8-1) + Maui (3.2.6p19-1) + Openmpi
(1.1.1-8) + VASP
Interconnections: Gigabit Ethernet ports + Extreme Summit x4
.nanostellar.com
Thank you.
Steven.
On 5/18/07, Steven Truong wrote:
Hi, Jeff. Thanks so very much for all your help so far. I decided
that I needed to go back and check whether openmpi even works for
simple cases, so here I am.
So my shell might have exited when it detected that I ran
non
ecutable:
Host: node07.nanostellar.com
Executable: node07
Cannot continue.
--
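As a sanity test of the "simple cases" idea above, a minimal program
might look like the sketch below. It is not code from this thread and
the file name hello_mpi.c is made up; compiling it with the mpicc from
/usr/local/openmpi-1.2.1 and launching it with the matching mpirun
should show whether the Open MPI installation itself is healthy:

/* hello_mpi.c (hypothetical name): smallest possible Open MPI check. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* must precede every other MPI call */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* the call that fails in the VASP runs */
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

If that runs cleanly across nodes, the problem is more likely in how
VASP is being built and linked than in the Open MPI installation.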
On 5/18/07, Jeff Squyres wrote:
On May 18, 2007, at 4:38 PM, Steven Truong wrote:
> [struong@neptune 4cpu4npar10nsim]$ mpirun --mca bt
Hi, all. Once again, I am very frustrated with what I have run into so far.
My system is CentOS 4.4 x86_64, ifort 9.1.043, torque, maui.
I configured openmpi 1.2.1 with this command.
./configure --prefix=/usr/local/openmpi-1.2.1
--with-tm=/usr/local/pbs --enable-static
Now I just tried to run
Thank you very much, Jeff, for your efforts and help.
On 5/9/07, Jeff Squyres wrote:
I have mailed the VASP maintainer asking for a copy of the code.
Let's see what happens.
On May 9, 2007, at 2:44 PM, Steven Truong wrote:
> Hi, Jeff. Thank you very much for looking into this issue
d, it would be most helpful if we could
reproduce the error (and therefore figure out how to fix it).
Thanks!
On May 9, 2007, at 2:19 PM, Steven Truong wrote:
> Oh, no. I tried with ACML and had the same set of errors.
>
> Steven.
>
> On 5/9/07, Steven Truong wrote:
>> H
Oh, no. I tried with ACML and had the same set of errors.
Steven.
On 5/9/07, Steven Truong wrote:
Hi, Kevin and all. I tried with the following:
./configure --prefix=/usr/local/openmpi-1.2.1 --disable-ipv6
--with-tm=/usr/local/pbs --enable-mpirun-prefix-by-default
--enable-mpi-f90 --with
r.
I forgot to mention that our environment has Intel MKL 9.0 or 8.1 and
my machines are dual-processor, dual-core Xeon 5130s.
Well, I am going to try acml too.
Attached is my makefile for VASP and I am not sure if I missed anything again.
Thank you very much for all your help.
On 5/9/07, Steven T
$(CPP)
$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)
to the end of the makefile. It doesn't look like it is in the example
makefiles they give, but I compiled this a while ago.
Hope this helps.
Cheers,
Kevin
On Tue, 2007-05-08 at 19:18 -0700, Steven Truong wrote:
> Hi, all.
Hi, all. I am new to OpenMPI and after the initial setup I tried to run
my app but got the following errors:
[node07.my.com:16673] *** An error occurred in MPI_Comm_rank
[node07.my.com:16673] *** on communicator MPI_COMM_WORLD
[node07.my.com:16673] *** MPI_ERR_COMM: invalid communicator
[node07.my.c