Dear Open MPI users,

I am new to HPC, but I am helping a friend to compile and run WRF (Weather 
Research and Forecasting) on our simple cluster: Intel Xeon PCs connected via 
Gigabit Ethernet.

(1) First, I would like to clarify a problem connected to Open MPI itself. It 
was compiled with the Intel compiler suite:

ifort --version
ifort (IFORT) 10.0 20070613
Copyright (C) 1985-2007 Intel Corporation.  All rights reserved.

using the configuration

./configure --prefix=/data/horvat/rahela/openmpi CC=icc CXX=icpc F77=ifort 
FC=ifort --without-memory-manager

The flag "--without-memory-manager" is used because I had problems with an 
"opal wrapper" at the compile stage. Perhaps it is important to mention that I 
compiled Open MPI as a normal (non-root) user on a cluster that uses the 
Maui/Torque scheduler.
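
In case the environment matters, my build steps look roughly like this (a 
sketch; the /opt/intel paths are my guess at the default install locations of 
the 10.0 compilers and may well differ):

# Intel compiler environment (paths assumed; adjust to the actual install)
source /opt/intel/cc/10.0/bin/iccvars.sh
source /opt/intel/fc/10.0/bin/ifortvars.sh

# configure, build, and install into my own (user-writable) prefix
./configure --prefix=/data/horvat/rahela/openmpi CC=icc CXX=icpc \
    F77=ifort FC=ifort --without-memory-manager
make all
make install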

When I run "ompi_info -a" in my bash shell, I first get numerous messages of 
the form

mca: base: component_find: unable to open ***

After that, I believe I get the normal status report of the Open MPI 
installation. I attach the whole output as a.dat.gz.
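
For reference, this is how I set up my environment (in ~/.bashrc) before 
running ompi_info. I wonder whether the "unable to open" messages mean that 
the components fail to find some shared library at runtime, e.g. the Intel 
runtime libraries (the second export is my guess at such a fix; the path is 
assumed):

# my Open MPI installation (prefix from the configure line above)
export PATH=/data/horvat/rahela/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/data/horvat/rahela/openmpi/lib:$LD_LIBRARY_PATH

# Intel runtime libraries, since everything was built with icc/ifort
# (path assumed; adjust to the actual compiler install)
export LD_LIBRARY_PATH=/opt/intel/fc/10.0/lib:$LD_LIBRARY_PATH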

(2) When test-running the successfully compiled WRF programs, I also get 
errors similar to those described above, all of them connected to Open MPI. 
The list of errors ends with the line

[asgard:18655] [NO-NAME] ORTE_ERROR_LOG: Not found in file 
runtime/orte_init_stage1.c at line 182

before printing some general remarks about the MPI status. I attach the errors 
as "b.dat.gz".

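For completeness, this is roughly how I launch the compiled WRF binary (a 
sketch; the process count and hostfile name are placeholders, and --prefix is 
my attempt to make the remote nodes use the same Open MPI installation):

# run from the WRF run directory; wrf.exe is the main WRF executable
mpirun --prefix /data/horvat/rahela/openmpi \
    -np 4 --hostfile my_hosts ./wrf.exe
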
I would greatly appreciate it if we could solve at least the first Open MPI 
question, as I don't know much about WRF anyway.

Thanks in advance,

Martin


Attachment: a.dat.gz
Description: GNU Zip compressed data

Attachment: b.dat.gz
Description: GNU Zip compressed data
