[OMPI users] p4_error: latest msg from perror: Bad file descriptor

2006-10-09 Thread Vadivelan Ranjith
Hi,
Thanks to everyone for your help.
Today I got an error message when submitting a job. First I ran the code using the explicit method: the results were accurate and no problem occurred when I submitted the job. Then I changed the code to the implicit method, and now I get an error when I submit the job.
I checked carefully: the code reads all the files and the iteration starts, but after one iteration it gives the error below. The same code runs on another machine and gives correct results, so please help me figure out how to fix it.

Thanks in advance,
Velan


job.e file:
p4_error: latest msg from perror: Bad file descriptor
p4_error: latest msg from perror: Bad file descriptor
p4_error: latest msg from perror: Bad file descriptor
p4_error: latest msg from perror: Bad file descriptor
-
job.o file:
3
node18.local
node19.local
node17.local
# Allocating   5 nodes to block  1
# Allocating   1 nodes to block  2
# Require mxb >=   97
# Require mxa >=   26 mya >=   97 and mza >=   75
# Maximum load imbalance =  71.69%
# Navier-Stokes Simulation
# Implicit Full Matrix DP-LUR
# Reading restart files...( 0.34 seconds)
# Freestream Mach Number =  6.50

 1   0.3670E+01   0.7803E+05   16   1572   0.1222E-08
p5_2609:  p4_error: interrupt SIGx: 13
bm_list_17559: (3.666982) wakeup_slave: unable to interrupt slave 0 pid 17542
rm_l_1_18696: (2.738297) net_send: could not write to fd=6, errno = 9
rm_l_1_18696:  p4_error: net_send write: -1
rm_l_2_2605: (2.614927) net_send: could not write to fd=6, errno = 9
rm_l_4_18718: (2.373120) net_send: could not write to fd=6, errno = 9
rm_l_4_18718:  p4_error: net_send write: -1
rm_l_2_2605:  p4_error: net_send write: -1
rm_l_3_17584: (2.496277) net_send: could not write to fd=6, errno = 9
rm_l_3_17584:  p4_error: net_send write: -1
rm_l_5_2626: (2.249144) net_send: could not write to fd=5, errno = 32
p5_2609: (2.251356) net_send: could not write to fd=5, errno = 32
---
job file:
#!/bin/bash
#PBS -l nodes=3:ppn=1

cd $PBS_O_WORKDIR
n=`/usr/local/bin/pbs.py $PBS_NODEFILE hosts`
echo $n
cat hosts
/opt/mpich/intel/bin/mpirun -nolocal -machinefile hosts -np 6 pg3d.exe
---
Machine configuration:
CPU: dual-processor Intel(R) Xeon(R) 3.2 GHz
Installed using Rocks 4.1







[OMPI users] ERROR: gfortran compiler is not in PATH for driver: mpif90

2007-02-02 Thread Vadivelan Ranjith
Hi All,
I used mpich2-1.0.3 to compile our code, and it compiled fine. But when I tried to test our code with Intel MPI, it gave the following error:

ERROR: gfortran compiler is not in PATH for driver: mpif90

My .bashrc has the following settings:
source /opt/intel/fc/9.1.037/bin/ifortvars.sh
source /opt/intel/mpi/3.0/bin32/mpivars.sh
export PATH=/opt/intel/mpi/3.0/bin32:$PATH

Then I changed the paths to:
#export PATH=/opt/ofed1.1/mpi/intel/mvapich-0.9.7-mlx2.20/bin:$PATH
#export LD_LIBRARY_PATH=/opt/ofed1.1/mpi/intel/mpvapich-0.9.7-mlx2.20/lib/shared:$LD_LIBRARY_PATH

Now I get a different error while compiling our code:

ld: skipping incompatible ///opt/ofed1.1/mpi/intel/mvapich-0.9.7-mlx2.2.0/lib/libmpichf90.a when searching for -lmpichf90
ld: cannot find -lmpichf90

I don't know what this error means. Our code works fine with the open-source MPICH.
Can anyone please help me compile our code?

Thanks in advance,
Velan




[OMPI users] Error using MPI_WAITALL

2007-02-10 Thread Vadivelan Ranjith
Hi,
I am using mpich2-1.0.3 to compile our code. Our code calls MPI_WAITALL. We ran the case on an Intel dual-core machine without any problem, and the solution was fine. Then I tried to run the code on an Intel quad-core machine. Compilation with mpif90 is fine, but when I started running the executable I got the following error:
---
Fatal error in MPI_Waitall: Invalid MPI_Request, error stack:
MPI_Waitall(241): MPI_Waitall(count=250, req_array=0x23e52e0, status_array=0x7fbfffe3a0) failed
MPI_Waitall(109): Invalid MPI_Request
---

So I removed all the lines where MPI_WAITALL is used, recompiled the code with mpif90 (MPICH), and ran it again. Now it runs without any problem. Can you please explain what is happening here?
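For reference, here is a minimal, hypothetical sketch of the usual MPI_WAITALL call pattern in Fortran. It is not taken from our solver; the program name, buffers, and ring-exchange neighbours are made up purely for illustration. The point it shows is that the count passed to MPI_WAITALL must equal the number of requests actually posted, and every entry of the request array must be a handle returned by a nonblocking call (or MPI_REQUEST_NULL); a request entry that was never set by a nonblocking call, or a count larger than the number of posted requests, typically triggers exactly this "Invalid MPI_Request" error.
---
program waitall_sketch
  implicit none
  include 'mpif.h'
  integer, parameter :: n = 4
  integer :: ierr, rank, nprocs, nreq
  integer :: req(2), stats(MPI_STATUS_SIZE, 2)
  double precision :: sendbuf(n), recvbuf(n)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  sendbuf = dble(rank)
  recvbuf = 0.0d0
  nreq = 0

  if (nprocs > 1) then
     ! simple ring exchange: receive from the left neighbour,
     ! send to the right neighbour
     call MPI_IRECV(recvbuf, n, MPI_DOUBLE_PRECISION, &
                    mod(rank + nprocs - 1, nprocs), 0, MPI_COMM_WORLD, &
                    req(1), ierr)
     call MPI_ISEND(sendbuf, n, MPI_DOUBLE_PRECISION, &
                    mod(rank + 1, nprocs), 0, MPI_COMM_WORLD, &
                    req(2), ierr)
     nreq = 2
  end if

  ! the count passed to MPI_WAITALL matches the number of requests
  ! actually posted above, and req(1..nreq) all hold valid handles
  call MPI_WAITALL(nreq, req, stats, ierr)

  call MPI_FINALIZE(ierr)
end program waitall_sketch
---
A file like this should build with the same mpif90 wrapper (e.g. mpif90 waitall_sketch.f90) and run under mpiexec.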

Thanks
Velan

