[OMPI users] C++ error: static object marked for destruction more than once

2006-08-16 Thread Andrew J Caird


Hello,

I have a short code segment that, when compiled with mpiCC, fails
at run time with the error:


C++ runtime abort: internal error: static object marked for 
destruction more than once


If I compile the same code with mpicc, it works fine.  If I 
compile the same code with LAM's mpiCC it works fine.


The code is in the attached zip file (along with the output of 
ompi_info and config.log) and also below.


It could very well be that I'm simply doing something wrong, so 
if someone can point that out, that's fine with me (in that case, 
though, if someone can explain why it works with LAM's mpiCC I'd 
find that interesting).


We're running the latest PGI compilers:
   $ pgCC -V
   pgCC 6.1-6 64-bit target on x86-64 Linux

and v1.1.0 of Open MPI.

Thanks for any advice.
--andy


#include <stdio.h>   /* header names were stripped from the original post; stdio.h is needed for printf */
#include <stdlib.h>  /* second header assumed; the original name was lost */
#include "mpi.h"

int main(int argc,char **argv) {

  int ThisThread=0;
  int TotalThreadsNumber=1;

  printf("asdasdas\n");

  MPI_Init(&argc,&argv);
  MPI_Comm_rank(MPI_COMM_WORLD,&ThisThread);
  MPI_Comm_size(MPI_COMM_WORLD,&TotalThreadsNumber);

  MPI_Finalize();
  return 1;
}


[OMPI users] Dual core Intel CPU

2006-08-16 Thread Allan Menezes

Hi Anyone,
 I have an 18-node cluster of heterogeneous machines. I used the FC5 SMP
kernel and OSCAR 5.0 beta.
I tried the following out on a machine with Open MPI 1.1 and 1.1.1b4. The
machine has a D-Link DGE-530T 1 Gb/s Ethernet card, a 2.66 GHz dual-core
Intel Pentium D 805 CPU, and 1 GB of dual-channel DDR 3200 RAM. I compiled
the ATLAS libraries (version 3.7.13 beta) for this machine and HPL (the
xhpl executable) and ran the following experiment twice:

Contents of my "hosts" file1 for this machine for the 1st experiment:
a8.lightning.net slots=2
Contents of my "hosts" file2 for this machine for the 2nd experiment:
a8.lightning.net

On the single node I ran HPL with N = 6840 and NB = 120 in HPL.dat. With
1024 MB of RAM, N = sqrt(0.75 * ((1024 - 32 MB video overhead)/2) * 10^6 / 8)
is approximately 6840, i.e. roughly 512 MB of RAM per CPU; any larger and
the OS starts using the hard drive for virtual memory. This way the problem
resides entirely in RAM.
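
For reference, the problem-size arithmetic above can be sketched as a small
C program. The 1024 MB total, 32 MB video overhead, two-way split, and 75%
fraction are the figures from this post; the helper name and the use of
decimal megabytes are only illustrative assumptions.

#include <math.h>
#include <stdio.h>

/* Sketch of the HPL problem-size estimate: take ~75% of the RAM left
 * per process after video overhead, divide by 8 bytes per double, and
 * take the square root (the N x N matrix of doubles must fit in RAM).
 * Decimal megabytes are assumed; they match the ~6840 figure above.  */
static int hpl_problem_size(double total_mb, double overhead_mb,
                            int procs, double mem_fraction)
{
    double usable_bytes =
        (total_mb - overhead_mb) / procs * mem_fraction * 1.0e6;
    return (int)sqrt(usable_bytes / 8.0);
}

int main(void)
{
    /* 1024 MB RAM, 32 MB video overhead, 2 cores, 75% usable */
    printf("N ~ %d\n", hpl_problem_size(1024.0, 32.0, 2, 0.75));
    return 0;
}

Built with something like "cc hpl_n.c -lm", this prints N ~ 6819, in line
with the ~6840 used above.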
I ran this command twice, once with each of the two hosts files above:
# mpirun --prefix /opt/openmpi114 --hostsfile hosts -mca btl tcp, self  
-np 1 ./xhpl
In both cases the performance stays about the same, around 4.040 GFlops.
Since experiment 1 runs with slots=2, i.e. as two CPUs, I would expect a
performance increase over experiment 2 of roughly 50-100%.

But I see no difference. Can anybody tell me why this is so?
I have not tried MPICH2.
Thank you,
Regards,
Allan Menezes