Re: [OMPI users] OpenMPI and SLURM

2013-01-13 Thread Beat Rubischon
--with-pmi CFLAGS="-I/usr/include/slurm". Thanks for your help, you made my day! Beat
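
For reference, the configure invocation implied by this follow-up would look roughly like the sketch below; only --with-pmi and the CFLAGS value come from the message, while the source directory, install prefix and make options are assumptions to adjust for your own system.

    # Sketch: building Open MPI with SLURM PMI support
    # (source dir, prefix and -j value are assumptions; --with-pmi and the
    #  CFLAGS value are taken from the message above)
    cd openmpi-1.6.3
    ./configure --prefix=/opt/openmpi \
                --with-pmi \
                CFLAGS="-I/usr/include/slurm"
    make -j4 && make install

With a PMI-enabled build, ranks started directly by srun can bootstrap the Open MPI runtime without going through mpirun.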

[OMPI users] OpenMPI and SLURM

2013-01-12 Thread Beat Rubischon
shut down cleanly.
  Reason:      Before MPI_INIT completed
  Local host:  node30.cluster
  PID:         42204
--------------------------------------------------------------------------
srun: error: node30: tasks 0-1: Exited with exit code 1
salloc: Relinquishing job allocation 74
salloc: Job allocation 74 has been revoked.
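
A launch along the following lines would produce output like the log above (node/task counts and the binary name are assumptions; only salloc, srun and job allocation 74 appear in the original message):

    # Sketch: direct srun launch of an MPI binary inside a SLURM allocation
    # (node/task counts and the binary name are assumptions)
    salloc -N 1 -n 2 srun ./mpi_hello
    # Without PMI support in the Open MPI build, each rank aborts before
    # MPI_INIT completes; srun then reports "tasks 0-1: Exited with exit
    # code 1" and salloc relinquishes the allocation, as in the log above.

The follow-up message at the top of this page resolves this by rebuilding Open MPI with --with-pmi.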

Re: [OMPI users] IMB-OpenMPI on Centos 6

2012-02-27 Thread Beat Rubischon
wc/document?cc=us&lc=en&dlc=en&tmp_geoLoc=true&docname=c03113904 HTH Beat

Re: [OMPI users] Open MPI and DAPL 2.0.34 are incompatible?

2011-12-23 Thread Beat Rubischon
a clean and free MPI on your cluster with easy interfaces to your job scheduler. Or buy a commercial MPI, invest a lot of manpower in a tight integration, and gain improved latency and/or throughput. Beat