Re: [OMPI users] GM + OpenMPI bug ...

2010-06-11 Thread José Ignacio Aliaga Estellés
Hi, we have run several tests to locate the problem. Some nodes do not behave correctly when we use gm_allsize -v, and we have isolated them. On the good nodes, we have executed our broadcast test with MPICH-1 and it works correctly, but if we use OpenMPI 1.4.2 it still fails. We would like
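Since the same broadcast test passes under MPICH-1 but fails under Open MPI 1.4.2, one way to narrow the fault down is to check whether the failure follows the GM transport. The mpirun lines below are a hedged sketch: the `--mca btl` flags are standard Open MPI 1.4-era MCA options, but `./bcast_test` is a placeholder for the reporters' own test program, not something named in the thread.

```shell
# Run the broadcast test with the Myrinet/GM BTL excluded (TCP fallback).
# If this run passes, the failure is likely in the GM path.
mpirun --mca btl self,tcp -np 8 ./bcast_test

# Run the same test with GM forced as the only interconnect BTL.
mpirun --mca btl self,gm -np 8 ./bcast_test
```

Comparing the two runs on the same "good" nodes separates a GM-specific problem from a generic Open MPI collectives problem.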

Re: [OMPI users] GM + OpenMPI bug ...

2010-05-31 Thread José Ignacio Aliaga Estellés
Hi, we have run several tests to locate the problem. Some nodes do not behave correctly when we use gm_allsize -v, and we have isolated them. On the good nodes, we have executed our broadcast test with MPICH-1 and it works correctly, but if we use OpenMPI 1.4.2 it still fails. We would like

Re: [OMPI users] GM + OpenMPI bug ...

2010-05-21 Thread Patrick Geoffray
Hi Jose, On 5/21/2010 6:54 AM, José Ignacio Aliaga Estellés wrote: We have used lspci -vvxxx and we have obtained: bi00: 04:01.0 Ethernet controller: Intel Corporation 82544EI Gigabit Ethernet Controller (Copper) (rev 02) This is the output for the Intel GigE NIC; you should look at the o

Re: [OMPI users] GM + OpenMPI bug ...

2010-05-21 Thread José Ignacio Aliaga Estellés
Hi, we have used lspci -vvxxx and we have obtained: bi00: 04:01.0 Ethernet controller: Intel Corporation 82544EI Gigabit Ethernet Controller (Copper) (rev 02) bi00: Subsystem: Intel Corporation PRO/1000 XT Server Adapter bi00: Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoo
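As Patrick's reply points out, the pasted lspci block describes the Intel GigE NIC rather than the Myrinet card that GM uses. A minimal sketch of filtering for the relevant device follows; the sample output is illustrative only, not taken from the reported nodes, so device names and bus addresses are assumptions.

```shell
# Illustrative sample of lspci output. On the real nodes, pipe the actual
# `lspci -vvxxx` output through the same filter instead of this sample.
lspci_sample='04:01.0 Ethernet controller: Intel Corporation 82544EI Gigabit Ethernet Controller (Copper) (rev 02)
05:02.0 Network controller: Myricom Inc. Myrinet 2000 Scalable Cluster Interconnect (rev 04)'

# Keep only the Myricom (GM) card; its PCI config space is the part that
# matters for the GM bug report, not the Intel GigE entry.
printf '%s\n' "$lspci_sample" | grep -i 'myri'
```

Running the filter on each node makes it easy to confirm the Myrinet card is present and to collect its config-space dump from the right device.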

Re: [OMPI users] GM + OpenMPI bug ...

2010-05-20 Thread Patrick Geoffray
Hi Jose, On 5/12/2010 10:57 PM, José Ignacio Aliaga Estellés wrote: I think that I have found a bug in the implementation of the GM collective routines included in OpenMPI. The version of the GM software is 2.0.30 for the PCI64 cards. I obtain the same problems when I use the 1.4.1 or the 1.4.2