On May 21, 2010, at 3:13 AM, Olivier Riff wrote:
> -> That is what I was thinking about implementing. As you mentioned, and
> specifically for my case where I mainly send short messages, there might not
> be much win. By the way, are there some benchmarks testing sequential
> MPI_Isend versus M
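I don't know of a published benchmark offhand, but a minimal sketch along the
following lines is one way to time the two paths on a given cluster. The program
name isend_vs_send, the 16-integer message length, and the repetition count are
arbitrary assumptions, not anything from this thread; run it with at least two
ranks, e.g. mpiexec -np 2 a.out.

  ! Minimal sketch (not from this thread): time MPI_Send against
  ! MPI_Isend/MPI_Wait for short messages between ranks 0 and 1.
  program isend_vs_send
    use mpi
    implicit none
    integer, parameter :: nmsg = 10000, msglen = 16
    integer :: rank, ierr, i, req, buf(msglen)
    double precision :: t0, t1
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    buf = 0
    ! Blocking path: rank 0 sends, rank 1 receives.
    call MPI_Barrier(MPI_COMM_WORLD, ierr)
    t0 = MPI_Wtime()
    do i = 1, nmsg
       if (rank == 0) then
          call MPI_Send(buf, msglen, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
       else if (rank == 1) then
          call MPI_Recv(buf, msglen, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, &
                        MPI_STATUS_IGNORE, ierr)
       end if
    end do
    t1 = MPI_Wtime()
    if (rank == 0) print *, 'MPI_Send       :', (t1 - t0) / nmsg, 's/msg'
    ! Non-blocking path: same traffic via MPI_Isend followed by MPI_Wait.
    call MPI_Barrier(MPI_COMM_WORLD, ierr)
    t0 = MPI_Wtime()
    do i = 1, nmsg
       if (rank == 0) then
          call MPI_Isend(buf, msglen, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, req, ierr)
          call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)
       else if (rank == 1) then
          call MPI_Recv(buf, msglen, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, &
                        MPI_STATUS_IGNORE, ierr)
       end if
    end do
    t1 = MPI_Wtime()
    if (rank == 0) print *, 'MPI_Isend+Wait :', (t1 - t0) / nmsg, 's/msg'
    call MPI_Finalize(ierr)
  end program isend_vs_send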
Hello,
I am resending this because I am not sure if it was sent out to the OMPI
list.
Any help would be greatly appreciated.
best
Michael
On 05/19/10 13:19, Michael E. Thomadakis wrote:
Hello,
I would like to build OMPI V1.4.2 and make it available to our users at the
Supercomputing C
Hi Jose,
On 5/21/2010 6:54 AM, José Ignacio Aliaga Estellés wrote:
We have used the lspci -vvxxx and we have obtained:
bi00: 04:01.0 Ethernet controller: Intel Corporation 82544EI Gigabit
Ethernet Controller (Copper) (rev 02)
This is the output for the Intel GigE NIC; you should look at the o
Hi,
Am 21.05.2010 um 17:19 schrieb Eloi Gaudry:
> Hi Reuti,
>
> Yes, the Open MPI binaries in use were built with the --with-sge configure
> option, and we only use those binaries on our cluster.
>
> [eg@moe:~]$ /opt/openmpi-1.3.3/bin/ompi_info
> MCA ras: gridengine
Hi Reuti,
Yes, the Open MPI binaries in use were built with the --with-sge configure
option, and we only use those binaries on our cluster.
[eg@moe:~]$ /opt/openmpi-1.3.3/bin/ompi_info
Package: Open MPI root@moe Distribution
Open MPI: 1.3.3
Open MPI
Your Fortran call to 'mpi_bcast' needs an integer error argument (ierror) at
the end of the argument list. Also, I don't think 'MPI_INT' is correct for
Fortran; it should be 'MPI_INTEGER'. With these changes the program
works OK.
T. Rosmond
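Klaus's program isn't reproduced in full in this preview, so the following is
only a hedged stand-in, not his actual code: a minimal Fortran broadcast with
both fixes applied (MPI_INTEGER as the datatype and the trailing ierror
argument).

  ! Minimal stand-in illustrating the two fixes: MPI_INTEGER as the Fortran
  ! datatype, and the trailing ierror argument every Fortran MPI call takes.
  program bcast_example
    use mpi
    implicit none
    integer :: rank, ierr, value
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    value = -1
    if (rank == 0) value = 42
    ! Fortran binding: MPI_BCAST(buffer, count, datatype, root, comm, ierror)
    call MPI_Bcast(value, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    print *, 'rank', rank, 'has value', value
    call MPI_Finalize(ierr)
  end program bcast_example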
On Fri, 2010-05-21 at 11:40 +0200, Pankatz, Klaus wrote:
> Hi folks,
>
Pankatz, Klaus wrote:
Hi folks,
Open MPI 1.4.1 seems to have another problem with my machine, or something on it.
This little program here (compiled with mpif90), started with mpiexec -np 4
a.out, produces the following output:
Surprisingly, the same thing written in C code (compiled with mpiC
Hi,
Am 21.05.2010 um 14:11 schrieb Eloi Gaudry:
> Hi there,
>
> I'm observing something strange on our cluster managed by SGE6.2u4 when
> launching a parallel computation on several nodes, using OpenMPI/SGE tight-
> integration mode (OpenMPI-1.3.3). It seems that the SGE allocated slots are
>
On Tue, May 18, 2010 at 3:53 PM, Josh Hursey wrote:
>> I've noticed that ompi-restart doesn't support the --rankfile option.
>> It only supports --hostfile/--machinefile. Is there any reason
>> --rankfile isn't supported?
>>
>> Suppose you have a cluster without a shared file system. When one node
Hi there,
I'm observing something strange on our cluster managed by SGE6.2u4 when
launching a parallel computation on several nodes, using OpenMPI/SGE tight-
integration mode (OpenMPI-1.3.3). It seems that the SGE allocated slots are
not used by OpenMPI, as if OpenMPI was doing its own round-robin
Hi,
We have used the lspci -vvxxx and we have obtained:
bi00: 04:01.0 Ethernet controller: Intel Corporation 82544EI Gigabit
Ethernet Controller (Copper) (rev 02)
bi00: Subsystem: Intel Corporation PRO/1000 XT Server Adapter
bi00: Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoo
Hi folks,
Open MPI 1.4.1 seems to have another problem with my machine, or something on
it.
This little program here (compiled with mpif90), started with mpiexec -np 4
a.out, produces the following output:
Surprisingly, the same thing written in C code (compiled with mpiCC) works
without a probl
Hello Jeff,
thanks for your detailed answer.
2010/5/20 Jeff Squyres
> You're basically talking about implementing some kind of
> application-specific protocol. A few tips that may help in your design:
>
> 1. Look into MPI_Isend / MPI_Irecv for non-blocking sends and receives.
> These may be p
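A rough sketch of tip 1 (my illustration, not code from Jeff's mail): post the
receive first with MPI_Irecv, start the send with MPI_Isend, let useful work
overlap the communication, then complete both requests with MPI_Waitall. The
ring exchange pattern and message size below are arbitrary assumptions.

  ! Sketch only: post a receive early with MPI_Irecv, overlap work, then
  ! complete both requests with MPI_Waitall. Tags and sizes are arbitrary.
  program nonblocking_sketch
    use mpi
    implicit none
    integer, parameter :: msglen = 8
    integer :: rank, nprocs, ierr, peer
    integer :: reqs(2), sendbuf(msglen), recvbuf(msglen)
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    peer = mod(rank + 1, nprocs)   ! send to the next rank in a ring
    sendbuf = rank
    ! Post the receive before the send so the message has somewhere to land.
    call MPI_Irecv(recvbuf, msglen, MPI_INTEGER, MPI_ANY_SOURCE, 0, &
                   MPI_COMM_WORLD, reqs(1), ierr)
    call MPI_Isend(sendbuf, msglen, MPI_INTEGER, peer, 0, &
                   MPI_COMM_WORLD, reqs(2), ierr)
    ! ... application work could overlap with the communication here ...
    call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE, ierr)
    print *, 'rank', rank, 'received from rank', recvbuf(1)
    call MPI_Finalize(ierr)
  end program nonblocking_sketch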