Sent: …16 20:39
To: Open MPI Users
Subject: RE: [OMPI users] Raspberry Pi 2 Beowulf Cluster for OpenFOAM
Hi Gilles,
Yes, that's correct: one node with 3 cores takes about 1.5 minutes for a 10-second simulation, and this grows to 4 minutes when I send the job to 36 cores on 9 IP-connected nodes.
I haven't…
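[Not part of the original message, but the timings reported above can be turned into a speedup and parallel-efficiency figure (1.5 min on 3 cores vs 4 min on 36 cores); a quick sketch:]

```shell
# Speedup and parallel efficiency from the timings above:
# 90 s on 3 cores (1 node) vs 240 s on 36 cores (9 nodes).
# A speedup below 1 means the 36-core run is actually slower.
awk 'BEGIN {
    t1 = 90;  p1 = 3    # 1.5 minutes on 3 cores
    tN = 240; pN = 36   # 4 minutes on 36 cores
    printf "speedup = %.2f, efficiency = %.3f\n", t1/tN, (t1*p1)/(tN*pN)
}'
```

An efficiency of about 3% suggests the run is dominated by communication over the Ethernet interconnect rather than computation.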
From: "…er J" <spencer-k...@uiowa.edu>
Reply-To: Open MPI Users <us...@open-mpi.org>
Date: Monday, 25 January 2016 at 14:47
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] Raspberry Pi 2 Beowulf Cluster for OpenFOAM
Steve,
I am curious as to how you s… the master and slaves.
Thanks,
Steve
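[Not from the thread, but for context on the "master and slaves" question: with Open MPI the compute nodes are usually just entries in a hostfile passed to mpirun. A sketch, where the hostnames and slot counts are assumptions (4 cores per Pi 2):]

```
# Hypothetical Open MPI hostfile for the Pi cluster.
# Hostnames are placeholders; slots=4 assumes one rank per Pi 2 core.
pi01 slots=4
pi02 slots=4
pi03 slots=4
# … one line per node, up to pi10
```

mpirun is then launched from the head node with `-hostfile <file>`, and Open MPI starts ranks on the listed nodes over SSH.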
From: Gilles Gouaillardet [mailto:gilles.gouaillar...@gmail.com]
Sent: 24 January 2016 13:26
To: Open MPI Users
Subject: Re: [OMPI users] Raspberry Pi 2 Beowulf Cluster for OpenFOAM
Steve,
if I understand correctly, running on one node with 4 MPI tasks is three times faster than running on 10 nodes with 40 (10?) tasks.
did you try this test on an x86 cluster and with tcp interconnect, and did you get better performance when increasing the number of nodes?
can you try to ru…
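[The scaling test Gilles suggests could be scripted along these lines. This is a dry-run sketch, not from the thread: the hostfile name and the interFoam solver are assumptions; remove the echo to actually launch the runs and time them.]

```shell
# Dry-run sketch of a node-scaling test: rerun the same decomposed case
# while doubling the node count, forcing the TCP BTL as Gilles asks.
# "pi_hostfile" and interFoam are placeholders, not from the thread.
for nodes in 1 2 4 8; do
  np=$((nodes * 4))   # assumes 4 MPI ranks per Pi 2 (one per core)
  echo "mpirun -np $np -hostfile pi_hostfile --mca btl tcp,self interFoam -parallel"
done
```

If the wall time rises as nodes are added (as in Steve's 3-core vs 36-core numbers), the case is too small for the interconnect: each subdomain's halo exchange costs more than the compute it saves.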
Sent: … January 2016 09:28
To: Open MPI Users
Subject: Re: [OMPI users] Raspberry Pi 2 Beowulf Cluster for OpenFOAM
Hi Steve.
Regarding Step 3, have you thought of using some shared storage?
NFS shared drive perhaps, or there are many alternatives!
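[A minimal sketch of the NFS idea above, so every Pi sees the same case directory. The path, subnet, and "headnode" hostname are assumptions, not from the thread:]

```
# /etc/exports on the head node (path and subnet are assumptions)
/home/pi/OpenFOAM  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/fstab entry on each compute node ("headnode" is a placeholder)
headnode:/home/pi/OpenFOAM  /home/pi/OpenFOAM  nfs  defaults  0  0
```

With the case directory shared, the decomposed processor directories don't have to be copied to each node before an mpirun.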
On 23 January 2016 at 20:47, Steve O'Hara wrote:
Hi,
I'm afraid I'm pretty new to both OpenFOAM and Open MPI, so please excuse me if my questions are either stupid or badly framed.
I've created a 10-node Raspberry Pi Beowulf cluster to test out MPI concepts and see how they are harnessed in OpenFOAM. After a helluva lot of hassle, I've got the…
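[For context on how an OpenFOAM case gets split across such a cluster: the decomposition is driven by system/decomposeParDict. A sketch, where the subdomain count (10 Pis × 4 cores) and the scotch method are assumptions, not from the thread:]

```
// system/decomposeParDict (sketch; 40 = 10 Pi nodes x 4 cores, assumed)
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  40;

// scotch needs no manual processor layout; choice is an assumption
method              scotch;
```

Running decomposePar in the case directory then produces one processor* subdirectory per MPI rank, after which the solver is launched with `mpirun … <solver> -parallel`.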