On Wed, Jul 14, 2010 at 04:27:11PM -0400, Jeff Squyres wrote:
> On Jul 9, 2010, at 12:43 PM, Douglas Guptill wrote:
>
> > After some lurking and reading, I plan this:
> > Debian (lenny)
> > + fai - for compute-node operating system install
> > + Torque - job
Simone Pellegrini wrote:
Dear Open MPI community,
I would like to know from expert system administrators whether they know of any
"standardized" way of tuning Open MPI runtime parameters.
I need to tune performance on a custom cluster, so I would like
some hints on how to proceed in the
On Thu, 15 Jul 2010 13:03:31 -0400, Jeff Squyres wrote:
> Given the oversubscription on the existing HT links, could contention
> account for the difference? (I have no idea how HT's contention
> management works) Meaning: if the stars line up in a given run, you
> could end up with very little/n
Gabriele Fatigati wrote:
Dear OpenMPI users,
is it possible to define a set of parameters for a range of process counts and
message sizes in openmpi-mca-params.conf? For example:
if nprocs < 256
some mca parameters...
if nprocs > 256
other mca parameters...
and the same related to mes
Given the oversubscription on the existing HT links, could contention account
for the difference? (I have no idea how HT's contention management works)
Meaning: if the stars line up in a given run, you could end up with very
little/no contention and you get good bandwidth. But if there's a bi
There is a slightly newer version available, 8.2.1c, at
http://www.oracle.com/goto/ompt
You should be able to install it side by side without interfering with a
previously installed version.
If that does not alleviate the issue, the additional information Scott
asked for would be useful. The full mpir
On 7/15/2010 10:18 AM, Eloi Gaudry wrote:
> hi edgar,
>
> thanks for the tips, I'm gonna try this option as well. the segmentation
> fault I'm observing always happens during a collective communication
> indeed...
> it basically switches all collective communication to basic mode, right?
>
hi edgar,
thanks for the tips, I'm gonna try this option as well. the segmentation fault
I'm observing always happens during a collective communication indeed...
it basically switches all collective communication to basic mode, right?
sorry for my ignorance, but what's an NCA?
thanks,
élo
On Thu, 15 Jul 2010 09:36:18 -0400, Jeff Squyres wrote:
> Per my other disclaimer, I'm trolling through my disastrous inbox and
> finding some orphaned / never-answered emails. Sorry for the delay!
No problem, I should have followed up on this with further explanation.
> Just to be clear -- you
you could first try using the algorithms in the basic module, e.g.
mpirun -np x --mca coll basic ./mytest
and see whether this makes a difference. I used to sometimes observe a
(similar?) problem in the openib btl, triggered by the tuned
collective component, in cases where the ofed libraries
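For anyone wanting a self-contained ./mytest to run under both settings, a minimal sketch could look like the program below (mytest is just the placeholder name from the command above; this is an assumed reproducer, not anything shipped with Open MPI):

/* bcast_test.c - tiny MPI_Bcast exerciser; build with: mpicc bcast_test.c -o mytest
 * run with:  mpirun -np <x> ./mytest
 *            mpirun -np <x> --mca coll basic ./mytest
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    int count = 1 << 20;              /* 1 Mi ints per broadcast */
    int *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    buf = malloc(count * sizeof(int));

    for (i = 0; i < 100; i++) {
        if (rank == 0) {
            buf[0] = i;               /* something recognizable to check after the bcast */
        }
        MPI_Bcast(buf, count, MPI_INT, 0, MPI_COMM_WORLD);
        if (buf[0] != i) {
            fprintf(stderr, "rank %d: unexpected data on iteration %d\n", rank, i);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
    }

    if (rank == 0) {
        printf("100 broadcasts of %d ints completed on %d ranks\n", count, size);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}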
On Jul 7, 2010, at 2:53 PM, Jeremiah Willcock wrote:
> The Open MPI FAQ shows how to add libraries to the Open MPI wrapper
> compilers when building them (using configure flags), but I would like to
> add flags for a specific run of the wrapper compiler. Setting OMPI_LIBS
> overrides the necessar
(still trolling through the history in my INBOX...)
On Jul 9, 2010, at 8:56 AM, Andreas Schäfer wrote:
> On 14:39 Fri 09 Jul , Peter Kjellstrom wrote:
> > 8x pci-express gen2 5GT/s should show figures like mine. If it's pci-express
> > gen1 or gen2 2.5GT/s or 4x or if the IB only came up with
Lydia,
Which interconnect is this running over?
Scott
On Jul 15, 2010, at 5:19 AM, Lydia Heck wrote:
> We are running Sun's build of Open MPI 1.3.3r21324-ct8.2-b09b-r31
> (HPC8.2) and one code that runs perfectly fine under
> HPC8.1 (Open MPI) 1.3r19845-ct8.1-b06b-r21 and before fails with
>
On Jul 15, 2010, at 9:27 AM, Gabriele Fatigati wrote:
> Mm, at the moment no,
>
> but I think it would be a good idea to add this feature in a future Open MPI
> release :)
Agreed.
> We could have a parameter set that works well for a precise number of procs
> but not for a larger (or sm
Per my other disclaimer, I'm trolling through my disastrous inbox and finding
some orphaned / never-answered emails. Sorry for the delay!
On Jun 2, 2010, at 4:36 PM, Jed Brown wrote:
> The nodes of interest are 4-socket Opteron 8380 (quad core, 2.5 GHz),
> connected
> with QDR InfiniBand. Th
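For what it's worth, the run-to-run bandwidth variation being discussed can be seen with a crude two-rank ping-pong timing loop; the sketch below is only a generic microbenchmark assumed for illustration, not Jed's actual test:

/* pingpong.c - crude two-rank bandwidth probe; build with: mpicc pingpong.c -o pingpong
 * run with:  mpirun -np 2 --host nodeA,nodeB ./pingpong   (host names are placeholders)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int count = 4 * 1024 * 1024;   /* 4 MiB message */
    const int iters = 100;
    char *buf;
    int rank, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(count);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, count, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, count, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, count, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, count, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* each iteration moves the message out and back, hence the factor of 2 */
        double gbytes = 2.0 * iters * (double)count / 1.0e9;
        printf("average bandwidth: %.2f GB/s\n", gbytes / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Running it several times in a row on the nodes in question would show whether the number moves around from run to run.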
Mm, at the moment no,
but I think it would be a good idea to add this feature in a future Open MPI
release :)
We could have a parameter set that works well for a precise number of procs
but not for a larger (or smaller) number. The same for message size.
Thanks for the quick reply! :D
On Jun 2, 2010, at 10:14 AM, John Cary wrote:
> It seems that the rpath arg is something that bites me over and again.
> What are your thoughts about making this automatic?
I'm trolling through the disaster that is my inbox and finding some orphaned
email threads -- sorry for the delay, folks!
We don't have any kind of logic language like that for the params files.
Got any suggestions / patches?
On Jul 15, 2010, at 8:37 AM, Gabriele Fatigati wrote:
> Dear OpenMPI users,
>
> is it possible to define a set of parameters for a range of process counts
> and message sizes in openmp
Dear OpenMPI users,
is it possible to define a set of parameters for a range of process counts and
message sizes in openmpi-mca-params.conf? For example:
if nprocs < 256
some mca parameters...
if nprocs > 256
other mca parameters...
and the same related to message size?
Thanks in advance.
This usually means that you have mis-matched versions of Open MPI across your
machines. Double check that you have the same version of Open MPI installed on
all the machines that you'll be running (e.g., perhaps birg-desktop-10 has a
different version?).
On Jul 15, 2010, at 5:18 AM, TH Chew w
We are running Sun's build of Open MPI 1.3.3r21324-ct8.2-b09b-r31
(HPC8.2) and one code that runs perfectly fine under
HPC8.1 (Open MPI) 1.3r19845-ct8.1-b06b-r21 and before fails with
[oberon:08454] *** Process received signal ***
[oberon:08454] Signal: Segmentation Fault (11)
[oberon:08454]
Hi all,
I am setting up a 7+1-node cluster for MD simulation, specifically using
GROMACS. I am using Ubuntu Lucid 64-bit on all machines. I installed gromacs,
gromacs-openmpi, and gromacs-mpich from the repository. The MPICH version of
gromacs runs fine without any error. However, when I ran the OpenMPI ve
hi Rolf,
unfortunately, i couldn't get rid of that annoying segmentation fault when
selecting another bcast algorithm.
i'm now going to replace MPI_Bcast with a naive implementation (using MPI_Send
and MPI_Recv) and see if that helps.
regards,
éloi
On Wednesday 14 July 2010 10:59:53 Eloi Gaud
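A naive linear broadcast of the kind Eloi describes (my own sketch assuming a plain root-sends-to-everyone scheme, not his actual code) could look like this:

/* naive_bcast.c - linear broadcast built from MPI_Send/MPI_Recv, as a
 * work-around for a misbehaving MPI_Bcast; the tag 4321 is arbitrary. */
#include <mpi.h>

int naive_bcast(void *buf, int count, MPI_Datatype type, int root, MPI_Comm comm)
{
    int rank, size, peer, rc = MPI_SUCCESS;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == root) {
        /* root sends the buffer to every other rank, one at a time */
        for (peer = 0; peer < size; peer++) {
            if (peer == root) continue;
            rc = MPI_Send(buf, count, type, peer, 4321, comm);
            if (rc != MPI_SUCCESS) return rc;
        }
    } else {
        /* non-root ranks simply wait for their copy from the root */
        rc = MPI_Recv(buf, count, type, root, 4321, comm, MPI_STATUS_IGNORE);
    }
    return rc;
}

It scales linearly in the number of ranks, so it is only a debugging aid, but it takes the tuned collective component out of the picture entirely.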
Hi Miguel,
Cygwin is not actively supported, as we are now focusing on native
Windows builds using CMake and Visual Studio. But I remember there were
emails some time ago saying that people have done Cygwin builds with the 1.3
series; see here: http://www.open-mpi.org/community/lists/users/2008/11/7294.ph
Can somebody help, please? I am sorry to spam the mailing list, but I really
need your help.
Thanks in advance.
Best Regards,
Nguyen Toan
On Thu, Jul 8, 2010 at 1:25 AM, Nguyen Toan wrote:
> Hello everyone,
> I have a question concerning the checkpoint overhead in Open MPI, which is
> the difference