…yet in my program both
> > MPI_Send()
> > and MPI_Ssend() reproducibly perform quicker than SSend(). Is there
> > something
> > obvious I'm missing?
> >
> > Regards,
> > Jeremias
--
Eric Thibodeau
Neural Bucket Solutions Inc.
T. (514) 736-1436
C. (514) 710-0517
> …case of a blocking send, where the library
> does not return until the data is pushed onto the network buffers, i.e.
> the library is the one in control until the send is completed.
>
>Thanks,
> george.
>
> On Oct 15, 2007, at 2:23 PM, Eric Thibodeau wrote:
>
> &
…relevant (well, intelligible for me, that is ;P ) cause of the failure in
config.log. Any help would be appreciated.
Thanks,
Eric Thibodeau
[Attachment: ompi-output.tar.gz (application/gzip)]
world"
;
return 0;
}
You should probably check with Intel support for more details.
On Dec 6, 2007, at 11:25 PM, Eric Thibodeau wrote:
Hello all,
I am unable to get past ./configure as ICC fails on C++ tests (see
attached ompi-output.tar.gz). Configure was called without and the
Prasanna Ranganathan wrote:
Hi,
I have upgraded my openMPI to 1.2.6 (we have Gentoo, and emerge showed
1.2.6-r1 to be the latest stable version of openMPI).
Prasanna, do a sync (1.2.7 is in Portage) and report back.
Eric
I do still get the following error message when running my test helloWorld
Prasanna, also make sure you try with USE=-threads ...as the ebuild
states, it's _experimental_ ;)
Keep your eye on:
https://svn.open-mpi.org/trac/ompi/wiki/ThreadSafetySupport
Eric
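For the record, a hypothetical way to apply that flag to Open MPI alone on Gentoo (the standard Portage per-package mechanism; adjust the atom to your tree):

echo "sys-cluster/openmpi -threads" >> /etc/portage/package.use
emerge --oneshot sys-cluster/openmpi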
Prasanna Ranganathan wrote:
Hi,
I have upgraded my openMPI to 1.2.6 (we have Gentoo, and emerge showed
1.2.6
Prasanna Ranganathan wrote:
Hi Eric,
Thanks a lot for the reply.
I am currently working on upgrading to 1.2.7
I do not quite follow your directions; what do you refer to when you say
"try with USE=-threads..."?
I am referring to the USE variable which is used to set global package
speci
ll there to be noticed
and logged ;)
On Sep 10, 2008, at 7:52 PM, Eric Thibodeau wrote:
Prasanna, also make sure you try with USE=-threads ...as the ebuild
states, it's _experimental_ ;)
Keep your eye on:
https://svn.open-mpi.org/trac/ompi/wiki/ThreadSafetySupport
Eric
Prasanna
Jeff,
In short:
Which of the 3 options is the one known to be unstable in the following:
--enable-mpi-threads      Enable threads for MPI applications (default:
                          disabled)
--enable-progress-threads
                          Enable threads asynchronous communication progress
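For reference, a sketch of how those options would appear on a configure line (prefix illustrative; both flags are the experimental ones discussed in this thread):

./configure --prefix=$HOME/openmpi --enable-mpi-threads --enable-progress-threads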
Jeff Squyres wrote:
On Sep 11, 2008, at 2:38 PM, Eric Thibodeau wrote:
In short:
Which of the 3 options is the one known to be unstable in the following:
--enable-mpi-threads      Enable threads for MPI applications (default:
                          disabled)
--enable-progress-threads
Jeff Squyres wrote:
On Sep 11, 2008, at 3:27 PM, Eric Thibodeau wrote:
OK, so added to the information from the README: I'm thinking none of
the 3 configure options has an impact on the said 'threaded TCP
listener', and the MCA option you suggested should still work; is this correct?
Prasanna,
I opened up a bug report to enable better control over the
threading options (http://bugs.gentoo.org/show_bug.cgi?id=237435). In
the meantime, if your helloWorld isn't too fluffy, could you send it
over (off list if you prefer) so I can take a look at it, the
Segmentation fault
Prasanna,
Please send me your /etc/make.conf and the contents of
/var/db/pkg/sys-cluster/openmpi-1.2.7/
You can package this with the following command line:
tar -cjf data.tbz /etc/make.conf /var/db/pkg/sys-cluster/openmpi-1.2.7/
And simply send me the data.tbz file.
Thanks,
Eric
Prasa
Enrico Barausse wrote:
Hello,
I apologize in advance if my question is naive, but I started to use
open-mpi only one week ago.
I have a complicated Fortran 90 code which is giving me a segmentation
fault (address not mapped). I tracked down the problem to the
following lines:
Sorry about that, I had misinterpreted your original post as being about the
pair of send-receive calls. The example you give below does indeed seem
correct, which means you might have to show us the code that doesn't
work. Note that I am in no way a Fortran expert; I'm more versed in C.
The only hint I'd
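For anyone hitting the same pairing problem, a minimal C sketch (illustrative only, not the poster's Fortran code): two ranks that both post a blocking send before their receive can deadlock once the message no longer fits in the library's internal buffers, and MPI_Sendrecv sidesteps the ordering issue entirely.

#include <mpi.h>
#include <stdio.h>

/* Run with -np 2: each rank exchanges one int with the other. */
int main(int argc, char **argv)
{
    int rank, out, in;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    out = rank;
    int peer = 1 - rank;  /* 0 <-> 1 */
    /* The combined call cannot deadlock on ordering, unlike two
       blocking MPI_Send calls posted before the MPI_Recv calls. */
    MPI_Sendrecv(&out, 1, MPI_INT, peer, 0,
                 &in,  1, MPI_INT, peer, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d got %d\n", rank, in);
    MPI_Finalize();
    return 0;
}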
could be a bit lengthy since the entire system (or at
least all the libs OpenMPI links to) needs to be rebuilt.
Eric
Eric Thibodeau wrote:
Prasanna,
Please send me your /etc/make.conf and the contents of
/var/db/pkg/sys-cluster/openmpi-1.2.7/
You can package this with the following command
Hello all,
I am currently profiling a simple case where I replace multiple S/R
(send/receive) calls with Allgather calls, and it would _seem_ the simple
S/R calls are faster. Now, *before* I come to any conclusion on this, one of
the pieces I am missing is more details on how/if/when the tuned coll MCA
i
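For context, a hedged sketch of the kind of replacement being profiled (contribution size and names are illustrative): each rank contributes one int that every rank must end up with, either via explicit send/receive pairs or via a single collective.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, mine;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    mine = rank;  /* this rank's contribution */
    int *all = malloc(size * sizeof(int));
    /* One call replaces a loop of explicit S/R pairs; whether it is
       faster depends on which algorithm the tuned coll component
       picks for this message size and communicator size. */
    MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);
    free(all);
    MPI_Finalize();
    return 0;
}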
Hello all,
(this _might_ be related to https://svn.open-mpi.org/trac/ompi/ticket/1505)
I just compiled and installed 1.3.3 in a CentOS 5 environment, and we noticed
the processes would deadlock as soon as they would start using TCP
communications. The test program is one that has been run
Jeff Squyres wrote:
On Oct 18, 2008, at 9:19 PM, Mostyn Lewis wrote:
Can OpenMPI do like Scali and MVAPICH2 and utilize 2 IB HCAs per machine
to approach double the bandwidth on simple tests such as IMB PingPong?
Yes. OMPI will automatically (and aggressively) use as many active
ports as you have.
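If you need to limit or pin the ports for a clean single-rail vs. dual-rail comparison, the openib BTL exposes MCA parameters for that; for instance (parameter name worth verifying against your build with ompi_info --param btl openib):

mpirun --mca btl_openib_max_btls 1 -np 2 ./IMB-MPI1 PingPong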
1'
'--datadir=/usr/share/openmpi/1.0.2-gcc-4.1' '--program-suffix=-1.0.2-gcc-4.1'
'--sysconfdir=/etc/openmpi/1.0.2-gcc-4.1' '--enable-pretty-print-stacktrace'
'--build=i686-pc-linux-gnu' '--cache-file' 'config.cache' 'CFLAGS=-march=nocona
-O2 -pipe -fomit-frame-pointer' 'CXXFLAGS=-march=nocona -O2 -pipe
-fomit-frame-pointer' 'LDFLAGS= -Wl,-z,-noexecstack'
'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu'
--enable-ltdl-convenience\"
Any help would be greatly appreciated.
Thanks.
[1] http://gridengine.sunsource.net/servlets/ReadMsg?list=users&msgNo=15775
--
Eric Thibodeau
> Hi, I noticed your prefix is set to the lib dir; can you try without the
> lib64 part and rerun?
>
> Eric Thibodeau wrote:
> > Hello everyone,
> >
> > Well, first off, I hope this problem I am reporting is of some validity;
> > I tried finding similar situations of
make a difference either.
>
> I am not sure if others can help you on this.
>
> Eric Thibodeau wrote:
> > Hello,
> >
> > I don't want to get too much off topic in this reply, but you're bringing
> > out a point here. I am unable to run MPI apps on the
Hello Jeff,
Firstly, don't worry about jumping in late, I'll send you a skid rope
;) Secondly, thanks for your nice little articles on clustermonkey.net (a good
refresher on MPI). And finally, down to my issues: thanks for clearing up the
--prefix LD_LIBRARY_PATH matter and all. The ebuild I ma
CFLAGS to compile Open MPI in 32 bit mode on
configure: WARNING: Sparc processors
configure: error: Can not continue.
Is Sparc support put aside for the moment, or am I doing something wrong?
Thanks,
--
Eric Thibodeau
Thanks for the pointer, it WORKS!! (yay)
On Tuesday, June 20, 2006 at 12:21, Brian Barrett wrote:
> On Jun 19, 2006, at 12:15 PM, Eric Thibodeau wrote:
>
> > I checked the thread with the same title as this e-mail and tried
> > compiling openmpi-1.1b4r10418 with:
> >
MCA sds: env (MCA v1.0, API v1.0, Component v1.1)
MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1)
MCA sds: seed (MCA v1.0, API v1.0, Component v1.1)
MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1)
On Tuesday, June 20, 2006 at 17:06, Eric Thibodeau wrote:
Yeah, bummer, but something tells me it might not be OpenMPI's fault. Here's
why:
1- The tech that takes care of these machines told me that he gets RTC errors
on bootup (the CPU boards are apparently "out of sync" since the clocks aren't
set correctly).
2- There is also a possibility that the p
> > Also, FWIW, it's not necessary to specify --enable-ltdl-convenience; that
> > should be automatic.
> >
> > If you had a clean configure, we *suspect* that this might be due to
> > alignment issues on Solaris 64 bit platforms, but thought that we might
…OpenMPI issue
> or possibly some type of platform problem.
>
> There is another thread with Eric Thibodeau, and I am unsure whether it is
> the same issue
> as either of our situations.
>
> --td
[...snip...]
…we're not
> running the
> same thing, as of yet.
>
> I have a cluster of two v440s that have 4 CPUs each, running Solaris 10.
> The tests I
> am running are np=2, one process on each node.
>
> --td
>
> Eric Thibodeau wrote:
>
> >Terry,
> >
…assignments ;)
> Thank you for your help in advance,
>
> Regards,
>
> Manal
--
Eric Thibodeau
…specific points to add,
>
> Thanks again, I appreciate it,
>
> Manal
>
> On Mon, 2006-07-03 at 23:17 -0400, Eric Thibodeau wrote:
> > See comments below:
> >
> > On Monday, July 3, 2006 at 23:01, Manal Helal wrote:
> > > Hi
> > >
> >
…OpenMPI) to dynamically ADD or remove processes
from the parallel task pool. Of course, any documentation concerning these new
features would also be greatly appreciated ;)
Thanks,
--
Eric Thibodeau
…will be part
> >of the Open MPI 1.1.1 release.
> >
> >Thanks, Brian
--
Eric Thibodeau
> greetings, Marcin
--
Eric Thibodeau
Gabriel wrote:
> Eric Thibodeau wrote:
> > Hello all,
> >
> > Before I embark on a train that will run out of tracks, I wanted to
> > get a WFF concerning the spawning mechanism in OpenMPI. The intent
> > is that I would program a "simple" parallel
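For reference, the MPI-2 interface in question; a minimal hedged sketch of a parent adding workers at run time (the "worker" executable name and the count of 4 are illustrative):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;
    int errcodes[4];
    MPI_Init(&argc, &argv);
    /* Spawn 4 extra processes running ./worker; the resulting
       intercommunicator "children" is the channel back to them. */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, errcodes);
    MPI_Finalize();
    return 0;
}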
l on open-mpi?
> Thank you ;)
--
Eric Thibodeau
config.log.x86_64 <-- config log for the Opteron build (works locally)
config.log_node0  <-- config log for the Athlon build (on the node)
ompi_info.i686    <-- ompi_info on the Athlon node
ompi_info.x86_64  <-- ompi_info on the Opteron Master
Thanks,
--
Eric Thibodeau
…Unionfs really doesn't like (if
anyone else on the mailing list is using OpenMPI with Unionfs, it would be
nice to know).
Eric
On Sunday, July 16, 2006 at 14:31, Brian Barrett wrote:
> On Jul 15, 2006, at 2:58 PM, Eric Thibodeau wrote:
> > But, for some reason, on the Athlon nod
…July 16, 2006 at 14:31, Brian Barrett wrote:
> On Jul 15, 2006, at 2:58 PM, Eric Thibodeau wrote:
> > But, for some reason, on the Athlon node (in their image on the
> > server I should say) OpenMPI still doesn't seem to be built
> > correctly since it crashes as
Thanks, now it all makes more sense to me. I'll try it the hard way: multiple
builds for multiple envs ;)
Eric
On Sunday, July 16, 2006 at 18:21, Brian Barrett wrote:
> On Jul 16, 2006, at 4:13 PM, Eric Thibodeau wrote:
> > Now that I have that out of the way, I'd li
Hello everyone,
I am having a hard time getting OpenMPI (1.1.2) to run in a
heterogeneous environment. In short, here is my command line:
orterun --prefix ~/openmpi_x86_64/ -hostfile head -np 2 mandelbrot-mpi_x86_64
1 400 400 0 : --prefix ~/openmpi_i686/ -hostfile nodes -np `wc -l
…configure: error: Could not find MPI library
Can anyone help me with this one...?
Note that LAM-MPI is also installed on these systems...
Eric Thibodeau
…such as extracting flags via showme) to find
> the Right shared libraries.
>
> Let us know if that works for you.
>
> FWIW, we do recommend using the wrapper compilers over extracting the
> flags via --showme whenever possible (it's just simpler and should do
> what you nee
will
try those to simplify the build process!
Eric
On Thursday, February 15, 2007 at 19:51, Anthony Chan wrote:
>
> As long as mpicc is working, try configuring mpptest as
>
> mpptest/configure MPICC=<mpi-install-dir>/bin/mpicc
>
> or
>
> mpptest/configure --with-mpich=<mpi-install-dir>
>
> A.Chan
…fixed a
> shared memory race condition, for example:
>
> http://www.open-mpi.org/nightly/v1.2/
>
>
> On Feb 16, 2007, at 12:12 AM, Eric Thibodeau wrote:
>
> > Hello devs,
> >
> > Thought I would let you know there seems to be a problem with
> > 1.
library to be
# used. The variable MPdir is only used for defining MPinc and MPlib.
#
MPdir= /usr/local/mpi
MPinc= -I$(MPdir)/include
MPlib= $(MPdir)/lib/libmpich.a
--
Eric Thibodeau
CNRS
> Batiment 506
> BP 167
> F - 91403 ORSAY Cedex
> Web site: http://www.idris.fr
>
> Eric Thibodeau wrote:
> > Hello all,
> >
> > As we all know, compiling OpenMPI is not a matter
Hi George,
Would you say this is preferred to changing the default CC + LINKER?
Eric
On Wednesday, February 21, 2007 at 12:04, George Bosilca wrote:
> You should use something like this
> MPdir = /usr/local/mpi
> MPinc = -I$(MPdir)/include
> MPlib = -L$(MPdir)/lib -lmpi -lopen-rte -lopen-pal
>
> v1.2 because someone else out in the Linux community uses "libopal".
>
> I typically prefer using "mpicc" as CC and LINKER and therefore
> letting the OMPI wrapper handle everything for exactly this reason.
>
>
> On Feb 21, 2007, at 12:39 PM, Eric Th
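Put together, a sketch of a Make fragment following George's "let the wrapper do it" suggestion (install path illustrative):

MPdir  = /usr/local/mpi
CC     = $(MPdir)/bin/mpicc
LINKER = $(CC)

With the wrapper as CC and LINKER, MPinc and MPlib can simply be left empty, since mpicc injects the right include and link flags itself.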
ron:14074] [ 5] /lib/libc.so.6(__libc_start_main+0xe3) [0x6fcbd823]
[kyron:14074] *** End of error message ***
Eric Thibodeau
#include <mpi.h>    /* the archive stripped the bracketed header names; */
#include <stdio.h>  /* these four are an educated guess                 */
#include <stdlib.h>
#include <string.h>
#define V_LEN 10 //Vector Length
#define E_CNT 10 //Element count
MPI_Op MPI_MySum; //Custom Sum function
MPI_Datatype MPI_MyTyp
14027
OPAL: 1.2
OPAL SVN revision: r14027
Prefix: /home/kyron/openmpi_i686
Configured architecture: i686-pc-linux-gnu
Configured by: kyron
Configured on: Wed Apr 4 10:21:34 EDT 2007
On Wednesday, April 4, 2007 at 11:47, Eric Thibodeau wrote:
0];
I'm attaching the functional code so that others can maybe see this one as an
example ;)
On Wednesday, April 4, 2007 at 11:47, Eric Thibodeau wrote:
> Hello all,
>
> First off, please excuse the attached code as I may be naïve in my
> attempts to implement my own MPI_OP.
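For readers landing here from the archive, a minimal self-contained sketch of a user-defined MPI_Op (an element-wise integer sum; the code attached to this thread does something richer with a custom datatype):

#include <mpi.h>
#include <stdio.h>

/* User-defined reduction: element-wise sum of int vectors. */
static void my_sum(void *in, void *inout, int *len, MPI_Datatype *dtype)
{
    int *a = (int *)in, *b = (int *)inout;
    for (int i = 0; i < *len; i++)
        b[i] += a[i];
}

int main(int argc, char **argv)
{
    int rank, v[10], r[10];
    MPI_Op op;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < 10; i++)
        v[i] = rank;
    MPI_Op_create(my_sum, 1, &op);  /* 1 = commutative */
    MPI_Reduce(v, r, 10, MPI_INT, op, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("r[0] = %d\n", r[0]);
    MPI_Op_free(&op);
    MPI_Finalize();
    return 0;
}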