On Tue, 2008-03-18 at 12:28 -0700, Michael Jennings wrote:
> On Tuesday, 18 March 2008, at 12:15:34 (-0700),
> Christopher Irving wrote:
>
> > Now, if you removed lines 651 and 653 from the new spec file it works
> > for both cases. You won't get the files listed twice error because
> > although y
Dear All,
I was parallelising the serial molecular dynamics simulation code given
below. I have only two processors; my system is a dual-core system.
c--
SERIAL CODE
c..
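The serial code itself is cut off in this archive snippet, so the following is only an illustrative sketch and not the poster's code: it shows one common way to split a pairwise force loop over MPI ranks and combine the partial results with MPI_Allreduce. The particle count, array names and toy force law are all invented for the example.

#include <mpi.h>
#include <stdio.h>

#define N 1000   /* number of particles, chosen arbitrarily */

int main(int argc, char **argv)
{
    double pos[N], force[N], total[N];
    int rank, size, i, j;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (i = 0; i < N; i++) {
        pos[i] = (double)i;
        force[i] = 0.0;
    }

    /* Each rank computes forces only for its own block of particles. */
    int chunk = N / size;
    int lo = rank * chunk;
    int hi = (rank == size - 1) ? N : lo + chunk;

    for (i = lo; i < hi; i++)
        for (j = 0; j < N; j++)
            if (i != j)
                force[i] += 1.0 / (pos[i] - pos[j]);  /* toy force law */

    /* Sum the partial force arrays so every rank sees the full result. */
    MPI_Allreduce(force, total, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("force on particle 0: %g\n", total[0]);

    MPI_Finalize();
    return 0;
}

With two ranks (mpirun -np 2), each process handles half of the particles and the Allreduce merges the two halves.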
Hi Greg,
Siekas, Greg wrote:
Is it possible to get the same blocking behavior with Open MPI? I'm
having a difficult time getting this to work properly. The application
is spinning on sched_yield, which takes up a CPU core.
Per its design, Open MPI cannot block; sched_yield is all it can do to
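The reply above is cut off, but the point is that Open MPI (in the 1.2 series) polls for progress rather than blocking, so an idle rank keeps a core busy. One workaround people sometimes use at the application level, sketched below as an assumed helper rather than an Open MPI feature, is to poll with MPI_Iprobe and sleep between polls before doing the real receive; it frees the core at the cost of added latency.

#include <mpi.h>
#include <unistd.h>   /* usleep() */

/* Wait for a message from a specific source/tag without burning a
 * whole core: poll with MPI_Iprobe and sleep between polls, then do
 * the actual receive once the message has arrived.                 */
void relaxed_recv(void *buf, int count, MPI_Datatype type,
                  int src, int tag, MPI_Comm comm, MPI_Status *st)
{
    int flag = 0;
    while (!flag) {
        MPI_Iprobe(src, tag, comm, &flag, st);
        if (!flag)
            usleep(1000);   /* release the CPU for about 1 ms */
    }
    MPI_Recv(buf, count, type, src, tag, comm, st);
}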
On Tuesday, 18 March 2008, at 12:15:34 (-0700),
Christopher Irving wrote:
> Now, if you removed lines 651 and 653 from the new spec file it works
> for both cases. You won't get the files listed twice error because
> although you have the statement %dir %{_prefix} on line 649 you
> never have a lin
On Tue, 2008-03-18 at 08:32 -0400, Jeff Squyres wrote:
> On Mar 17, 2008, at 2:34 PM, Christopher Irving wrote:
>
> > Well that fixed the errors for the case prefix=/usr but after looking at
> > the spec file I suspected it would cause a problem if you used the
> > install_in_opt option. So
On 10:51 Tue 18 Mar, Jeff Squyres wrote:
> The upcoming v1.3 series doesn't suffer from this issue; we revamped
> our transport system to distinguish between early and normal
> completions. The pml_ob1_use_eager_completion MCA param was added to
> v1.2.6 to allow correct MPI apps to avo
On Mar 18, 2008, at 10:32 AM, George Bosilca wrote:
Jeff hinted at the real problem in his email. Even if the program uses
the correct MPI functions, it is not 100% correct.
I think we disagree here -- the sample program is correct according to
the MPI spec. It's an implementation artifact tha
As indicated in the FAQ, you should add the directory where Open MPI
was installed to LD_LIBRARY_PATH.
george.
On Mar 18, 2008, at 8:57 AM, Giovani Faccin wrote:
Ok, I uninstalled the previous version. Then downloaded the pre-
release version. Unpacked it, configure, make, make install.
Jeff hinted at the real problem in his email. Even if the program uses the
correct MPI functions, it is not 100% correct. It might pass in some
situations, but can lead to fake "deadlocks" in others. The problem
comes from flow control. If the messages are small (which is the
case in the tes
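The sentence above is cut off, but the flow-control point concerns the eager protocol: small messages are shipped immediately and buffered as unexpected messages at the receiver, so a sender looping with no flow control can run far ahead of a slow receiver. As an illustration only (the test program from the thread is not shown here), one simple application-level throttle is to make every Kth send synchronous:

#include <mpi.h>

#define K 64   /* throttle interval, picked arbitrarily for the example */

/* Send a stream of small messages, but make every Kth send synchronous.
 * MPI_Ssend completes only once the matching receive has started, so
 * the sender can never get more than roughly K messages ahead of the
 * receiver, which bounds the pile-up of unexpected messages.          */
void send_stream(double *msgs, int nmsgs, int dest, MPI_Comm comm)
{
    for (int i = 0; i < nmsgs; i++) {
        if ((i + 1) % K == 0)
            MPI_Ssend(&msgs[i], 1, MPI_DOUBLE, dest, 0, comm);
        else
            MPI_Send(&msgs[i], 1, MPI_DOUBLE, dest, 0, comm);
    }
}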
Ok, I uninstalled the previous version. Then downloaded the pre-release
version. Unpacked it, configure, make, make install
When running mpiCC I get this:
mpiCC: error while loading shared libraries: libopen-pal.so.0: cannot open
shared object file: No such file or directory
$whereis libope
On Mar 18, 2008, at 8:38 AM, Giovani Faccin wrote:
Yep, setting the card manually did not solve it.
I would not think that it would. Generally, if OMPI can't figure out
your network configuration, it'll be an "all or nothing" kind of
failure. The fact that your program runs for a long wh
Yep, setting the card manually did not solve it.
I'm compiling the pre-release version now. Let's see if it works.
Giovani
Giovani Faccin wrote: Hi Mark
Compiler and flags:
sys-devel/gcc-4.1.2 USE="doc* fortran gtk mudflap nls (-altivec) -bootstrap
-build -d -gcj (-hardened) -ip28 -ip32r
On Mar 17, 2008, at 2:34 PM, Christopher Irving wrote:
Well that fixed the errors for the case prefix=/usr but after looking at
the spec file I suspected it would cause a problem if you used the
install_in_opt option. So I tried it and got the following errors:
RPM build errors:
Instal
Hi Mark
Compiler and flags:
sys-devel/gcc-4.1.2 USE="doc* fortran gtk mudflap nls (-altivec) -bootstrap
-build -d -gcj (-hardened) -ip28 -ip32r10k -libffi% (-multilib) -multislot
(-n32) (-n64) -nocxx -objc -objc++ -objc-gc -test -vanilla"
Network stuff:
sonja gfaccin # ifconfig
loLin
Giovani:
Which compiler are you using?
Also, you didn't mention this, but does "mpirun hostname" give the
expected response? I (also new) had a hang similar to what you are
describing due to ompi getting confused as to which of two network
interfaces to use - "mpirun hostname" would hang when st
OK, this is strange. I've rerun the test and got it to block,
too. Although repeated tests show that those are rare (sometimes the
program runs smoothly without blocking, but in about 30% of the cases
it hangs just like you said).
On 08:11 Tue 18 Mar, Giovani Faccin wrote:
> I'm using openmpi
Two notes for you:
1. Your program does not necessarily guarantee what you might expect:
since you use ANY_SOURCE/ANY_TAG in both the receives, you might
actually get two receives from the same sender in a given iteration
(see the sketch after this message).
The fact that you're effectively using yield_when_idle (which OMPI
wi
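A minimal sketch of the pitfall in note 1, with every rank, tag, value and buffer name invented for the example (the original test program is not shown in this archive): in a ring exchange, two receives posted with MPI_ANY_SOURCE/MPI_ANY_TAG can both match messages from the same neighbour, whereas naming the source explicitly keeps the pairing deterministic.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;
    double out = (double)rank, from_left, from_right;
    MPI_Request reqs[2];

    MPI_Isend(&out, 1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&out, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Wildcard version (what note 1 warns about):
     *   MPI_Recv(&from_left,  1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, ...);
     *   MPI_Recv(&from_right, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, ...);
     * Either receive may match either neighbour's message, so both
     * buffers can end up filled by the same sender.                  */

    /* Deterministic version: match on the explicit source rank.      */
    MPI_Recv(&from_left,  1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Recv(&from_right, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d got %.0f and %.0f\n", rank, from_left, from_right);

    MPI_Finalize();
    return 0;
}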
Hey Balaji
I'm new at it too, but might be able to help you a bit.
A SIGSEGV error usually occurs when you try to access something in memory
that's not actually there, like using a pointer that points to nothing. In my
short experience with MPI so far, I got this kind of message when I made
so
On Mar 17, 2008, at 10:16 PM, balaji srinivas wrote:
I am new to MPI. The outline of my code is
if(r==0)
function1()
else if(r==1)
function2()
where r is the rank and functions are included in the .h files.
There are no compilation errors. I get the SIGSEGV error while
running.
Please help.
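Since the outline above omits the MPI boilerplate, here is a minimal runnable skeleton of the same rank-dispatch pattern; function1/function2 are placeholders, exactly as in the post. One common cause of a SIGSEGV in this situation is calling an MPI routine (or using a rank-dependent buffer) before MPI_Init has run, so the skeleton marks where those calls belong.

#include <mpi.h>
#include <stdio.h>

static void function1(void) { printf("rank 0 work\n"); }   /* placeholder */
static void function2(void) { printf("rank 1 work\n"); }   /* placeholder */

int main(int argc, char **argv)
{
    int r = 0;

    /* MPI_Init must come before any other MPI call; calling
     * MPI_Comm_rank (or similar) earlier is a classic crash source. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &r);

    if (r == 0)
        function1();
    else if (r == 1)
        function2();

    MPI_Finalize();
    return 0;
}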
Hi Andreas, thanks for the reply!
I'm using openmpi-1.2.5. It was installed using my distro's (Gentoo) default
package:
sys-cluster/openmpi-1.2.5 USE="fortran ipv6 -debug -heterogeneous -nocxx -pbs
-romio -smp -threads"
I've tried setting the mpi_yield_when_idle parameter as you asked. Howev
Hmm, strange. It doesn't hang for me and AFAICS it shouldn't hang at
all. I'm using 1.2.5. Which version of Open MPI are you using?
Hanging with 100% CPU utilization often means that your processes are
caught in a busy wait. You could try to set mpi_yield_when_idle:
> gentryx@hex ~ $ cat .openmp