Ok, I'm totally flummoxed here.
I'm an ISV delivering a C program that can use MPI for its inter-node
communications. It has been deployed on a number (dozens) of small
clusters and has been working pretty well over the last few months. That is,
until someone tried to change the static IP address
Hi Jeff,
Jeff Squyres wrote:
On Nov 10, 2008, at 6:41 AM, Jed Brown wrote:
With #define's and compiler flags, I think that can be easily done --
was wondering if this is something that developers using MPI do and
whether AC/AM supports it.
AC will allow you to #define whatever you want --
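As an illustration of that approach (a minimal sketch; HAVE_MPI is an
assumed macro name here, set e.g. by AC_DEFINE from a configure test or
with -DHAVE_MPI on the mpicc command line, and simply left undefined for
a plain gcc build):

  #include <stdio.h>

  #ifdef HAVE_MPI
  #include <mpi.h>
  #endif

  int main(int argc, char **argv)
  {
      int rank = 0, size = 1;          /* sensible serial defaults */

  #ifdef HAVE_MPI
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
  #endif

      printf("process %d of %d\n", rank, size);

  #ifdef HAVE_MPI
      MPI_Finalize();
  #endif
      return 0;
  }

Built with "mpicc -DHAVE_MPI" it runs under mpirun; built with plain
"gcc" it falls back to a single process reporting rank 0 of 1.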
Hi Jed,
Thank you for your post; I have to admit that I never thought of this as
an option. As the "other way" [which Jeff has posted] is more natural
to me, I will probably try for that first -- but I'll keep what you
posted in the back of my mind.
Thanks a lot!
Ray
Jed Brown wrote:
I rebuilt without the memory manager; now ompi_info crashes with this
output:
./configure --prefix=/usr/local/openmpi --disable-mpi-f90 \
  --disable-mpi-f77 --without-memory-manager
localhost:~/openmpi> ompi_info
Open MPI: 1.2.8
Open MPI SVN revision: r19718
My goal is to run some software that uses MPI, so for now I want the most
standard setup.
> Are you saying that you have libmpi_f90.so available and
> when you try to run, you get missing symbol errors? Or are
> you failing to compile/link at all?
Linking stage fails. When I use mpif90 to produce
Hi Erin,
> I have a dual core laptop and I would like to have both cores running.
>
> Here is the following my-hosts file:
> localhost slots=2
Be warned that, at least in the default config, running more MPI processes
than you have cores results in dog-slow code.
Single core machine:
$ cat my-hosts
lo
Jeff Squyres wrote:
That is odd. Is your user's app crashing or being forcibly killed? The
ORTE daemon that is silently launched in v1.2 jobs should ensure that
files under /tmp/openmpi-sessions-@ are removed.
It looks like I see orphaned directories under /tmp/openmpi* as well.
--
Ray
Yeah, if that gets full it is not going to work.
We use /dev/shm for some FEA apps that have bad IO patterns; I tend to
keep it to just the most educated users. It just impacts others too
much if not treated with respect.
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@um
I got "htop" and it's wonderful.
Thanks for the suggestion.
Erin M. Hodgess, PhD
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: hodge...@uhd.edu
-Original Message-
From: users-boun...@open-mpi.org on behalf of Jeff Squyres
That is odd. Is your user's app crashing or being forcibly killed?
The ORTE daemon that is silently launched in v1.2 jobs should ensure
that files under /tmp/openmpi-sessions-@ are removed.
On Nov 10, 2008, at 2:14 PM, Ray Muno wrote:
Brock Palen wrote:
on most systems /dev/shm is limited
On Nov 10, 2008, at 2:18 PM, Oleg V. Zhylin wrote:
Right -- OMPI builds shared libraries by default.
What is the proper way to build static libraries from RPM? Or is the
tarball the only option to accomplish this?
You can pass any options to OMPI's configure script through the
rpmbuild inter
> Right -- OMPI builds shared libraries by default.
What is the proper way to build static libraries from RPM? Or is the tarball
the only option to accomplish this?
> Really? That's odd -- our mpif90 simply links against
> -lmpi_f90, not specifically .a or .so. You can run
> "mpif90 --showme" to s
There's also a great project at SourceForge called "htop" that is a
"better" version of top. It includes the ability to query for and set
processor affinity for arbitrary processes, colorized output, tree-based
output (showing process hierarchies), etc. It's pretty nice
(IMHO):
http
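As a programmatic aside, and purely as an illustration of what that
affinity view corresponds to: on Linux a process can query its own CPU
mask with sched_getaffinity. A small sketch, not anything htop or Open
MPI requires:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  int main(void)
  {
      cpu_set_t set;
      int cpu;

      CPU_ZERO(&set);

      /* pid 0 means "the calling process" */
      if (sched_getaffinity(0, sizeof(set), &set) != 0) {
          perror("sched_getaffinity");
          return 1;
      }

      printf("allowed CPUs:");
      for (cpu = 0; cpu < CPU_SETSIZE; ++cpu)
          if (CPU_ISSET(cpu, &set))
              printf(" %d", cpu);
      printf("\n");
      return 0;
  }

htop does the equivalent for arbitrary PIDs when you change a process's
affinity interactively.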
Brock Palen wrote:
on most systems /dev/shm is limited to half the physical RAM. Was the
user somehow filling up /dev/shm so there was no space?
The problem is that there is a large collection of stale files left in there
by the users that have run on that node (Rocks-based cluster).
I am trying
on most systems /dev/shm is limited to half the physical RAM. Was
the user somehow filling up /dev/shm so there was no space?
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Nov 10, 2008, at 1:25 PM, Ray Muno wrote:
We are running OpenMPI 1.2.7
If you're not using OpenFabrics-based networks, try configuring Open
MPI --without-memory-manager and see if that fixes your problems.
On Nov 8, 2008, at 5:31 PM, Robert Kubrick wrote:
George, I get a warning when running under the debugger: 'Lowest section
in system-supplied DSO at 0xe000 is
On Nov 10, 2008, at 8:27 AM, Oleg V. Zhylin wrote:
I would like to build OpenMPI from openmpi-1.2.8-1.src.rpm. I've
tried plain rpmbuild and rpmbuild ... --define
'build_all_in_one_rpm 1', but the resulting rpm doesn't contain any *.a
libraries.
Right -- OMPI builds shared libraries by default
We are running OpenMPI 1.2.7. Now that we have been running for a
while, we are getting messages of this sort:
node: Unable to allocate shared memory for intra-node messaging.
node: Delete stale shared memory files in /dev/shm.
MPI process terminated unexpectedly
If the user deletes the stale files
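For what it's worth, here is a small sketch that only lists /dev/shm
entries owned by the calling user, as candidates for the stale files the
error message is complaining about. It deletes nothing; deciding what is
actually stale, rather than backing a job that is still running, is left
to the admin:

  #include <dirent.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      const char *dir = "/dev/shm";
      DIR *d = opendir(dir);
      struct dirent *e;

      if (!d) {
          perror(dir);
          return 1;
      }

      while ((e = readdir(d)) != NULL) {
          char path[4096];
          struct stat st;

          if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
              continue;

          snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);
          if (stat(path, &st) == 0 && st.st_uid == getuid())
              printf("%s  (%lld bytes)\n", path, (long long)st.st_size);
      }
      closedir(d);
      return 0;
  }

In practice, of course, a plain "ls -l /dev/shm" from the shell answers
the same question.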
On Nov 10, 2008, at 6:41 AM, Jed Brown wrote:
With #define's and compiler flags, I think that can be easily done --
was wondering if this is something that developers using MPI do and
whether AC/AM supports it.
AC will allow you to #define whatever you want -- look at the
documentation f
Hi,
I would like to build OpenMPI from openmpi-1.2.8-1.src.rpm. I've tried
plain rpmbuild and rpmbuild ... --define 'build_all_in_one_rpm 1', but the
resulting rpm doesn't contain any *.a libraries.
I think this is a problem because I've straced mpif90 and discovered that ld
invoked from gfortran
On Mon 2008-11-10 12:35, Raymond Wan wrote:
> One thing I was wondering about was whether it is possible, through the
> use of #define's, to create code that is both multi-processor
> (MPI/mpic++) and single-processor (normal g++). That is, if users do
> not have any MPI installed, it compiles it
You can also press "f" while "top" is running and choose option "j";
this way you will see which CPU is chosen under column P.
Lenny.
On Mon, Nov 10, 2008 at 7:38 AM, Hodgess, Erin wrote:
> great!
>
> Thanks,
> Erin
>
>
> Erin M. Hodgess, PhD
> Associate Professor
> Department of Computer and Math
great!
Thanks,
Erin
Erin M. Hodgess, PhD
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: hodge...@uhd.edu
-Original Message-
From: users-boun...@open-mpi.org on behalf of Brock Palen
Sent: Sun 11/9/2008 11:21 PM
To: Open MP
Run 'top'. For long-running applications you should see 4 processes,
each at 50% (4*50 = 200%, i.e. two CPUs).
You are OK; your hello_c did what it should, and each of those 'hello's
could have come from either of the two CPUs.
Also, if you're only running on your local machine, you don't need a
hostfile,
This sounds great!
Thanks for your help!
Sincerely,
Erin
Erin M. Hodgess, PhD
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: hodge...@uhd.edu
-Original Message-
From: users-boun...@open-mpi.org on behalf of Raymond Wan
Sent
Dear Erin,
I'm nowhere near a guru, so I hope you don't mind what I have to say (it
might be wrong...).
But what I did was just put a long loop into the program and, while it
was running, I opened another window and looked at the output of "top".
Obviously, without the loop, the program would terminate
Dear Open MPI gurus:
I have just installed Open MPI this evening.
I have a dual core laptop and I would like to have both cores running.
Here is the following my-hosts file:
localhost slots=2
and here is the command and output:
mpirun --hostfile my-hosts -np 4 --byslot hello_c |sort
Hello, wor
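If you want the output itself to say which rank printed each line, a
small variant of the hello program (a sketch, not the hello_c that ships
with Open MPI) can report rank, size, and node name via
MPI_Get_processor_name. Note that this names the node, not the core, so
top/htop is still the way to watch per-core load:

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int rank, size, len;
      char name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      MPI_Get_processor_name(name, &len);

      printf("Hello, world, I am %d of %d on %s\n", rank, size, name);

      MPI_Finalize();
      return 0;
  }

Compile it with "mpicc hello.c -o hello_c" and run it with the same
mpirun line as above.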