Hi Valmor,
What is happening here is that when Open MPI tries to create an MX endpoint
for communication, MX returns code 20, which is MX_BUSY.
At this point we should gracefully move on, but there is a bug in Open MPI 1.2
that causes a segmentation fault on this type of error. This will
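For reference, a minimal sketch of the graceful fallback described above,
assuming the MX C API from myriexpress.h (mx_open_endpoint() and the
MX_BUSY return code as documented by Myricom; the helper name and the
endpoint key value 0 are made up for illustration):

    /* Sketch: try to open an MX endpoint and report failure instead of
     * crashing when the endpoints on the NIC are already in use. */
    #include <stdint.h>
    #include <stdio.h>
    #include <myriexpress.h>

    static int open_endpoint_or_skip(uint32_t board, uint32_t id,
                                     mx_endpoint_t *ep)
    {
        mx_return_t rc = mx_open_endpoint(board, id, 0 /* key */,
                                          NULL, 0, ep);
        if (rc == MX_BUSY) {
            /* Code 20: endpoint busy.  Let the caller fall back to
             * another interconnect instead of segfaulting. */
            fprintf(stderr, "MX endpoint %u is busy; skipping MX\n", id);
            return -1;
        }
        return (rc == MX_SUCCESS) ? 0 : -1;
    }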
Hi all,
I'm somewhat new to Open MPI, but I'm currently evaluating it as a
communications mechanism between Windows and Unix servers.
I noticed that under your FAQs
(http://www.open-mpi.org/faq/?category=supported-systems), it says:
There are plans to support Microsoft Windows in the not-
Hello,
I am getting this error any time the number of processes requested per
machine is greater than the number of CPUs. I suspect it is something in
the configuration of MX / Open MPI that I am missing, since another machine
I have without MX installed runs Open MPI correctly with oversubscription.
Thanks.
Sidenote -- maybe I should create an "I used to be a LAM user" section
of the FAQ...
Actually, a migration FAQ would be a good idea. I am another former
LAM user and had lots of questions about parameter syntax and "I did
it in LAM this way, how do I do it here?" I had the luxury of time
The mpip-help mail list is mpip-help at lists.sourceforge.net.
-Chris
Hi Folks,
It's great to hear that people are interested in mpiP!
Currently, I am configuring mpiP on x86_64 with gcc 3.4.4 with -O2 and
without libunwind.
When running some simple tests, I'm having good luck using both mpiP
stack walking and libunwind when compiling with gcc and -O2. However
P.S. I just found out you have to recompile/relink the MPI code with -g in
order for the File/Address field to show non-garbage.
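For example (hypothetical file names; mpicc is the usual MPI compiler
wrapper):

    mpicc -g -O2 app.c -o app   # -g keeps the symbols mpiP needs to
                                # fill in the File/Address column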
George,
It turns out I didn't have libunwind either, but didn't notice since mpiP
compiled/linked without it (OK, so I should have checked the config log).
However, once I got it, it wouldn't compile on my RHEL system.
So, following this thread:
http://www.mail-archive.com/libunwind-devel@nongnu.
Hello,
I have Torque as the batch manager and Open MPI (1.0.1) as the MPI
library. Initially I request 'n' processors through Torque. After
the Open MPI job starts, based on certain conditions, I want to acquire
more processors outside of the nodes initially assigned by Torque. Is
this a prob
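For what it's worth, the standard MPI-2 mechanism for adding processes at
run time is MPI_Comm_spawn, sketched below; whether the new processes can
land outside the original Torque allocation is up to the resource manager,
not MPI ("worker" is a hypothetical executable name):

    /* Sketch: spawn 4 extra processes and talk to them through an
     * intercommunicator. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm children;

        MPI_Init(&argc, &argv);
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0 /* root rank */, MPI_COMM_WORLD, &children,
                       MPI_ERRCODES_IGNORE);

        /* ... communicate with the children over 'children' ... */
        MPI_Comm_disconnect(&children);
        MPI_Finalize();
        return 0;
    }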
Generating core files is not a feature of Open MPI but of the
operating system. Depending on the shell you're using, there is a
different way to reach this goal, usually via limit (or ulimit). This
webpage can give you more information about it (http://www.faqs.org/
faqs/hp/hpux-faq/sect
You should be able to get a core dump pretty easily by doing
something like this:
{ char *foo = 0; *foo = 13; }  /* deliberate null-pointer write: raises SIGSEGV */
Ensure that your coredumpsize limit is set to "unlimited" in the
shell on all nodes where you are running your MPI processes. It's
also helpful to set Linux (I'm assuming yo
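If raising the limit in the shell on every node is inconvenient, the same
thing can be done from inside the process; a minimal sketch using standard
POSIX setrlimit (nothing Open MPI specific):

    /* Sketch: equivalent of "ulimit -c unlimited", done at startup so
     * a later crash actually leaves a core file behind. */
    #include <sys/resource.h>

    static void enable_core_dumps(void)
    {
        struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };

        /* May fail (silently here) if the hard limit is lower and the
         * process is unprivileged. */
        setrlimit(RLIMIT_CORE, &rl);
    }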
Short version: just don't list that host in the OMPI hostfile.
Long version:
In LAM, we had the constraint that you *had* to include the local
host in the hostfile that you lambooted. This was undesirable in
some cases, such as lambooting from a cluster's head node (where you
didn't want
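A hypothetical hostfile illustrating the short version ("slots" is
standard Open MPI hostfile syntax; the node names are made up):

    # Only the hosts listed here are scheduled for MPI processes; the
    # head node you launch from is simply left out.
    node01 slots=4
    node02 slots=4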
In LAM/MPI, I can use "portal.private schedule=no" if I want to
launch a job from a specific node but not schedule it for any work. I
can't seem to find a reference to an equivalent in Open MPI.
Thanks.
-Warner
Warner Yuen
Scientific Computing Consultant
Apple Computer
email: wy...@apple.com
I'm using Open MPI, and the documentation says that only a TotalView-
style debugger can be used. With that in mind, all I want to do is
get a core dump when a process crashes. I can then just load the core
into GDB. Is there any easy way to do this?
I tried calling signal(SIGSEGV, SIG_DFL); sig
On Mar 29, 2007, at 1:08 PM, Jens Klostermann wrote:
In reply to
http://www.open-mpi.org/community/lists/users/2006/12/2286.php
I recently switched to Open MPI 1.2; unfortunately, the password problem
still persists! I generated new RSA keys and made passwordless ssh
available. This was tested by l
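For reference, the usual passwordless-ssh setup looks something like this
(node01 is a placeholder host name):

    ssh-keygen -t rsa        # accept the default location, empty passphrase
    ssh-copy-id node01       # or append ~/.ssh/id_rsa.pub to
                             #   ~/.ssh/authorized_keys on node01
    ssh node01 true          # must succeed without a password prompt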
George Bosilca wrote:
> I used it on an IA64 platform, so I suppose x86_64 is supported, but
> I never used it on an AMD64. On the mpiP web page they claim they
> support the Cray XT3, which as far as I know is based on 64-bit
> AMD Opterons. So, there is at least a spark of hope in the dark