Thank you for the diagnosis.
Saadat.
On 7/6/06, Ralph Castain wrote:
Hi Saadat
That's the problem, then – you need to run comm_spawn applications using
mpirun, I'm afraid. We plan to fix this in the near future, but for now we
can only offer that workaround.
Ralph
On Jul 6, 2006, at 8:27 PM, Manal Helal wrote:
hi
I am trying to debug my mpi program, but printf debugging is not doing
much, and I need something that can show me variable values, and which
line of execution (and where it is called from), something like gdb with
mpi,
is there anything like that?
thank you very much for your help,
Manal
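[Editor's note: a commonly used workaround for interactive debugging of MPI programs, independent of any one MPI implementation, is to start every rank inside its own gdb session. The program name below is a placeholder, not one from this thread:

```shell
# Launch each rank in its own xterm running gdb; requires a working
# X display. "./my_mpi_app" is a placeholder for your program.
mpirun -np 4 xterm -e gdb ./my_mpi_app
```

From each gdb window you can then set breakpoints, run, and inspect variables on a per-rank basis.]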
On Jul 5, 2006, at 8:54 AM, Marcin Skoczylas wrote:
I saw some posts ago almost the same question as I have, but it didn't
give me a satisfactory answer.
I have a setup like this:
GUI program on some machine (e.g. a laptop)
Head listening on a TCP/IP socket for commands from the GUI.
Workers waiting for c
Hi Saadat
That's the problem, then – you need to run comm_spawn applications using
mpirun, I'm afraid. We plan to fix this in the near future, but for now we
can only offer that workaround.
Ralph
On 7/6/06 5:30 PM, "s anwar" wrote:
Ralph:
I am running the application without mpirun, i.e. ./foobar. So, according to
your definition of singleton above, I am calling comm_spawn from a singleton.
Thanks.
Saadat.
On 7/6/06, Ralph Castain wrote:
Thanks Saadat
Could you clarify how you are running this application? We have a known
problem with comm_spawn from a singleton (i.e., if you just did a.out
instead of mpirun -np 1 a.out) - the errors look somewhat like what you are
showing here, hence our curiosity.
Thanks
Ralph
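[Editor's note: the distinction Ralph is drawing can be illustrated with the two launch styles; `a.out` stands in for any MPI program that calls MPI_Comm_spawn:

```shell
# Singleton launch: the process bootstraps its own MPI runtime.
# In the Open MPI versions discussed in this thread, MPI_Comm_spawn
# fails when launched this way.
./a.out

# Under mpirun, even with a single process, comm_spawn has the full
# runtime available and works:
mpirun -np 1 ./a.out
```
]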
Ralph:
I am using Fedora Core 4 (Linux turkana 2.6.12-1.1390_FC4smp #1 SMP Tue Jul
5 20:21:11 EDT 2005 i686 athlon i386 GNU/Linux). The machine is a dual
processor Athlon-based machine. No cluster resource manager, just an
rsh/ssh-based setup.
Thanks.
Saadat.
On 7/6/06, Ralph H Castain wrote:
With 1.0.3a1r10670 the same problem is occurring, with the same configure
arguments as before. For clarity, the Myrinet driver we are using is 2.0.21.
node90:~/src/hpl/bin/ompi-xl-1.0.3 jbronder$ gm_board_info
GM build ID is "2.0.21_MacOSX_rc20050429075134PDT
r...@node96.meldrew.clusters.umaine.e
Hi Saadat
Could you tell us something more about the system you are using? What type
of processors, operating system, any resource manager (e.g., SLURM, PBS),
etc?
Thanks
Ralph
On 7/6/06 10:49 AM, "s anwar" wrote:
Yes, that output was actually cut and pasted from an OS X run. I'm about to
test
against 1.0.3a1r10670.
Justin.
On 7/6/06, Galen M. Shipman wrote:
Justin,
Is the OS X run showing the same residual failure?
- Galen
On Jul 6, 2006, at 10:49 AM, Justin Bronder wrote:
Disregard the failure on Linux, a rebuild from scratch of HPL and OpenMPI
seems to have resolved the issue. At least I'm not getting the errors during
the residual checks
Good Day:
I am getting the following error messages every time I run a very simple
program that spawns child processes:
[turkana:27949] [0,0,0] ORTE_ERROR_LOG: Not found in file
base/soh_base_get_proc_soh.c at line 80
[turkana:27949] [0,0,0] ORTE_ERROR_LOG: Not found in file
base/oob_base_xcast.c
Disregard the failure on Linux, a rebuild from scratch of HPL and OpenMPI
seems to have resolved the issue. At least I'm not getting the errors
during
the residual checks.
However, this is persisting under OS X.
Thanks,
Justin.
On 7/6/06, Justin Bronder wrote:
For OS X:
/usr/local/ompi-xl/bin/mpirun -mca btl gm -np 4 ./xhpl
For Linux:
ARCH=ompi-gnu-1.1.1a
/usr/local/$ARCH/bin/mpiexec -mca btl gm -np 2 -path /usr/local/$ARCH/bin
./xhpl
Thanks for the speedy response,
Justin.
On 7/6/06, Galen M. Shipman wrote:
Hey Justin,
Please provide us with your MCA parameters (if any); these could be in a
config file, in environment variables, or on the command line.
Thanks,
Galen
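[Editor's note: for reference, an MCA parameter such as `btl` can be supplied in any of the three places Galen mentions, following standard Open MPI conventions; `./xhpl` is the program from this thread:

```shell
# 1. On the command line:
mpirun -mca btl gm -np 2 ./xhpl

# 2. As an environment variable (OMPI_MCA_ prefix):
export OMPI_MCA_btl=gm
mpirun -np 2 ./xhpl

# 3. In the per-user configuration file:
echo "btl = gm" >> ~/.openmpi/mca-params.conf
```
]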
On Jul 6, 2006, at 9:22 AM, Justin Bronder wrote:
As far as the nightly builds go, I'm still seeing what I believe to be
this problem in both r10670 and r10652. This is happening with
both Linux and OS X. Below are the systems and ompi_info for the
newest revision 10670.
As an example of the error, when running HPL with Myrinet I get the
follo
Check out "Windows Compute Cluster Server 2003",
http://www.microsoft.com/windowsserver2003/ccs/default.mspx.
From the FAQ: "Windows Compute Cluster Server 2003 comes with the
Microsoft Message Passing Interface (MS MPI), an MPI stack based on the
MPICH2 implementation from Argonne National Laboratory."
Thanks for looking into this!
I'm going to file a feature enhancement for OMPI to add this option once
the PGI debugger works with Open MPI (I don't want to add it before
then, because it may be misleading to users).
Ick. This isn't a helpful error message, is it? :-)
Can you try upgrading to the recently-released v1.1 and see if the error
is still occurring?
Have you tried running your application through a memory-checking
debugger such as valgrind, perchance?
Usually with this kind of error, more information is shown regarding the
failure. Are there any other failure messages displayed? Please see
http://www.open-mpi.org/community/help/.
Also be aware that "test" is the name of a Unix executable (e.g.,
/bin/test). I'm assuming that you're trying to
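[Editor's note: the point about the name "test" can be sidestepped by giving an explicit path, so the shell cannot resolve the name to its builtin or to /bin/test:

```shell
# Run the program in the current directory, not the Unix "test" utility:
mpirun -np 2 ./test
```
]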
Open MPI has periodically worked under Cygwin, but I would not call it
anywhere near production-quality.
There are various commercial MPI implementations available for Windows.
A simple Google search for "MPI windows" turns up a bunch, including the
free MPICH-based Windows package.
Dear openmpi users,
I am using openmpi-1.0.2 on Red Hat Linux. I am not able to successfully run
mpirun on a single PC with 2 processes.
Can you give me some advice? Thank you very much in advance.
$ mpirun -np 2 test
ERROR: A daemon on node wolf46 failed to start as
expected.
ERROR: There may
hello
I'd be glad to know if an MPI implementation is available on the Windows platform.
Regards
usha
I still get the BUS error with openmpi-1.1.1a1r10643
Eric
On Tuesday, 4 July 2006 at 08:50, Terry D. Dontje wrote:
> Brian, it looks like the Address alignment error I saw might have been
> resolved
> with one of the last set of bug fixes to go into v1.1. The gold version
> of v1.1 worked