Doh - yes it should! I'll fix it right now.
Thanks!
On Jul 26, 2010, at 9:28 PM, Philippe wrote:
> Ralph,
>
> i was able to test the generic module and it seems to be working.
>
> one question tho, the function orte_ess_generic_component_query in
> "orte/mca/ess/generic/ess_generic_component.c
Ralph,
I was able to test the generic module and it seems to be working.
One question, though: the function orte_ess_generic_component_query in
"orte/mca/ess/generic/ess_generic_component.c" calls getenv with the
argument "OMPI_MCA_enc", which seems to cause the module to fail to
load. Shouldn't it be
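The question is cut off here, but the failure mode it describes can be sketched: if the component asks for an environment variable under the wrong name, the lookup comes back empty and the component declines to load. A minimal shell analogue (the variable name is the one quoted above; whether that spelling is correct is exactly what the thread is asking):

```shell
# If OMPI_MCA_enc is unset, the lookup is empty, just as getenv()
# would return NULL inside ess_generic_component.c.
if [ -z "${OMPI_MCA_enc:-}" ]; then
    echo "OMPI_MCA_enc unset: component query would fail"
else
    echo "OMPI_MCA_enc=${OMPI_MCA_enc}"
fi
```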
Hello,
When I compile and run this code snippet:
program test

  use mpi

  implicit none

  integer :: ierr, nproc, myrank
  integer, parameter :: dp = kind(1.d0)
  real(kind=dp) :: inside(5), outside(5)

  call mpi_
In case my previous e-mail is too vague for anyone to address, here's a
backtrace from my application. This version, compiled with Intel
11.1.064 (OpenMPI 1.4.2 w/ gcc 4.4.2), hangs during MPI_Alltoall
instead. Running on 16 CPUs, Opteron 2427, Mellanox Technologies
MT25418 w/ OFED 1.5
strace on
FWIW, the stack trace is telling you that it segv'ed in a printf in the main()
function of your application. If it dumped core, you can just attach to the
core file and see exactly where it died.
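A sketch of that core-file workflow, with placeholder names like my_app (the actual gdb session is shown as comments, since it needs the real binary and core file):

```shell
# Let the OS write core files for crashing processes.
ulimit -c unlimited
echo "core file size limit: $(ulimit -c)"

# After the crash (placeholders, not run here):
#   gdb ./my_app core      # load the binary together with its core file
#   (gdb) bt               # print the backtrace at the point of death
```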
On Jul 25, 2010, at 10:08 PM, Jack Bryan wrote:
> Dear All,
>
> I run 6 parallel processes on
What do you mean by short and long? Do you have the ability to control
the execution time of your program without changing a single line of
your code?
On 7/25/10, Jack Bryan wrote:
>
> Dear All,
> I run 6 parallel processes on OpenMPI.
> When the run-time of the program is short, it works well.
>
No problem at all - glad it works!
On Jul 26, 2010, at 7:58 AM, Grzegorz Maj wrote:
> Hi,
> I'm very sorry, but the problem was on my side. My installation
> process was not always taking the newest sources of openmpi. In this
> case it hasn't installed the version with the latest patch. Now I
>
Hi,
I'm very sorry, but the problem was on my side. My installation
process was not always picking up the newest sources of openmpi. In this
case it didn't install the version with the latest patch. Now I
think everything works fine - I could run over 130 processes with no
problems.
I'm sorry again t
Hi Jack
Yes to both questions. Best to download it directly from their page:
http://www.valgrind.org/downloads/current.html
then you are sure to get the newest version.
Another way to manage your output is to use the '-output-filename'
option of mpirun (or mpiexec),
which will redirect the outputs (std
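The sentence is truncated, but the usual invocation looks like the sketch below (my_app is a placeholder executable; the exact per-rank suffix Open MPI appends to the file name varies between versions, so the command is only assembled and printed here):

```shell
# Each rank's stdout/stderr goes to its own file (e.g. ranklog.<rank>)
# instead of being interleaved on one terminal. 'my_app' is hypothetical.
cmd="mpirun -np 4 --output-filename ranklog ./my_app"
echo "$cmd"
```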
Thanks
Can it be installed on Linux and work with gcc?
If I have many processes, such as 30, do I have to open 30 terminal windows?
thanks
Jack
> Date: Mon, 26 Jul 2010 08:23:57 +0200
> From: jody@gmail.com
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] OpenMPI Segmentation fault (11)
Hi Jack
Have you tried to run your application under valgrind?
Even though applications generally run slower under valgrind,
it may detect memory errors before the actual crash happens.
The best would be to start a terminal window for each of your processes
so you can see valgrind's output for ea
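The per-process-terminal setup described here is usually done by letting mpirun launch one xterm per rank, each running the application under valgrind. A sketch (my_app is a placeholder; this needs a working X display, so the command is only assembled and printed here):

```shell
# One xterm per rank; -hold keeps each window open after exit so the
# valgrind report stays readable. 'my_app' is a hypothetical executable.
cmd="mpirun -np 4 xterm -hold -e valgrind ./my_app"
echo "$cmd"
```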
On Sun, 25 Jul 2010 19:10:42 -0700, wrote:
I recall you said you had machines numbered 192.168.10.1xx ?
If so, then 192.168.10.0/24 ("slash 24") would be slightly better
for you than "slash 8" as that at least narrows things down to all
numeric addresses starting with:
192.168.10.
If you jus
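The arithmetic behind the /24 versus /8 advice: a /24 fixes the first three octets, so it matches only the 256 addresses 192.168.10.0 through 192.168.10.255, while a /8 fixes only the first octet and matches roughly 16 million. A quick check:

```shell
# Number of addresses in a CIDR block: 2^(32 - prefix_length).
slash24=$(( 1 << (32 - 24) ))
slash8=$((  1 << (32 - 8)  ))
echo "/24 spans $slash24 addresses; /8 spans $slash8"
```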