Dear all,
there have been some discussions about this already, but the issue is still
there (in 1.4.4). When running SLURM jobs with the --cpus-per-task parameter
set (e.g. when running hybrid Open MPI + OpenMP jobs, so that --cpus-per-task
corresponds to the number of OpenMP threads per rank), you get th...
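For context, a minimal sketch of the hybrid layout being described: one MPI
rank per SLURM task, with the OpenMP thread count per rank meant to match
--cpus-per-task. File and build names here are illustrative, not from the
original post (build with something like "mpicc -fopenmp hybrid.c"):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* MPI_THREAD_FUNNELED is enough when only the main thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* OpenMP usually takes its thread count from OMP_NUM_THREADS, which
     * sites commonly set to match --cpus-per-task. */
    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}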
Dear people,
In my MPI application, all the processes call
MPI_Finalize (all processes reach there), but the rank 0 process cannot
finish MPI_Finalize and the application remains running. Please suggest
what the cause of that might be.
regards,
Mudassar
Do you have any outstanding MPI requests (e.g., uncompleted isends or irecvs)?
On Nov 28, 2011, at 9:00 AM, Mudassar Majeed wrote:
>
> Dear people,
> In my MPI application, all the processes call
> MPI_Finalize (all processes reach there), but the rank 0 process could...
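For reference, a minimal sketch of the failure mode Jeff is asking about: a
request that is posted but never completed before MPI_Finalize. This is
illustrative code, not Mudassar's application.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Posted, but no matching send ever arrives... */
        MPI_Irecv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                  MPI_COMM_WORLD, &req);
        /* ...and the request is never completed (no MPI_Wait/MPI_Test),
         * which is erroneous MPI and can hang MPI_Finalize below. */
    }

    MPI_Finalize();   /* rank 0 may never return from here */
    return 0;
}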
Are all the other processes gone? What version of OMPI are you using?
On 11/28/2011 9:00 AM, Mudassar Majeed wrote:
Dear people,
In my MPI application, all the processes call
MPI_Finalize (all processes reach there), but the rank 0 process
could not finish with MPI_F...
No, I am using MPI_Ssend and MPI_Recv everywhere.
regards,
Mudassar
From: Jeff Squyres
To: Mudassar Majeed ; Open MPI Users
Cc: "anas.alt...@gmail.com"
Sent: Monday, November 28, 2011 3:05 PM
Subject: Re: [OMPI users] Deadlock at MPI_Finalize
Do you ha...
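Worth noting for this exchange: MPI_Ssend does not complete until the
matching receive has started, so with synchronous sends the call ordering
alone can deadlock even when no requests are left outstanding. A minimal
sketch of the classic symmetric pattern (illustrative, not Mudassar's actual
code; run with exactly 2 ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, peer, sendbuf, recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer    = 1 - rank;
    sendbuf = rank;

    /* Both ranks block in MPI_Ssend waiting for the other side's
     * MPI_Recv, which is never posted: a classic deadlock. Reversing
     * the order on one rank (recv first, then send) fixes it. */
    MPI_Ssend(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(&recvbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    printf("rank %d got %d\n", rank, recvbuf);
    MPI_Finalize();
    return 0;
}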
Hi!
Josh Hursey <...@open-mpi.org> writes:
>
> I wonder if the try_compile step is failing. Can you send a compressed
> copy of your config.log from this build?
>
No need for that anymore... you simply have to add the params
"--enable-ft-thread", "--with-ft=cr",
and "--enable-mpi-threads", and th...
+1 on Terry's questions.
Have you modified Open MPI? You were asking before about various
checkpoint/migration stuff; I'm not sure/don't remember if you were adding your
own plugins to Open MPI.
On Nov 28, 2011, at 9:07 AM, TERRY DONTJE wrote:
> Are all the other processes gone? What version...
(off list)
Are you sure about OMPI_MCA_* params not being treated specially? I know for a
fact that they *used* to be. I.e., we bundled up all env variables that began
with OMPI_MCA_* and sent them with the job to back-end nodes. It allowed
sysadmins to set global MCA param values without ed...
On Nov 28, 2011, at 5:39 PM, Jeff Squyres wrote:
> (off list)
Hah! So much for me discreetly asking off-list before coming back with a
definitive answer... :-\
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
I'm afraid that example is incorrect - you were running under slurm on your
cluster, not rsh. If you look at the actual code, you will see that we slurp up
the envars into the environment of each app_context, and then send that to the
backend.
In environments like slurm, we can also apply those...
On Nov 28, 2011, at 6:56 PM, Ralph Castain wrote:
> I'm afraid that example is incorrect - you were running under slurm on your
> cluster, not rsh.
Ummm... right. Duh.
> If you look at the actual code, you will see that we slurp up the envars into
> the environment of each app_context, and th...
On Nov 28, 2011, at 5:32 PM, Jeff Squyres wrote:
> On Nov 28, 2011, at 6:56 PM, Ralph Castain wrote:
> Right-o. Knew there was something I forgot...
>
>> So on rsh, we do not put envar mca params onto the orted cmd line. This has
>> been noted repeatedly on the user and devel lists, so it real...
On Nov 28, 2011, at 7:39 PM, Ralph Castain wrote:
>> Meaning that per my output from above, what Paul was trying should have
>> worked, no? I.e., setenv'ing OMPI_MCA_*, and those env vars should
>> magically show up in the launched process.
>
> In the -launched process- yes. However, his problem wa...
Unfortunately, this is a known issue. :-\
I have not found a reliable way to deduce that MPI_IN_PLACE has been passed as
the parameter to MPI_REDUCE (and friends) on OS X. There's something very
strange going on with regard to the Fortran compiler and common block
variables (which is where w...
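For reference, these are the in-place semantics the Fortran layer has to
detect; in C the constant itself is unambiguous. A minimal sketch with
illustrative buffer names:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, val;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    val = rank + 1;

    if (rank == 0)
        /* At the root, MPI_IN_PLACE means the input is taken from the
         * receive buffer (&val), which also receives the result. */
        MPI_Reduce(MPI_IN_PLACE, &val, 1, MPI_INT, MPI_SUM, 0,
                   MPI_COMM_WORLD);
    else
        /* Non-root ranks ignore the receive buffer entirely. */
        MPI_Reduce(&val, NULL, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %d\n", val);

    MPI_Finalize();
    return 0;
}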
I'm afraid we don't have any contacts left at QLogic to ask them any more... do
you have a support contract, perchance?
On Nov 27, 2011, at 3:11 PM, Arnaud Heritier wrote:
> Hello,
>
> I ran into a strange problem with QLogic OFED and Open MPI. When I submit
> (through SGE) 2 jobs on the same no...
I do have a contract and I tried to open a case, but their support is
... Anyway, I'm still working on the strange error message from mpirun
saying it can't allocate memory when at the same time it also reports that
the memory is unlimited...
Arnaud
On Tue, Nov 29, 2011 at 4:23 AM, Jeff Squyres...
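A common culprit for "can't allocate memory" at startup on OFED-style
interconnects is the locked-memory limit the launched processes actually
inherit, which can differ from what an interactive shell shows. A quick
check one can run under the same launcher (illustrative; the posts above do
not include Arnaud's exact error or code):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Print the locked-memory limit as seen by this process. */
    if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("RLIMIT_MEMLOCK: unlimited\n");
    else
        printf("RLIMIT_MEMLOCK: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}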