Hi,
I am encountering a problem where a working child process freezes in
the middle of its work and continues only when its parent process
(which spawned it earlier) calls some MPI function.
The issue here is that, in order to accept client socket communication,
the parent process is, at th
), the default non-exception-throwing handler
was installed.
Thanks!
Murat
Jeff Squyres wrote:
> On Nov 7, 2007, at 7:43 PM, Murat Knecht wrote:
>
>
>> when MPI_Spawn cannot launch an application for whatever reason, the
>> entire job is cancelled with some messa
Greetings,
when MPI_Spawn cannot launch an application for whatever reason, the
entire job is cancelled with some message like the following.
Is there a way to handle this nicely, e.g. by throwing an exception? I
understand this does not work when the job is first started with
mpirun, as there is
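For reference, a minimal sketch of the usual workaround, assuming the C
bindings: replace the default MPI_ERRORS_ARE_FATAL handler with
MPI_ERRORS_RETURN on the communicator passed to the spawn call and check
the return code yourself. Whether Open MPI actually returns instead of
aborting can depend on the failure and the release; with the C++ bindings,
MPI::ERRORS_THROW_EXCEPTIONS plays the same role. "worker" is a placeholder
binary name:

    /* let errors be returned instead of aborting the whole job */
    MPI_Comm_set_errhandler(MPI_COMM_SELF, MPI_ERRORS_RETURN);

    MPI_Comm child;
    int err = MPI_Comm_spawn("worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                             0, MPI_COMM_SELF, &child,
                             MPI_ERRCODES_IGNORE);
    if (err != MPI_SUCCESS) {
        /* spawn failed: report, retry, or fall back */
    }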
Jeff Squyres wrote:
> On Oct 31, 2007, at 1:18 AM, Murat Knecht wrote:
>
>
>> Yes I am, (master and child 1 running on the same machine).
>> But knowing the oversubscribing issue, I am using
>> mpi_yield_when_idle which should fix precisely this problem, right
> oversubscribe
> your nodes. In this case, OMPI will be aggressively polling in all
> processes, not realizing that the node is now oversubscribed and it
> should be yielding the processor so that other processes can run.
>
> On Oct 30, 2007, at 10:57 AM, Murat Knecht wrote:
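(For reference: the parameter is usually passed when the job is launched,
e.g. "mpirun -np 2 --mca mpi_yield_when_idle 1 ./master", where the binary
name is a placeholder. As the quoted reply explains, though, Open MPI decides
between aggressive and degraded mode from the allocation it knows about, so
dynamically spawned processes can leave it polling aggressively even on a
node that is in fact oversubscribed.)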
Hi,
does someone know whether there is a special requirement on the order of
spawning processes and the subsequent merge of the intercommunicators?
I have two hosts, let's call them local and remote, and a parent process
on local that goes on to spawn one process on each of the two nodes.
Afte
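In case a sketch helps make the setup concrete (the hostnames "local" and
"remote" and the binary name "child" are placeholders; the "host" info key
is the standard way to request placement of a spawned process):

    MPI_Comm inter_local, inter_remote;
    MPI_Info info;

    MPI_Info_create(&info);

    /* place the first child on the local node */
    MPI_Info_set(info, "host", "local");
    MPI_Comm_spawn("child", MPI_ARGV_NULL, 1, info, 0,
                   MPI_COMM_SELF, &inter_local, MPI_ERRCODES_IGNORE);

    /* place the second child on the remote node */
    MPI_Info_set(info, "host", "remote");
    MPI_Comm_spawn("child", MPI_ARGV_NULL, 1, info, 0,
                   MPI_COMM_SELF, &inter_remote, MPI_ERRCODES_IGNORE);

    MPI_Info_free(&info);

Each spawn returns its own intercommunicator, and each one is merged
afterwards with MPI_Intercomm_merge.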
> Thanks,
> george.
>
> On Oct 23, 2007, at 9:17 AM, Murat Knecht wrote:
>
>> Hi,
>> thanks for answering. Unfortunately, I did try that, too. The point
>> is that I don't understand the resource consumption. Even if the
>> processor is yielded,
nning#force-aggressive-degraded
>
> To get what you want, you need to force Open MPI to yield the processor rather
> than be aggressively waiting for a message.
>
> On 10/23/07, Murat Knecht wrote:
>
>> Hi,
>> Testing a distributed system locally, I couldn't he
Hi,
Testing a distributed system locally, I couldn't help but notice that a
blocking MPI_Recv causes 100% CPU load. I deactivated (at both compile-
and run-time) the shared-memory BTL, and specified "tcp,self" to
be used. Still, one core is busy. Even on a distributed system I intend to
perform w
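(For reference, restricting the run to the TCP and self transports uses the
standard MCA parameter, along the lines of
"mpirun -np 2 --mca btl tcp,self ./server", where the binary name is a
placeholder. Even then the blocking MPI_Recv keeps spinning in Open MPI's
progress engine rather than sleeping, which is why one core stays fully
busy.)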
Hi,
I have a question regarding merging intracommunicators.
Using MPI_Spawn, I create child processes on designated machines,
retrieving an intercommunicator each time.
With MPI_Intercomm_Merge it is possible to get an intracommunicator
containing the master process(es) and the newly spawned child
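A minimal, self-contained sketch of that pattern, assuming one binary that
acts as both master and spawned child (the process count and the use of
argv[0] as the spawn target are placeholders):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm parent, inter, intra;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);

        if (parent == MPI_COMM_NULL) {
            /* master: spawn one child running the same binary */
            MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                           MPI_COMM_WORLD, &inter, MPI_ERRCODES_IGNORE);
            /* high = 0: master ranks come first in the merged intracomm */
            MPI_Intercomm_merge(inter, 0, &intra);
        } else {
            /* spawned child: high = 1 orders it after the master */
            MPI_Intercomm_merge(parent, 1, &intra);
        }

        /* intra now spans master and child and can be used like any
           ordinary intracommunicator */
        MPI_Comm_free(&intra);
        MPI_Finalize();
        return 0;
    }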
Copy-and-paste error: the second part of the fix ought to be:
if ( !have_wdir ) {
    free(cwd);
}
Murat
Murat Knecht wrote:
> Hi all,
>
> I think, I found a bug and a fix for it.
> Could someone verify the rationale behind this bug, as I have this
>
Hi all,
I think I found a bug and a fix for it.
Could someone verify the rationale behind this bug, as I have this
SIGSEGV on only one of two machines, and I don't quite see why it doesn't
always occur. (Same test program, identically compiled Open MPI 1.2.4.)
Though the fix does prevent the segmentatio
Hi all,
I get a segmentation fault when trying to spawn a single process on the
localhost (127.0.0.1).
I tried both the current stable 1.2.3 and the beta 1.2.4; both ended up the
same way.
From the stack trace, I know it's the spawn call.
Is it possible that there is an error with authentificati
Hi,
I have a question regarding the --host(file) option of mpirun. Whenever I
try to fork a process on another node using Spawn(), I get the following
message:
Verify that you have mapped the allocated resources properly using the
--host specification.
I understand this can be fixed by providing
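The usual remedy (sketched with placeholder hostnames) is to make every node
that spawned processes may run on part of the allocation mpirun knows about,
either directly, e.g. "mpirun -np 1 --host node1,node2 ./master", or by
listing the nodes in a hostfile passed with --hostfile.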