Hello,
Could you please tell me how to connect a client (not in any MPI group) to a
process in an MPI group
(i.e., just like we do in socket programming with the connect() call).
Does MPI provide any call for accepting connections from outside
processes?
--
Regards
Mahesh
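For what it's worth, the MPI-2 dynamic process calls do cover this: the server
side uses MPI_Open_port() and MPI_Comm_accept(), and the outside client calls
MPI_Comm_connect() with the published port name. A minimal sketch of the server
side, with error handling and the out-of-band delivery of the port name to the
client left out:

/* Server side: accept a connection from an MPI program started separately. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char     port_name[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);

    MPI_Open_port(MPI_INFO_NULL, port_name);   /* system-chosen port string */
    printf("port name: %s\n", port_name);      /* hand this string to the client */

    /* Blocks until the client calls MPI_Comm_connect() with the same string. */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

    /* ... exchange messages with the client over the intercommunicator ... */

    MPI_Comm_disconnect(&client);
    MPI_Close_port(port_name);
    MPI_Finalize();
    return 0;
}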
Hi,
Various people contributed:
Isn't it possible to set this up in torque/moab directly? In SGE I would simply
define h_vmem, which is then applied per slot; and with a tight integration all
Open MPI processes will be children of sge_execd, so the limit will be enforced.
I could be wrong, but I -th
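For context, a hedged example of the SGE submission being described; the
parallel environment name, the slot count, and the per-slot limit are
assumptions rather than values from this thread:

    qsub -pe orte 8 -l h_vmem=2G ./run_job.sh

Since h_vmem is requested per slot, the 8-slot job above gets 8 x 2G in total,
and with a tight integration the limit covers every Open MPI process because
they all run underneath sge_execd.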
Hi Mahesh
At least in simple cases you can use normal socket functions for this.
I used this in order to change the run-time behaviour of a master-worker
MPI application. I implemented a simple TCP server
which runs in a separate thread on the master process; connecting to
thi
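A minimal sketch of that kind of control thread, assuming POSIX sockets and
pthreads; the one-byte command protocol and the g_new_behaviour flag are
illustrative placeholders rather than the original poster's code:

#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

static volatile int g_new_behaviour = 0;    /* polled by the MPI master loop */

static void *control_server(void *arg)
{
    int port = *(int *)arg;
    int srv  = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons((unsigned short)port);

    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    for (;;) {
        char cmd  = 0;
        int  conn = accept(srv, NULL, NULL);    /* outside client connects here */
        if (conn < 0)
            continue;
        if (read(conn, &cmd, 1) == 1)
            g_new_behaviour = (cmd == '1');     /* toggle run-time behaviour */
        close(conn);
    }
    return NULL;
}

The master rank would start this with pthread_create() before entering its work
loop and check g_new_behaviour between iterations; the workers never touch the
socket.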
No problem - I got to learn something too!
On Oct 11, 2010, at 11:19 PM, David Turner wrote:
> Hi,
>
> Various people contributed:
>
> Isn't it possible to set this up in torque/moab directly? In SGE I would
> simply define h_vmem, which is then applied per slot; and with a tight integration
On Oct 11, 2010, at 1:29 PM, Bowen Zhou wrote:
> Try MPI_Isend?
'zactly correct.
You currently have an MPI_Wait on the sender side for no reason -- the request
is only filled in on the receiver. So you're waiting on an uninitialized
variable on the sender.
MPI_Send is a "blocking" send. MPI
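For reference, a minimal sketch of the non-blocking pattern being suggested;
the buffer type, count, and tag below are placeholders:

#include <mpi.h>

void send_nonblocking(double *buf, int count, int dest, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Status  status;

    /* MPI_Isend returns immediately and fills in 'req' on the sender side;
     * this is the request that MPI_Wait should wait on. */
    MPI_Isend(buf, count, MPI_DOUBLE, dest, /* tag */ 0, comm, &req);

    /* ... useful computation can overlap the send here ... */

    MPI_Wait(&req, &status);    /* only after this returns may buf be reused */
}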
Actually, that wasn't the problem. My code is working now with no changes to
it. Not sure what the problem was, but it wasn't the call to MPI_Send
blocking.
Ed
From: users-boun...@open-mpi.org on behalf of Jeff Squyres
Sent: Tue 10/12/2010 6:52 AM
To: Open
Chris Jewell writes:
> I've scrapped this system now in favour of the new SGE core binding feature.
How does that work, exactly? I thought the OMPI SGE integration didn't
support core binding, but good if it does.
On 12.10.2010 at 15:49, Dave Love wrote:
> Chris Jewell writes:
>
>> I've scrapped this system now in favour of the new SGE core binding feature.
>
> How does that work, exactly? I thought the OMPI SGE integration didn't
> support core binding, but good if it does.
With the default binding_i
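For context, a hedged example of requesting the SGE 6.2u5 core binding feature
at submission time; the parallel environment name, slot count, and strategy are
assumptions rather than values from this thread:

    qsub -pe orte 8 -binding linear:1 ./run_job.sh

Whether the cores are actually set by the execd or only advertised to the job
depends on the binding instance (set, env, or pe) mentioned above.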
The data that I want to send via MPI is in the form of a struct:
struct myDataStruct {
    struct subData1 {
        int position[2];
        int length[2];
    };
    struct subData2 {
        float *data1;
        float *data2;
        float *data3;
        float *data4;
    };
    struct subData3 {
        float
The code you showed was incorrect -- you were waiting on an uninitialized
variable. Perhaps that code was only a snippet...?
On Oct 12, 2010, at 8:00 AM, Ed Peddycoart wrote:
> Actually, that wasn't the problem. My code is working now with no changes to
> it. Not sure what the problem was bu
Have a look at MPI_Type_create_struct().
http://www.open-mpi.org/doc/v1.5/man3/MPI_Type_create_struct.3.php
On Oct 12, 2010, at 11:28 AM, Ed Peddycoart wrote:
> The data that I want to send via MPI is in the form of a struct:
>
> struct myDataStruct {
> struct subData1 {
> int p
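A minimal sketch of that call for the fixed-size part of the struct above
(subData1); note that the float pointers in subData2 refer to separately
allocated arrays, so a struct datatype cannot describe them directly -- they
need their own sends or explicit packing. The helper name below is made up:

#include <mpi.h>
#include <stddef.h>

struct subData1 {
    int position[2];
    int length[2];
};

MPI_Datatype make_subdata1_type(void)
{
    MPI_Datatype newtype;
    int          blocklens[2] = { 2, 2 };
    MPI_Aint     displs[2]    = { offsetof(struct subData1, position),
                                  offsetof(struct subData1, length) };
    MPI_Datatype types[2]     = { MPI_INT, MPI_INT };

    MPI_Type_create_struct(2, blocklens, displs, types, &newtype);
    MPI_Type_commit(&newtype);
    return newtype;             /* free with MPI_Type_free() when done */
}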
The "WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR" section in the man page
for MPI_Allreduce (in both 1.4.1 and the current SVN trunk) mentions the
use of a root process and a "root" parameter name (which doesn't exist for
MPI_Allreduce). Should I add this to Trac?
-- Jeremiah Willcock
Hi,
I am trying to install Open MPI 1.4.1 on my cluster running Linux (RHEL 3
update 3).
I want to run an LS-DYNA job on the cluster, but it fails with an error that
some files are missing.
I also tried copying the shared library files from LS-DYNA, but then Open MPI
stops working.
I am attaching the ompi-output files.
Kindly help.
Fixed on the trunk; queued up for the release branches.
Thanks!
On Oct 12, 2010, at 12:54 PM, Jeremiah Willcock wrote:
> The "WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR" section in the man page for
> MPI_Allreduce (in both 1.4.1 and the current SVN trunk) mentions the use of a
> root process
Greetings,
I'm doing software fault injection in a parallel application to evaluate
the effect of hardware failures on the execution. My question is how to
execute the faulty version of the application on one node and the
fault-free version on all other nodes in the same run?
I understand th
See here:
http://www.open-mpi.org/faq/?category=running#mpmd-run
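With mpirun's MPMD syntax each colon-separated application context gets its own
executable, rank count, and host list; a hedged example, assuming the two builds
are named app_faulty and app and that 16 ranks are wanted in total:

    mpirun -np 1 -host node01 ./app_faulty : -np 15 ./app

The first context places the single faulty rank on node01 and the second
context runs the fault-free build on the remaining slots.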
On Tue, 2010-10-12 at 22:21 -0400, Bowen Zhou wrote:
> Greetings,
>
> I'm doing software fault injection in a parallel application to evaluate
> the effect of hardware failures on the execution. My question is how to
> execute t