Since each node has its own memory in a distributed-memory system,
there is no such thing as a "global variable" that can be accessed by all
processes. So you need to use MPI to scatter the input from the rank 0
process to all the other processes explicitly.
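A minimal sketch of that pattern (assuming every rank needs the same vector of
doubles; the vector length and the file reading below are placeholders) could
look like:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, n = 0;
    double *vec = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* only rank 0 reads the input files */
        n = 1000;                        /* placeholder: size taken from the file */
        vec = malloc(n * sizeof(double));
        /* ... fill vec from the file here ... */
    }

    /* everyone needs the size before allocating */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank != 0)
        vec = malloc(n * sizeof(double));

    /* now every rank holds the same data that rank 0 loaded */
    MPI_Bcast(vec, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* ... computation ... */

    free(vec);
    MPI_Finalize();
    return 0;
}

If each rank only needs its own piece of the input rather than a full copy,
MPI_Scatter or MPI_Scatterv would replace the second MPI_Bcast.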
From: dtustud...@hotmail.com
To: us...@open-
thanks very much !!!
May I use a global variable to do that?
It means that all nodes would have the same global variable, such as globalVector.
In the initialization, only node 0 loads data from files and assigns values to
the globalVector.
After that, all other nodes can get the same data by accessing
Hi Jack/Jinxu
Jack Bryan wrote:
Dear All,
I am working on a multi-computer Open MPI cluster system.
If I put some data files in /home/mypath/folder, is it possible that all
non-head nodes can access the files in the folder?
Yes, possible, for instance, if the /home/mypath/folder directo
Dear All,
I am working on a multi-computer Open MPI cluster system.
If I put some data files in /home/mypath/folder, is it possible that all
non-head nodes can access the files in the folder?
I need to load some data onto some nodes; if all nodes can access the data, I do
not need to load them
Cool. If you're building OpenMPI on 32-bit Windows as well, you won't
have any 64-bit switches to sort out. This part of my instructions:
Visual Studio command prompt: "Start, All Programs, Visual Studio 2008,
Visual Studio Tools, Visual Studio 2008 Win64 x64 Command Prompt" is
slightly wrong.
I am running 32-bit Windows. The actual cluster is 64-bit and the OS is
CentOS
On Mon, Jul 12, 2010 at 7:15 PM, Damien Hocking wrote:
> You don't need to check anything else in the red window; OpenMPI doesn't
> know it's in a virtual machine. If you're running Windows in a virtual
> cluster, a
You don't need to check anything else in the red window; OpenMPI doesn't
know it's in a virtual machine. If you're running Windows in a virtual
cluster, are you running as 32-bit or 64-bit?
Damien
On 12/07/2010 5:05 PM, Alexandru Blidaru wrote:
Wow thanks a lot guys. I'll try it tomorrow morning.
Wow thanks a lot guys. I'll try it tomorrow morning. I'll admit that this
time when I saw that there were some header files "not found" I didn't even
bother going through the whole process as I did previously. Could have had it
installed by today. Well, I'll give it a try tomorrow and come back to you
Alex,
That red window is what you should see after the first Configure step in
CMake. You need to do the next few steps in CMake and Visual Studio to
get a Windows OpenMPI build done. That's how CMake works. It's
complicated because CMake has to be able to build on multiple OSes so
what yo
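For reference, the rough command-line equivalent of that GUI flow (a sketch
only, assuming CMake 2.8+ and the Visual Studio 2008 generator; the paths are
placeholders) is:

cd C:\openmpi-1.4.2
mkdir build
cd build
cmake -G "Visual Studio 9 2008" ..
cmake --build . --config Release

The red entries in the GUI are just cache variables that changed during the
last Configure pass, not errors.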
Just so you don't have to wait for 1.4.3 to be released, here is the patch.
Ralph
iof.diff
Description: Binary data
On Jul 12, 2010, at 2:44 AM, jody wrote:
> yes, i'm using 1.4.2
>
> Thanks
> Jody
>
> On Mon, Jul 12, 2010 at 10:38 AM, Ralph Castain wrote:
>>
>> On Jul 12, 2010, at 2:17
On Jul 12, 2010, at 3:23 PM, Brian Budge wrote:
> Hi Ralph -
>
> So you can just start this daemon on all of the nodes when the
> machines are booted, for example, and then these connections can be
> made programmatically?
Ummm...not exactly. You have to start only one ompi-server, but it must
Hi Alex,
Actually, I don't see any errors in your output. The headers that are
not found won't stop you from building Open MPI; they are not errors, but
only the results of checking your system for configuring Open MPI. What
you need to do is just press the Configure button twice, and then press
Generate.
Hi Ralph -
So you can just start this daemon on all of the nodes when the
machines are booted, for example, and then these connections can be
made programmatically?
Sounds great. I look forward to that functionality.
Brian
On Mon, Jul 12, 2010 at 12:38 PM, Ralph Castain wrote:
>
> On Jul 12
On Jul 12, 2010, at 11:12 AM, Brian Budge wrote:
> Hi Ralph -
>
> Thanks for the reply. I think this patch sounds great! The idea in
> our software is that it won't be known until after the program is
> running whether or not MPI is needed, so it would be best if the
> communication initializa
Just so you don't have to wait for 1.4.3 release, here is the patch (doesn't
include the prior patch).
dpm.diff
Description: Binary data
On Jul 12, 2010, at 12:13 PM, Grzegorz Maj wrote:
> 2010/7/12 Ralph Castain :
>> Dug around a bit and found the problem!!
>>
>> I have no idea who or why
Then do it on a USB drive.
https://fedorahosted.org/liveusb-creator/
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Alexandru Blidaru
Sent: Monday, July 12, 2010 2:20 PM
To: Open MPI Users
Subject: Re: [OMPI users] Install OpenMPI on Win 7 machine
W
Well, I tried Cygwin, and it aborted the whole thing at one point. I know
that most Linux distros come with OpenMPI. The cluster I'm actually going to
be working with has Linux on it. The reason why I am not switching to Linux
for the virtual cluster part is that my computer doesn't have a DVD/CD drive.
I would say try putting Cygwin on the computer.
http://www.cygwin.com/
It puts a Linux like environment on Windows which includes gcc and g++.
Since you are setting up virtual clusters, why not just go ahead and set up a
virtual Linux cluster and be on to other things than trying to get it
2010/7/12 Ralph Castain :
> Dug around a bit and found the problem!!
>
> I have no idea who or why this was done, but somebody set a limit of 64
> separate jobids in the dynamic init called by ompi_comm_set, which builds the
> intercommunicator. Unfortunately, they hard-wired the array size, but
Hey,
I installed a 90 day trial of Visual Studio 2008, and I am pretty sure I am
getting the exact same thing. The log and the picture are attached just as
last time. Any new ideas?
Regards,
Alex
On Mon, Jul 12, 2010 at 9:58 AM, Shiqing Fan wrote:
>
> Hi Alex,
>
> When the attachment is large,
Hi Jody -
I have successfully run MPI programs on my machine without mpirun, but
I guess where this breaks down is on multiple machines? Because the
ompi-server that Ralph mentioned is never started?
Thanks,
Brian
On Mon, Jul 12, 2010 at 8:32 AM, jody wrote:
> Hi Brian
>
> Generally it is po
Hi Ralph -
Thanks for the reply. I think this patch sounds great! The idea in
our software is that it won't be known until after the program is
running whether or not MPI is needed, so it would be best if the
communication initialization could be done programmatically instead of
through an exter
Sorry for the delayed response - Brad asked if I could comment on this.
I'm afraid your application, as written, isn't going to work because the
rendezvous protocol isn't correct. You cannot just write a port to a file and
have the other side of a connect/accept read it. The reason for this is t
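For the record, the usual rendezvous with the Open MPI name service looks
roughly like the sketch below (this assumes an ompi-server instance is running
and both jobs were launched by an mpirun that knows where to find it; the
service name "my-service" is arbitrary):

#include <mpi.h>

void server_side(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("my-service", MPI_INFO_NULL, port);  /* goes to the name service */
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
    /* ... communicate over 'client' ... */
    MPI_Unpublish_name("my-service", MPI_INFO_NULL, port);
    MPI_Close_port(port);
    MPI_Comm_disconnect(&client);
}

void client_side(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm server;

    MPI_Lookup_name("my-service", MPI_INFO_NULL, port);   /* resolved via the name service */
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
    /* ... communicate over 'server' ... */
    MPI_Comm_disconnect(&server);
}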
Hi Brian
Generally it is possible to create new communicators from existing ones
(see for instance the various MPI_GROUP_* functions and MPI_COMM_CREATE)
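For example, a small sketch using the group calls (the choice of "keep the
even ranks" is just for illustration):

#include <mpi.h>
#include <stdlib.h>

/* Build a new communicator containing only the even ranks of 'comm'.
   Must be called by all ranks of 'comm' (MPI_Comm_create is collective). */
MPI_Comm even_ranks_comm(MPI_Comm comm)
{
    MPI_Group grp, even_grp;
    MPI_Comm  even_comm;
    int size, i, n = 0;

    MPI_Comm_size(comm, &size);
    int *ranks = malloc(size * sizeof(int));
    for (i = 0; i < size; i += 2)
        ranks[n++] = i;

    MPI_Comm_group(comm, &grp);
    MPI_Group_incl(grp, n, ranks, &even_grp);
    MPI_Comm_create(comm, even_grp, &even_comm);

    MPI_Group_free(&even_grp);
    MPI_Group_free(&grp);
    free(ranks);
    return even_comm;   /* MPI_COMM_NULL on ranks not in the new group */
}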
> Also, with MPI_Comm_spawn/_multiple(), how do you
> specify IP addresses on which to start the processes?
I haven't tried i
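In principle Open MPI lets you pass placement hints through the MPI_Info
argument of MPI_Comm_spawn; a hedged sketch (the "host" info key, the host
name and the "worker" executable below are assumptions to check against the
MPI_Comm_spawn man page):

#include <mpi.h>

/* Spawn two workers on a specific machine by passing a placement hint
   through MPI_Info (Open MPI-specific info keys). */
void spawn_on_host(const char *hostname, MPI_Comm *intercomm)
{
    MPI_Info info;
    int errcodes[2];

    MPI_Info_create(&info);
    MPI_Info_set(info, "host", (char *)hostname);   /* e.g. "node03" */

    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 2, info,
                   0, MPI_COMM_WORLD, intercomm, errcodes);

    MPI_Info_free(&info);
}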
On Jul 12, 2010, at 9:07 AM, Brian Budge wrote:
> Hi Jody -
>
> Thanks for the reply. Is there a way of "fusing" intercommunicators?
> Let's say I have a higher level node scheduler, and it makes a new
> node available to a COMM that is already running. So the master
> spawns another process f
Hi again,
After testing as suggested, it is indeed a massive slowdown rather than
a full-blown machine hang.
Would the next test be to run with debug flags for Open MPI?
Regards,
Olivier Marsden
Jeff Squyres wrote:
On Jul 7, 2010, at 12:50 PM, Olivier Marsden wrote:
Hi Jeff, thanks for the
Hi Jody -
Thanks for the reply. Is there a way of "fusing" intercommunicators?
Let's say I have a higher level node scheduler, and it makes a new
node available to a COMM that is already running. So the master
spawns another process for that node. How can the new process
communicate with the ot
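One standard building block for this kind of "fusing" is MPI_Intercomm_merge,
which turns the intercommunicator returned by MPI_Comm_spawn into a single
intracommunicator containing both sides. A minimal sketch ("worker" is a
placeholder executable name):

#include <mpi.h>

/* Parent side: spawn one worker, then merge parent + child into one
   intracommunicator. The child would call MPI_Comm_get_parent() and
   do the matching MPI_Intercomm_merge with high = 1. */
void spawn_and_merge(MPI_Comm *everyone)
{
    MPI_Comm inter;

    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &inter, MPI_ERRCODES_IGNORE);

    MPI_Intercomm_merge(inter, 0, everyone);   /* high = 0 on the parent side */
    MPI_Comm_free(&inter);
}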
Dug around a bit and found the problem!!
I have no idea who or why this was done, but somebody set a limit of 64
separate jobids in the dynamic init called by ompi_comm_set, which builds the
intercommunicator. Unfortunately, they hard-wired the array size but never
checked that size before addin
Hi Alex,
When the attachment is large, you can send the email directly to me off
the list.
For the problem you got, the reason is that you are using MinGW rather than
the Microsoft C/C++ compiler. Is it possible for you to just switch to
Microsoft Visual Studio 2005 or 2008? There are still many
Hi,
I am attaching all the output text that resulted when configuring for the first
time. I am also attaching a picture of the main area. My main purpose in
installing OpenMPI is to set up a "virtual cluster" on the Windows
7 machine, so I can get accustomed to the different settings,
1024 is not the problem: changing it to 2048 hasn't changed anything.
Following your advice I've run my process under gdb. Unfortunately I
didn't get anything more than:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xf7e4c6c0 (LWP 20246)]
0xf7f39905 in ompi_comm_set ()
Hi
> mpi_irecv(workerNodeID, messageTag, bufferVector[row][column])
OpenMPI contains no function of this form.
There is MPI_Irecv, but it takes a different number of arguments.
Or is this a Boost method?
If yes, i guess you have to make sure that the
bufferVector[row][column] is large enough...
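For reference, the plain C call looks roughly like this (a sketch; the
datatype, source and tag are placeholders):

#include <mpi.h>

/* Post a non-blocking receive and wait for it; 'buf' must stay valid
   and be large enough until the request completes. */
void recv_row(double *buf, int count, int source, int tag)
{
    MPI_Request req;
    MPI_Status  status;

    MPI_Irecv(buf, count, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &req);
    /* ... overlap other work here ... */
    MPI_Wait(&req, &status);
}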
Pe
Hi,
I'm focusing on the MPI_Bcast routine that seems to randomly segfault when
using the openib btl.
I'd like to know if there is any way to make OpenMPI switch to a different
algorithm than the default one being selected for MPI_Bcast.
Thanks for your help,
Eloi
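One thing that may be worth trying, assuming the default "tuned" collective
component is in use: the broadcast algorithm can be forced through MCA
parameters (the parameter names and the meaning of the algorithm numbers
should be double-checked with ompi_info for your version; my_app is a
placeholder):

ompi_info --param coll tuned | grep bcast
mpirun --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_bcast_algorithm 1 -np 16 ./my_app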
On Friday 02 July 2010 11:06:
yes, i'm using 1.4.2
Thanks
Jody
On Mon, Jul 12, 2010 at 10:38 AM, Ralph Castain wrote:
>
> On Jul 12, 2010, at 2:17 AM, jody wrote:
>
>> Hi
>>
>> I have a master process which spawns a number of workers of which i'd
>> like to save the output in separate files.
>>
>> Usually i use the '-outp
On Jul 12, 2010, at 2:17 AM, jody wrote:
> Hi
>
> I have a master process which spawns a number of workers of which i'd
> like to save the output in separate files.
>
> Usually i use the '-output-filename' option in such a situation.
> However, if i do
> mpirun -np 1 -output-filename work_out
Hi
I have a master process which spawns a number of workers of which i'd
like to save the output in separate files.
Usually i use the '-output-filename' option in such a situation.
However, if i do
mpirun -np 1 -output-filename work_out master arg1 arg2
all the files work_out.1, work_out.2, ..