I have some old memory of this: .bashrc and .profile are what distinguish
login from non-login shells.
There is also something about a leading - in argv[0] of the bash process, or
something like that.
man bash would give you a definitive answer.
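For what it's worth, you can ask bash itself which mode it is in, which is a quick way to check both claims (this is just a sketch; the --noprofile flag only keeps profile scripts from printing into the output):

```shell
# A login shell is one whose argv[0] starts with '-' (e.g. "-bash"),
# or one started with -l/--login. bash reads ~/.profile (or
# ~/.bash_profile) for login shells, and ~/.bashrc for interactive
# non-login shells.

# A plain `bash -c` child is a non-login shell:
bash --noprofile -c 'shopt -q login_shell && echo login || echo non-login'
# prints: non-login

# Forcing login mode with -l:
bash --noprofile -l -c 'shopt -q login_shell && echo login || echo non-login'
# prints: login
```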
rds,
> -Original Message-
> From: users-
Hi
Will MPI_Probe report that there is a message pending reception if the
sender broadcasts one with MPI_Bcast?
Is calling MPI_Bcast in the slave the only way to receive a broadcast from
the root?
ditto,
Hicham Mouline
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Andrew Ball
> Sent: 06 January 2011 17:48
> To: Open MPI Users
> Subject: Re: [OMPI users] IRC channel
>
> Hello Jeff,
>
> JS
Hello
Do people reading this list ever join the #openmpi IRC channel? The channel
already seems to point to the website.
The medium could be useful to spread the use of Open MPI even further.
More specific channels could also be created: -beginner, -platform specific,
-compilation issues, -perfo
From what I understand, Unix variants can talk to each other (Linux to
Mac OS X, SunOS, ...), but Windows cannot talk to non-Windows (not yet? :-)
regards,
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of ??
Sent: 04 January 2011 06:25
To: us...@open-mpi.org
Subje
I don't understand one thing, though, and would appreciate your comments.
In various interfaces, like network sockets or threads waiting for data from
somewhere, there are various solutions based on _not_ continuously checking the
state of the socket or of some sort of queue, but instead sort of getting _in
very clear, thanks very much.
-Original Message-
From: "Ralph Castain" [r...@open-mpi.org]
Date: 13/12/2010 03:49 PM
To: "Open MPI Users"
Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu
Thanks for the link!
Just to cl
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Eugene Loh
> Sent: 08 December 2010 16:19
> To: Open MPI Users
> Subject: Re: [OMPI users] curious behavior during wait for broadcast:
> 100% cpu
>
> I wouldn't mind some clarificatio
Hello,
on win32 openmpi 1.4.3, I have a slave process that reaches this pseudo-code
and then blocks, and the CPU usage for that process stays at 25% the whole time
(I have a quad-core processor). When I set its affinity to one of the cores,
that core is 100% busy because of my slave process.
main()
-Original Message-
From: "Tim Prince" [n...@aol.com]
Date: 06/12/2010 01:40 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] meaning of MPI_THREAD_*
>On 12/6/2010 3:16 AM, Hicham Mouline wrote:
>> Hello,
>>
>>
Hello,
1. MPI_THREAD_SINGLE: Only one thread will execute.
Does this really mean the process cannot have any other threads at all, even if
they don't deal with MPI at all?
I'm curious how this case affects the Open MPI implementation.
Essentially, what is the difference between MPI_THREAD_S
Hi,
Following the instructions from Readme.windows, I've used cmake and 4 build
directories to generate release and debug win32 and x64 builds. When it came
to install, I wondered: there are 4 directories involved, bin, lib, share
and include.
Are include and share identical across the 4 configur
> -Original Message-
> From: Shiqing Fan [mailto:f...@hlrs.de]
> Sent: 01 December 2010 11:29
> To: Open MPI Users
> Cc: Hicham Mouline
> Subject: Re: [OMPI users] win: mpic++ -showme reports duplicate .libs
>
> Hi Hicham,
>
> Thanks for noticing it
Hello,
>mpic++ -showme:link
/TP /EHsc /link /LIBPATH:"C:/Program Files (x86)/openmpi/lib" libmpi.lib
libopen-pal.lib libopen-rte.lib libmpi_cxx.lib libmpi.lib libopen-pal.lib
libopen-rte.lib advapi32.lib Ws2_32.lib shlwapi.lib
it reports each of the 4 MPI libs twice.
I've followed the cmake way in RE
> -Original Message-
> From: Shiqing Fan [mailto:f...@hlrs.de]
> Sent: 30 November 2010 23:39
> To: Open MPI Users
> Cc: Hicham Mouline; Rainer Keller
> Subject: Re: [OMPI users] failure to launch MPMD program on win32 w
> 1.4.3
>
> Hi,
>
> I don't
Users"
Subject: Re: [OMPI users] failure to launch MPMD program on win32 w 1.4.3
It truly does help to know what version of OMPI you are using - otherwise,
there is little we can do to help
On Nov 30, 2010, at 4:05 AM, Hicham Mouline wrote:
> Hello,
>
> I have successfully ru
Hello,
I have successfully run
mpirun -np 3 .\test.exe
when I try MPMD
mpirun -np 3 .\test.exe : -np 3 .\test2.exe
where test and test2 are identical (just for a trial), I get this error:
[hostname:04960] [[47427,1],0]-[[47427,0],0] mca_oob_tcp_peer_send_blocking:
send() failed: Unkno
>> therefore, I guess I need to separate the GUI binary from the
mpi-processes
>> binary and have the GUI process talk to the "master" mpi process (on
linux)
>> for calc requests.
>>
>> I was hoping I wouldn't have to write a custom code to do that.
>MPI doesn't necessarily mean SPMD -- you ca
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Bill Rankin
> Sent: 24 November 2010 15:54
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_Comm_split
>
> In this case, creating all those communicators really doesn't buy you
>
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Bill Rankin
> Sent: 23 November 2010 19:32
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_Comm_split
>
> Hicham:
>
> > If I have a 256 mpi processes in 1 communicator, am I abl
Hello
If I have 256 MPI processes in 1 communicator, am I able to split that
communicator, then split each of the resulting 2 subgroups again, then the
resulting 4 subgroups, and so on, until potentially having 256 subgroups?
Is this insane in terms of performance?
regards,
>MPI doesn't necessarily mean SPMD -- you can certainly have the GUI call
>MPI_INIT and then call MPI_COMM_SPAWN to launch a different >executable to do
>the compute working stuff.
>--
>Jeff Squyres
>jsquyres_at_[hidden]
This is confusing to me.
If the GUI does that, will the GUI process (runn
OMPI community member
> organizations do so).
>
> What are you trying to do?
>
>
> On Nov 18, 2010, at 11:37 AM, David Zhang wrote:
>
> > you could spawn more processes from currently running processes.
> >
> > On Thu, Nov 18, 2010 at 3:05 AM, Hicham
No way!!! That is so limiting.
Are you aware of any MPI implementation that is able to do both Windows and
Linux?
regards,
Hi Hicham,
Unfortunately, it's not possible to run over both Windows and Linux.
Regards,
Shiqing
--
Hello
Is it possible to run an Open MPI application over 2 hosts, win32 and linux64?
I ran this from the win box
> mpirun -np 2 --hetero --host localhost,host2 .\Test1.exe
and the error was:
[:04288] This feature hasn't been implemented yet.
[:04288] Could not connect to namespace cimv2 on node host2
hello, sorry for cross-posting. I've built openmpi 1.4.3 on win32 and generated
only 4 release libs:
3,677,712 libmpi.lib
336,466 libmpi_cxx.lib
758,686 libopen-pal.lib
1,307,592 libopen-rte.lib
I've installed the BoostPro 1.44 MPI library with the installer, but I have
link errors:
1>lib
Hi,
One typically uses mpirun to launch a set of MPI processes.
Is there some programmatic interface for launching the runtime, such that the
process that launched the runtime becomes one of the MPI processes?
Regards,
hello, I currently have a serial application with a GUI that runs some
calculations. My next step is to use OpenMPI with the help of the Boost.MPI
wrapper library in C++ to parallelize those calculations. There is a set of
static data objects created once at startup or loaded from files. 1. In terms