before
I start?
Regards,
MM
If it's possible, can the executables of my MPI program be sent
over the wire before running them?
If we exclude GPU or other non-MPI solutions, and with cost being a primary
factor, what is the progression path from 2 boxes to a cloud-based solution
(Amazon and the like...)?
Regards,
MM
Hello,
I have the following 3 1-socket nodes:
node1: 4GB RAM 2-core: rank 0 rank 1
node2: 4GB RAM 4-core: rank 2 rank 3 rank 4 rank 5
node3: 8GB RAM 4-core: rank 6 rank 7 rank 8 rank 9
I have a model that takes an input and produces an output, and I want to run
this model for N possible combinations
On 14 June 2016 at 13:56, Gilles Gouaillardet
wrote:
On Tuesday, June 14, 2016, MM wrote:
>
> Hello,
> I have the following 3 1-socket nodes:
>
> node1: 4GB RAM 2-core: rank 0 rank 1
> node2: 4GB RAM 4-core: rank 2 rank 3 rank 4 rank 5
> node3: 8GB RAM 4-core: rank 6
in a pro-rata way, i.e. n_i = N * f_i / sum(f_i) for each core, so that
sum(n_i) = N.
2. A 2nd stage could then be to ensure that no n_i > m_i/M,
which would then involve taking any excess (n_i - m_i/M) and
spreading it over the other cores.
Or
perhaps both CPU frequencies and max memory could be considered in one go,
but I don't know how to do that?
Thanks
MM
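The two-stage scheme above (pro-rata by core frequency f_i, then capped by each core's memory share m_i/M) can be sketched in plain Python. This is only one possible reading of the thread's proposal; the function name, the round-robin handling of the rounding remainder, and the sample figures are all assumptions, not anything from the original post.

```python
# Pro-rata task allocation sketch: n_i proportional to core frequency f_i,
# then each core capped at mem_caps[i] tasks, with the excess spread over
# cores that still have headroom.

def allocate(N, freqs, mem_caps):
    """freqs: per-core frequency f_i; mem_caps: per-core max task count."""
    total_f = sum(freqs)
    # Stage 1: pro-rata by frequency (floor, then hand out the remainder).
    n = [int(N * f / total_f) for f in freqs]
    rem = N - sum(n)
    for i in range(rem):                 # distribute leftover tasks round-robin
        n[i % len(n)] += 1
    # Stage 2: cap by memory share, spread any excess over cores with room.
    excess = 0
    for i, cap in enumerate(mem_caps):
        if n[i] > cap:
            excess += n[i] - cap
            n[i] = cap
    for i, cap in enumerate(mem_caps):
        if excess == 0:
            break
        take = min(cap - n[i], excess)
        n[i] += take
        excess -= take
    return n

# Hypothetical figures: four cores at 2.0/2.0/3.0/3.0 GHz, generous caps.
print(allocate(100, [2.0, 2.0, 3.0, 3.0], [40, 40, 40, 40]))
```

With tight caps on the slower cores the second stage shifts the overflow onto the cores that still have memory headroom, while the total stays at N.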
I would like to see if there are any updates re this thread back from 2010:
https://mail-archive.com/users@lists.open-mpi.org/msg15154.html
I've got 3 boxes at home, a laptop and 2 other quadcore nodes . When the
CPU is at 100% for a long time, the fans make quite some noise:-)
The laptop runs t
Hi,
openmpi 1.10.3
this call:
mpirun --hostfile ~/.mpihosts -H localhost -np 1 prog1 : -H A.lan -np
4 prog2 : -H B.lan -np 4 prog2
works, yet this one:
mpirun --hostfile ~/.mpihosts --app ~/.mpiapp
doesn't, where ~/.mpiapp is:
-H localhost -np 1 prog1
-H A.lan -np 4 prog2
-H B.lan -np 4 prog2
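For reference, each appfile line is parsed as one application context, taking the same options as one colon-separated segment of the command line, so the working command above would correspond to an appfile along these lines. This is a sketch: whether `-H` is accepted inside an appfile can depend on the Open MPI version, and `-host` is the long-form spelling of the same option.

```
# ~/.mpiapp -- one application context per line
-np 1 -host localhost prog1
-np 4 -host A.lan prog2
-np 4 -host B.lan prog2
```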
both A.lan and B.lan?
Yes both A and B have exactly 4 cores each.
>
> Cheers,
>
> Gilles
>
>
> On Sunday, October 16, 2016, MM wrote:
>>
>> Hi,
>>
>> openmpi 1.10.3
>>
>> this call:
>>
>> mpirun --hostfile ~/.mpihosts -H localhos
Hello,
Given MPI nodes 0..N-1, 0 being the root (master) node,
and trying to determine the maximum value of a function over a large
range of values of its parameters,
What are the differences between, if any:
1. At node i:
evaluate f for each of the values of the parameter space assigned to i
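One way to read option 1 is a static block decomposition: each rank evaluates f over a contiguous slice of the flattened parameter space, keeps its local maximum, and the root combines the local maxima (e.g. with MPI_Reduce and MPI_MAX). A minimal sketch of the slicing arithmetic, with `block_range` a hypothetical helper rather than anything from the thread:

```python
def block_range(total, size, rank):
    """Contiguous half-open slice [lo, hi) of `total` parameter combinations
    for `rank` out of `size` ranks; early ranks absorb the remainder, so the
    slices are disjoint and cover 0..total-1 exactly."""
    base, rem = divmod(total, size)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

# Example: 10 combinations over 3 ranks.
print([block_range(10, 3, r) for r in range(3)])
```

Each rank would then loop `for k in range(lo, hi)`, decode k into a parameter tuple, evaluate f, and participate in a single max-reduction at the end.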
On 27 October 2016 at 18:35, MM wrote:
> Hello,
>
> Given mpi nodes 0 N-1, 0 being root, master node.
> and trying to determine the maximum value of a function over a large
> range of values of its parameters,
>
> What are the differences between, if any:
>
> 1.
Hello,
boost: 1.46.1
openmpi: 1.5.3
winxp : 64bit
For openmpi mailing list users: Boost comes with Boost.MPI, a native C++
library that wraps around any available MPI-1 implementation.
Boost libraries can be built with bjam, a tool that is part of a build
system. It comes wit
on winxp, with the following net setup (just localhost, is it on?)
C:\trunk-build-release>ipconfig /all
Windows IP Configuration
Host Name . . . . . . . . . . . . : SOMEHOSTNAME
Primary Dns Suffix . . . . . . . : DOMAIN.SOMECO.COM
Node Type . . . . . . . . . . . . : Hyb
will the release
and debug versions of the libs built on Windows 7 also work on XP?
I would definitely rather use prebuilt openmpi libs, as it's easier to change
versions,
thanks,
MM
-Original Message-
if the interface is down, should localhost still allow mpirun to run mpi
processes?
file for Release, build boost mpi,
override for Debug, build for Debug.
thanks,
MM
related to DLLs somehow).
I gather this MPI_Address() function resides in libmpi.lib and libmpid.lib
PS: I didn't have these link errors when I built against the prebuilt win
libraries from the website; what are the CMake flags for those?
Thanks,
MM
I took off the OMPI_IMPORTS actually, and it now compiles correctly. Maybe
those are to be defined only if I had built the shared-lib version of the mpi libs.
thanks
From: Shiqing Fan [mailto:f...@hlrs.de]
Sent: 19 November 2011 04:45
To: Open MPI Users
Cc: MM
Subject: Re: [OMPI users] vs2010
orte_ess_set_name failed
--> Returned value Not found (-13) instead of ORTE_SUCCESS
--
[LLDNRATDHY9H4J:04960] [[1282,0],0] ORTE_ERROR_LOG: Not found in file
C:\Program Files\openmpi-1.5.4\orte\tools\orterun\orterun.c at line 616
any help is appreciated,
MM
bs of openmpi.
but to be able to link against vs2010 Release libs of openmpi, I need them
to be linked against the Release c runtime, so I might as well link against
the debug version of the openmpi libs.
Your help is very appreciated,
MM
-Original Message-
From: Shiqing Fan [mailto:f...@hl
Hi Shiqing,
Is the info provided useful to understand what's going on?
Alternatively, is there a way to get the provided binaries for win but off
trunk rather than off 1.5.4 as on the website, because I don't have this
problem when I link against those libs,
Thanks
MM
-Origin
those and that may work
MM
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Markus Stiller
Sent: 24 November 2011 20:41
To: us...@open-mpi.org
Subject: [OMPI users] open-mpi error
Hello,
I have some problems with MPI; I looked in the FAQ
t,
-Original Message-
From: Shiqing Fan [mailto:f...@hlrs.de]
Sent: 25 November 2011 22:19
To: MM
Subject: Re: [OMPI users] orte_debugger_select and orte_ess_set_name failed
Hi MM,
Do you really want to build Open MPI by yourself? If you only need the
libraries, probably you may stick to
I built openmpi static libs (with DLL C/C++ runtime).
OMPI_IMPORTS is __not__ defined; that's how I got it to compile.
MM
-Original Message-
From: Shiqing Fan [mailto:f...@hlrs.de]
Sent: 25 November 2011 22:19
To: MM
Subject: Re: [OMPI users] orte_debugger_select and orte_ess_set
fantastic, thank you very much,
-Original Message-
From: Shiqing Fan [mailto:f...@hlrs.de]
Sent: 29 November 2011 14:10
To: MM
Cc: 'Open MPI Users'
Subject: Re: [OMPI users] orte_debugger_select and orte_ess_set_name failed
Hi MM,
That doesn't really help.
Do you need
shared across
the threads in the same process.
I'd be curious to see some timing comparisons.
MM
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of amjad ali
Sent: 10 December 2011 20:22
To: Open MPI Users
Subject: [OMPI users] How to justify the use MP
x/mac distributions to do similarly.
It would also be useful to publish the cmake flags used by default to
produce the win binaries.
I am available to test the packages if possible; also, is there a wiki for
requests or a similar system where I should file the above?
MM
Regards,
From: Shiqin
. . . . . . : No
Ethernet adapter Wireless Network Connection:
Media State . . . . . . . . . . . : Media disconnected
Description . . . . . . . . . . . : Intel(R) WiFi Link 5100 AGN
Physical Address. . . . . . . . . :
Regards,
MM
:54
To: Open MPI Users
Subject: Re: [OMPI users] localhost only
Have you tried to specify the hosts with something like this?
mpirun -np 2 -host localhost ./my_program
See 'man mpirun' for more details.
I hope it helps,
Gus Correa
On Jan 16, 2012, at 6:34 PM, MM wrote:
>
Even with a -host localhost ? Is there a way to change that?
I have a long commute from work and I run 4 mpi processes on my quadcore
laptop, and while commuting, there's no connection:-)
MM
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
both.
Thanks,
MM
From: Shiqing Fan [mailto:f...@hlrs.de]
Sent: 17 January 2012 15:06
To: MM
Cc: 'Open MPI Users'; Jeff Squyres
Subject: Re: feature requests: mpic++ to report both release and debug flags
Hi MM,
Actually option 3 has already been implemented for Windows build, a
+ openmpi but single box
a shared-memory openmpi multiprocess run is not necessarily worse than a
single-process multithreaded openmp one.
'-mca btl sm,self' indeed didn't work,
Ralph, please let me know if testing required.
MM
-Original Message-
From: users-boun...@open-mpi.org [mailto:use
travel and just got back,
> so I'll take a look and see why we aren't doing so.
perhaps this was a simple implementation?
thanks
MM
On 18 December 2012 22:04, Stephen Conley wrote:
> Hello,
>
>
> I have installed CMake version 2.8.10.2 and OpenMPI version 1.6.2 on a 64
> bit Windows 7 computer.
>
>
> OpenMPI is installed in “C:\program files\OpenMPI” and the path has been
> updated to include the bin
I'd like to report to the root process some progress indicator, i.e. 40%
done so far and so on.
What is the customary solution?
Thanks
MM
not naturally in sync.
Would you suggest modifying the loop to do an MPI_Isend after x iterations
(for the clients) and an MPI_Irecv on the root?
Thanks MM
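The throttling side of that pattern (post one non-blocking progress message every x iterations rather than every iteration) is independent of MPI and can be sketched on its own. Here the `send_progress` callback is a stand-in for an MPI_Isend of one integer to rank 0; on the root, the matching piece would be a posted MPI_Irecv polled with MPI_Test between its own work. The function name and the percent encoding are assumptions for illustration.

```python
def run_with_progress(n_iters, report_every, send_progress):
    """Worker loop: after every `report_every` iterations (and at the end),
    report percent complete via `send_progress` -- the stand-in for a
    non-blocking MPI_Isend to the root."""
    for i in range(1, n_iters + 1):
        # ... one unit of real work would go here ...
        if i % report_every == 0 or i == n_iters:
            send_progress(100 * i // n_iters)   # integer percent complete

reports = []
run_with_progress(1000, 250, reports.append)
print(reports)
```

Keeping the sends non-blocking and infrequent means the workers never stall on the root, at the cost of the root's progress view lagging by up to `report_every` iterations.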
Of course, by this you mean, with the same total number of processes, e.g.
64 processes on 1 node using shared mem, vs 64 processes spread over 2 nodes
(32 each)?
On 29 October 2013 14:37, Ralph Castain wrote:
> As someone previously noted, apps will always run slower on multiple nodes
>
Hello,
Is there a canonical way to obtain a globally unique 64-bit unsigned integer
across all mpi processes, multiple times?
Thanks
MM
> MPI_Comm_size(MPI_COMM_WORLD, &size);
> unique += size;
>
> If this isn't sufficient, please ask the question differently.
>
> There is no canonical method for this.
>
> Jeff
>
> Sent from my iPhone
>
> On Jan 3, 2014, at 3:50 AM, MM wrote:
>
> Hello,
> Is there a
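The scheme in the reply (seed each rank's counter with its rank, then stride by the communicator size) needs no communication beyond the initial MPI_Comm_rank/MPI_Comm_size, since the arithmetic progressions of different ranks never collide. A sketch of just that arithmetic, with the MPI calls replaced by plain `rank`/`size` arguments:

```python
from itertools import count

def unique_ids(rank, size):
    """Globally unique integers without communication: the sequence
    rank, rank + size, rank + 2*size, ... is disjoint from every
    other rank's sequence."""
    return count(start=rank, step=size)

# Simulate 4 ranks each drawing 3 ids: together they cover 0..11 exactly.
gens = [unique_ids(r, 4) for r in range(4)]
ids = [next(g) for g in gens for _ in range(3)]
print(sorted(ids))
```

In a real program the counter would be a 64-bit unsigned value per process; overflow only becomes a concern after about 2^64 / size draws per rank.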
with msvc, and stepping into MPI_Isend (I
don't have the sources for it). At that moment, suddenly a new thread is
created, and a call to f() is made.
This all sounds quite nightmarish.
I understand I haven't presented any specific code to receive an accurate
answer, but any help is appreciated.
Regards,
MM
Apologies for the issue,
I was getting output from the 2 processes, and their threads, and I was
focused on only 1 process.
Please ignore,
On 13 February 2014 14:33, MM wrote:
> Hello,
>
> I am running an MPI application on a single host, with a dual quadcore with
> hyperthreading
On 13 February 2014 15:33, Matthias Troyer wrote:
> Hi,
>
> In order to use MPI in a multi-threaded environment, even when only one
> thread uses MPI, you need to request the necessary level of thread support
> in the environment constructor. Then you can check whether your MPI
> implementation
my ompi_info says (openmpi)
Threading support: No
Does that mean it's not supported?
If so, what to do?
On 13 February 2014 17:00, Matthias Troyer wrote:
>
>
>
>
>
> On Feb 13, 2014, at 17:44, MM wrote:
>
> On 13 February 2014 15:33, Matthias Troyer wrote:
>
Hello,
With a miniature case of 3 linux quadcore boxes, linked via 1Gbit Ethernet,
I have a UI that runs on 1 of the 3 boxes, and that is the root of the
communicator.
I have a 1-second-running function of up to 10 parameters; my parameter
space fits in the memory of the root, and its size is N