Hi Jeff,
Here is the output of lstopo on one of the workers (thanks
Jean-Christophe):
> lstopo
Machine (35GB)
  NUMANode L#0 (P#0 18GB) + Socket L#0 + L3 L#0 (8192KB)
    L2 L#0 (256KB) + L1 L#0 (32KB) + Core L#0
      PU L#0 (P#0)
      PU L#1 (P#8)
    L2 L#1 (256KB) + L1 L#1 (32KB) + Core
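(Note, for the hyperthreading question elsewhere in this thread: PU P#0 and
PU P#8 sit under the same Core L#0, i.e. each core exposes two hardware
threads, so hyperthreading does appear to be enabled on this node.)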
Sounds to me like you are getting a heavy dose of version contamination. I
can't see any way for a single instance of mpirun to launch two procs given an
input line of
mpirun -np 1 my_prog
There isn't any way OMPI can (or would) uninstall a prior installation of
MPICH2, or any other MPI for that matter.
Afraid not - though you could alias your program name to be "nice --10 prog"
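For example (a sketch; "nicewrap.sh" and "my_prog" are placeholder names):
make an executable nicewrap.sh whose body, after a #!/bin/sh first line, is
the single line
exec nice -n 10 ./my_prog "$@"
and then launch that instead:
mpirun -np 4 ./nicewrap.sh
Note that positive niceness values lower priority; a negative one like the
--10 above raises it and generally requires root on the workers.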
On Jan 6, 2011, at 3:39 PM, David Mathog wrote:
> Is it possible using mpirun to specify the nice value for each program
> run on the worker nodes? It looks like some MPI implementations allow
> this, but "mpirun --help" doesn't say anything about it.
It appears to be due to not being able to fully remove MPICH2 on machines
that have an aged install. Then Open MPI will not overwrite the remaining
crumbs and is not automagically well configured. I tested this hypothesis on
an additional pair of recent/aged MPICH2 install machines. Currently
rein
Is it possible using mpirun to specify the nice value for each program
run on the worker nodes? It looks like some MPI implementations allow
this, but "mpirun --help" doesn't say anything about it.
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
On Jan 6, 2011, at 5:07 PM, Gilbert Grosdidier wrote:
> Yes Jeff, I'm pretty sure indeed that hyperthreading is enabled, since 16
> CPUs are visible in the /proc/cpuinfo pseudo-file, while it's an 8-core
> Nehalem node.
>
> However, I always carefully checked that only 8 processes are running on
> each node.
Yes Jeff, I'm pretty sure indeed that hyperthreading is enabled, since
16 CPUs
are visible in the /proc/cpuinfo pseudo-file, while it's an 8-core
Nehalem node.
However, I always carefully checked that only 8 processes are running
on each node.
Could it be that they are assigned to 8 hyperthreads
On Thu, 6 Jan 2011, Jeff Squyres wrote:
Jeremiah --
Is this the same as:
https://svn.open-mpi.org/trac/ompi/ticket/2656
Not that I can tell -- my code is only using built-in datatypes, while
that bug appears to require user-defined datatypes.
-- Jeremiah Willcock
Jeff, I don't believe it is. I'm still waiting for a compile to finish to
test, but there shouldn't be a problem with predefined datatypes. It's just
user-defined ones that the ddt->opal move screwed up (see the sketch below).
Brian
On Jan 6, 2011, at 2:19 PM, Jeff Squyres wrote:
> Jeremiah --
>
> Is this the same as:
> https://svn.open-mpi.org/trac/ompi/ticket/2656
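For anyone trying to reproduce the distinction, a minimal sketch of a
user-defined datatype (the names are illustrative), as opposed to a built-in
one such as MPI_INT:

#include <mpi.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  MPI_Datatype pair;                      /* user-defined: two contiguous ints */
  MPI_Type_contiguous(2, MPI_INT, &pair);
  MPI_Type_commit(&pair);
  /* communicating with "pair" exercises the user-defined path that Brian
     mentions; the same call with plain MPI_INT exercises the built-in path */
  MPI_Type_free(&pair);
  MPI_Finalize();
  return 0;
}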
Rob,
Thanks for the clarification. I am using an old-school solution to the
problem, namely a little 12-line subroutine that simply reverses
the order of the bytes in each floating-point word. Before Fortran
compilers supported 'byteswapping', this was the way we did it, and it
still does the job
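For reference, the same trick in C (a sketch assuming 8-byte floating-point
words; Tom's original is a Fortran subroutine):

#include <stddef.h>

/* Reverse the byte order of each 8-byte word in buf, in place. */
static void swap8(void *buf, size_t nwords) {
  unsigned char *p = (unsigned char *)buf;
  for (size_t i = 0; i < nwords; ++i, p += 8) {
    for (size_t j = 0; j < 4; ++j) {
      unsigned char t = p[j];
      p[j] = p[7 - j];
      p[7 - j] = t;
    }
  }
}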
Jeremiah --
Is this the same as:
https://svn.open-mpi.org/trac/ompi/ticket/2656
On Jan 6, 2011, at 3:58 PM, Jeremiah Willcock wrote:
> When I run the following program on one rank using Open MPI 1.5:
>
> #include <mpi.h>
> #include <stdlib.h>
> #include <string.h>
>
> int main(int argc, char** argv) {
>   int size = 128;
On Jan 6, 2011, at 4:10 PM, Gilbert Grosdidier wrote:
> Where is the lstopo command located on SuSE Linux, please?
'fraid I don't know anything about Suse... :-(
It may be named hwloc-ls...?
> And/or hwloc-bind, which seems related to it?
hwloc-bind is definitely related, but it's a different utility
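(Typical usage, for reference: hwloc-bind runs a command bound to a given
location, e.g.
hwloc-bind core:2 -- ./my_prog
while lstopo / hwloc-ls merely prints the topology.)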
Hi Jeff,
Where is the lstopo command located on SuSE Linux, please?
And/or hwloc-bind, which seems related to it?
Thanks, G.
On 06/01/2011 21:21, Jeff Squyres wrote:
(now that we're back from vacation)
Actually, this could be an issue. Is hyperthreading enabled on your machine?
Can you send the text output from running hwloc's "lstopo" command on your
compute nodes?
When I run the following program on one rank using Open MPI 1.5:
#include <mpi.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char** argv) {
  int size = 128;
  unsigned char one = 1;
  MPI_Init(&argc, &argv);
  unsigned char* data = (unsigned char*)malloc(size * sizeof(unsigned char));
  memset(data, 0, size);
On Tue, Dec 21, 2010 at 06:38:59PM -0800, Tom Rosmond wrote:
> I use the function MPI_FILE_SET_VIEW with the 'native'
> data representation and correctly write a file with MPI_FILE_WRITE_ALL.
> However, if I change to the 'external32' representation, the file is
> truncated, with a length that sugg
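For context: "external32" is MPI's canonical big-endian external data
representation, so the on-disk size of a datatype can differ from its
"native" in-memory size. A minimal sketch of the switch in question (error
handling omitted; the file handle is assumed already open):

#include <mpi.h>

/* Same collective write under two data representations: with "native" the
   file holds the in-memory bytes verbatim; with "external32" each element
   is converted to the canonical big-endian format on the way out. */
void write_with_rep(MPI_File fh, char *rep, double *buf, int n) {
  MPI_File_set_view(fh, 0, MPI_DOUBLE, MPI_DOUBLE, rep, MPI_INFO_NULL);
  MPI_File_write_all(fh, buf, n, MPI_DOUBLE, MPI_STATUS_IGNORE);
}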
On Jan 1, 2011, at 1:30 AM, 阚圣哲 wrote:
> I want to know:
> 1) Do I need a special IB switch to use XRC?
> 2) How can I use XRC in ompi, and in which situations will the XRC feature
> bring benefit?
> 3) Is the only way to use XRC to use "-mca btl_openib_cpc_include
> xoob
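(For readers following along: XRC is a ConnectX HCA feature rather than a
switch feature, and in Open MPI's openib BTL it is typically requested
through the receive-queue specification, e.g. something like
mpirun --mca btl_openib_receive_queues X,4096,1024:X,65536,1024 ...
where the leading "X" marks XRC queues; the numbers here are illustrative
only, and "ompi_info --param btl openib" lists what your build actually
supports.)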
Many thanks, folks!
A vibrant community is definitely one of the really important things that make
the Open MPI project a success.
On Jan 6, 2011, at 3:14 PM, Hicham Mouline wrote:
> ditto,
>
> Hicham Mouline
>
>> -Original Message-
>> From: users-boun...@open-mpi.org [mailto:users-
(now that we're back from vacation)
Actually, this could be an issue. Is hyperthreading enabled on your machine?
Can you send the text output from running hwloc's "lstopo" command on your
compute nodes?
I ask because if hyperthreading is enabled, OMPI might be assigning one process
per *hyperthread*
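(A quick way to check, assuming Open MPI 1.4 or later: launching with
mpirun -np 8 --bind-to-core --report-bindings ./my_prog
prints where each rank was bound, which makes it obvious if two ranks share
the two hardware threads of one core.)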
ditto,
Hicham Mouline
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Andrew Ball
> Sent: 06 January 2011 17:48
> To: Open MPI Users
> Subject: Re: [OMPI users] IRC channel
>
> Hello Jeff,
>
>> JS> True. I think the challenge will be to find
>> > people who can staff these channels.
Hello Jeff,
JS> True. I think the challenge will be to find
> people who can staff these channels.
I know very little about MPI, but I'll make a
point of loitering in #openmpi when I'm logged into
Freenode. I've already met a couple of people in
there this week.
- Andy Ball
That might well be a good idea (create an MCA param for the number of send /
receive CQEs).
It certainly seems that OMPI shouldn't be scaling *any* IB resource based on
the number of peer processes without at least some kind of upper bound.
Perhaps an IB vendor should reply here...
On Dec
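(In the meantime, one way to see which knobs a given build already exposes:
ompi_info --param btl openib | grep -i cq
will show whether a completion-queue sizing parameter such as
btl_openib_cq_size is available in your version.)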
On Jan 5, 2011, at 9:28 AM, Hicham Mouline wrote:
> Do people looking at this list ever join the #openmpi IRC channel. The
> channel seems to point to the website already.
I know that this IRC channel exists, but I'm afraid I don't join it (and my
corporate overlords seem to block IRC -- doh!).
On Jan 5, 2011, at 10:36 AM, Bernard Secher - SFME/LGLS wrote:
> MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char* service)
> {
> int clt=0;
> MPI_Request request; /* request for non-blocking communication */
> MPI_Comm gcom;
> MPI_Status status;
> char port_name_c
Is it a bug in Open MPI v1.5.1?
Bernard
Bernard Secher - SFME/LGLS wrote:
Hello,
What are the changes between Open MPI 1.4.1 and 1.5.1 regarding the MPI-2
name publishing service?
I have 2 programs which connect to each other via the MPI_Publish_name and
MPI_Lookup_name subroutines and ompi-server.
That's OK
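For anyone reproducing this, the basic pattern is roughly as follows (a
sketch with all error handling omitted; both sides must be launched with
mpirun's --ompi-server option pointing at a running ompi-server):

#include <mpi.h>

/* server side: open a port, publish it, wait for one client */
MPI_Comm serve(char *service) {
  char port[MPI_MAX_PORT_NAME];
  MPI_Comm client;
  MPI_Open_port(MPI_INFO_NULL, port);
  MPI_Publish_name(service, MPI_INFO_NULL, port);
  MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
  return client;
}

/* client side: look the port up and connect to it */
MPI_Comm attach(char *service) {
  char port[MPI_MAX_PORT_NAME];
  MPI_Comm server;
  MPI_Lookup_name(service, MPI_INFO_NULL, port);
  MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
  return server;
}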