[OMPI users] IRIX: unrecognized opcode `leaf(opal_atomic_mb)'

2008-04-26 Thread Daniel
f the codes and add "-n32" option where ld is used? Please help. I'd really appreciate your help. Daniel -- Below is what I met when I do "make". -- at

[OMPI users] IRIX Assembler messages unrecognized opcode > `leaf(opal_atomic_mb)'

2008-04-26 Thread Daniel
opcode > `leaf(opal_atomic_mb)' The same question I found in the mailing list is from 2005, by Jonathan Day, http://www.open-mpi.org/community/lists/users/2005/09/0138.php 3 years have passed; I wonder why this error still remains unsolved on IRIX, or am I missing something? Best R

Re: [O-MPI users] Question about support for finding MPI processes from a tool

2005-08-05 Thread David Daniel
automatically attach to parallel jobs. We will also consider supporting other interfaces... if publicly documented. David -- David Daniel Advanced Computing Laboratory, LANL, MS-B287, Los Alamos NM 87545, USA

[OMPI users] File locking in ADIO, OpenMPI 1.6.4

2014-04-08 Thread Daniel Milroy
Hello, Recently a couple of our users have experienced difficulties with compute jobs failing with OpenMPI 1.6.4 compiled against GCC 4.7.2, with the nodes running kernel 2.6.32-279.5.2.el6.x86_64. The error is: File locking failed in ADIOI_Set_lock(fd 7,cmd F_SETLKW/7,type F_WRLCK/1,whence 0

Re: [OMPI users] File locking in ADIO, OpenMPI 1.6.4

2014-04-14 Thread Daniel Milroy
Subject: Re: [OMPI users] File locking in ADIO, OpenMPI 1.6.4 Sorry for the delay in replying. Can you try upgrading to Open MPI 1.8, which was released last week? We refreshed the version of ROMIO that is included in OMPI 1.8 vs. 1.6. On Apr 8, 2014, at 6:49 PM, Daniel Milroy wrote: > He

Re: [OMPI users] File locking in ADIO, OpenMPI 1.6.4

2014-04-15 Thread Daniel Milroy
't know but will pass these questions on to the users. Thank you, Dan Milroy On 4/14/14, 2:23 PM, "Rob Latham" wrote: > > >On 04/08/2014 05:49 PM, Daniel Milroy wrote: >> Hello, >> >> The file system in question is indeed Lustre, and mounting with flock
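
The fix discussed in this thread is remounting the Lustre clients with the flock option (localflock gives only node-local locking, which is usually not enough for ROMIO). A minimal sketch of an fstab entry, with the MGS address and file system name as placeholders:

    # /etc/fstab on the compute nodes (MGS address and fs name are placeholders)
    mgs01@o2ib:/scratch   /scratch   lustre   defaults,flock   0 0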

Re: [OMPI users] Docker Cluster Queue Manager

2016-06-04 Thread Daniel Letai
Did you check shifter? https://www.nersc.gov/assets/Uploads/cug2015udi.pdf , http://www.nersc.gov/research-and-development/user-defined-images/ , https://github.com/NERSC/shifter On 06/03/2016 01:58 AM, Rob Na

Re: [OMPI users] Docker Cluster Queue Manager

2016-06-06 Thread Daniel Letai
That's why they have acl in ZoL, no? just bring up a new filesystem for each container, with acl so only the owning container can use that fs, and you should be done, no? To be clear, each container would have to have a unique uid for this to work, but together

Re: [OMPI users] Docker Cluster Queue Manager

2016-06-07 Thread Daniel Letai
e directory to be mounted. Daniel, we've had bad experiences with ZoL. Its allocation algorithm degrades rapidly when the file system gets over 80% full. It still is not integrated into major distros, which leads to dkms nightmares on system

[OMPI users] Potential developer to reinstate Xgrid support

2010-09-30 Thread Daniel Beatty
place. Thank you, Daniel Beatty Computer Scientist, Detonation Sciences Branch Code 474300D 2401 E. Pilot Plant Rd. M/S 1109 China Lake, CA 93555 daniel.bea...@navy.mil (760)939-7097

Re: [OMPI users] Error when using OpenMPI with SGE multiple hosts

2010-11-17 Thread Daniel Gruber
cessary reformulation of the request (modify #slots or #cores if necessary). Did I miss some important points from SGE/OGE point of view? Cheers Daniel Am Dienstag, den 16.11.2010, 18:24 -0700 schrieb Ralph Castain: > > > On Tue, Nov 16, 2010 at 12:23 PM, Terry Dontje >

[OMPI users] Segfault on mpirun with OpenMPI 1.4.5rc2

2012-01-31 Thread Daniel Milroy
Hello, I have built OpenMPI 1.4.5rc2 with Intel 12.1 compilers in an HPC environment. We are running RHEL 5, kernel 2.6.18-238 with Intel Xeon X5660 cpus. You can find my build options below. In an effort to test the OpenMPI build, I compiled "Hello world" with an mpi_init call in C and Fortran

Re: [OMPI users] Segfault on mpirun with OpenMPI 1.4.5rc2

2012-02-01 Thread Daniel Milroy
> --without-memory-manager configure option? > > > On Jan 31, 2012, at 2:19 PM, Daniel Milroy wrote: >> Hello, >> >> I have built OpenMPI 1.4.5rc2 with Intel 12.1 compilers in an HPC >> environment.  We are running RHEL 5, kernel 2.6.18-238 with Intel Xeon >>
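
A hedged sketch of the configure invocation being discussed, i.e. an Intel-compiled build with Open MPI's memory manager disabled (prefix and compiler names are illustrative, not taken from the thread):

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        --prefix=/opt/openmpi-1.4.5rc2 \
        --without-memory-manager        # skip the ptmalloc2 memory hooks
    make -j8 && make install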

Re: [OMPI users] Segfault on mpirun with OpenMPI 1.4.5rc2

2012-02-01 Thread Daniel Milroy
Hi Götz, I don't know whether we can implement your suggestion; it is dependent on the terms of our license with Intel. I will take this under advisement. Thank you very much. Dan Milroy 2012/2/1 Götz Waschk : > On Tue, Jan 31, 2012 at 8:19 PM, Daniel Milroy > wrote: >> He

[OMPI users] setsockopt() fails with EINVAL on solaris

2012-07-30 Thread Daniel Junglas
, ORTE_NAME_PRINT(ORTE_PROC_MY_NAME), Can anybody confirm that the patch is good/correct? In particular that the '__sun' part is the right thing to do? Thanks, Daniel smime.p7s Description: S/MIME Cryptographic Signature

Re: [OMPI users] setsockopt() fails with EINVAL on solaris

2012-07-30 Thread Daniel Junglas
I built from a tarball, not svn. In the VERSION file I have svn_r=r26429 Is that the information you asked for? Daniel users-boun...@open-mpi.org wrote on 07/30/2012 04:15:45 PM: > > Do you know what r# of 1.6 you were trying to compile? Is this via > the tarball or svn? &g

Re: [OMPI users] setsockopt() fails with EINVAL on solaris

2012-07-31 Thread Daniel Junglas
Thanks, configuring with '--enable-mca-no-build=rmcast' did the trick for me. Daniel users-boun...@open-mpi.org wrote on 07/30/2012 04:21:13 PM: > FWIW: the rmcast framework shouldn't be in 1.6. Jeff and I are > testing removal and should have it out of there soon. &g

[OMPI users] [threads] How to configure Open MPI for thread support

2012-10-08 Thread Daniel Mitchell
nable-thread-multiple even to use FUNNELED and SERIALIZED threads? Daniel
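
For reference, the thread level is requested at run time with MPI_Init_thread; the configure options only cap the highest level the library can grant. A minimal sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Ask for FUNNELED: only the main thread will make MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            fprintf(stderr, "got thread level %d, wanted FUNNELED\n", provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        /* ... OpenMP or pthread workers that never call MPI ... */
        MPI_Finalize();
        return 0;
    }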

[OMPI users] Performance/stability impact of thread support

2012-10-29 Thread Daniel Mitchell
everyone but me, apparently). Does enabling thread support impact performance/stability? Daniel

[OMPI users] mpi problems/many cpus per node

2012-12-14 Thread Daniel Davidson
I have had to cobble together two machines in our rocks cluster without using the standard installation, they have efi only bios on them and rocks doesn't like that, so it is the only workaround. Everything works great now, except for one thing. MPI jobs (openmpi or mpich) fail when started fr

Re: [OMPI users] mpi problems/many cpus per node

2012-12-14 Thread Daniel Davidson
d line - this will report all the local proc launch debug and hopefully show you a more detailed error report. On Dec 14, 2012, at 12:29 PM, Daniel Davidson wrote: I have had to cobble together two machines in our rocks cluster without using the standard installation, they have efi only bios

Re: [OMPI users] mpi problems/many cpus per node

2012-12-14 Thread Daniel Davidson
] odls:kill_local_proc working on WILDCARD On 12/14/2012 04:11 PM, Ralph Castain wrote: Sorry - I forgot that you built from a tarball, and so debug isn't enabled by default. You need to configure --enable-debug. On Dec 14, 2012, at 1:52 PM, Daniel Davidson wrote: Oddly enough, adding this debugging

Re: [OMPI users] mpi problems/many cpus per node

2012-12-17 Thread Daniel Davidson
might try running this with the 1.7 release candidate, or even the developer's nightly build. Both use a different timing mechanism intended to resolve such situations. On Dec 14, 2012, at 2:49 PM, Daniel Davidson wrote: Thank you for the help so far. Here is the information that the debugg

Re: [OMPI users] mpi problems/many cpus per node

2012-12-17 Thread Daniel Davidson
, Daniel Davidson wrote: I will give this a try, but wouldn't that be an issue as well if the process was run on the head node or another node? So long as the mpi job is not started on either of these two nodes, it works fine. Dan On 12/14/2012 11:46 PM, Ralph Castain wrote: It must be making co

Re: [OMPI users] mpi problems/many cpus per node

2012-12-17 Thread Daniel Davidson
n it could be that launch from a backend node isn't allowed (e.g., on gridengine). On Dec 17, 2012, at 8:28 AM, Daniel Davidson wrote: This looks to be having issues as well, and I cannot get any number of processors to give me a different result with the new version. [root@compute-2-1

Re: [OMPI users] mpi problems/many cpus per node

2012-12-17 Thread Daniel Davidson
, we are going to attempt to send a message from node 2-0 to node 2-1 on the 10.1.255.226 address. Is that going to work? Anything preventing it? On Dec 17, 2012, at 8:56 AM, Daniel Davidson wrote: These nodes have not been locked down yet so that jobs cannot be launched from the backend, at

Re: [OMPI users] mpi problems/many cpus per node

2012-12-17 Thread Daniel Davidson
:01 compute-2-0 sshd[24868]: pam_unix(sshd:session): session opened for user root by (uid=0) On 12/17/2012 11:16 AM, Daniel Davidson wrote: A very long time (15 minutes or so) I finally received the following in addition to what I just sent earlier: [compute-2-0.local:24659] [[32341,0],1

Re: [OMPI users] mpi problems/many cpus per node

2012-12-17 Thread Daniel Davidson
compute-2-1 Warning: untrusted X11 forwarding setup failed: xauth key data not generated Warning: No xauth data; using fake authentication data for X11 forwarding. Last login: Mon Dec 17 16:12:32 2012 from biocluster.local [root@compute-2-1 ~]# On 12/17/2012 03:39 PM, Doug Reeder wrote: Daniel

Re: [OMPI users] mpi problems/many cpus per node

2012-12-19 Thread Daniel Davidson
I figured this out. ssh was working, but scp was not due to an mtu mismatch between the systems. Adding MTU=1500 to my /etc/sysconfig/network-scripts/ifcfg-eth2 fixed the problem. Dan On 12/17/2012 04:12 PM, Daniel Davidson wrote: Yes, it does. Dan [root@compute-2-1 ~]# ssh compute-2-0
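
For readers hitting the same symptom, a sketch of the interface change described above (only the MTU line comes from the thread; the other fields are placeholders):

    # /etc/sysconfig/network-scripts/ifcfg-eth2
    DEVICE=eth2
    ONBOOT=yes
    MTU=1500

    # apply without a reboot
    ifdown eth2 && ifup eth2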

[OMPI users] mpirun completes for one user, not for another

2013-02-11 Thread Daniel Fetchinson
erconnect is infiniband. I've really run out of ideas what else to compare between user A and B. Thanks for any hints, Daniel -- Psss, psss, put it down! - http://www.cafepress.com/putitdown -- Psss, psss, put it down! - http://www.cafepress.com/putitdown

Re: [OMPI users] mpirun completes for one user, not for another

2013-02-11 Thread Daniel Fetchinson
ile and apparently in non-interactive logins .bash_profile is not sourced. Only .bashrc is sourced. So if the PATH is set in .bashrc everything is fine and the problem went away. Thanks again, Daniel > Also check the LD_LIBRARY_PATH. > > > On Feb 11, 2013, at 7:11 AM, Daniel Fetchinson
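
The underlying issue: non-interactive ssh logins source ~/.bashrc but not ~/.bash_profile, so the MPI paths must be set in the former. A sketch with an illustrative install prefix:

    # ~/.bashrc  (read by the non-interactive ssh sessions mpirun starts)
    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH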

[OMPI users] Multi-threading support for openib

2013-11-27 Thread Daniel Cámpora
al questions related to these. Does --enable-opal-multi-threads have any impact on the BTL multi-threading support? (If there's more documentation on what this does I'd be glad to read it). Is there any additional configuration tag necessary for enabling opal-multi-threads to work? Cheers, t

[OMPI users] valgrind invalid reads for large self-sends using thread_multiple

2014-02-10 Thread Daniel Ibanez
Hello, I have used OpenMPI in conjunction with Valgrind for a long time now, and developed a list of suppressions for known false positives over time. Now I am developing a library for inter-thread communication that is based on using OpenMPI with MPI_THREAD_MULTIPLE support. I have noticed that
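
A sketch of the setup being described: running the test under Valgrind with a user-maintained suppression file. The entry below is an illustrative template, not one of the author's suppressions; Open MPI also installs an openmpi-valgrind.supp with its known false positives under share/openmpi in the install prefix.

    mpirun -np 2 valgrind --suppressions=./ompi.supp ./self_send_test

    # ompi.supp -- one suppression entry (name and frames are placeholders)
    {
       ompi_known_false_positive
       Memcheck:Addr8
       ...
       obj:*/libopen-pal.so*
    }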

[OMPI users] Process termination problem

2007-08-16 Thread Daniel Spångberg
to force the mpirun (or orted, or...) to kill the whole MPI program when this happens? If one of the application processes die from a signal (I have tested SEGV and FPE) rather than just exiting the whole application is indeed killed. Best regards Daniel Spångberg

Re: [OMPI users] Process termination problem

2007-08-17 Thread Daniel Spångberg
ed between MPI_Init and exit/_exit. I'd rather not keep this "solution" for too long. If it is indeed so that the mpirun man-page is wrong and the code right, I'd rather push the proper error-handling solution. Best regards Daniel Spångberg On Fri, 17 Aug 2007 18:25:17

Re: [OMPI users] Process termination problem

2007-08-20 Thread Daniel Spångberg
, otherwise there's problems with things like call system) has been called before my atexit routine is called... Best regards Daniel On Mon, 20 Aug 2007 14:37:44 +0200, Sven Stork wrote: instead of doing dirty with the library you could try to register a cleanup function with atexit. T
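
A minimal sketch of the atexit approach suggested in the thread, assuming the goal is to turn a premature exit() into a job-wide abort; note that atexit handlers do not run on _exit() or on fatal signals, which is the limitation being discussed:

    #include <mpi.h>
    #include <stdlib.h>

    /* Runs on normal exit(); aborts the whole job if MPI was never finalized. */
    static void mpi_cleanup(void)
    {
        int finalized = 0;
        MPI_Finalized(&finalized);
        if (!finalized)
            MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        atexit(mpi_cleanup);      /* register after MPI_Init */
        /* ... application code that may call exit() on error ... */
        MPI_Finalize();
        return 0;
    }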

[OMPI users] Application using OpenMPI 1.2.3 hangs, error messages in mca_btl_tcp_frag_recv

2007-09-12 Thread Daniel Rozenbaum
_frag_recv: readv failed with errno=110 Excerpts from strace output, and ompi_info are attached below. Any advice would be greatly appreciated! Thanks in advance, Daniel strace on the orterun process: poll([{fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=5, events=POLLIN}, {fd=8, events=POLLIN

Re: [OMPI users] Application using OpenMPI 1.2.3 hangs, error messages in mca_btl_tcp_frag_recv

2007-09-17 Thread Daniel Rozenbaum
his particular run somehow triggers it?.. Could these messages also mean that some messages got lost due to these errors, and that's why the server thinks it still has some results to receive while the clients think they've sent everything out? Many thanks, Daniel Jeff Squyres wrote:

Re: [OMPI users] Application using OpenMPI 1.2.3 hangs, error messages in mca_btl_tcp_frag_recv

2007-09-19 Thread Daniel Rozenbaum
t, and those seem to have kept working all along, until the app got stuck. Once this valgrind experiment is over, I'll proceed to your other suggestion about the debug loop on the server side checking for any of the requests the app is waiting for being MPI_REQUEST_NULL. Many thanks, Daniel

Re: [OMPI users] Application using OpenMPI 1.2.3 hangs, error messages in mca_btl_tcp_frag_recv

2007-09-27 Thread Daniel Rozenbaum
at the beginning of the run and are processed correctly though. Also, I ran the same experiment on another cluster that uses slightly different hardware and network infrastructure, and could not reproduce the problem. Hope at least some of the above makes some sense. Any additional advice would be greatl

Re: [OMPI users] Application using OpenMPI 1.2.3 hangs, error messages in mca_btl_tcp_frag_recv

2007-09-28 Thread Daniel Rozenbaum
d()'s and three unprocessed Irecv()'s. I've upgraded to Open MPI 1.2.4, but this made no difference. Are there any internal logging or debugging facilities in Open MPI that would allow me to further track the calls that eventually result in the error in mca_btl_tcp_frag_recv() ? Tha

[OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-03 Thread Daniel Rozenbaum
the immediately preceding MPI_Probe and MPI_Get_elements return properly? Thanks, Daniel
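
For context, the usual probe-then-receive pattern looks like the sketch below; the key detail is receiving from the exact source and tag reported by the probe so the same message is matched (the helper name is illustrative):

    #include <mpi.h>
    #include <stdlib.h>

    /* Receive a message of unknown size from any source/tag on comm. */
    static char *recv_any(MPI_Comm comm, int *nbytes, MPI_Status *status)
    {
        MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, status);
        MPI_Get_count(status, MPI_BYTE, nbytes);
        char *buf = malloc(*nbytes);
        MPI_Recv(buf, *nbytes, MPI_BYTE, status->MPI_SOURCE, status->MPI_TAG,
                 comm, MPI_STATUS_IGNORE);
        return buf;
    }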

Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-18 Thread Daniel Rozenbaum
omplete == false" and calls opal_condition_wait(). Jeff Squyres wrote: Can you send a short test program that shows this problem, perchance? On Oct 3, 2007, at 1:41 PM, Daniel Rozenbaum wrote: Hi again, I'm trying to debug the problem I posted on several times recently; I thought I

Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-18 Thread Daniel Rozenbaum
ome reason, OMPI appears to have decided that it had not yet received the message. Perhaps a memory bug in your application...? Have you run it through valgrind, or some other memory-checking debugger, perchance? On Oct 18, 2007, at 12:35 PM, Daniel Rozenbaum wrote: Unfortunately, so far

[OMPI users] SCALAPACK: Segmentation Fault (11) and Signal code: Address not mapped (1)

2008-01-22 Thread Backlund, Daniel
Hello all, I am using OMPI 1.2.4 on a Linux cluster (Rocks 4.2). OMPI was configured to use the Pathscale Compiler Suite installed in the (NFS mounted on nodes) /home/PROGRAMS/pathscale. I am trying to compile and run the example1.f that comes with the ACML package from AMD, and I am unable

Re: [OMPI users] flash2.5 with openmpi

2008-01-25 Thread Daniel Pfenniger
Hi, Brock Palen wrote: Is anyone using flash with openMPI? we are here, but whenever it tries to write its second checkpoint file it segfaults once it gets to 2.2GB always in the same location. Debugging is a pain as it takes 3 days to get to that point. Just wondering if anyone else h

Re: [OMPI users] SCALAPACK: Segmentation Fault (11) and Signal code:Address not mapped (1)

2008-01-30 Thread Backlund, Daniel
eout instead of ORTE_SUCCESS. -- [compute-0-1.local:19365] OOB: Connection to HNP lost <<< END example1.output >>> Is it possible that the ACML libraries are incompatible with linking to my version of OMPI? Or like Jeff said, maybe it is just a Pathscale bug. I hope not. Danie

[OMPI users] MPI_Alltoallv and unknown data send sizes

2008-09-10 Thread Daniel Spångberg
here is no way of determining the length of the data sent by the sender on the receiving end, I see two options: Either always transmit too much data using MPI_Alltoall(v) or cook up my own routine based on PTP calls, probably MPI_Sendrecv is the best option. Am I missing something?

Re: [OMPI users] MPI_Alltoallv and unknown data send sizes

2008-09-10 Thread Daniel Spångberg
ome short tests, anyway if it turns out the alltoall/alltoallv combo is too slow. Thanks again! Daniel Den 2008-09-10 17:10:06 skrev George Bosilca : Daniel, Your understanding of the MPI standard requirement with regard to MPI_Alltoallv is now 100% accurate. The send count and datatype shou
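
The combo being discussed, sketched out: an MPI_Alltoall of the counts first, then MPI_Alltoallv for the payload (function and variable names are illustrative):

    #include <mpi.h>
    #include <stdlib.h>

    /* sendcounts[i] = number of ints this rank sends to rank i (known locally). */
    static void exchange_unknown_sizes(int *sendcounts, int *sendbuf, MPI_Comm comm)
    {
        int np;
        MPI_Comm_size(comm, &np);

        int *recvcounts = malloc(np * sizeof(int));
        /* Step 1: every rank learns how much it will receive from every peer. */
        MPI_Alltoall(sendcounts, 1, MPI_INT, recvcounts, 1, MPI_INT, comm);

        int *sdispls = malloc(np * sizeof(int));
        int *rdispls = malloc(np * sizeof(int));
        sdispls[0] = rdispls[0] = 0;
        for (int i = 1; i < np; i++) {
            sdispls[i] = sdispls[i - 1] + sendcounts[i - 1];
            rdispls[i] = rdispls[i - 1] + recvcounts[i - 1];
        }
        int total = rdispls[np - 1] + recvcounts[np - 1];
        int *recvbuf = malloc(total * sizeof(int));

        /* Step 2: the variable-size exchange, now with matching counts. */
        MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                      recvbuf, recvcounts, rdispls, MPI_INT, comm);

        free(recvbuf); free(rdispls); free(sdispls); free(recvcounts);
    }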

[OMPI users] Strange segfault in openmpi

2008-09-19 Thread Daniel Hansen
m ran fine before we upgraded to the current openmpi version, and that he can't find any bugs in his code. Thanks for your help, Daniel Hansen Systems Administrator BYU Fulton Supercomputing Lab

[OMPI users] segfault issue - possible bug in openmpi

2008-10-03 Thread Daniel Hansen
any suggestions on how best to do this? Is there an easy way to attach gdb to one of the processes or something?? I have already compiled openmpi with debugging, memory profiling, etc. How can I best take advantage of these features? Thanks, Daniel Hansen Systems Administrator BYU Fulton

Re: [OMPI users] segfault issue - possible bug in openmpi

2008-10-03 Thread Daniel Hansen
/replica_mpi_marylou2/Openmpi_md_twham [0x4040b9] [m4b-1-8:11483] *** End of error message *** On Fri, Oct 3, 2008 at 3:20 PM, Daniel Hansen wrote: > I have been testing some code against openmpi lately that always causes it > to crash during certain mpi function calls. The code does not seem to be > th

[OMPI users] Disconnections

2009-07-01 Thread Daniel Miles
Hi, everybody. I'm having trouble where one of my client nodes crashes while I have an MPI job on it. When this happens, the mpirun process on the head node never returns. I can kill it with a SIGINT (ctrl-c) and it still cleans up its child processes on the remaining healthy client nodes but I do

[OMPI users] Very different speed of collective tuned algorithms for alltoallv

2009-08-29 Thread Daniel Spångberg
d for my problem, but I was somewhat surprised about the very large difference in speed, so I wanted to report it here, if other users find themselves in a similar situation. -- Daniel Spångberg Materialkemi Uppsala Universitet

[OMPI users] openmpi 1.4 broken -mca coll_tuned_use_dynamic_rules 1

2009-12-30 Thread Daniel Spångberg
0x400869] [girasole:27508] *** End of error message *** Best regards, -- Daniel Spångberg Materialkemi Uppsala Universitet

Re: [OMPI users] openmpi 1.4 broken -mca coll_tuned_use_dynamic_rules 1

2009-12-30 Thread Daniel Spångberg
to use one, unfortunately. Daniel Den 2009-12-30 15:17:17 skrev Lenny Verkhovsky : This is a known issue, https://svn.open-mpi.org/trac/ompi/ticket/2087 Maybe its priority should be raised up. Lenny.

Re: [OMPI users] openmpi 1.4 broken -mca coll_tuned_use_dynamic_rules 1

2009-12-30 Thread Daniel Spångberg
ue gets fixed in the future! Daniel Den 2009-12-30 15:57:50 skrev Lenny Verkhovsky : The only workaround that I found is a file with dynamic rules. This is an example that George sent me once. It helped for me until it gets fixed. " Lenny, You asked for dynamic rules

Re: [OMPI users] openmpi 1.4 broken -mca coll_tuned_use_dynamic_rules 1

2009-12-30 Thread Daniel Spångberg
That works! Many thanks! Daniel Den 2009-12-30 16:44:52 skrev Lenny Verkhovsky : it may crash if it doesn't see a file with rules. try providing it through the command line $ mpirun -mca coll_tuned_use_dynamic_rules 1 -mca coll_tuned_dynamic_rules_filename full_path_to_file_ . On Wed

Re: [OMPI users] dynamic rules

2010-01-15 Thread Daniel Spångberg
Number of alltoall algorithms available MCA coll: parameter "coll_tuned_alltoall_algorithm" (current value: "0") Which alltoall algorithm is used. Can be locked down to choice of: 0 ignore, 1 basic linear, 2 pairwise, 3: modified
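
Forcing one of the listed algorithms from the command line looks roughly like the sketch below (algorithm 2 = pairwise per the list above); the forced value is only honoured when dynamic rules are enabled:

    mpirun --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_alltoall_algorithm 2 \
           -np 16 ./app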

Re: [OMPI users] dynamic rules

2010-01-15 Thread Daniel Spångberg
: mpirun -mca coll_tuned_use_dynamic_rules 1 -mca coll_tuned_dynamic_rules_filename /home/.openmpi/dynamic_rules_file That works for me with openmpi 1.4. I have not tried 1.4.1 yet. Daniel

Re: [OMPI users] dynamic rules

2010-01-20 Thread Daniel Spångberg
WORLD,half_group,&half_comm); HTH -- Daniel Spångberg Materialkemi Uppsala Universitet

[OMPI users] Ok, I've got OpenMPI set up, now what?!

2010-07-17 Thread Daniel Janzon
ow to chunk up a matrix and pass it out to the available processes. All the best, Daniel

Re: [OMPI users] Ok, I've got OpenMPI set up, now what?!

2010-07-19 Thread Daniel Janzon
Thanks a lot! PETSc seems to be really solid and integrates with MUMPS suggested by Damien. All the best, Daniel Janzon On 7/18/10, Gustavo Correa wrote: > Check PETSc: > http://www.mcs.anl.gov/petsc/petsc-as/ > > On Jul 18, 2010, at 12:37 AM, Damien wrote: > >> You shoul

[OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-18 Thread Daniel Letai
given a simple hello.c: #include #include int main(int argc, char* argv[]) { int size, rank, len; char name[MPI_MAX_PROCESSOR_NAME]; MPI_Init(&argc, &argv); MPI_Comm_size(MPI_COMM_WORLD, &size); MPI_Comm_rank(MPI_COMM_WORLD, &rank); MPI_Get_proc
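
The preview above flattens the program; a complete version of such a hello.c would look roughly like this (the print statement is a guess, the rest follows the snippet):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int size, rank, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_processor_name(name, &len);
        printf("Hello from rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }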

Re: [OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-18 Thread Daniel Letai
No, that's the issue. I had to disable it to get things working. That's why I included my config settings - I couldn't figure out which option enabled it, so I could remove it from the configuration... On 06/18/2015 02:43 PM, Gilles Gouaillardet wrote: Daniel, ML module i

Re: [OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-18 Thread Daniel Letai
user config, cli, environment variable) Cheers, Gilles On Thursday, June 18, 2015, Daniel Letai <d...@letai.org.il> wrote: No, that's the issue. I had to disable it to get things working. That's why I included my config settings - I couldn't figure ou

Re: [OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-21 Thread Daniel Letai
s is really odd... you can run ompi_info --all and search coll_ml_priority it will display the current value and the origin (e.g. default, system wide config, user config, cli, environment variable) Cheers, Gilles On Thursday, June 18, 2015, Daniel Letai <d...@letai.org.il> w

Re: [OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-24 Thread Daniel Letai
Gilles, Attached the two output logs. Thanks, Daniel On 06/22/2015 08:08 AM, Gilles Gouaillardet wrote: Daniel, i double checked this and i cannot make any sense with these logs. if coll_ml_priority is zero, then i do not see any way how ml_coll_hier_barrier_setup can be invoked. could you

[OMPI users] display-map option in v1.8.8

2015-10-12 Thread Daniel Letai
Hi, After upgrading to 1.8.8 I can no longer see the map. When looking at the man page for mpirun, display-map no longer exists. Is there a way to show the map in 1.8.8 ? Another issue - I'd like to map 2 process per node - 1 to each socket. What is the current "correct" syntax? --map-by ppr:2
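
For what it's worth, both requests can be sketched on one command line in the 1.8 series, assuming --display-map is still accepted even though it dropped out of the man page; treat this as an illustrative invocation:

    mpirun --display-map --map-by ppr:1:socket -np 2 ./a.out

ppr:1:socket places one process on each socket, i.e. two per dual-socket node.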

Re: [OMPI users] display-map option in v1.8.8

2015-10-20 Thread Daniel Letai
Thanks for the reply, On 10/13/2015 04:04 PM, Ralph Castain wrote: On Oct 12, 2015, at 6:10 AM, Daniel Letai wrote: Hi, After upgrading to 1.8.8 I can no longer see the map. When looking at the man page for mpirun, display-map no longer exists. Is there a way to show the map in 1.8.8 ? I

Re: [OMPI users] display-map option in v1.8.8

2015-10-21 Thread Daniel Letai
On 10/20/2015 04:14 PM, Ralph Castain wrote: On Oct 20, 2015, at 5:47 AM, Daniel Letai <d...@letai.org.il> wrote: Thanks for the reply, On 10/13/2015 04:04 PM, Ralph Castain wrote: On Oct 12, 2015, at 6:10 AM, Daniel Letai <d...@letai.org.il> wrote: Hi,

[OMPI users] It's possible to get mpi working without ssh?

2018-12-19 Thread Daniel Edreira
Hi all, Does anyone know if there's a possibility to configure a cluster of nodes to communicate with each other with mpirun without using SSH? Someone is asking me about making a cluster with Infiniband that does not use SSH to communicate using OpenMPI. Thanks in advance Regards. __

Re: [OMPI users] It's possible to get mpi working without ssh?

2018-12-19 Thread Daniel Edreira
(jsquyres) via users Sent: Wednesday, December 19, 2018 7:18:23 PM To: Open MPI User's List Cc: Jeff Squyres (jsquyres) Subject: Re: [OMPI users] It's possible to get mpi working without ssh? On Dec 19, 2018, at 11:42 AM, Daniel Edreira wrote: > > Does anyone know if there's

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
Hello, I have built the following stack : centos 7.5 (gcc 4.8.5-28, libevent 2.0.21-4) MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.5-x86_64.tgz built with --all --without-32bit (this includes ucx 1.5.0) hwloc from centos 7.5 : 1.11.8-4.el7

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
Sent from my iPhone > On 3 Mar 2019, at 16:31, Gilles Gouaillardet > wrote: > > Daniel, > > PMIX_MODEX and PMIX_INFO_ARRAY have been removed from PMIx 3.1.2, and > Open MPI 4.0.0 was not ready for this. > > You can either use the internal PMIx (3.0.2), or try 4.
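
A hedged sketch of building Open MPI 4.0.0 against an external PMIx and UCX as discussed in this thread (all paths are placeholders; the internal PMIx is used simply by omitting --with-pmix):

    ./configure --prefix=/opt/openmpi/4.0.0 \
        --with-slurm \
        --with-pmix=/opt/pmix/3.0.2 \
        --with-ucx=/opt/ucx/1.5.0 \
        --with-hwloc=/usr
    make -j16 && make install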

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
Gilles, On 04/03/2019 01:59:28, Gilles Gouaillardet wrote: Daniel, keep in mind PMIx was designed with cross-version compatibility in mind, so a PMIx 3.0.2 client (read Open MPI 4.0.0 app with the internal

Re: [OMPI users] Building PMIx and Slurm support

2019-03-04 Thread Daniel Letai
Gilles, On 3/4/19 8:28 AM, Gilles Gouaillardet wrote: Daniel, On 3/4/2019 3:18 PM, Daniel Letai wrote: So unless you have a specific reason not to mix both, you might also give the internal PMIx a try

Re: [OMPI users] Building PMIx and Slurm support

2019-03-12 Thread Daniel Letai
ce : +966 (0) 12-808-0367 *From:* users on behalf of Ralph H Castain *Sent:* Monday, March 4, 2019 5:29 PM *To:* Open MPI Users *Subject:* Re: [OMPI users] Building PMIx and Slurm support On Mar 4, 2019,

[OMPI users] Are there any issues (performance or otherwise) building apps with different compiler from the one used to build openmpi?

2019-03-20 Thread Daniel Letai
Hello, Assuming I have installed openmpi built with distro stock gcc(4.4.7 on rhel 6.5), but an app requires a different gcc version (8.2 manually built on dev machine). Would there be any issues, or performance penalty, if building the app u

[OMPI users] Packaging issue with linux spec file when not build_all_in_one_rpm due to empty grep

2019-04-16 Thread Daniel Letai
In src rpm version 4.0.1 if building with --define 'build_all_in_one_rpm 0' the grep -v _mandir docs.files is empty. The simple workaround is to follow earlier pattern and pipe to /bin/true, as the spec doesn't really care if the file is empty. I'm wonderi

[OMPI users] TCP usage in MPI singletons

2019-04-17 Thread Daniel Hemberger
Hi everyone, I've been trying to track down the source of TCP connections when running MPI singletons, with the goal of avoiding all TCP communication to free up ports for other processes. I have a local apt install of OpenMPI 2.1.1 on Ubuntu 18.04 which does not establish any TCP connections by d

Re: [OMPI users] TCP usage in MPI singletons

2019-04-19 Thread Daniel Hemberger
Hi Gilles, all, Using `OMPI_MCA_ess_singleton_isolated=true ./program` achieves the desired result of establishing no TCP connections for a singleton execution. Thank you for the suggestion! Best regards, -Dan On Wed, Apr 17, 2019 at 5:35 PM Gilles Gouaillardet wrote: > Daniel, > >

Re: [OMPI users] Beowulf cluster and openmpi

2008-11-05 Thread Daniel Gruner
libraries and executables, so this directory must be mounted on the nodes. You don't want to copy all this stuff to the nodes in a bproc environment, since it would eat away at your ram. Daniel On Wed, Nov 05, 2008 at 12:44:03PM -0600, Rima Chaudhuri wrote: > Thanks for all your help Ralph

[OMPI users] problem with overlapping communication with calculation

2009-03-25 Thread Daniel Spångberg
Dear list, We've found a problem with openmpi when running over IB when calculation reading elements of an array is overlapping communication to other elements (that are not used in the calculation) of the same array. I have written a small test program (below) that shows this behaviour. Wh
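
The pattern being tested is, in outline, the one below: nonblocking transfers posted on one part of an array while the computation reads a disjoint part, with the wait only afterwards (names and the halving are illustrative):

    #include <mpi.h>

    /* Send the first half of `a` to the right neighbour, receive into `halo`
       from the left one, and sum the untouched second half in the meantime. */
    static double overlap_example(double *a, double *halo, int n,
                                  int left, int right, MPI_Comm comm)
    {
        MPI_Request req[2];
        MPI_Irecv(halo, n / 2, MPI_DOUBLE, left, 0, comm, &req[0]);
        MPI_Isend(a, n / 2, MPI_DOUBLE, right, 0, comm, &req[1]);

        double sum = 0.0;              /* computation reads a[n/2 .. n) only */
        for (int i = n / 2; i < n; i++)
            sum += a[i];

        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
        return sum;
    }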

Re: [OMPI users] problem with overlapping communication with calculation

2009-03-25 Thread Daniel Spångberg
to this list once I know what's going on. Sorry to trouble you too early! Daniel Spångberg Den 2009-03-25 09:44:37 skrev Daniel Spångberg : Dear list, We've found a problem with openmpi when running over IB when calculation reading elements of an array is overlapping commun

Re: [OMPI users] problem with overlapping communication with calculation

2009-03-25 Thread Daniel Spångberg
ime to figure out the circumstances when this happens. I will report back to this list once I know what's going on. Sorry to trouble you too early! Daniel Spångberg Den 2009-03-25 09:44:37 skrev Daniel Spångberg : Dear list, We've found a problem with openmpi when running ov

Re: [OMPI users] Open-MPI and gprof

2009-04-23 Thread Daniel Spångberg
is the MPI rank integer. vprof can also use papi, but I have not (yet) tried this. Daniel Spångberg Den 2009-04-23 02:00:01 skrev Brock Palen : There is a tool (not free) That I have liked that works great with OMPI, and can use gprof information. http://www.allinea.com/index.php?page=74

Re: [OMPI users] Open-MPI and gprof

2009-04-23 Thread Daniel Spångberg
Regarding miscompilation of vprof and bfd_get_section_size_before_reloc. Simply change the call from bfd_get_section_size_before_reloc to bfd_get_section_size in exec.cc and recompile. Daniel Spångberg Den 2009-04-23 10:16:07 skrev jody : Hi all Thanks for all the input. I have not

Re: [OMPI users] Building OMPI-1.0.2 on OS X v10.3.9 with IBM XLC +XLF

2006-04-10 Thread David Daniel
Perhaps this is a bug in xlc++. Maybe this one... http://www-1.ibm.com/support/docview.wss?uid=swg1IY78555 My (untested) guess is that removing the const_cast will allow it to compile, i.e. in ompi/mpi/cxx/group_inln.h replace const_cast(ranges) by ranges David On Apr 10,

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-11 Thread David Daniel
I suspect that to get this to work for bproc, then we will have to build mpirun as 64-bit and the library as 32-bit. That's because a 32-bit compiled mpirun calls functions in the 32-bit /usr/lib/libbproc.so which don't appear to function when the system is booted 64-bit. Of course that w

[OMPI users] mpirun crashes when compiled in 64-bit mode on Apple Mac Pro

2006-10-26 Thread Daniel Vollmer
em_init.c:41 #3 0x000100407eea in orte_init (infrastructure=true) at runtime/ orte_init.c:48 #4 0x00010e20 in orterun (argc=2, argv=0x7fff5fbffbc0) at orterun.c:329 #5 0x00010cc1 in main (argc=2, argv=0x7fff5fbffbc0) at main.c:13 Any ideas / advice? Thanks,

Re: [OMPI users] mpirun crashes when compiled in 64-bit mode on Apple Mac Pro

2006-10-26 Thread Daniel Vollmer
in certain circumstances, but this seems to fix it. Thank you for the quick reply, but the patch did not help matters. I'm currently in the process of compiling a current gcc as I am not sure how far Apple's rather old gcc 4.01 derivative can be trusted. Daniel.

[OMPI users] bproc problems

2007-04-26 Thread Daniel Gruner
always fails, so this is a bug. The same occurs for all the codes that I have tried, both simple and complex. Thanks for your attention to this. Regards, Daniel -- Dr. Daniel Gruner dgru...@chem.utoronto.ca Dept. of Chemistry danie

Re: [OMPI users] Compile WRFV2.2 with OpenMPI

2007-04-27 Thread Daniel Gruner
From Jiming's error messages, it seems that he is using 1.1 libraries and header files, while supposedly compiling for ompi 1.2, therefore causing undefined stuff. Am I wrong in this assessment? Daniel On Fri, Apr 27, 2007 at 08:03:34AM -0400, Jeff Squyres wrote: > This is quite o

Re: [OMPI users] bproc problems

2007-04-27 Thread Daniel Gruner
Thanks to both you and David Gunter. I disabled pty support and it now works. There is still the issue of the mpirun default being "-byslot", which causes all kinds of trouble. Only by using "-bynode" do things work properly. Daniel On Thu, Apr 26, 2007 at 02:28:33PM -

[OMPI users] Compilation bug in libtool

2007-06-01 Thread Daniel Pfenniger
Hello, version 1.2.2 refuses to compile on Mandriva 2007.1: (more details are in the attached lg files) ... make[2]: Entering directory `/usr/src/rpm/BUILD/openmpi-1.2.2/opal/asm' depbase=`echo asm.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`; \ if /bin/sh ../../libtool --tag=CC --mode=compile

[OMPI users] collective algorithms

2014-11-17 Thread Faraj, Daniel A
sort of work, or point me to a paper? Basically, for a given collective operation, what are: a) Communication algorithm being used for a given criteria (i.e. message size or np) b) What is theoretical algorithm cost Thanx --- Daniel Faraj

Re: [OMPI users] collective algorithms

2014-11-20 Thread Faraj, Daniel A
will look around..again Thanx --- Daniel Faraj From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles Gouaillardet Sent: Monday, November 17, 2014 10:07 PM To: Open MPI Users Subject: Re: [OMPI users] collective algorithms Daniel, you can run $ ompi_info --parseable --all

[OMPI users] netloc

2014-12-05 Thread Faraj, Daniel A
issue but no solution was posted. Any idea why we are seeing 0 subnet? Is there something I should check for in the xml files? --- Daniel Faraj

[OMPI users] open mpi and MLX

2014-12-09 Thread Faraj, Daniel A
n16 Registerable memory: 24576 MiB Total memory: 65457 MiB Your MPI job will continue, but may be behave poorly and/or hang. --- Daniel Faraj
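
That warning is the usual mlx4 registered-memory limit; the commonly suggested remedy is raising log_num_mtt so that 2^log_num_mtt * 2^log_mtts_per_seg * page_size covers at least the physical RAM (ideally twice it). A sketch with illustrative values for a 64 GiB node with 4 KiB pages:

    # /etc/modprobe.d/mlx4_core.conf
    # 2^25 * 2^0 * 4 KiB = 128 GiB of registerable memory
    options mlx4_core log_num_mtt=25 log_mtts_per_seg=0

    # reload the mlx4_core driver (or reboot) for the change to take effect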
