I agree; that is a bummer. :-(
Warner -- do you have any advice here, perchance?
On May 4, 2009, at 7:26 PM, Vicente Puig wrote:
But it doesn't work well.
For example, I am trying to debug a program, "floyd" in this case,
and when I make a breakpoint:
No line 26 in file "../../../gcc-4.
On May 5, 2009, at 2:47 PM, Jeff Squyres wrote:
On May 5, 2009, at 1:59 PM, Robert Kubrick wrote:
I am preparing a presentation where I will discuss commodity
interconnects and the evolution of Ethernet and InfiniBand NICs. The
idea is to show the advance in network interface speeds over time
It is indeed surprisingly hard to draw a simple timeline for Ethernet
speed evolution. Network throughput depends on the medium (fiber
optic, coaxial, twisted pair...), the distance (LAN, switch, WAN), the
number of channels (half duplex, full duplex), the level of commercialization
(research, prod
On May 5, 2009, at 1:59 PM, Robert Kubrick wrote:
I am preparing a presentation where I will discuss commodity
interconnects and the evolution of Ethernet and InfiniBand NICs. The
idea is to show the advance in network interface speeds over time on
a chart. So far I have collected the following
I can't find a similar data set for InfiniBand. I would appreciate any
comments/links.
Here is IB roadmap http://www.infinibandta.org/itinfo/IB_roadmap
...But I do not see SDR there
Pasha
Greetings,
I am preparing a presentation where I will discuss commodity
interconnects and the evolution of Ethernet and InfiniBand NICs. The
idea is to show the advance in network interface speeds over time on
a chart. So far I have collected the following *approximate* data
for Ethernet
2009/5/5 Jeff Squyres:
> On May 5, 2009, at 6:10 AM, Matthieu Brucher wrote:
>
>> The first is what the support for LSF by Open MPI means. When mpirun is
>> executed, is it actually run as an LSF job? Or what does it
>> imply? I've tried to search on the openmpi website as well as on the
>> int
Eugene Loh wrote:
Put more strongly: the "correct" (subjective term) way for an MPI
implementation to bind processes is at process creation, and waiting
until MPI_Init is "wrong". This point of view has nothing to do with
asking the MPI implementation to support binding of non-MPI processes.
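For illustration of what "bind at process creation" looks like in practice (a sketch only, not Open MPI's launcher code; the core number and the executable path below are placeholders), a launcher can pin a child process to a CPU before exec'ing an arbitrary, non-MPI binary:

/* Sketch: bind a child to one core before exec'ing a (possibly non-MPI) binary. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                        /* child: bind, then exec */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                  /* core 0 is a placeholder */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            exit(1);
        }
        execlp("./my_non_mpi_exe", "./my_non_mpi_exe", (char *)NULL);
        perror("execlp");                  /* reached only if exec fails */
        exit(1);
    }
    waitpid(pid, NULL, 0);                 /* parent: wait for the child */
    return 0;
}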
On May 5, 2009, at 9:25 AM, Jeroen Kleijer wrote:
If you wish to submit to lsf using its native commands (bsub) you
can do the following:
bsub -q ${QUEUE} -a openmpi -n ${CPUS} "mpirun.lsf -x PATH -x
LD_LIBRARY_PATH -x MPI_BUFFER_SIZE ${COMMAND} ${OPTIONS}"
It should be noted that in thi
Ralph Castain wrote:
On May 5, 2009, at 3:37 AM, Geoffroy Pignot wrote:
The result is: everything works fine with MPI executables: logical!!!
What I was trying to do was to run non-MPI exes thanks to mpirun.
There, Open MPI is not able to bind these processes to a particular CPU.
My conclusion is that the process affinity is set in MPI_Init, right?
If you wish to submit to lsf using its native commands (bsub) you can do the
following:
bsub -q ${QUEUE} -a openmpi -n ${CPUS} "mpirun.lsf -x PATH -x
LD_LIBRARY_PATH -x MPI_BUFFER_SIZE ${COMMAND} ${OPTIONS}"
It should be noted that in this case you don't call Open MPI's mpirun
directly but use th
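For concreteness, with placeholder values for the shell variables (the queue name, CPU count, command, and options below are assumptions, not values from the original post), the submission would look like:

QUEUE=normal
CPUS=8
COMMAND=./my_app
OPTIONS=""
bsub -q ${QUEUE} -a openmpi -n ${CPUS} "mpirun.lsf -x PATH -x LD_LIBRARY_PATH -x MPI_BUFFER_SIZE ${COMMAND} ${OPTIONS}"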
Actually, my memory was correct!
I believe you are looking at the old code in the 1.3 branch, and not the new
code in the trunk (and soon to come to the 1.3 branch). The new code does
not have this check any more as it is not required.
Sorry for the confusion...
On Tue, May 5, 2009 at 7:08 AM, Ral
Ah - thx for catching that, I'll remove that check. It is no longer
required.
Thx!
On Tue, May 5, 2009 at 7:04 AM, Lenny Verkhovsky wrote:
> According to the code, it does care.
>
> $vi orte/mca/rmaps/rank_file/rmaps_rank_file.c +572
>
> ival = orte_rmaps_rank_file_value.ival;
> if ( ival > (
According to the code, it does care.
$vi orte/mca/rmaps/rank_file/rmaps_rank_file.c +572
ival = orte_rmaps_rank_file_value.ival;
if ( ival > (np-1) ) {
    orte_show_help("help-rmaps_rank_file.txt", "bad-rankfile", true, ival, rankfile);
    rc = ORTE_ERR_BAD_PARAM;
    goto unlock;
}
If I remembe
Sorry Lenny, but that isn't correct. The rankfile mapper doesn't care if the
rankfile contains additional info - it only maps up to the number of
processes, and ignores anything beyond that number. So there is no need to
remove the additional info.
Likewise, if you have more procs than the rankfil
On May 5, 2009, at 6:10 AM, Matthieu Brucher wrote:
The first is what the support for LSF by Open MPI means. When mpirun is
executed, is it actually run as an LSF job? Or what does it
imply? I've tried to search on the openmpi website as well as on the
internet, but I couldn't find a clear an
On Tue, 2009-05-05 at 12:10 +0200, Matthieu Brucher wrote:
> Hello,
>
> I have two questions, in fact.
>
> The first is what the support for LSF by Open MPI means. When mpirun is
> executed, is it actually run as an LSF job? Or what does it
> imply? I've tried to search on the openmpi website
Hi,
the maximum rank number must be less than np.
If np=1, then there is only rank 0 in the system, so rank 1 is invalid.
Please remove "rank 1=node2 slot=*" from the rankfile.
Best regards,
Lenny.
On Mon, May 4, 2009 at 11:14 AM, Geoffroy Pignot wrote:
> Hi ,
>
> I got the
> openmpi-1.4a1r21095.tar.g
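As a sketch of a rankfile that satisfies the rule above for np=2 (the host names, slot values, and file/executable names are placeholders; it would be launched with something like "mpirun -np 2 -rf my_rankfile ./my_app"), the highest rank listed is np-1 = 1:

rank 0=node1 slot=0
rank 1=node2 slot=*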
On May 5, 2009, at 3:37 AM, Geoffroy Pignot wrote:
Hi
The result is: everything works fine with MPI executables:
logical!!!
What I was trying to do was to run non-MPI exes thanks to mpirun.
There, Open MPI is not able to bind these processes to a particular
CPU.
My conclusion is that the process affinity is set in MPI_Init, right?
Hello,
I have two questions, in fact.
The first is what the support for LSF by Open MPI means. When mpirun is
executed, is it actually run as an LSF job? Or what does it
imply? I've tried to search on the openmpi website as well as on the
internet, but I couldn't find a clear answer/use case.
Hi
The result is: everything works fine with MPI executables: logical!!!
What I was trying to do was to run non-MPI exes thanks to mpirun. There,
Open MPI is not able to bind these processes to a particular CPU.
My conclusion is that the process affinity is set in MPI_Init, right?
Could i
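One way to check, from inside the launched non-MPI executable, whether any binding was actually applied is to query the affinity mask directly; a minimal Linux-only sketch (not Open MPI code):

/* Sketch: print which cores this process is currently allowed to run on. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    if (sched_getaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_getaffinity");
        return 1;
    }
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++) {
        if (CPU_ISSET(cpu, &set)) {
            printf("allowed to run on cpu %d\n", cpu);
        }
    }
    return 0;
}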
Jan,
I guess that you have the OFED driver installed on your machines. You may do
basic network verification with the ibdiagnet utility
(http://linux.die.net/man/1/ibdiagnet) that is part of the OFED installation.
Regards,
Pasha
Jeff Squyres wrote:
On May 4, 2009, at 9:50 AM, jan wrote:
Thank you Jef
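As a starting point, running the tool with no arguments performs its basic fabric checks; exact flags vary between OFED releases, so anything beyond the bare command should be checked against the man page linked above:

ibdiagnet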