When I was doing presales, the vast majority of our small to mid-size
procurements were for a three-year duration.
Sometimes the maintenance was extended by one year, but the cluster was
generally replaced after three years.
I can understand that the fastest clusters might last longer (5 years for
On Mon, Mar 21, 2016 at 6:06 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> Durga,
>
> currently, the average life expectancy of a cluster is 3 years.
>
By average life expectancy, do you mean the average time to upgrade? DOE
supercomputers usually run for 5-6 years, and some
On Mon, Mar 21, 2016 at 1:37 PM, Brian Dobbins wrote:
>
> Hi Jeff,
>
> On Mon, Mar 21, 2016 at 2:18 PM, Jeff Hammond
> wrote:
>
>> You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf
>> to see the status of all implementations w.r.t. MPI-3 as of one year ago.
>>
>
> Thank you - that's something I was curious about, and it's incredibly helpful.
Hi Dave,
With which compiler, and even optimized?
>
> $ `mpif90 --showme` --version | head -n1
> GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17)
> $ cat a.f90
> use mpi
> if (mpi_version == 3) call undefined()
> print *, mpi_version
> end
> $ mpif90 a.f90 && ./a.out
>
Brian Dobbins writes:
> Hi everyone,
>
> This isn't really a problem, per se, but rather a search for a more
> elegant solution. It also isn't specific to OpenMPI, but I figure the
> experience and knowledge of people here made it a suitable place to ask:
It's also not Fortran-specific, though
Hi Jeff,
On Mon, Mar 21, 2016 at 2:18 PM, Jeff Hammond
wrote:
> You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf
> to see the status of all implementations w.r.t. MPI-3 as of one year ago.
>
Thank you - that's something I was curious about, and it's incredibly
helpful.
The better solution is just to require MPI-3. It is available everywhere
except Blue Gene/Q at this point, and it is better to put the burden of
ensuring MPI-3 is installed on the system on your users than to do horrible
gymnastics to support ancient MPI libraries.
You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf
to see the status of all implementations w.r.t. MPI-3 as of one year ago.
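To make that advice concrete, here is a minimal, hypothetical build-time guard
(not from this thread): mpi.h defines the standard MPI_VERSION and
MPI_SUBVERSION macros, so a C translation unit can simply refuse to build
against anything older than MPI-3.

/* Hypothetical sketch: fail the build early if the MPI library is pre-MPI-3. */
#include <mpi.h>

#if MPI_VERSION < 3
#error "This code requires an MPI-3 implementation"
#endif

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* ... MPI-3-only features can be used unconditionally here ... */
    MPI_Finalize();
    return 0;
}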
Call MPI from C code, where you will have all the preprocessor support you
need. Wrap that C code with Fortran 2003 ISO_C_BINDING. If you don't have
neighborhood collectives from MPI-3, you can implement them yourself using
MPI-1 in the interface between your Fortran code and the MPI C bindings.
This
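A rough sketch of that layering (illustrative names, not the poster's actual
code): a small C shim compiled with mpicc holds the preprocessor dispatch, and
the Fortran side only sees a single BIND(C) interface to it. The pre-MPI-3
fallback below assumes the caller passes its neighbor ranks explicitly, and
that their order matches the communicator topology used on the MPI-3 path.

/* neigh_exchange.c - hedged sketch of a C shim callable from Fortran via
 * ISO_C_BINDING, e.g. bind(C, name="neigh_exchange"). The communicator is
 * passed as the usual Fortran integer handle and converted with MPI_Comm_f2c. */
#include <mpi.h>

void neigh_exchange(double *sendbuf, double *recvbuf, int count,
                    const int *neighbors, int nneigh, MPI_Fint fcomm)
{
    MPI_Comm comm = MPI_Comm_f2c(fcomm);
#if MPI_VERSION >= 3
    /* MPI-3 path: one neighborhood collective does the whole exchange
     * (comm must carry a graph or Cartesian topology). */
    MPI_Neighbor_alltoall(sendbuf, count, MPI_DOUBLE,
                          recvbuf, count, MPI_DOUBLE, comm);
#else
    /* Pre-MPI-3 fallback: hand-rolled exchange, one sendrecv per neighbor. */
    int i;
    for (i = 0; i < nneigh; ++i)
        MPI_Sendrecv(sendbuf + i * count, count, MPI_DOUBLE, neighbors[i], 0,
                     recvbuf + i * count, count, MPI_DOUBLE, neighbors[i], 0,
                     comm, MPI_STATUS_IGNORE);
#endif
}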
Hi everyone,
This isn't really a problem, per se, but rather a search for a more
elegant solution. It also isn't specific to OpenMPI, but I figure the
experience and knowledge of people here made it a suitable place to ask:
I'm working on some code that'll be used and downloaded by others on
Hmm, I'm not correct. cr_restart starts with no errors, launches some of the
processes, then suspends itself. strace on mpirun on this manual invocation
yields the same behavior as below.
-Henk
[hmeij@swallowtail kflaherty]$ ps -u hmeij
  PID TTY          TIME CMD
29481 ?        00:00:00 re
OpenMPI 1.2 (yes, I know, old), Python 2.6.1, BLCR 0.8.5.
When I attempt to cr_restart (having performed cr_checkpoint --save-all) I can
restart the job manually with BLCR on a node, but when I go through my openlava
scheduler, the cr_restart launches mpirun, then nothing: no orted or the python
p
I have some ideas on how to make sure self is always selected. PR coming
shortly.
-Nathan
On Mon, Mar 21, 2016 at 02:33:53PM +, Jeff Squyres (jsquyres) wrote:
> On Mar 19, 2016, at 11:53 AM, dpchoudh . wrote:
> >
> > 1. Why does 'self' need to be explicitly mentioned when using BTL
> > communication?
"Lane, William" writes:
> Ralph,
>
> For the following openMPI job submission:
>
> qsub -q short.q -V -pe make 84 -b y mpirun -np 84 --prefix
> /hpc/apps/mpi/openmpi/1.10.1/ --hetero-nodes --mca btl ^sm --mca
> plm_base_verbose 5 /hpc/home/lanew/mpi/openmpi/a_1_10_1.out
>
> I have some more information
+1 on what Gilles says. 10 years is too long a horizon to make guarantees
in the fast-moving tech sector. All you can do is make good
estimates based on your requirements and budget today (and what you can
estimate over the next few years).
> On Mar 21, 2016, at 6:06 AM, Gilles Gouaillardet
On Mar 20, 2016, at 9:23 PM, Gilles Gouaillardet
wrote:
>
> Durga,
>
> since the MPI C++ bindings are not required, you might want to
> mpicc ... -lstdc++
> instead of
> mpicxx ...
I'm not sure I'd recommend that. Using the C++ compiler may do other
C++-specific bootstrapping things that the
On Mar 19, 2016, at 11:53 AM, dpchoudh . wrote:
>
> 1. Why does 'self' need to be explicitly mentioned when using BTL
> communication? Since it must always be there for MPI communication to work,
> should it not be implicit? I am sure there is some architectural rationale
> behind this; could
Durga,
currently, the average life expectancy of a cluster is 3 years.
so if you have to architect a cluster out of off-the-shelf components, I
would recommend
you take the "best" components available today or to be released in the
very near future.
so many things can happen in 10 years, so I can on
Hello all
I don't mean this to be a political conversation, but more of a research
question.
From what I have been observing, some of the interconnects that had very
good technological features as well as popularity in the past have
basically gone down in the history books, and some others, with comparable
On Sun, Mar 20, 2016 at 10:37 PM, dpchoudh . wrote:
> I'd tend to agree with Gilles. I have written CUDA programs in pure C
> (i.e. neither involving MPI nor C++) and a pure C-based toolchain builds
> the code successfully. So I don't see why CUDA should be intrinsically C++.
>
nvcc calls the C
Thanks Erik,
that makes sense now.
Durga,
since the MPI C++ bindings are not required, you might want to
mpicc ... -lstdc++
instead of
mpicxx ...
Cheers,
Gilles
On Monday, March 21, 2016, Erik Schnetter wrote:
> According to the error message, "device.o" is the file that causes the
> error.