I have a dissenting opinion about computers "moving on a bit", at least when it comes to most crystallography software.

Back in the late 20th century I defined some benchmarks for common crystallographic programs with the aim of deciding which hardware to buy.  By about 2003 the champion of my refmac benchmark (https://bl831.als.lbl.gov/~jamesh/benchmarks/index.html#refmac) was the then-new AMD "Opteron" at 1.4 GHz, which ran it in 74 seconds.

Last year, I bought a rather expensive 4-socket Intel Xeon E7-8870 v3 (turbos to 3.0 GHz), which is the current champion of my XDS benchmark.  The same old refmac benchmark on this new machine, however, runs in 68.6 seconds: only a smidge faster than that old Opteron (which I threw away years ago).

The Xeon X5550 under consideration here takes 74.1 seconds to run this same refmac benchmark, so price/performance-wise I'd say it's not such a bad deal.

The fastest time I have for refmac to date is 41.4 seconds on a Xeon W-2155, but if you scale by GHz you can see this is mostly due to its fast clock speed (turbo to 4.5 GHz).  With a few notable exceptions like XDS, HKL-2000 and SHELX, which are multi-processing and optimized to take advantage of the latest processor features using Intel compilers, most crystallographic software is either written in Python or compiled with gcc.  In both cases performance pretty much scales with GHz.  And GHz is heat.
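By "scale by GHz" I mean multiplying run time by clock speed, which gives the clock cycles each chip burns on the same job.  A quick sketch in Python using the numbers above (the X5550 clock is my assumption; if per-cycle performance were constant, the last column would be constant too):

    # refmac benchmark times quoted above, normalized by clock speed.
    # seconds * GHz = billions of clock cycles consumed per run.
    benchmarks = {
        "Opteron 1.4 GHz (2003)":  (74.0, 1.4),
        "Xeon E7-8870 v3 (turbo)": (68.6, 3.0),
        "Xeon X5550 (turbo)":      (74.1, 3.06),  # clock is my assumption
        "Xeon W-2155 (turbo)":     (41.4, 4.5),
    }
    for cpu, (seconds, ghz) in benchmarks.items():
        print("%-26s %6.1f s %6.0f Gcycles" % (cpu, seconds, seconds * ghz))

By that measure the new chips are not doing any more per tick than the 2003 Opteron did; the speedup is the clock.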

Admittedly, the correlation is not perfect, and software has changed a wee bit over the years, so comparisons across the decades are not exactly fair.  But the lesson I have learned from all my benchmarking is that single-core raw performance has not changed much in the last ~10 years.  Almost all the speed increase we have seen has come from parallelization.

And one should not be too quick to dismiss clusters in favor of a single box with a high core count. The latter can be held back by memory contention and other hard-to-diagnose problems.  Even with parallel execution, many crystallography programs don't get any faster beyond about 8-10 cores.  Don't let 100% utilization fool you!  Use a timer and you'll see.  I'm not really sure why that is, but it is the reason the same Xeon W-2155 that leads my refmac benchmark is also my champion system for running DIALS and phenix.refine.
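Here is the sort of timer I mean, as a sketch.  The command is a placeholder, and I am assuming the program honors OMP_NUM_THREADS, which many do not, so check how yours sets its thread count:

    # Wall-clock a job at several core counts.  100% CPU in "top" can
    # hide the fact that extra cores are buying no extra speed.
    import os, subprocess, time

    cmd = ["./my_job.sh"]  # placeholder: substitute your real job here
    for ncpu in (1, 2, 4, 8, 16, 32):
        env = dict(os.environ, OMP_NUM_THREADS=str(ncpu))
        t0 = time.perf_counter()
        subprocess.run(cmd, env=env)
        wall = time.perf_counter() - t0
        print("%3d cores: %7.1f s wall clock" % (ncpu, wall))

If the wall-clock numbers flatten out around 8-10 cores, the remaining cores are just making heat.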

My two cents,

-James Holton
MAD Scientist


On 11/26/2018 1:10 AM, V F wrote:
Dear all,
Thanks for all the off-list replies.

> To be honest, how much are they paying you to take it? Can you sell
> it for scrap?
Maybe I will give it a pass.

> To compare, two dual CPU servers with Skylake Gold 6148 - that is 40
> cores - will probably beat the whole lot even if you could keep the
> cluster going. And keeping clusters busy is a time-consuming
> challenge... I know!
> If they are 250W servers, then you are looking at £8000 per year to
> power and cool it. The two modern servers will be more like £1500 per
> year to run. And the servers will only cost about £6000... the
> economics and planet don't stack up!
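(The arithmetic behind those figures works out roughly as follows, assuming ~0.15 GBP/kWh and that cooling doubles the effective draw; both numbers are assumptions:)

    # Annual running cost of one 250 W node, assuming ~0.15 GBP/kWh and
    # a cooling overhead that doubles the effective draw (PUE ~ 2).
    watts, gbp_per_kwh, pue = 250.0, 0.15, 2.0
    kwh_per_year = watts / 1000 * 24 * 365       # ~2190 kWh
    cost = kwh_per_year * gbp_per_kwh * pue      # ~657 GBP per node per year
    print("%.0f GBP per node per year" % cost)   # ~12 nodes -> ~8000 GBP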
By servers do you mean tower/standalone?

Thanks for the detailed explanation. From 2012, we already have many
Dell Precision T5600s with 2 x Xeon E5-2643 (8 cores, 16 threads) and
I was hoping parallelisation with clusters might be of some help.
Looks like it is not.

These are running so well (about 45 min for a typical dataset
reduction with DIALS) that I am not sure buying new ones is useful.

########################################################################

To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB&A=1
