On Fri, Nov 20, 2009 at 02:57:24PM +0000, David Tweed wrote:
> I was pointing out more how the simple-minded software metrics would
> condemn you to around about the level of performance achieved by the
> reference LAPACK (white bars) in the paper referenced, which to my
> mind suggests there's a flaw in the software metrics. I'd also query
> that the code quality is terrible in most numerical software: what I'd
> say is that they've got a task to achieve (i.e., using as much of the
> computing power as possible) and make the software as simple and
> maintainable as it can be given the task. (What they don't generally
> do is say "if we reduce what portion of the task we'll implement for
> users, we get wonderfully simple code".)
I think there is no misunderstanding; the "suckless metrics" simply do not
apply here. But given that this is a suckless list, by "terrible code
quality" I meant that numerical software is often terrible against
conventional metrics of code quality: cryptic, hard to read and understand,
full of little "hacks", difficult to change and maintain, badly formatted,
and so on.

Take the aspects of software quality mentioned in ISO 9126-1:

* Functionality (suitability, accuracy, compliance, security, etc.)
* Reliability (maturity, recoverability, fault tolerance, etc.)
* Usability (learnability, understandability, operability, etc.)
* Efficiency (time and space performance, etc.)
* Maintainability (stability, analyzability, testability, etc.)
* Portability (installability, replaceability, adaptability, etc.)

Against these abstract concepts of general software quality, I'd say that
numerical software is generally excellent in some aspects, but terrible in
others.

- Jukka.