On Fri, 10 Apr 2015 02:25 am, Marko Rauhamaa wrote:

> Chris Angelico <ros...@gmail.com>:
>
>> As far as it's concerned, it's impossible for a CPU register to
>> arbitrarily change without notice. It's equally impossible for the
>> addition of two positive signed integers to result in a negative
>> integer.
>
> The standard says that any program that takes a signed integer out of
> its valid range is broken and deserves anything that happens to it.
>
> I say it's the standard that is broken.
It's not so much the undefined behaviour part that gets me. That's bad enough. But the idea that having the compiler ignore what you write in the source code, because you've accidentally hit some undefined part of the spec, *is a feature* rather than a horrible, horrible design flaw: that's what blows my mind.

http://blogs.msdn.com/b/oldnewthing/archive/2014/06/27/10537746.aspx

A *sane* language developer would insist that if the compiler is smart enough to detect a bug, say a pointer which might be null but is dereferenced regardless, it should *stop compiling and tell you*, not treat it as an excuse to do something radically different from what the source code says. Calculating the wrong result really quickly is not an optimization except in the minds of crazy people; it's a bug.

Code intended to sanitize untrusted code was turned into a no-op:

http://code.google.com/p/nativeclient/issues/detail?id=245

A check that a pointer wasn't null was removed by the compiler, creating an exploitable vulnerability:

https://isc.sans.edu/diary.html?storyid=6820

I've come to the conclusion that C is the PHP of low-level languages. Like PHP, its popularity is at least in part due to its lack of consistency and psychotic design. That, I think, is the secret of success for some languages: you appeal to the cowboy coders by giving them something tricky to use, but not too tricky, something which doesn't require the discipline and intellectual rigour (and/or cleverness) of writing code which is actually correct, so long as they can write code which is *almost* correct and have it mostly work. What do a few buffer overflows, seg faults, and incorrect results matter when the resulting code runs like a rocket?

C: calculating more wrong answers in less time than any other language!

Of course not all C programmers are cowboys.
Linus Torvalds has a reputation for a zero-tolerance attitude towards kernel bugs, and a take-no-prisoners attitude to anyone who might break userspace code through changes in the kernel. But I think that the vast number of exploitable C/C++ bugs is proof that most C coders lack either the skill or the inclination to write correct C code. Even the Linux kernel contains bugs.

Things were bad enough in the old days of classical C compilers, but modern optimizing C compilers may actively counteract your code as you have written it. How that isn't considered an outright malicious act, I don't know.

--
Steven

--
https://mail.python.org/mailman/listinfo/python-list