On 4/12/2015 10:22 AM, Lux wrote:
For future reference, for whoever reads this topic:

This announcement on the sqlite mailing list seems to at least partially prove my 
point about the usefulness of "micro-optimization", something I consider valuable 
to research.

"""50% faster than 3.7.17"""

"""
This is 50%
faster at the low-level grunt work of moving bits on and off disk and
search b-trees.  We have achieved this by incorporating hundreds of
micro-optimizations.  Each micro-optimization might improve the performance
by as little as 0.05%.  If we get one that improves performance by 0.25%,
that is considered a huge win.  Each of these optimizations is unmeasurable
on a real-world system (we have to use cachegrind to get repeatable
run-times) but if you do enough of them, they add up."""

Read the rest on:

http://permalink.gmane.org/gmane.comp.db.sqlite.general/90549

Somebody doesn't understand fractions: the numbers show a 33% improvement, not 50%. 3.7 is 50% slower than 3.8, but 3.8 is only 33% faster than 3.7.
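To spell the arithmetic out (the relative runtimes below are illustrative, not measured):

```python
# Take 3.8's runtime as the unit; "50% faster" implies 3.7 took 1.5x as long.
t_37 = 1.5   # relative runtime of SQLite 3.7.17
t_38 = 1.0   # relative runtime of SQLite 3.8.0

slower = (t_37 - t_38) / t_38   # 0.50  -> 3.7 is 50% slower than 3.8
faster = (t_37 - t_38) / t_37   # 0.333 -> 3.8 is only ~33% faster than 3.7

print(f"3.7 is {slower:.0%} slower; 3.8 is {faster:.0%} faster")
```

Same two numbers, different denominators - that's the whole disagreement.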

And the conclusion about improved code efficiency is ... not wrong per se, but not exactly correct either ... because cachegrind doesn't measure code efficiency; rather, it profiles memory accesses. A lot of that speed improvement can be explained simply by the elimination of redundant data movement. SQL operations generally are hostile to CPU cache optimization, so the objective always is to reduce overall data movement within the system. Clever implementations of set and bag operations that avoid unnecessary copying are what separate commercial-quality RDBMSs from toys.
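To illustrate the copy-avoidance point with a toy sketch (this is not SQLite's actual implementation, just the general idea): a bag intersection can either materialize counts for both inputs, or count only the smaller one and stream the larger past it, which moves less data.

```python
from collections import Counter

def intersect_copy(a, b):
    # naive: copies BOTH inputs into Counters before intersecting
    return Counter(a) & Counter(b)

def intersect_lean(a, b):
    # count only the smaller input; stream the larger one past it
    if len(a) > len(b):
        a, b = b, a
    counts = Counter(a)
    out = Counter()
    for x in b:                      # one pass, no second copy built
        if counts[x] > out[x]:
            out[x] += 1
    return out
```

Both return the same bag; the second touches the larger input exactly once and never duplicates it in memory.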

Note that 3.7 added major new features which slowed it relative to previous versions, and many of the optimizations in 3.8 were there to fix performance issues in the 3.7 functionality. Note also that they spent 16 months doing it, and that 3.8 consequently introduced only one significant new feature.


I mentioned previously that I have done hard real-time (HRT) programming, so I understand the urge to make code as efficient as possible. But you can spend months or years tweaking a program, only to find that the next-generation CPU (or DDR4 memory system, or SSD storage, or ...) makes a lot of your hard work superfluous.

Most programs simply can't justify the developer time to micro-optimize them. Most of the time, micro-optimization is a waste of developer effort that would be better spent devising better algorithms and data structures - it only makes a difference when you can't squeeze anything more out of your algorithm ... a situation which is quite rare. There are some obvious cases, but most programmer guesses at where heavy optimization is needed are simply wrong - you really have to profile execution using representative data. In those cases where tweaking is warranted - e.g., programs with humongous data sets, or requiring real-time results, etc. - the optimizations typically can be confined to less than 2% of the code (the other 98% of the program will not measurably benefit). In contrast, all code benefits from more efficient algorithms and data structures.
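As a sketch of what "profile first" looks like in practice (the function names here are invented for illustration), a quick pass with Python's stdlib profiler typically shows the time concentrated in a handful of entries:

```python
import cProfile
import io
import pstats

def hot(n):
    # a deliberately naive inner loop - the kind of hotspot profiling reveals
    total = 0
    for i in range(n):
        total += i * i
    return total

def workload():
    # the other "98%": thin wrappers that barely register in the profile
    return sum(hot(10_000) for _ in range(50))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())   # the top few lines account for nearly all the time
```

Only after a report like this points at `hot` is it worth deciding whether the fix is a micro-tweak or (usually better) a closed-form replacement for the loop.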

George

--
You received this message because you are subscribed to the Google Groups "Racket 
Users" group.