On 06.05.2018 20:28, Andrey Borodin wrote:
>
>> On 6 May 2018, at 20:38, Yura Sokolov <funny.fal...@gmail.com> wrote:
>>
>> I played with a logarithmic scale in PostgreSQL roughly a year ago.
>> I didn't find any benefits compared to the current code. In fact, it
>> appeared to perform worse than the current code.
>> That is why I didn't write to pgsql-hackers about that experiment.
> Is there a feature of testgres to benchmark pgbench tps against shared
> buffer size? Let's request this feature if it is not there :)
That would be great. Will you? But pgbench is a bit... far from the real
world. I used pgbench to test the log scale, and it showed that the log
scale performs worse. It would be more important to test with some
real-world installations, or to validate the algorithm against
real-world traces.

>> But probably I measured it in a wrong way. That is why I dream of
>> having real-world traces in hand.
>>
>> Consider all the algorithms known to be effective: 2Q, ARC, Clock-PRO,
>> CAR - they all consider a buffer "hot" based on its temporal frequency
>> rather than its raw access count. They all mostly ignore a spike of
>> accesses during the first moments after placement into the cache, and
>> move a buffer to hot only if it is accessed again some time later.
> These algorithms do not ignore spikes, they ignore the spike's
> amplitude. And this does not mean that amplitude is irrelevant
> information, even if these algorithms perform almost like Belady's.

More important is their consideration of temporal frequency (see the
rough sketch in the P.S. below).

> Building a complicated heuristic (with a lot of magic numbers) to merge
> these spikes into one event does not look promising to me. But this is
> just my superstition; chances are that you can tune your algorithm into
> a serious advancement. But please publish benchmarks, whatever the
> result will be.

Yes, these numbers look like magic. They are hand-tuned on some traces
that are probably irrelevant to PostgreSQL behavior.

And since you have already presented a patch, could you publish its
benchmark results?

With regards,
Sokolov Yura.
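P.S. To make the "temporal frequency" point concrete, here is a minimal,
illustrative sketch of a 2Q-style admission policy. It is not code from
my patch, from PostgreSQL, or verbatim from the 2Q/ARC papers; the queue
names, sizes, and linear-scan "queues" are made up just to keep the
example short. The idea it shows: hits on a page right after admission
(while it still sits in the probationary A1in queue) do not make it hot;
only a reference that arrives after the page has aged out into the A1out
ghost list promotes it.

/*
 * Sketch of a 2Q-style admission policy (simplified, illustrative only).
 */
#include <stdio.h>
#include <string.h>

#define A1IN_SIZE   4   /* probationary FIFO of resident pages     */
#define A1OUT_SIZE  8   /* ghost FIFO: page ids only, no data      */
#define AM_SIZE     4   /* "hot" pages                              */

static int a1in[A1IN_SIZE],   a1in_n;
static int a1out[A1OUT_SIZE], a1out_n;
static int am[AM_SIZE],       am_n;

/* return index of page in queue, or -1 if absent */
static int find(const int *q, int n, int page)
{
    for (int i = 0; i < n; i++)
        if (q[i] == page)
            return i;
    return -1;
}

/* push to tail of a fixed-size FIFO; if full, drop the head and return it,
 * otherwise return -1 */
static int push_fifo(int *q, int *n, int cap, int page)
{
    int evicted = -1;
    if (*n == cap)
    {
        evicted = q[0];
        memmove(q, q + 1, (cap - 1) * sizeof(int));
        (*n)--;
    }
    q[(*n)++] = page;
    return evicted;
}

/* handle one page reference */
static void reference(int page)
{
    if (find(am, am_n, page) >= 0)
    {
        printf("page %d: hit in Am (hot)\n", page);
        return;                 /* a real LRU would also move it to MRU */
    }
    if (find(a1out, a1out_n, page) >= 0)
    {
        /* re-referenced *after* aging out of A1in: this is the temporal
         * frequency signal, so promote to hot (a real implementation
         * would also remove the ghost entry and read the page back in) */
        push_fifo(am, &am_n, AM_SIZE, page);
        printf("page %d: ghost hit in A1out -> promoted to Am\n", page);
        return;
    }
    if (find(a1in, a1in_n, page) >= 0)
    {
        /* burst of accesses right after admission: deliberately ignored */
        printf("page %d: hit in A1in, spike ignored\n", page);
        return;
    }
    /* first sight: admit into the probationary queue */
    int out = push_fifo(a1in, &a1in_n, A1IN_SIZE, page);
    if (out >= 0)
        push_fifo(a1out, &a1out_n, A1OUT_SIZE, out);  /* remember as ghost */
    printf("page %d: admitted into A1in\n", page);
}

int main(void)
{
    /* page 1 is hammered right after admission (a spike), then never seen
     * again; page 2 comes back only after the queue has turned over */
    int trace[] = {1, 1, 1, 1, 2, 3, 4, 5, 6, 2};
    for (size_t i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
        reference(trace[i]);
    return 0;
}

On this toy trace, page 1's access spike never promotes it, while page 2
is promoted only when it returns after the probationary queue has turned
over - that delayed re-reference is what I mean by temporal frequency.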