On 2017-12-20 17:13:31 -0200, neto brpr wrote:
> Just to explain it better. The idea of differentiating read and write
> parameters (sequential and random) is exactly so that the access plans can
> be better chosen by the optimizer. But for this, the Hash join, merge join,
> sorting and other algorithms should also be changed to consider these new
> parameters.
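
To restate the proposal in terms of the existing planner knobs (a rough
sketch; the *_write_* names below are hypothetical, they are not GUCs that
exist today):

  -- Today's cost parameters do not distinguish reads from writes:
  SET seq_page_cost = 1.0;
  SET random_page_cost = 4.0;
  -- The proposal would roughly split each into a read and a write variant,
  -- e.g. (hypothetical names):
  --   seq_read_page_cost    / seq_write_page_cost
  --   random_read_page_cost / random_write_page_cost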

I'm doubtful that there's that much benefit. Mergejoin doesn't write,
hashjoins commonly don't write, and when they do there aren't that many
alternatives to batched hashjoins. Similar-ish with sorts, although
sometimes those can instead be done using ordered index scans.
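
To make the sort case concrete (a rough sketch; the table and index names
are made up and the numbers are arbitrary):

  -- Force the sort path with a work_mem small enough that the sort spills:
  CREATE TABLE t AS SELECT g AS col FROM generate_series(1, 1000000) g;
  CREATE INDEX t_col_idx ON t (col);
  ANALYZE t;

  SET work_mem = '1MB';
  SET enable_indexscan = off;
  SET enable_indexonlyscan = off;
  EXPLAIN ANALYZE SELECT * FROM t ORDER BY col;
  -- "Sort Method: external merge  Disk: ..." -- the only writes in play

  -- The alternative plan reads the rows in order and writes nothing:
  RESET enable_indexscan;
  RESET enable_indexonlyscan;
  SET enable_sort = off;
  EXPLAIN SELECT * FROM t ORDER BY col;
  -- Index Scan (or Index Only Scan) using t_col_idx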

What are the cases you foresee where costing reads/writes differently
will lead to better plans?

Greetings,

Andres Freund
