On Tue, Nov 12, 2019 at 4:12 PM Alexey Kondratov <a.kondra...@postgrespro.ru> wrote:
>
> On 04.11.2019 13:05, Kuntal Ghosh wrote:
> > On Mon, Nov 4, 2019 at 3:32 PM Dilip Kumar <dilipbal...@gmail.com> wrote:
> >> So your result shows that with "streaming on", performance is
> >> degrading? By any chance did you try to see where is the bottleneck?
> >>
> > Right. But, as we increase the logical_decoding_work_mem, the
> > performance improves. I've not analyzed the bottleneck yet. I'm
> > looking into the same.
>
> My guess is that 64 kB is just too small a value. In the table schema used
> for the tests, every row takes at least 24 bytes for storing column values.
> Thus, with this logical_decoding_work_mem value the limit should be hit
> after about 2500+ rows, or about 400 times during a transaction of
> 1,000,000 rows.
>
> It is just too frequent, while ReorderBufferStreamTXN includes a whole
> bunch of logic, e.g. it always starts an internal transaction:
>
> /*
>  * Decoding needs access to syscaches et al., which in turn use
>  * heavyweight locks and such. Thus we need to have enough state around to
>  * keep track of those. The easiest way is to simply use a transaction
>  * internally. That also allows us to easily enforce that nothing writes
>  * to the database by checking for xid assignments. ...
>  */
>
> Also, it issues separate stream_start/stop messages around each streamed
> transaction chunk. So if streaming starts and stops too frequently it
> adds additional overhead and may even interfere with the current
> in-progress transaction.
>

Yeah, I've also found the same. With the stream_start/stop messages, it
writes 1 byte of checksum and 4 bytes for the number of sub-transactions,
which increases the write amplification significantly.
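
For what it's worth, here is a rough back-of-the-envelope check of the
flush-frequency estimate above. It is only a sketch: it assumes the ~24
bytes of column data per row mentioned for the test schema and ignores the
per-change bookkeeping overhead in the reorder buffer, so the real number
of ReorderBufferStreamTXN invocations would only be higher than this.

#include <stdio.h>

int
main(void)
{
    /* logical_decoding_work_mem = 64kB, the value used in the tests */
    long    work_mem_bytes = 64 * 1024;
    /* per-row column data from the test schema, as estimated above */
    long    bytes_per_row = 24;
    long    total_rows = 1000000;

    /* rows buffered before the memory limit is hit: ~2730 */
    long    rows_per_flush = work_mem_bytes / bytes_per_row;
    /* streaming invocations over the whole transaction: ~366 */
    long    flushes = total_rows / rows_per_flush;

    printf("rows per flush: %ld, streaming invocations: %ld\n",
           rows_per_flush, flushes);
    return 0;
}

That gives roughly 2700 rows per flush and ~370 streaming rounds for the
1,000,000-row transaction, which matches the "about 2500+ rows, or about
400 times" figure quoted above.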
--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com