Hello,
do you have any advice on this?

Thanks a lot in advance

On Thu, 7 Mar 2019 at 09:39, Nicola Contu <nicola.co...@gmail.com>
wrote:

> So the first file is from Postgres 11.2 on a test server (where I compare
> 10 vs 11).
> The second file is from our preprod machine, also running Postgres 11.2
> (different hardware etc., it is a VM). I know that could be confusing, but
> I wanted to compare that too, because if you look at the two files there is
> a lot of difference between the two machines.
> They are both running CentOS 7.
>
> So at this point I have two problems: one on the test machine, between
> Postgres 10 and 11, and another on preprod (similar to prod), with a lot
> of lseek() calls.
>
> Sorry if this is confusing, hope it is clear now.
>
> Regarding partitions/tables: the first file involves just one table. The
> second file (with the huge lseek count) was from a test running on a single
> table, but meanwhile the database was accessible to the preprod web
> application, so it may have been hit by some users on other tables as well.
>
>
> Questions:
> 1) Is it possible that pgbench is not really a good tool for testing
> performance? If I use a SQL script of thousands of INSERTs and compare
> pg10 and pg11 on the same server, I get pretty much the same result
> (maybe better on pg11).
> 2) Regarding preprod, is there any way to reduce those lseek() calls? Just
> so you know, running the same insert script on both servers, the first
> server takes 2m and the second one takes 5-7m.
>
> Thanks a lot,
>
> On Thu, 7 Mar 2019 at 04:47, Thomas Munro <
> thomas.mu...@gmail.com> wrote:
>
>> On Thu, Mar 7, 2019 at 12:12 AM Nicola Contu <nicola.co...@gmail.com>
>> wrote:
>> > This, instead, is the strace of another server running the same version,
>> compiled the same way, but it is even slower.
>>
>> Huh.  That's a lot of lseek().  Some of these will be for random
>> reads/writes and will go away in v12, and some will be for probing the
>> size of relations while planning, and some while executing scans.  I
>> bet you could make some of them go away by using prepared statements.
>> Does the query in your test involve many partitions/tables?
>>
>> % time     seconds  usecs/call     calls    errors syscall
>> ------ ----------- ----------- --------- --------- ----------------
>>  32.50  143.010306           7  21044095           lseek
>>  26.21  115.354045          14   8144577           read
>>   6.18   27.185578          16   1669889        10 sendto
>>   5.29   23.300584          57    407528           fdatasync
>>   4.93   21.709522           9   2313529    824174 recvfrom
>>   3.31   14.547568          19    765897           write
>>   2.73   12.007486          14    867088     14494 epoll_wait
>>   2.18    9.597460          15    659871     84097 futex
>>   1.85    8.147759          14    567414           close
>>   1.77    7.767832          18    437656     11319 open
>>
>> The other results had 1 usec lseek(), and much fewer of them relative
>> to the number of reads and writes.  BTW, are you comparing v10 and v11
>> on the same hardware, kernel, filesystem?  Just wondering if there
>> could be some change in syscall overhead on different kernel patch
>> levels or something like that: we see 7 usec vs 1 usec in those two
>> files (though I have no idea how reliable these times are) and if
>> we're going to call it 21 million times at some point it might
>> matter...
>>
>> --
>> Thomas Munro
>> https://enterprisedb.com
>>
>

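One way to try the prepared-statement suggestion quoted above is pgbench's query-mode switch, which makes each client PREPARE its statements once instead of planning every query from scratch. A minimal sketch; the database name `testdb` and the client/duration settings are placeholders, not taken from the thread:

```shell
# Initialize the standard pgbench tables in a scratch database (placeholder name).
pgbench -i testdb

# Same workload, two query protocols: with -M prepared, each client prepares
# its statements once, so per-query planning (and the relation-size probes
# that show up as lseek() in strace) happens far less often.
pgbench -M simple   -c 8 -j 4 -T 60 testdb
pgbench -M prepared -c 8 -j 4 -T 60 testdb
```

Comparing the two tps numbers (and re-running strace -c during each run) would show how much of the lseek() volume is attributable to repeated planning.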