Timing even fairly simple things repeatedly is hard enough, but when it
involves access to slower media, and especially network connections to
servers, the number of things that can change is enormous. There is caching
at many levels, depending on your hardware, and there is resource contention
with other programs running here and there, as well as on the network, the
bus, or just the hard disk. As a general rule, asking for anything to be
repeated multiple times in a row can make your results seem slower or faster
depending on too many factors, including what else is running on your
machine.

I am wondering whether one approach to running something N times that may
average things out a bit is simply to put in a pause. Have your program wait
a few minutes between attempts, and perhaps even do other work within the
loop so that the resources you would rather not find already cached or
queued have a chance to be flushed as other things take their place.
Obviously, on a machine or system with lots of resources it may take more
effort to push enough new data through to displace the old.
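
As a very rough sketch of that idea in Python (assuming psycopg2 as the
driver; the connection string, the query, and the pause length are
placeholders rather than anything from the original posts):

    import statistics
    import time

    import psycopg2  # assumption: psycopg2 is the driver in use

    DSN = "dbname=mydb"                      # placeholder connection string
    QUERY = "SELECT count(*) FROM my_table"  # placeholder query
    N = 10
    PAUSE_SECONDS = 180                      # "wait a few minutes" between attempts

    durations = []
    for attempt in range(N):
        conn = psycopg2.connect(DSN)
        try:
            with conn.cursor() as cur:
                start = time.perf_counter()
                cur.execute(QUERY)
                cur.fetchall()               # force the whole result to be read
                durations.append(time.perf_counter() - start)
        finally:
            conn.close()
        if attempt < N - 1:
            time.sleep(PAUSE_SECONDS)        # let other activity displace cached data

    print("mean %.3fs  stdev %.3fs  min %.3fs"
          % (statistics.mean(durations), statistics.stdev(durations),
             min(durations)))

Any other disk- or memory-heavy work you want to run inside the loop to push
the cached data out would go where the sleep is.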

Good luck. Getting reliable numbers is no easy feat, and someone else may
have trouble duplicating your results on a somewhat different setup.

-----Original Message-----
From: Python-list <python-list-bounces+avi.e.gross=gmail....@python.org> On 
Behalf Of Albert-Jan Roskam via Python-list
Sent: Sunday, September 17, 2023 5:02 AM
To: Peter J. Holzer <hjp-pyt...@hjp.at>
Cc: python-list@python.org
Subject: Re: Postgresql equivalent of Python's timeit?

   On Sep 15, 2023 19:45, "Peter J. Holzer via Python-list"
   <python-list@python.org> wrote:

      On 2023-09-15 17:42:06 +0200, Albert-Jan Roskam via Python-list wrote:
      >    This is more related to Postgresql than to Python, I hope this is ok.
      >    I want to measure Postgres queries N times, much like Python timeit
      >    (https://docs.python.org/3/library/timeit.html). I know about EXPLAIN
      >    ANALYZE and psql \timing, but there's quite a bit of variation in the
      >    times. Is there a timeit-like function in Postgresql?

     Why not simply call it n times from Python?

      (But be aware that calling the same query n times in a row is likely to
      be unrealistically fast because most of the data will already be in
      memory.)

   =====
   Thanks, I'll give this a shot. Hopefully the caching is not an issue if I
   don't re-use the same database connection.
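
For what it's worth, a timeit-style version of Peter's suggestion might look
roughly like the sketch below (again assuming psycopg2; the connection string
and query are placeholders). Opening a fresh connection for every run avoids
reusing anything held by the driver or the session, but it does not clear
PostgreSQL's shared buffers or the operating system's page cache, so repeated
runs can still come out faster than a truly cold query.

    import timeit
    from contextlib import closing

    import psycopg2  # assumption: psycopg2 is the driver in use

    DSN = "dbname=mydb"                      # placeholder connection string
    QUERY = "SELECT count(*) FROM my_table"  # placeholder query

    def run_once():
        # A brand-new connection for every run; note that connection setup
        # time is therefore included in the measurement.  psycopg2's
        # "with connection" block only manages the transaction, so
        # contextlib.closing is used to actually close the connection.
        with closing(psycopg2.connect(DSN)) as conn:
            with conn.cursor() as cur:
                cur.execute(QUERY)
                cur.fetchall()

    # timeit-like: five samples of three runs each, report the best per-run time
    samples = timeit.repeat(run_once, number=3, repeat=5)
    print("best per-run time: %.3fs" % (min(samples) / 3))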