Re: Explain plan shows fewer shared blocks when index+table compared to index alone?

2023-04-04 Thread Sergei Kornilov
Hello This block is the visibility map being read and checked, I think. We don't have to check the visibility map during a plain index scan: we still have to fetch the tuple from the table, so we can check visibility for the current transaction there. With an index-only scan, we need to check the visibility map…

Re: VACUUM: Nonremovable rows due to wal sender process

2022-01-04 Thread Sergei Kornilov
Hello This is exactly why you need to track the age of the oldest transaction on the primary itself and on every replica that has hot_standby_feedback = on. By default hot_standby_feedback is disabled. > Is there anything I can do short of shutting down and restarting the primary > (…
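A sketch of queries for tracking this, using standard monitoring views (column availability assumes PostgreSQL 9.4 or later):

```sql
-- On the primary: age of the oldest open transaction.
SELECT pid, backend_xmin,
       age(backend_xmin) AS xmin_age,
       now() - xact_start AS xact_duration
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC;

-- Also on the primary: the xmin each standby reports back
-- when hot_standby_feedback = on.
SELECT application_name, backend_xmin FROM pg_stat_replication;

-- On each replica: check whether feedback is enabled.
SHOW hot_standby_feedback;
```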

Re: Partial index creation always scans the entire table

2020-02-15 Thread Sergei Kornilov
Hello > When creating partial indexes, can postgres utilize another index for > figuring which rows should be included in the partial index, without > performing a full table scan? No. CREATE INDEX always performs a seqscan on the table, and CREATE INDEX CONCURRENTLY performs two full table scans. regards, Sergei
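An illustrative sketch (the `orders` table and predicate are hypothetical, not from the thread):

```sql
-- A partial index is built with a full scan of the table, even if an
-- index on the predicate column already exists.
CREATE INDEX orders_unshipped_idx
    ON orders (customer_id)
    WHERE shipped = false;

-- CREATE INDEX CONCURRENTLY avoids the long exclusive lock,
-- at the cost of scanning the table twice.
CREATE INDEX CONCURRENTLY orders_unshipped_idx2
    ON orders (customer_id)
    WHERE shipped = false;
```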

Re: Searching in varchar column having 100M records

2019-07-17 Thread Sergei Kornilov
Hello Please recheck with track_io_timing = on in the configuration. With this option, explain (analyze, buffers) will report how much time is spent on I/O. > Buffers: shared hit=2 read=31492 31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using an HDD. Your query reads table data from disk (…
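A minimal sketch of the suggested check (table and predicate names are placeholders):

```sql
-- Enabling I/O timing requires appropriate privileges; it can also be
-- set in postgresql.conf for all sessions.
SET track_io_timing = on;

-- With track_io_timing enabled, EXPLAIN (ANALYZE, BUFFERS) adds an
-- "I/O Timings" line showing milliseconds spent reading blocks.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table WHERE some_column LIKE '%needle%';
```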

Re: using sequential scan instead of index for join with a union

2019-05-17 Thread Sergei Kornilov
Hi Please check the datatypes in the UNION ALL parts. Possibly the notice.id or notice.mbct_id datatype does not match those of the other tables. regards, Sergei
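One way to inspect the column types across the branches (the second table name is a placeholder):

```sql
-- Compare the declared types of the columns used in the UNION ALL.
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_name IN ('notice', 'other_table')
  AND column_name IN ('id', 'mbct_id')
ORDER BY table_name, column_name;

-- If a branch mixes types, an explicit cast on the narrower side can
-- let the planner use an index instead of a sequential scan, e.g.:
-- SELECT id::bigint FROM notice UNION ALL SELECT id FROM other_table;
```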

Re: Log size in bytes of query result

2019-05-10 Thread Sergei Kornilov
Hi > extension that hooks into PostgreSQL Do we have any hooks that can be used for such purposes? Sometimes I think about how to implement "bytes sent to client"/"bytes received from client" counters in pg_stat_statements, but I have not found a good place. Where can we accumulate such counters, and how can they…

Re: How to get the content of Bind variables

2019-03-02 Thread Sergei Kornilov
Hello PostgreSQL does not log statement parameters on log_lock_waits, because this is not implemented: https://github.com/postgres/postgres/blob/REL_10_STABLE/src/backend/storage/lmgr/proc.c#L1461 Compare with the errdetail_params routine in this file: https://github.com/postgres/postgres/blob/REL_1…

Re: checkpoint occurs very often when vacuum full running

2018-11-15 Thread Sergei Kornilov
Hi > I mean basically the wals should contain the changes, and vacuum full changes > the location of the data and not actually the data. Row location is data. For example, an index lookup relies on the TID (tuple id, the hidden ctid column): the physical row address in the datafile. PostgreSQL WAL is about physical…
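The hidden column can be inspected directly; a quick sketch (table name is a placeholder):

```sql
-- ctid is the physical (block, offset) address of each row version.
-- VACUUM FULL rewrites the table into a new file, so these addresses
-- change, and that rewrite must go through WAL like any other data.
SELECT ctid, * FROM some_table LIMIT 5;
```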

Re: checkpoint occurs very often when vacuum full running

2018-11-15 Thread Sergei Kornilov
Hi A checkpoint can occur due to a timeout (checkpoint_timeout) or due to the amount of WAL written (max_wal_size). VACUUM FULL writes all data through the WAL and therefore may trigger checkpoints more frequently. regards, Sergei
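The relevant settings can be checked in one query, e.g.:

```sql
-- Show the two triggers for checkpoints plus the spread target.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('checkpoint_timeout',
               'max_wal_size',
               'checkpoint_completion_target');
```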

Re: To keep indexes in memory, is large enough effective_cache_size enough?

2018-09-19 Thread Sergei Kornilov
Hi effective_cache_size is not a cache. It is just an approximate value for the query planner: how much data can be found in RAM (both in shared_buffers and in the OS page cache). > Q: Size of shared_buffers does not matter regarding keeping index in memory? shared_buffers is the cache for both table and index pages.

Re: How to see/calculate size of index in memory?

2018-09-19 Thread Sergei Kornilov
Hello You can use the pg_buffercache contrib module: https://www.postgresql.org/docs/current/static/pgbuffercache.html pg_relation_size - yes, that is the full size on disk, regardless of the buffer cache. regards, Sergei
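A sketch of how pg_buffercache answers the question (the index name is a placeholder; 8192 assumes the default block size):

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- How much of a given index currently sits in shared_buffers.
SELECT count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached_size
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('my_index'::regclass);

-- Full on-disk size, for comparison.
SELECT pg_size_pretty(pg_relation_size('my_index'::regclass));
```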

Re: Performance of INSERT into temporary tables using psqlODBC driver

2018-09-09 Thread Sergei Kornilov
Hello > The data to be inserted into temporary tables is obtained from one or more > queries run earlier and the data is available as a vector of strings. Can you not use an "insert into temp_table select /* anything you wish */" statement? Or even INSERT ... SELECT ... RETURNING if you need to receive data…
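A sketch of the server-side alternative (table and column names are illustrative):

```sql
-- Instead of many single-row INSERTs from the client, build the
-- temporary table server-side in one statement.
CREATE TEMP TABLE temp_table AS
SELECT id, name FROM source_table WHERE active;

-- Or, if the temp table already exists:
INSERT INTO temp_table (id, name)
SELECT id, name FROM source_table WHERE active
RETURNING id;  -- only if the client needs the inserted rows back
```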

Re: trying to delete most of the table by range of date col

2018-09-03 Thread Sergei Kornilov
Hello >  Delete on my_table  (cost=0.00..65183.30 rows=1573862 width=6) (actual > time=5121.344..5121.344 rows=0 loops=1) >    ->  Seq Scan on my_table  (cost=0.00..65183.30 rows=1573862 width=6) > (actual time=0.012..2244.393 rows=1572864 loops=1) >          Filter: ((end_date <= to_date('12/12

Re: Simple Query Elapsed Time ~ 3 hrs Using Bitmap Index/Heap Scan

2018-06-05 Thread Sergei Kornilov
Hello Try using an index on btree(vclf_number, cl_value) instead of btree(vclf_number). regards, Sergei
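The suggestion as a statement (the table name is a placeholder; the column names come from the thread):

```sql
-- A composite index covering both filter columns lets the index scan
-- discard non-matching rows without visiting their heap pages.
CREATE INDEX ON some_table USING btree (vclf_number, cl_value);
```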

Re: pg_upgrade help

2018-04-18 Thread Sergei Kornilov
Hi Both versions should be stopped cleanly; pg_upgrade starts the clusters itself. Please check the pg_upgrade_server.log file in the directory where pg_upgrade was run. Also, where is postgresql.conf? In PGDATA? Otherwise you need to tell pg_upgrade the correct path, for example with the option '-o " -c config_file=…

Re: 8.2 Autovacuum BUG ?

2018-01-22 Thread Sergei Kornilov
Hello How big is the database? Please show the result of this query: select * from pg_stat_activity where query like 'autovacuum%'; I think an anti-wraparound autovacuum is running here. In that case everything is normal: anti-wraparound vacuum will produce a lot of WAL, and this is necessary for the database to keep working.

Re: Updating a large table

2018-01-09 Thread Sergei Kornilov
Hello > 1. ALTER TABLE clusters ADD COLUMN "is_paid" BOOLEAN DEFAULT FALSE; This is wrong. To avoid a long table lock you need DEFAULT NULL: ALTER TABLE clusters ADD COLUMN "is_paid" BOOLEAN DEFAULT NULL; DEFAULT NULL changes only the system catalog; a default with any non-null value will rewrite all rows…
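A sketch of the pattern (the batching predicate is illustrative; the advice applies to PostgreSQL releases before 11, where a non-null default forced a table rewrite):

```sql
-- Metadata-only change: fast, takes only a brief lock.
ALTER TABLE clusters ADD COLUMN "is_paid" BOOLEAN DEFAULT NULL;

-- Then backfill in batches to avoid one long-running UPDATE
-- that locks and rewrites the whole table at once.
UPDATE clusters
SET is_paid = false
WHERE id BETWEEN 1 AND 100000
  AND is_paid IS NULL;
```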

Re: Prepared Transactions

2017-12-11 Thread Sergei Kornilov
Hello! You need prepared transactions only if you need two-phase commit to provide atomic distributed transactions across multiple databases. If you do not need distributed transactions, you do not need prepared transactions at all. But if you do need distributed transactions, there is no other choice…
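A minimal two-phase-commit sketch (requires max_prepared_transactions > 0; the table and transaction id are hypothetical):

```sql
BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);  -- illustrative work
PREPARE TRANSACTION 'tx_42';  -- phase 1: the transaction survives crashes
-- ...the external coordinator collects votes from all databases...
COMMIT PREPARED 'tx_42';      -- phase 2 (or ROLLBACK PREPARED 'tx_42')
```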

Re: insert and query performance on big string table with pg_trgm

2017-12-05 Thread Sergei Kornilov
>Buffers: shared hit=544 read=6760 dirtied=4034 >I/O Timings: read=69709.611 You have very slow (or busy) disks; this is not a PostgreSQL issue. Reading 6760 * 8KB blocks in 70 seconds is a very bad result. For better performance you need better disks, at least RAID 10 (not RAID 5), and much more memory in shared…