Fw: Re: Bitmap scan seem like such a strange choice when "limit 1"
--- Original Message ---
On Monday, September 7, 2020 9:04 AM, Klaudie Willis wrote:

> Maybe the right way to improve this is to bypass add_paths_to_append_rel
> entirely when there's exactly one surviving child rel, and make it
> just use all the surviving paths for that child.
>
> regards, tom lane

Does this classify as a bug, or is it "by design"? Should I submit one for PG13beta3?

To me, this seems like quite a straightforward case: a big partitioned database, two key lookups, both indexed, where one key isolates a partition, plus a LIMIT constraint. Yet the planner chooses a plan that is 1000x slower than the optimal one.

K
Fw: Re: Bitmap scan seem like such a strange choice when "limit 1"
>> t > '2020-01-01'::timestamp and t < '2021-01-01'::timestamp
>
> Not at all important, but it seems odd to be exclusive of the start and end
> both. I would consider including the start with >=
>
> Michael Lewis | Database Engineer
> Entrata

Michael, funny, I was thinking that myself minutes after posting. Perhaps it is that tiny gap that makes a difference; however, changing it to t >= '2020-01-01', perfectly matching the partition range, did not change anything of significance in the explain output or runtime. :-|

On that other topic, n_distinct: it is for the moment indeed hardcoded to -0.1. I have tried to reset n_distinct and run ANALYZE with default_statistics_target = 2000; no dice! However, the car_ref value in question is present in most_common_vals of pg_stats, and according to that frequency array, the value occurs with a frequency of 1.7%. That seems correct:

select count(*) from bigtable
where car_ref = 1769854207
  and t >= '2020-01-01'::timestamp and t < '2021-01-01'::timestamp;
--> 2 817 169

I can add that car_ref in general is quite skewed in its distribution, but I don't think that is the issue here. I think the key hint is that when targeting the partition child table directly, the plan changes.
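For anyone wanting to reproduce the statistics lookup described above, the relevant numbers can be read straight out of the pg_stats view (a generic sketch; the table and column names are taken from the thread):

```sql
-- Inspect planner statistics for car_ref on the parent table.
-- most_common_freqs should show roughly 0.017 for the value 1769854207.
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'bigtable'
  AND attname   = 'car_ref';
```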
See below for "proof":

explain (analyze,buffers)
select * from bigtable
where car_ref = 1769854207
  and t >= '2020-01-01'::timestamp and t < '2021-01-01'::timestamp
limit 1;

Limit  (cost=24961.76..24962.67 rows=1 width=636) (actual time=1456.315..1456.316 rows=1 loops=1)
  Buffers: shared hit=2377
  ->  Bitmap Heap Scan on bigtable_y2020 bigtable  (cost=24961.76..2640351.94 rows=2874279 width=636) (actual time=1456.313..1456.314 rows=1 loops=1)
        Recheck Cond: (car_ref = 1769854207)
        Filter: ((t >= '2020-01-01 00:00:00'::timestamp without time zone) AND (t < '2021-01-01 00:00:00'::timestamp without time zone))
        Heap Blocks: exact=1
        Buffers: shared hit=2377
        ->  Bitmap Index Scan on bigtable_2020_ref_index  (cost=0.00..24243.19 rows=2874336 width=0) (actual time=721.428..721.428 rows=2817169 loops=1)
              Index Cond: (car_ref = 1769854207)
              Buffers: shared hit=2376
Planning Time: 0.321 ms
Execution Time: 1480.087 ms

explain (analyze,buffers)
select * from bigtable_y2020 tt
where car_ref = 1769854207
  and t >= '2020-01-01'::timestamp and t < '2021-01-01'::timestamp
limit 1;

Limit  (cost=0.57..1.60 rows=1 width=636) (actual time=0.037..0.038 rows=1 loops=1)
  Buffers: shared hit=5
  ->  Index Scan using bigtable_2020_ref_index on bigtable_y2020 tt  (cost=0.57..2967225.58 rows=2874279 width=636) (actual time=0.036..0.036 rows=1 loops=1)
        Index Cond: (car_ref = 1769854207)
        Filter: ((t >= '2020-01-01 00:00:00'::timestamp without time zone) AND (t < '2021-01-01 00:00:00'::timestamp without time zone))
        Buffers: shared hit=5
Planning Time: 0.349 ms
Execution Time: 0.106 ms

best regards
K
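Not part of the original thread, but one way to double-check that the cheap index-scan path also exists when going through the parent table is to discourage bitmap scans for the session; enable_bitmapscan is a standard planner GUC, so this is diagnostic only, not a recommended fix:

```sql
-- Diagnostic sketch: with bitmap scans penalized, the planner should
-- fall back to the plain index scan under the Append node.
SET enable_bitmapscan = off;

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM bigtable
WHERE car_ref = 1769854207
  AND t >= '2020-01-01'::timestamp
  AND t <  '2021-01-01'::timestamp
LIMIT 1;

RESET enable_bitmapscan;
```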
compatibility matrix between client and server
Hello,

Is there some matrix showing which PG client versions, especially the shared libraries of ESQL/C, can work with which PG server versions? We have a case where a customer wants to use his own PG server, perhaps even on Windows, with our 11.4 client version (compiled by us) and accessed via ESQL/C.

Thanks

matthias
--
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
Re: compatibility matrix between client and server
On Mon, 2020-09-07 at 11:22 +0200, Matthias Apitz wrote:
> Is there some matrix about which PG client versions, esp. the shared
> libs of ESQL/C can work with which PG server versions. We have a case
> where a customer wants to use his own PG server, perhaps even on
> Windows, with our 11.4 client version (compiled by us) and used via
> ESQL/C.

There is no such matrix, but since the protocol hasn't changed in a long time, anything but truly ancient servers should work. The bigger problem is that some of your SQL statements might fail if they use syntax that didn't exist back then.

However, if I were you, I'd refuse to support any PostgreSQL major version that is no longer supported by the project:
https://www.postgresql.org/support/versioning/

So, nothing older than 9.5.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
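Building on the advice above, a simple defensive approach is to check the server version at connect time and refuse unsupported majors. A minimal sketch (the threshold and connection target are assumptions, not from the thread):

```sql
-- server_version_num encodes the version as an integer,
-- e.g. 110009 for 11.9, 90524 for 9.5.24.
SELECT current_setting('server_version_num')::int AS server_version;

-- An application could then reject anything older than 9.5:
--   if (server_version < 90500) refuse to run;
```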
Re: compatibility matrix between client and server
On 9/7/20 9:06 AM, Laurenz Albe wrote:
[snip]
> However, if I were you, I'd refuse to support any PostgreSQL major
> version that is no longer supported by the project:
> https://www.postgresql.org/support/versioning/
> So, nothing older than 9.5.

That's grossly unrealistic.

--
Angular momentum makes the world go 'round.
Re: BUG? Slave don't reconnect to the master
Hi

> On 3 Sep 2020, at 14:39, Jehan-Guillaume de Rorthais wrote:
>
> On Mon, 24 Aug 2020 18:45:42 +0300
> Олег Самойлов wrote:
>
>>> On 21 Aug 2020, at 17:26, Jehan-Guillaume de Rorthais wrote:
>>>
>>> On Thu, 20 Aug 2020 15:16:10 +0300
>>> Based on setup per node, you can probably add
>>> 'synchronous_commit=remote_write' in the common conf.
>>
>> Nope. I set 'synchronous_commit=remote_write' only for 3 and 4 node clusters.
>> [...]
>
> Then I suppose your previous message had an error as it shows three
> nodes tuchanka3a, tuchanka3b and tuchanka3c (no 4th node), all with
> remote_write in krogan3.conf. But anyway.

I tested 4 different types of clusters. Clusters 1 and 2 have two nodes and thus don't reveal this bug. Clusters 3 and 4 have 3 and 4 nodes respectively, and there the bug is observed. I used cluster 3 as the example.

> [...]
>>>> pacemaker config, specific for this cluster: [...]
>>>
>>> why did you add "monitor interval=15"? No harm, but it is redundant with
>>> "monitor interval=16 role=Master" and "monitor interval=17 role=Slave".
>>
>> I can't remember clearly. :) Look what happens without it.
>>
>> + pcs -f configured_cib.xml resource create krogan2DB ocf:heartbeat:pgsqlms
>>   bindir=/usr/pgsql-11/bin pgdata=/var/lib/pgsql/krogan2
>>   recovery_template=/var/lib/pgsql/krogan2.paf meta master notify=true
>>   resource-stickiness=10
>> Warning: changing a monitor operation interval from 15 to 16 to make the
>> operation unique
>> Warning: changing a monitor operation interval from 16 to 17 to make the
>> operation unique
>
> Something fishy here. This command lacks op monitor settings. Pacemaker doesn't
> add any default monitor operation with a default interval if you don't give one
> at resource creation.
>
> If you create such a resource with no monitoring, the cluster will start/stop
> it when needed, but will NOT check for its health. See:
>
> https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Pacemaker_Explained/s-resource-monitoring.html

Maybe.
But keep in mind that I use `pcs`; I do not edit the XML file directly. And I use a rather old Pacemaker: the default package of CentOS 7 is pacemaker-1.1.21-4.el7.x86_64, while your documentation link is for Pacemaker 2.0. But never mind, this does not concern the discussed bug.

>> So trivial monitor always exists by default with interval 15.
>
> nope.

This is not true for CentOS 7. I removed my monitor options for this example:

pcs cluster cib original_cib.xml
cp original_cib.xml configured_cib.xml
pcs -f configured_cib.xml resource create krogan3DB ocf:heartbeat:pgsqlms bindir=/usr/pgsql-11/bin pgdata=/var/lib/pgsql/krogan3 recovery_template=/var/lib/pgsql/krogan3.paf meta master notify=true resource-stickiness=10
pcs -f configured_cib.xml resource create krogan3IP ocf:heartbeat:IPaddr2 nic=eth0 cidr_netmask=24 ip=192.168.89.35
pcs -f configured_cib.xml resource create krogan3s1IP ocf:heartbeat:IPaddr2 nic=eth0 cidr_netmask=24 ip=192.168.89.36
pcs -f configured_cib.xml resource create krogan3s2IP ocf:heartbeat:IPaddr2 nic=eth0 cidr_netmask=24 ip=192.168.89.37
pcs -f configured_cib.xml constraint colocation add krogan3IP with master krogan3DB-master INFINITY
pcs -f configured_cib.xml constraint order promote krogan3DB-master then start krogan3IP symmetrical=false
pcs -f configured_cib.xml constraint order demote krogan3DB-master then stop krogan3IP symmetrical=false kind=Optional
pcs -f configured_cib.xml constraint location krogan3s1IP rule score=-INFINITY master-krogan3DB lt integer 0
pcs -f configured_cib.xml constraint location krogan3s2IP rule score=-INFINITY master-krogan3DB lt integer 0
pcs -f configured_cib.xml constraint colocation add krogan3s1IP with slave krogan3DB-master INFINITY
pcs -f configured_cib.xml constraint colocation add krogan3s2IP with slave krogan3DB-master INFINITY
pcs -f configured_cib.xml constraint colocation add krogan3s1IP with krogan3s2IP -1000
pcs -f configured_cib.xml constraint order start krogan3DB-master then start krogan3s1IP
pcs -f configured_cib.xml constraint order start krogan3DB-master then start krogan3s2IP
pcs cluster cib-push configured_cib.xml --wait diff-against=original_cib.xml

13:44:27 j0 root@tuchanka3a:~ # pcs resource show krogan3DB-master
Master: krogan3DB-master
  Meta Attrs: notify=true resource-stickiness=10
  Resource: krogan3DB (class=ocf provider=heartbeat type=pgsqlms)
    Attributes: bindir=/usr/pgsql-11/bin pgdata=/var/lib/pgsql/krogan3 recovery_template=/var/lib/pgsql/krogan3.paf
    Operations: demote interval=0s timeout=120 (krogan3DB-demote-interval-0s)
                methods interval=0s timeout=5 (krogan3DB-methods-interval-0s)
                monitor interval=15 timeout=10 (krogan3DB-monitor-interval-15)
                monitor interval=16 role=Master timeout=10 (krogan3DB-monitor-interval-16)
                monitor interval=17 role=Slave timeout=10 (krogan3DB-monitor-interval-17)
                notify in
Re: Tuchanka
> On 3 Sep 2020, at 17:25, Jehan-Guillaume de Rorthais wrote:
>
> On Wed, 2 Sep 2020 20:33:09 +0300
> Олег Самойлов wrote:
>
>> The software is rather outdated. It works with PostgreSQL 11 and CentOS 7.
>> The next step will be upgrading to CentOS 8 and PostgreSQL 12. Please tell
>> me, is it useful and worth continuing? Where is the best place to announce
>> it? Maybe there exists a special mailing list for such things.
>
> I answered on the Pacemaker mailing list with a couple of links and questions
> to discuss before answering whether it is useful to continue.
>
> Thanks!

Okay, let's move to the Pacemaker mailing list.
Re: pldebugger extension for PostgreSQL 12 on CentOS 7 not found
Hi all!

Is it possible that pldebugger12 does not exist for CentOS 7?

Thanks a lot,
Diego

On 2020-09-07 10:16, Lucas Luengas wrote:
> For CentOS 6:
> https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-6-x86_64/pldebugger12-1.1-1.rhel6.1.x86_64.rpm
> https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-6-x86_64/pldebugger12-1.1-1.rhel6.x86_64.rpm
>
> And for CentOS 8:
> https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-8-x86_64/pldebugger12-1.1-1.rhel8.1.x86_64.rpm
>
> But not for CentOS 7:
> https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-7-x86_64/