Re: Cascade publication with pg_stat_replication empty
On the publisher it shows me "terminating walsender process due to replication timeout". But the problem is that this message occurs only 3 or 4 times a day. It tries to send data, gets a timeout, and shows the message. Then it waits a very long time before trying again, and I don't know why. wal_sender_timeout is configured to 1 hour. Yesterday I had 4 timeout messages in the server log but no data was replicated. So, did it try just those 4 times?
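For reference, a minimal sketch of how one might check the timeout setting and the current walsender state on the publisher (column names assume PostgreSQL 10 or later; an empty result means no walsender is connected at the moment):

-- Show the effective timeout on the publisher (1 hour in this case).
SHOW wal_sender_timeout;

-- Inspect walsender state and progress for any connected subscribers.
SELECT pid, application_name, state, sent_lsn, replay_lsn
FROM pg_stat_replication;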
Re: Cascade publication with pg_stat_replication empty
So, is there a way to kick off replication, to make the client restart the connection and try again? Because apparently it stays disconnected for hours before trying again, gets that timeout message, and then remains disconnected for hours again.
Re: Primary keys and composite unique keys (basic question)
On 2021-04-07 10:35:55 -0600, Rob Sargent wrote:
> On Apr 7, 2021, at 10:17 AM, Ron wrote:
> > On 4/5/21 9:37 PM, Rob Sargent wrote:
> >
> > It's a small thing, but UUIDs are absolutely not memorizable by
> > humans; they have zero semantic value. Sequential numeric identifiers
> > are generally easier to transpose and the value gives some clues to
> > its age (of course, in security contexts this can be a downside).
>
> I take the above as a definite plus. Spent too much of my life
> correcting others' use of "remembered" ids that just happened to
> perfectly match the wrong thing.
>
> > People seem to have stopped appending check digits to identifiers
> > about 20 years ago, and I'm not sure why.
>
> No, the problem is "start from one". The user has item/id 10875 in hand
> and types in 10785, which of course in a sequence-supplied ID stream is
> perfectly valid and wrong. Really hard to track down.

What I've often done when writing software for an Oracle RDBMS is to use a global sequence instead of one sequence per table (Oracle didn't have a SERIAL type, so you had to use an explicit sequence and write a trigger anyway). That caught wrong joins (an id value used in one table is never used in another table) as well as most typos (since ids in a single table were not dense).

With PostgreSQL I've stopped doing this, since the SERIAL type makes it much more convenient to have a separate sequence per table. But of course that means that almost any table will have a row with id 10785 and one with 10875.

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
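For illustration, a minimal sketch of that global-sequence approach in PostgreSQL; the table and column names are hypothetical, not from the thread:

-- One sequence shared by all tables instead of one sequence per table.
CREATE SEQUENCE global_id_seq;

CREATE TABLE customer (
    id   bigint PRIMARY KEY DEFAULT nextval('global_id_seq'),
    name text NOT NULL
);

CREATE TABLE invoice (
    id          bigint PRIMARY KEY DEFAULT nextval('global_id_seq'),
    customer_id bigint NOT NULL REFERENCES customer(id),
    total       numeric NOT NULL
);

-- Because both tables draw ids from the same sequence, an id that exists
-- in customer is very unlikely to also exist in invoice, so a join on the
-- wrong column (or a typo such as 10785 vs. 10875) tends to return no
-- rows instead of silently matching the wrong record.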
RE: Check constraint failure messages
From: Miles Elam
Sent: Tuesday, April 6, 2021 4:19 PM
> Sadly, this is a cloud-managed database without direct access to 5432
> from outside the VPC, and bastion instances are frowned upon by our
> security folks. Guess I'm stuck with bisecting. Thanks for the
> confirmation.

It'd slow you down a little, but you could install a BEFORE trigger to do the check; if it fails, you'll have all the info you need and can call RAISE to report a record identifier, bad value, or whatever. If performance is really a big deal, set up the trigger only after you get a failure, so you only have to run it on the 'bad batches'. Or else write a precheck program that looks at the data first, looks only for this problem, and can therefore report the exact record before it feeds the good data to your real loader. There are probably other ways I'm not thinking of; the question is what options your process allows, I suppose.

HTH,
Kevin
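A minimal sketch of that BEFORE trigger idea; the table my_table, its columns, and the is_valid_email() helper are hypothetical placeholders, and EXECUTE FUNCTION assumes PostgreSQL 11 or later (older versions use EXECUTE PROCEDURE):

-- Trigger function that re-runs the check and reports which row failed.
CREATE OR REPLACE FUNCTION check_email_trigger() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF NOT is_valid_email(NEW.email) THEN
        RAISE EXCEPTION 'bad email value % in row with id %',
            NEW.email, NEW.id;
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_email
    BEFORE INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE FUNCTION check_email_trigger();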
Re: Check constraint failure messages
Following up in case someone else runs into this problem. I changed the function the CHECK statement called to raise a warning. Not perfect, but noticeably better. I don't get the column that failed, but I do get what bad input gummed things up.

CREATE OR REPLACE FUNCTION po.confirm(p_val anyelement, p_validated boolean)
RETURNS boolean
LANGUAGE plpgsql STRICT IMMUTABLE PARALLEL SAFE
AS $$
BEGIN
    IF NOT p_validated THEN
        RAISE WARNING 'Invalid value: %', p_val;
    END IF;
    RETURN p_validated;
END;
$$;

COMMENT ON FUNCTION po.confirm(anyelement, boolean) IS
    'Raises a warning when a condition is false; useful for outputting CHECK constraint error values.';

CREATE DOMAIN po.email AS varchar
    CHECK (po.confirm(VALUE, VALUE IS NULL OR NOT po.email_expanded(VALUE) IS NULL));

Code is not seamless or DRY, but manageable.

- Miles

On Tue, Apr 6, 2021 at 2:18 PM Miles Elam wrote:
> On Tue, Apr 6, 2021 at 1:59 PM Ron wrote:
> >
> > The blunt force answer is to not use bulk inserts. Try COPY; it's good
> > at saying which record throws an error.
>
> Sadly, this is a cloud-managed database without direct access to 5432
> from outside the VPC, and bastion instances are frowned upon by our
> security folks. Guess I'm stuck with bisecting. Thanks for the
> confirmation.
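For context, a hedged usage sketch (the table name and sample value are hypothetical, and it assumes po.email_expanded returns NULL for an invalid address; exact message wording may vary by version):

CREATE TABLE po.contact (addr po.email);
INSERT INTO po.contact VALUES ('not-an-email');
-- Expected outcome: a WARNING ("Invalid value: not-an-email") emitted by
-- po.confirm, followed by the usual domain CHECK constraint violation
-- error, so the offending input is visible in the log.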
"index contains unexpected zero page" problem
Hi guys, We are facing the "index contains unexpected zero page" problem in a production PostgreSQL 11 environment. This URL says the database needs to be backed up first; is that necessary? https://cloudblue.freshdesk.com/support/solutions/articles/44001889599-error-index-tablename-contains-unexpected-zero-page Is creating a new index with "create index concurrently" and then dropping the old broken index a solution that avoids a table lock? Thanks, Best Regards, Dennis
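For illustration, a minimal sketch of the rebuild-and-swap approach asked about above; the index and table names are hypothetical, it assumes the broken index does not back a primary key or unique constraint, and taking a backup first is still prudent since a zero page can indicate wider corruption:

-- Build a replacement index without taking an exclusive table lock.
CREATE INDEX CONCURRENTLY my_table_col_idx_new ON my_table (col);

-- Drop the broken index, also without a long exclusive lock.
DROP INDEX CONCURRENTLY my_table_col_idx;

-- Optionally give the new index the old name.
ALTER INDEX my_table_col_idx_new RENAME TO my_table_col_idx;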