Error During Recovery
Hello,

I am getting the error below while recovering Postgres:

2020-09-11 05:26:54.498 UTC [200369] LOG:  database system was shut down at 2020-09-11 05:23:01 UTC
2020-09-11 05:26:54.500 UTC [200369] FATAL:  could not access status of transaction 78961
2020-09-11 05:26:54.500 UTC [200369] DETAIL:  Could not read from file "pg_multixact/offsets/0001" at offset 49152: Success.

Kindly suggest.

Thanks,
Krish
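The "at offset 49152: Success" wording usually indicates a short read rather than an I/O error: errno is never set, so the OS reports "Success", and the read failed because the 8 kB page starting at byte 49152 (page 6 of the segment) lies beyond the end of the file, i.e. the pg_multixact segment has been truncated. A minimal diagnostic sketch, assuming the cluster lives under a hypothetical $PGDATA:

    # Page 6 starts at 6 * 8192 = 49152, so the file must be at least
    # 49152 + 8192 = 57344 bytes for this read to succeed.
    ls -l "$PGDATA/pg_multixact/offsets/0001"

    # A workaround sometimes suggested on the lists -- only on a spare
    # copy of the data directory, after a full file-level backup -- is
    # to extend the segment with zero bytes to the required size:
    truncate -s 57344 "$PGDATA/pg_multixact/offsets/0001"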
Feature Request: pgadmin4 support for more storage backends for servers/user data
Good Day

Firstly, I apologise if this has been formally requested before, or if there is a better place for this to go.

While trying to debug why the docker image would not deploy and run on Azure Kubernetes Service (AKS), I finally got to the bottom of it: the AzureFile, and possibly also AzureDisk, storage backends have issues with SQLite and locking (https://github.com/kubernetes/kubernetes/issues/59755). While I was able to work around the issue with the nobrl mount option mentioned on the linked issue, it does raise concerns about potential data corruption of the SQLite DB in the event of any issues. At least as far as AKS is concerned, there isn't any real immediate improvement to be had.

It got me thinking: it would be really great if pgadmin could support alternative storage backends for this data; my first prize would obviously be support for postgres as a backend. While I appreciate there aren't particularly any performance concerns with SQLite as it's used in pgadmin4, it does potentially cause challenges when/if you need to scale the deployment, and in how you manage things at that point. If other backends were available, it really would make life easier from a management perspective once you need to start scaling to multiple instances of pgadmin, or want to create DR scenarios and replication of the data.

Anthony Somerset
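For reference, the nobrl workaround mentioned above can be set declaratively on AKS; a minimal sketch of a StorageClass carrying the mount option, under stated assumptions (the class name and the uid/gid values are placeholders, and the in-tree azure-file provisioner shown here is what clusters of this era used):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: azurefile-nobrl           # placeholder name
    provisioner: kubernetes.io/azure-file
    parameters:
      skuName: Standard_LRS
    mountOptions:
      - nobrl                         # disable byte-range locks so SQLite can lock its DB file
      - uid=5050                      # assumed pgadmin container user; adjust to your image
      - gid=5050

Any PersistentVolumeClaim referencing this class would then get the CIFS share mounted with nobrl, avoiding the per-pod workaround.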
Re: Feature Request: pgadmin4 support for more storage backends for servers/user data
On 9/11/20 7:25 AM, Anthony Somerset wrote:
> Good Day
>
> Firstly I apologise if this has been formally requested before or if
> there is a better place for this to go..

https://www.pgadmin.org/support/issues/

https://azure.microsoft.com/en-us/support/create-ticket/

> [...]

-- 
Adrian Klaver
adrian.kla...@aklaver.com
Inserting many rows using "with"
Hi all,

I've got a table with subscriptions to some kind of campaigns, and every contact can have at most one running subscription per campaign:

    CREATE TABLE subscriptions (
        id SERIAL NOT NULL PRIMARY KEY
      , campaign_id INT NOT NULL
      , contact_id INT NOT NULL
      , done BOOL NOT NULL
    );

    CREATE UNIQUE INDEX subscriptions_running
        ON subscriptions (campaign_id, contact_id)
        WHERE NOT done;

Now I want to add, say, 100_000 contacts to a subscription. The contacts may or may not already have a subscription; I want to simply ignore the ones which already have a not-done subscription. I begin a transaction, and loop over the contact IDs with:

    INSERT INTO subscriptions (campaign_id, contact_id, done)
    VALUES ($1, $2, FALSE)
    ON CONFLICT (campaign_id, contact_id) WHERE NOT done DO NOTHING
    RETURNING id;

This does what it should do, so in that sense it's fine :) But it's still a network roundtrip per statement, and it takes a while to run. How about doing it in a single query, sending all contact_ids at once:

    WITH ids AS (SELECT unnest($2::int[]) AS contact_id)
    INSERT INTO subscriptions (campaign_id, contact_id, done)
    SELECT $1, ids.contact_id, FALSE
    FROM ids
    ON CONFLICT (campaign_id, contact_id) WHERE NOT done DO NOTHING
    RETURNING id;

where $2 is an array of integers. This is a single query, and it behaves the same as the loop above. And it's indeed faster.

Are there any known problems with this strategy? Are there any other methods of inserting lots of records in a nicer way?

Alternatives I know of, and the only options I could find documented:
- create a huge custom insert statement
- use COPY, but that doesn't work with 'ON CONFLICT' as far as I can see

Any thoughts? Thanks!

Harmen
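The same statement can be exercised directly in psql by substituting literals for the parameters; a minimal sketch with a made-up campaign ID 42 and three hypothetical contact IDs:

    WITH ids AS (SELECT unnest(ARRAY[101, 102, 103]) AS contact_id)
    INSERT INTO subscriptions (campaign_id, contact_id, done)
    SELECT 42, ids.contact_id, FALSE
    FROM ids
    ON CONFLICT (campaign_id, contact_id) WHERE NOT done DO NOTHING
    RETURNING id;

    -- Running it twice returns three ids the first time and none the
    -- second, since the not-done rows now conflict.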
Re: Inserting many rows using "with"
On Fri, Sep 11, 2020 at 1:31 PM Harmen wrote:
> Hi all,
>
> Are there any known problems with this strategy? Are there any other
> methods of inserting lots of records in a nicer way?

I do this all the time with insert and it's wonderful. It can get tricky if you need to do UPDATEs. You can avoid the unnest and just use value rows directly if you want (which can be easier to read):

    with ids(id) as (
      values (1), (2), (3)
    )
    select id from ids;

Further, you can use this technique with multiple CTEs to insert into completely different tables all in the same query, saving lots of round trips.

> Alternatives I know of, and the only options I could find documented:
> - create a huge custom insert statement
> - use COPY, but that doesn't work with 'ON CONFLICT' as far as I can see

COPY is great for large amounts of data. If it's a huge amount of data, you can load it into a temp table with COPY, and then use your ON CONFLICT insert to implement your logic.
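A minimal sketch of that COPY-into-a-temp-table approach, using the subscriptions table from upthread (the staging table name, the campaign ID 42, and the file path are placeholders):

    BEGIN;

    -- Staging table for the raw IDs; dropped automatically at commit.
    CREATE TEMP TABLE staging_contacts (contact_id INT NOT NULL) ON COMMIT DROP;

    -- Bulk-load in one round trip; from psql, \copy does the same from
    -- the client side without needing server file access.
    COPY staging_contacts (contact_id) FROM '/path/to/contact_ids.csv' (FORMAT csv);

    -- A single statement then applies the ON CONFLICT logic to the batch.
    INSERT INTO subscriptions (campaign_id, contact_id, done)
    SELECT 42, contact_id, FALSE
    FROM staging_contacts
    ON CONFLICT (campaign_id, contact_id) WHERE NOT done DO NOTHING;

    COMMIT;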