Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Juliano
I didn't know that system catalogs could be saved as a schema, thanks Achilleas. So, I would like to get the data and the schema of the system catalogs. How can I include the data in a dump file with system schemas? (Does the parameter -a make sense together with the parameter --schema?) Regards, Juliano
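For reference, a sketch of the pg_dump invocations being discussed (the database name "mydb" is illustrative; whether pg_catalog can be dumped at all is exactly what the rest of this thread establishes):

```shell
# Dump schema and data of the pg_catalog schema only.
pg_dump --schema=pg_catalog mydb > catalog.sql

# -a (data only) and --schema are independent options, so combining
# them restricts the dump to the data of the named schema.
pg_dump --schema=pg_catalog -a mydb > catalog_data.sql
```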

[GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-28 Thread Thomas Güttler
Hi, PostgreSQL is rock solid and one of the most reliable parts of our toolchain, thank you. Up to now, we don't store files in PostgreSQL. I was told that you must not do this, but this was 20 years ago. I have 2.3 TBytes of files. File count is 17M. Up to now we use rsync (via rsnaps…

Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Adrian Klaver
On 11/28/2016 03:40 AM, Juliano wrote: I didn't know that system catalogs could be saved as a schema, thanks Achilleas. So, I would like to get the data and the schema of the system catalogs. How can I include the data in a dump file with system schemas? (Does the parameter -a make sense together with the p…

Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-28 Thread Adrian Klaver
On 11/28/2016 06:28 AM, Thomas Güttler wrote: Hi, PostgreSQL is rock solid and one of the most reliable parts of our toolchain, thank you. Up to now, we don't store files in PostgreSQL. I was told that you must not do this, but this was 20 years ago. I have 2.3 TBytes of files. File c…

Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-28 Thread Chris Travers
On Mon, Nov 28, 2016 at 3:28 PM, Thomas Güttler <guettl...@thomas-guettler.de> wrote: > Hi, > > PostgreSQL is rock solid and one of the most reliable parts of our > toolchain. > > Thank you > > Up to now, we don't store files in PostgreSQL. > > I was told that you must not do this, but th…

Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Melvin Davidson
On Mon, Nov 28, 2016 at 9:33 AM, Adrian Klaver wrote: > On 11/28/2016 03:40 AM, Juliano wrote: >> I didn't know that system catalogs could be saved as a schema, thanks >> Achilleas. >> >> So, I would like to get the data and the schema of the system catalogs. >> How can I include the data in a dum…

Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Adrian Klaver
On 11/28/2016 07:44 AM, Melvin Davidson wrote: *To clarify, you cannot dump the pg_catalog schema. It is the main control of how all other objects are* Actually you can. I would not have thought so, but I tried Achilleas's suggestion and it worked. Example: -- -- Name: pg_aggregate; Type: T…

Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-28 Thread Daniel Verite
Thomas Güttler wrote: > Up to now we use rsync (via rsnapshot) to back up our data. > > But it takes longer and longer for rsync to detect > the changes. Rsync checks many files. But daily only > very few files really change. More than 99.9% don't. lsyncd+rsync has worked nicely for me on…
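For context, a sketch of the hard-link rotation rsnapshot performs under the hood (paths illustrative). The disk cost of unchanged files is near zero, but rsync still has to stat all 17M files on every run, which is the scan lsyncd avoids by reacting to inotify events instead:

```shell
# Each snapshot is a complete tree; unchanged files become hard links
# into the previous snapshot, so only changed files are copied.
rsync -a --delete --link-dest=/backup/daily.1 /data/ /backup/daily.0/
```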

Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Melvin Davidson
On Mon, Nov 28, 2016 at 10:50 AM, Adrian Klaver wrote: > On 11/28/2016 07:44 AM, Melvin Davidson wrote: >> *To clarify, you cannot dump the pg_catalog schema. It is the main >> control of how all other objects are* > > Actually you can. I would not have thought so, but tried Achillea…

[GENERAL] Bad query? Or planner?

2016-11-28 Thread Devin Smith
Hi, I have recently started using postgres and have been operating under the assumption that the query planner knows best (as long as I don't do anything too stupid). I've been structuring my queries (and data) in a certain way: writing re-usable subqueries, joining them in as necessary, and only…
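On the PostgreSQL versions current at the time, one structural detail matters for this "re-usable subqueries" style: a WITH (CTE) subquery is an optimization fence, while an inline subquery can be flattened by the planner. A sketch (table and column names invented):

```sql
-- The CTE is materialized as written; the outer customer_id filter
-- is not pushed down into it (pre-PostgreSQL-12 behavior).
WITH recent AS (
    SELECT * FROM orders
    WHERE created_at > now() - interval '7 days'
)
SELECT * FROM recent WHERE customer_id = 42;

-- The equivalent inline subquery can be flattened, so the planner
-- may combine both predicates into one scan of orders.
SELECT *
FROM (SELECT * FROM orders
      WHERE created_at > now() - interval '7 days') recent
WHERE recent.customer_id = 42;
```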

Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-28 Thread Eduardo Morras
On Mon, 28 Nov 2016 15:28:28 +0100 Thomas Güttler wrote: > Hi, > > Up to now, we don't store files in PostgreSQL. > > I was told that you must not do this, but this was 20 years ago. > > I have 2.3 TBytes of files. File count is 17M. > > Up to now we use rsync (via rsnapshot) to back up…

Re: [GENERAL] Query with large in clauses uses a lot of memory

2016-11-28 Thread greigwise
Wow. Thanks for the prompt answer. As a follow-up I was wondering if maybe there would be a way to tell it to NOT try to plan/execute the query (and instead throw an error) if the memory usage exceeded X. Thanks again. Greig -- View this message in context: http://postgresql.nabble.com/Quer…

Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-28 Thread
Thomas Güttler wrote: > Up to now, we don't store files in PostgreSQL. > I was told that you must not do this, but this was 20 years ago. > I have 2.3 TBytes of files. File count is 17M. > Up to now we use rsync (via rsnapshot) to back up our data. > But it takes longer and longer for rsync t…

[GENERAL] FOR UPDATE

2016-11-28 Thread said assemlal
Hello, PG: 9.4 CentOS 6 I am writing functions to lock results. Let's take an example: CREATE OR REPLACE FUNCTION usp_locking_my_result($1 VARCHAR(50), $2 VARCHAR(50)) RETURNS TEXT AS $$ SELECT value FROM my_table WHERE field1 = $1 AND field2 = $2 FOR UPDATE; $$ LANGUAGE SQL; What happens if…
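As posted, the function does not parse: in SQL functions, $1 and $2 are positional references to the parameters, not names that can be declared. A version that parses, with invented parameter names, for comparison:

```sql
CREATE OR REPLACE FUNCTION usp_locking_my_result(p_field1 varchar, p_field2 varchar)
RETURNS TEXT AS $$
    SELECT value
    FROM my_table
    WHERE field1 = p_field1
      AND field2 = p_field2
    FOR UPDATE;
$$ LANGUAGE SQL;
```

Note that the row lock taken by FOR UPDATE lasts only until the end of the enclosing transaction, which bears on the question of what happens after the function returns.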

[GENERAL] hot_standby_feedback

2016-11-28 Thread Torsten Förtsch
Hi, I am in the process of reviewing our configs for a number of 9.3 databases and found a replica with hot_standby_feedback=on. I remember when we set it long ago we were fighting cancelled queries. I also remember that it never really worked for us. In the end we set up 2 replicas, one suitable…

Re: [GENERAL] FOR UPDATE

2016-11-28 Thread Alvaro Herrera
said assemlal wrote: > Hello, > > PG: 9.4 > CentOS 6 > > I am writing functions to lock results. > > Let's take an example: > > CREATE OR REPLACE FUNCTION usp_locking_my_result($1 VARCHAR(50), $2 > VARCHAR(50)) > RETURNS TEXT AS $$ > SELECT value > FROM my_table > WHERE field1 = $1 AND field2 = …

Re: [GENERAL] FOR UPDATE

2016-11-28 Thread said assemlal
> Why do you want to lock these results? Because we are migrating a very old pipeline where we save data in text files. So we have decided to keep some logic inside Perl code. That means to update a record: 1. we have to read/lock it 2. return the result to the application 3. the application doe…

Re: [GENERAL] Backup "Best Practices"

2016-11-28 Thread Israel Brewster
On Nov 25, 2016, at 1:00 PM, Jeff Janes wrote: > > On Wed, Nov 23, 2016 at 10:16 AM, Israel Brewster wrote: > I was wondering if someone could inform me about, or point me to an online > article about best practices for backing up a postgresql database cluster? A…

Re: [GENERAL] Backup "Best Practices"

2016-11-28 Thread John R Pierce
On 11/28/2016 2:50 PM, Israel Brewster wrote: Why wouldn't the streaming replica salvage that? Are they expected to fail together? Is the NFS share onto which you want to store your basebackup and WAL also expected to fail together with them? That's why I specified *total* failure. If only…

Re: [GENERAL] hot_standby_feedback

2016-11-28 Thread Andres Freund
On 2016-11-28 22:14:55 +0100, Torsten Förtsch wrote: > Hi, > > I am in the process of reviewing our configs for a number of 9.3 databases > and found a replica with hot_standby_feedback=on. I remember when we set it > long ago we were fighting cancelled queries. I also remember that it never > rea…
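For reference, the settings involved (standby-side postgresql.conf; the delay value is illustrative):

```
# Ask the primary not to vacuum away rows that queries on this
# standby may still need. Avoids query cancellations at the price
# of table bloat on the primary.
hot_standby_feedback = on

# The other 9.3-era knob: let WAL replay lag behind rather than
# cancel conflicting standby queries.
max_standby_streaming_delay = 300s
```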

Re: [GENERAL] Backup "Best Practices"

2016-11-28 Thread Israel Brewster
On Nov 28, 2016, at 2:13 PM, John R Pierce wrote: > On 11/28/2016 2:50 PM, Israel Brewster wrote: >> Why wouldn't the streaming replica salvage that? Are they expected to fail >> together? Is the NFS share onto which you want to store your basebackup >> and WAL also expected to fail…

Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-28 Thread Mike Sofen
From: Thomas Güttler Sent: Monday, November 28, 2016 6:28 AM ...I have 2.3 TBytes of files. File count is 17M. Since we already store our structured data in postgres, I think about storing the files in PostgreSQL, too. Is it feasible to store files in PostgreSQL? --- I am doing something…
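A minimal sketch of the bytea approach under discussion (table and column names invented; a single bytea value is limited to 1 GB, and large values are TOASTed, i.e. compressed and stored out of line automatically):

```sql
CREATE TABLE file_store (
    id       bigserial PRIMARY KEY,
    path     text NOT NULL UNIQUE,
    added    timestamptz NOT NULL DEFAULT now(),
    content  bytea NOT NULL
);

-- Clients insert file contents as a parameterized bytea value:
-- INSERT INTO file_store (path, content) VALUES ($1, $2);
```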

Re: [GENERAL] Wal files - Question | Postgres 9.2

2016-11-28 Thread David Steele
On 11/24/16 8:05 PM, Patrick B wrote: > hmm.. I really don't get it. > > If I get messages like: > > cp: cannot stat '/walfiles/00021AF800A5': No such file or directory > > In my head, it's saying that it was unable to recover that file and, > because of that, there is m…

Re: [GENERAL] Wal files - Question | Postgres 9.2

2016-11-28 Thread Patrick B
2016-11-29 15:21 GMT+13:00 David Steele: > On 11/24/16 8:05 PM, Patrick B wrote: > > hmm.. I really don't get it. > > > > If I get messages like: > > > > cp: cannot stat '/walfiles/00021AF800A5': No such file or directory > > > > In my head, it's saying that it was una…

Re: [GENERAL] Wal files - Question | Postgres 9.2

2016-11-28 Thread David G. Johnston
On Mon, Nov 28, 2016 at 8:22 PM, Patrick B wrote: > How is that even possible?? I don't understand! https://www.postgresql.org/docs/9.2/static/warm-standby.html """ If you use streaming replication without file-based continuous archiving, you have to set wal_keep_segments in the…
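The two configurations the quoted documentation contrasts, sketched for 9.2 (values illustrative):

```
# postgresql.conf on the primary: without WAL archiving, old
# segments are recycled, so keep enough of them for a standby to
# catch up after a disconnect.
wal_keep_segments = 256

# recovery.conf on the standby, when file-based archiving is also
# in place as a fallback for recycled segments:
standby_mode = 'on'
primary_conninfo = 'host=primary user=replicator'
restore_command = 'cp /walfiles/%f %p'
```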

Re: [GENERAL] Backup "Best Practices"

2016-11-28 Thread Jeff Janes
On Mon, Nov 28, 2016 at 2:50 PM, Israel Brewster wrote: > What is the "best" (or just a good) method of keeping the WAL archives > under control? Obviously when I do a new basebackup I can "cleanup" any old > files that said backup doesn't need. You have said you might be interested…

Re: [GENERAL] Backup "Best Practices"

2016-11-28 Thread Thomas Kellerer
Israel Brewster wrote on 28.11.2016 at 23:50: pg_archivecleanup -n /mnt/server/archiverdir 00010010.0020.backup Ok, but where does that "00010010.0020.backup" come from? I mean, I can tell it's a WAL segment file name (plus a backup label), but I don'…
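To the question of where that file comes from: every base backup leaves a backup history file in the WAL archive, named after the WAL location where the backup started. A sketch (the file name below is illustrative, not the truncated one quoted above):

```shell
# pg_basebackup / pg_stop_backup write a backup history file like
#   000000010000001000000020.00000028.backup
# into the archive alongside the WAL segments.

# -n is a dry run: list the WAL segments that predate that backup
# and would be removed; drop -n to actually delete them.
pg_archivecleanup -n /mnt/server/archiverdir \
    000000010000001000000020.00000028.backup
```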

Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Achilleas Mantzios
On 28/11/2016 19:21, Melvin Davidson wrote: On Mon, Nov 28, 2016 at 10:50 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote: On 11/28/2016 07:44 AM, Melvin Davidson wrote: *To clarify, you cannot dump the pg_catalog schema. It is the main control of how all othe…