Re: [GENERAL] Where does vacuum FULL write temp-files?

2015-04-15 Thread Andreas Joseph Krogh
On Wednesday 15 April 2015 at 04:34:31, Venkata Balaji N <nag1...@gmail.com> wrote:   I'm planning to VACUUM FULL a pg_largeobject relation (after vacuumlo'ing it). The relation is 300 GB, so I'm concerned the operation will fill up my pg_xlog directory, which is on a 200 GB (net) RAI
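
A rough pre-flight check before attempting such a VACUUM FULL is to compare the relation's on-disk size against the free space in the data directory and pg_xlog. A minimal sketch, assuming the database is called mydb and $PGDATA points at the 9.x data directory (both placeholders):

    # Estimate how much space a rewrite of pg_largeobject would need,
    # then check what is actually free where the new files and WAL go.
    psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));"
    df -h "$PGDATA" "$PGDATA/pg_xlog"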

Re: [GENERAL] How to keep pg_largeobject from growing endlessly

2015-04-15 Thread Andreas Joseph Krogh
On Wednesday 15 April 2015 at 04:43:47, Venkata Balaji N <nag1...@gmail.com> wrote:   I'm routinely vacuumlo'ing to reap orphan OIDs. Is it necessary to manually vacuum pg_largeobject, or is it handled by autovacuum?     It is handled by autovacuum. What we do is, we schedule a manual VACUUM
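
A minimal sketch of the kind of scheduled job described here, assuming the database is called mydb (a placeholder) and that a plain VACUUM, not VACUUM FULL, is what gets scheduled:

    # Nightly maintenance sketch: remove orphaned large objects, then
    # let a plain VACUUM mark the freed rows in pg_largeobject reusable.
    vacuumlo -v mydb
    psql -d mydb -c "VACUUM ANALYZE pg_largeobject;"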

Re: [GENERAL] How to keep pg_largeobject from growing endlessly

2015-04-15 Thread Adam Hooper
On Wed, Apr 15, 2015 at 4:49 AM, Andreas Joseph Krogh wrote: > > > In other words: Does vacuumlo cause disk space used by pg_largeobject to be > freed to the OS (after eventually being vacuumed by autovacuum)? No. But that shouldn't matter in your scenario: if you create more large objects than you de
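
To see the difference between space that is merely reusable and space that is actually returned to the OS, one can watch the on-disk size alongside the dead-tuple count. A sketch, with mydb again a placeholder database name:

    # pg_relation_size reports the file size, which plain VACUUM will not
    # shrink; n_dead_tup shows rows that vacuumlo + VACUUM made reusable.
    psql -d mydb -c "SELECT pg_size_pretty(pg_relation_size(relid)) AS on_disk,
                            n_dead_tup
                       FROM pg_stat_all_tables
                      WHERE relname = 'pg_largeobject';"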

Re: [GENERAL] How to keep pg_largeobject from growing endlessly

2015-04-15 Thread Andreas Joseph Krogh
On Wednesday 15 April 2015 at 15:50:36, Adam Hooper <a...@adamhooper.com> wrote: On Wed, Apr 15, 2015 at 4:49 AM, Andreas Joseph Krogh wrote: > > > In other words: Does vacuumlo cause disk space used by pg_largeobject to be freed to the OS (after eventually being vacuumed by autovacuum)? No

Re: [GENERAL] How to keep pg_largeobject from growing endlessly

2015-04-15 Thread Adam Hooper
On Wed, Apr 15, 2015 at 9:57 AM, Andreas Joseph Krogh wrote: > > On Wednesday 15 April 2015 at 15:50:36, Adam Hooper wrote: > > On Wed, Apr 15, 2015 at 4:49 AM, Andreas Joseph Krogh > wrote: > > > > In other words: Does vacuumlo cause disk space used by pg_largeobject to be > > freed to the OS

Re: [GENERAL] Help with slow table update

2015-04-15 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Pawel Veselov Sent: Tuesday, April 14, 2015 8:01 PM To: Jim Nasby Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] Help with slow table update [skipped] This is where using sets becomes rea

Re: [GENERAL] How to keep pg_largeobject from growing endlessly

2015-04-15 Thread Andreas Joseph Krogh
On Wednesday 15 April 2015 at 16:05:22, Adam Hooper <a...@adamhooper.com> wrote: On Wed, Apr 15, 2015 at 9:57 AM, Andreas Joseph Krogh wrote: > > On Wednesday 15 April 2015 at 15:50:36, Adam Hooper wrote: > > On Wed, Apr 15, 2015 at 4:49 AM, Andreas Joseph Krogh > wrote: > > > > I
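
For completeness, actually shrinking pg_largeobject on disk requires rewriting the table, which is what the VACUUM FULL discussed in the other thread does. A minimal sketch, assuming an outage window is acceptable (mydb is a placeholder):

    # VACUUM FULL rewrites pg_largeobject and returns the reclaimed space
    # to the OS, but it holds an exclusive lock, so large-object access
    # is blocked while it runs.
    psql -d mydb -c "VACUUM FULL VERBOSE pg_largeobject;"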

Re: [GENERAL] Help with slow table update

2015-04-15 Thread Pawel Veselov
> > [skipped] > > > > This is where using sets becomes really tedious, as Postgres severely > lacks an upsert-like statement. > > I don't think joins are allowed in an UPDATE statement, so I will need > to use a WITH query, right? > > Also, I'm not sure how a LEFT JOIN will help me isolate and inser
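
PostgreSQL's UPDATE does accept a join through its FROM clause, and a LEFT JOIN (or NOT EXISTS) can isolate the rows that still need an INSERT. A hedged sketch of the set-based pattern for servers without INSERT ... ON CONFLICT; the table and column names (totals, batch, key, cnt) and the database name mydb are placeholders, not the poster's schema:

    # Set-based "upsert" in two statements inside one transaction.
    # Note: with concurrent writers this still needs a lock or a retry
    # loop; it is safe as-is only if a single session maintains the table.
    psql -d mydb <<'SQL'
    BEGIN;

    -- Update the rows that already exist, joining the new data in via FROM.
    UPDATE totals t
       SET cnt = t.cnt + b.cnt
      FROM batch b
     WHERE t.key = b.key;

    -- Insert the rows the UPDATE did not match, found with a LEFT JOIN.
    INSERT INTO totals (key, cnt)
    SELECT b.key, b.cnt
      FROM batch b
      LEFT JOIN totals t ON t.key = b.key
     WHERE t.key IS NULL;

    COMMIT;
    SQL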

[GENERAL] Best way to migrate a 200 GB database from PG 2.7 to 3.6

2015-04-15 Thread Filip Lyncker
Dear List, I need to migrate my database from a 2.x to a 3.x. Usually I'm using pg_basebackup, but this is not possible with different versions. pg_dump seems to be an option, but it is slow as hell and I don't want to stay offline the whole time. Is there another possibility to migrate a databas
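
One way to avoid the intermediate dump file, if an outage is acceptable, is to pipe pg_dump on the old server straight into the new one; a parallel directory-format dump and restore can also cut the total time considerably. A sketch with placeholder host, path and database names:

    # Stop writes to the old database first; the target database must
    # already exist on newhost. Either pipe the dump straight in...
    pg_dump -h oldhost mydb | psql -h newhost mydb

    # ...or, with a 9.3+ pg_dump, dump and restore in parallel.
    pg_dump    -h oldhost -Fd -j 4 -f /tmp/mydb.dump mydb
    pg_restore -h newhost -j 4 -d mydb /tmp/mydb.dump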

Re: [GENERAL] Best way to migrate a 200 GB database from PG 2.7 to 3.6

2015-04-15 Thread Raymond O'Donnell
On 15/04/2015 20:03, Filip Lyncker wrote: > Dear List, > > I need to migrate my database from a 2.x to a 3.x. Usually I'm using > pg_basebackup, but this is not possible with different versions. > pg_dump seems to be an option, but it is slow as hell and I don't want > to stay offline the whole time.

Re: [GENERAL] Best way to migrate a 200 GB database from PG 2.7 to 3.6

2015-04-15 Thread Igor Neyman
-Original Message- From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Filip Lyncker Sent: Wednesday, April 15, 2015 3:03 PM To: pgsql-general@postgresql.org Subject: [GENERAL] Best way to migrate a 200 GB database from PG 2.7 to 3.6 Dear Li

Re: [GENERAL] Best way to migrate a 200 GB database from PG 2.7 to 3.6

2015-04-15 Thread Andy Colson
On 4/15/2015 2:03 PM, Filip Lyncker wrote: Dear List, I need to migrate my database from a 2.x to a 3.x. Usually I'm using pg_basebackup, but this is not possible with different versions. pg_dump seems to be an option, but it is slow as hell and I don't want to stay offline the whole time. Is there

Re: [GENERAL] Best way to migrate a 200 GB database from PG 2.7 to 3.6

2015-04-15 Thread Joshua D. Drake
On 04/15/2015 12:14 PM, Andy Colson wrote: PostgreSQL is on version 9. What do you mean version 2 or 3? He probably means 9.2.7 to 9.3.6. Remember, to a lot of people 9 means 9. That said, pg_upgrade is the way to do this as long as you can have an outage. JD -- Command Prompt, Inc. - h
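
A minimal pg_upgrade sketch for a 9.2 to 9.3 move, assuming both sets of binaries are installed and both clusters are shut down; the Debian-style paths below are placeholders:

    # --link hard-links the data files instead of copying them, which keeps
    # the outage short for a ~200 GB cluster, but the old cluster must not
    # be started again once the new one has run.
    pg_upgrade \
      --old-datadir /var/lib/postgresql/9.2/main \
      --new-datadir /var/lib/postgresql/9.3/main \
      --old-bindir  /usr/lib/postgresql/9.2/bin \
      --new-bindir  /usr/lib/postgresql/9.3/bin \
      --link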

[GENERAL] Error in the connection to the server

2015-04-15 Thread Ravi Kiran
Hi, I have installed postgresql-9.4.0. I have started the server from Eclipse Indigo using Run Configurations. There is a table in my database whose name is "b". Whenever I run a query that touches this table, I get the error "The connection to the server was lost. Attemp

Re: [GENERAL] Error in the connection to the server

2015-04-15 Thread Adrian Klaver
On 04/15/2015 01:35 PM, Ravi Kiran wrote: Hi, I have installed postgresql-9.4.0. I have started the server from Eclipse Indigo using Run Configurations. I have no idea what that means. Some detail on exactly how you are making the connection would be helpful, including: 1) Is
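
A quick way to answer that kind of question is to take Eclipse out of the picture and check the server directly from a shell. A sketch with placeholder host, port and database names:

    # Is the server up and accepting connections at all?
    pg_isready -h localhost -p 5432

    # Does the same query succeed outside Eclipse? If the connection only
    # drops for queries on "b", the server log may show a backend crash.
    psql -h localhost -p 5432 -d mydb -c 'SELECT count(*) FROM b;'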

Re: [GENERAL] Error in the connection to the server

2015-04-15 Thread PT
On Thu, 16 Apr 2015 02:05:34 +0530 Ravi Kiran wrote: > > I have installed postgresql-9.4.0. > > I have started the server from Eclipse Indigo using Run > Configurations. > > There is a table in my database whose name is "b". Whenever I run a query > that touches this tab