Re: [GENERAL] installing on mac air development machine

2014-10-03 Thread john gale
The GUI installer for Mac OS X downloaded from postgresql.org works fine.

~ john

On Oct 2, 2014, at 3:50 PM, john.tiger wrote:
> we've always installed on linux so need help with a new mac air running
> latest osx
>
> in the instructions it shows several methods:
> 1) enterprisedb (b

Re: [GENERAL] understanding why two nearly identical queries take two different planner routes, one 5s and one 2hr

2014-08-05 Thread john gale
On Aug 5, 2014, at 1:26 PM, Shaun Thomas wrote:
> On 08/05/2014 03:06 PM, john gale wrote:
>
>> Even on a 114G table with a 16G index, you would consider this slow?
>> (physical disk space is closer to 800G, that was our high-water before
>> removing lots of rows and v

Re: [GENERAL] understanding why two nearly identical queries take two different planner routes, one 5s and one 2hr

2014-08-05 Thread john gale
On Aug 5, 2014, at 12:45 PM, Shaun Thomas wrote:
> Your EXPLAIN output basically answered this for you. Your fast query has this:
>
>> Nested Loop (cost=0.85..2696.12 rows=88 width=1466)
>
> While your slow one has this:
>
>> Hash Join (cost=292249.24..348608.93 rows=28273 width=1466)
>

[GENERAL] understanding why two nearly identical queries take two different planner routes, one 5s and one 2hr

2014-08-05 Thread john gale
I would appreciate it if somebody could help explain why we have two nearly identical queries taking two different planner routes: one a nested index loop that takes about 5s to complete, and the other a hash join & heap scan that takes about 2hr. This is using Postgres 9.3.3 on OS X 10.9.4
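A common first step when diagnosing this kind of plan flip is to compare EXPLAIN ANALYZE output with the suspect plan type disabled for the session. This is only a sketch: the table name `testruns` comes from the poster's other threads, but the `started_at` filter column is a hypothetical stand-in for the real predicate.

```sql
-- See which plan the planner actually picks, with real row counts.
EXPLAIN ANALYZE
SELECT * FROM testruns WHERE started_at > now() - interval '1 day';

-- Temporarily discourage the hash join to check whether the nested-loop
-- plan really is faster, i.e. whether the cost estimate is the problem.
SET enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT * FROM testruns WHERE started_at > now() - interval '1 day';
RESET enable_hashjoin;

-- Stale statistics are a frequent cause of plan flips after bulk deletes.
ANALYZE testruns;
```

If the forced plan is much faster with accurate row counts, the usual suspects are stale statistics or default cost settings (e.g. `random_page_cost`) that don't match the hardware.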

[GENERAL] what specifically does vacuum have to scan / why does it need to rescan the same indexes many, many times

2014-07-01 Thread john gale
What does VACUUM have to scan to be able to reclaim space, and how many times does it need to scan before it finishes? More specifically, my VACUUM VERBOSE is taking a long time and seems to be rescanning the same indexes / fields multiple times without finishing.

db=# vacuum verbose testruns;
INFO
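The repeated index scans usually come from the dead-tuple array filling up: VACUUM collects dead tuple TIDs in memory capped by maintenance_work_mem, and every time that array fills it must make a full pass over each index on the table before it can continue. A sketch of the usual mitigation (the 1GB value is an example, not a recommendation for every host):

```sql
-- Check the current cap on VACUUM's dead-tuple array.
SHOW maintenance_work_mem;

-- A larger cap means fewer fills of the array, and therefore fewer
-- full passes over each index on the table.
SET maintenance_work_mem = '1GB';
VACUUM VERBOSE testruns;
```

In the VERBOSE output, each "scanned index ... to remove ... row versions" line per index corresponds to one such pass, so counting them shows how many times the array filled.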

Re: [GENERAL] Thousands of errors...what happened?

2014-03-24 Thread john gale
On Mar 24, 2014, at 9:43 AM, Alvaro Herrera wrote:
> Jerry Levan wrote:
>> The other day I attempted to connect to my 9.3.2 postgresql data base and my
>> connection attempts kept failing.
>>
>> I found about 10 lines in the log file that looked like:
>>
>> ERROR: could not seek to en

Re: [GENERAL] puzzling perl DBI vs psql problem

2014-03-13 Thread john gale
On Mar 13, 2014, at 12:18 PM, Susan Cassidy wrote:
> I copied and pasted the query from the program's log file, so I know I'm
> doing the exact same query. If it matters, I'm only seeing the rows with
> 'root' in them via DBI, which the CASE statement refers to.

Try enabling full query l

Re: [GENERAL] cannot delete corrupted rows after DB corruption: tuple concurrently updated

2014-02-26 Thread john gale
On Feb 26, 2014, at 2:59 AM, Tomas Vondra wrote:
> On 26 Únor 2014, 8:45, john gale wrote:
>
>> munin2=# delete from testruns where ctid = '(37069305,4)';
>> ERROR: tuple concurrently updated
>
> AFAIK this error is raised when a before trigger modifi

Re: [GENERAL] cannot delete corrupted rows after DB corruption: tuple concurrently updated

2014-02-25 Thread john gale
3 GMT ERROR: tuple concurrently updated
2014-02-26 07:42:53 GMT STATEMENT: delete from testruns where ctid = '(37069305,4)';

thanks,
~ john

On Feb 25, 2014, at 11:43 AM, john gale wrote:
> We ran into an open file limit on the DB host (Mac OS X 10.9.0, Postgres
>

[GENERAL] cannot delete corrupted rows after DB corruption: tuple concurrently updated

2014-02-25 Thread john gale
We ran into an open file limit on the DB host (Mac OS X 10.9.0, Postgres 9.3.2) and caused the familiar "ERROR: unexpected chunk number 0 (expected 1) for toast value 155900302 in pg_toast_16822" when selecting data. Previously when we've run into this kind of corruption we could find the spe
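The usual approach for this kind of toast corruption, sketched below, is to locate the specific damaged row by its physical address (ctid) and remove it directly; the ctid value here is the one quoted later in this thread, and the column list is a hypothetical stand-in.

```sql
-- Confirm the row at the suspect physical address is the one whose
-- toasted value raises the "unexpected chunk number" error.
SELECT ctid, id FROM testruns WHERE ctid = '(37069305,4)';

-- Remove just that row, then rebuild the table's indexes so no index
-- entries point at the deleted tuple.
DELETE FROM testruns WHERE ctid = '(37069305,4)';
REINDEX TABLE testruns;
```

Note that ctids are physical locations and can be reused after VACUUM, so the SELECT-then-DELETE should be done without concurrent writes to the table.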

Re: [GENERAL] how much disk space does a VACUUM FULL take?

2013-12-03 Thread john gale
On Dec 3, 2013, at 3:53 PM, Andreas Brandl wrote:
> John,
>
>> ...
>> For a variety of reasons I would prefer disk usage to be as low as
>> possible, thus I would like to run a VACUUM FULL during some
>> maintenance cycle (since it exclusively locks the table). However,
>
> you might want to

[GENERAL] how much disk space does a VACUUM FULL take?

2013-12-03 Thread john gale
Due to running low on disk space, we recently moved the majority of rows from a table into an archival DB. Although plain VACUUM allows disk space to be re-used, VACUUM FULL is the only form that actively returns disk space to the OS. http://www.postgresql.org/docs/9.0/static/routine-vac
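To answer the subject line: on 9.0+, VACUUM FULL writes a complete new copy of the table and rebuilds its indexes, so it transiently needs free space roughly equal to the compacted table plus its indexes, on top of the existing files. A sketch for estimating the worst-case requirement (`testruns` is the table name used in the poster's other threads):

```sql
-- Upper bound on the extra space VACUUM FULL may need: the current
-- total footprint of the table, its indexes, and its toast data.
-- The rewritten copy will be at most this large (smaller if there is
-- much dead space to reclaim).
SELECT pg_size_pretty(pg_total_relation_size('testruns'));

-- The rewrite itself, under an exclusive lock on the table.
VACUUM FULL VERBOSE testruns;
```

The old files are only released once the rewrite commits, which is why both copies must fit on disk simultaneously.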

Re: [GENERAL] Hstore::TooBig ?

2013-10-31 Thread john gale
On Oct 31, 2013, at 4:44 PM, Adrian Klaver wrote:
>> So to piece out the questions,
>> - is there a total size limitation of the hstore field? or is it
>> theoretically large enough (1GB) that it really shouldn't matter?
>> - is there a string size limitation of each key/val in 9.x, or is it st

Re: [GENERAL] Hstore::TooBig ?

2013-10-31 Thread john gale
On Oct 31, 2013, at 3:46 PM, john gale wrote:
> I don't quite know where this error is coming from. The ActiveRecord source
> doesn't seem to have it, and I'm not familiar enough with Rails or
> ActiveRecord to track definitively whether the failing function is act

[GENERAL] Hstore::TooBig ?

2013-10-31 Thread john gale
I am running a somewhat unfamiliar Ruby automation-results app that uses ActiveRecord to manage the Postgres 9.0 backend. During our automation runs we sometimes get bursts of HTTP 500 errors coming back at us, and the Ruby app log shows an Hstore::TooBig error:

Hstore::TooBig (Hstore::TooBig):
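Wherever the Ruby exception is raised (the server itself does not produce an error with that name, so it is presumably client-side), the server-side sizes can be checked directly. On the server, a single hstore value is an ordinary variable-length datum with the usual ~1GB field limit, and in 9.0 the old 65535-byte per-key/value restriction of earlier hstore versions was removed. A sketch, where `results` and `data` are hypothetical table/column names standing in for the app's actual hstore column:

```sql
-- How large do the stored hstore values actually get on disk?
-- pg_column_size reports the (possibly compressed/toasted) datum size.
SELECT max(pg_column_size(data)) AS max_hstore_bytes
FROM results;
```

Comparing that maximum against whatever threshold the Ruby layer enforces would show whether the limit being hit is the client library's rather than the server's.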