Re: [GENERAL] large table

2014-09-22 Thread Luke Coldiron
> Date: Mon, 22 Sep 2014 12:46:21 -0700 > From: pie...@hogranch.com > To: pgsql-general@postgresql.org > Subject: Re: [GENERAL] large table > > On 9/22/2014 12:33 PM, Luke Coldiron wrote: > > > > It is possible and that is part of what I am trying to discover

Re: [GENERAL] large table

2014-09-22 Thread Bill Moran
On Mon, 22 Sep 2014 12:46:21 -0700 John R Pierce wrote: > On 9/22/2014 12:33 PM, Luke Coldiron wrote: > > > > It is possible and that is part of what I am trying to discover > > however I am very familiar with the system / code base and in this > > case there is a single process updating the ti

Re: [GENERAL] large table

2014-09-22 Thread Eduardo Morras
On Mon, 22 Sep 2014 12:15:27 -0700 Luke Coldiron wrote: > > > I'd guess that some other process held a transaction open for a > > > couple of week, and that prevented any vacuuming from taking > > > place. > > > > Interesting idea, on the surface I'm not sure how this would have > happened in th

Re: [GENERAL] large table

2014-09-22 Thread John R Pierce
On 9/22/2014 12:33 PM, Luke Coldiron wrote: It is possible and that is part of what I am trying to discover however I am very familiar with the system / code base and in this case there is a single process updating the timestamp and a single process reading the timestamp. There are no other u

Re: [GENERAL] large table

2014-09-22 Thread Luke Coldiron
> Date: Mon, 22 Sep 2014 14:38:52 -0400 > From: wmo...@potentialtech.com > To: lukecoldi...@hotmail.com > CC: pgsql-general@postgresql.org > Subject: Re: [GENERAL] large table > > On Mon, 22 Sep 2014 11:17:05 -0700 > Luke Coldiron wrote: > > > I am trying to fi

Re: [GENERAL] large table

2014-09-22 Thread Luke Coldiron
> > From: ahodg...@simkin.ca > > To: pgsql-general@postgresql.org > > Subject: Re: [GENERAL] large table > > Date: Mon, 22 Sep 2014 11:34:45 -0700 > > > > On Monday, September 22, 2014 11:17:05 AM Luke Coldiron wrote: > > > The actual size of the tab

Re: [GENERAL] large table

2014-09-22 Thread Bill Moran
On Mon, 22 Sep 2014 11:17:05 -0700 Luke Coldiron wrote: > I am trying to figure out what would have caused a table in a PostgreSQL > 8.4.16 database to get into a state where there is only 1 live tuple and has only ever > had 1 tuple but the size of the table is huge. > > CREATE TABLE public.myTabl

Re: [GENERAL] large table

2014-09-22 Thread Alan Hodgson
On Monday, September 22, 2014 11:17:05 AM Luke Coldiron wrote: > The actual size of the table is around 33 MB. > The myFunc function is called every 2.5 seconds and the wasUpdated function > every 2 seconds by separate processes. I realize that running a FULL VACUUM > or CLUSTER command on the tabl

[GENERAL] large table

2014-09-22 Thread Luke Coldiron
I am trying to figure out what would have caused a table in a PostgreSQL 8.4.16 database to get into a state where there is only 1 live tuple and has only ever had 1 tuple but the size of the table is huge. CREATE TABLE public.myTable( myColumn timestamp with time zone NOT NULL); Note: there is no
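The update cadence described in the thread makes the reported 33 MB plausible to sanity-check with back-of-the-envelope math. A rough sketch, under assumptions not stated in the thread (a ~24-byte 8.4-era heap tuple header plus the 8-byte timestamptz payload, and no dead-tuple reclamation at all):

```python
# Rough estimate of heap bloat for a single-row table that is UPDATEd
# every 2.5 seconds and never vacuumed. Each UPDATE leaves one dead
# tuple behind. Tuple size is an assumption, not measured from the
# poster's system.

UPDATE_INTERVAL_S = 2.5
TUPLE_BYTES = 24 + 8          # assumed header + timestamptz payload

def bloat_after(days: float) -> float:
    """Approximate table size in MB if no dead tuples are reclaimed."""
    updates = days * 86_400 / UPDATE_INTERVAL_S
    return updates * TUPLE_BYTES / 1_048_576

for days in (7, 30, 35):
    print(f"{days:>3} days -> ~{bloat_after(days):.1f} MB")
```

Under these assumptions the table accumulates roughly 1 MB of dead tuples per day, so the ~33 MB size would correspond to about a month without effective vacuuming, which is consistent with the "transaction open for a couple of weeks" theory raised in the replies.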

Re: [GENERAL] large table starting sequence scan because of default_statistic_target

2009-03-16 Thread Scott Marlowe
On Mon, Mar 16, 2009 at 8:24 AM, Janning Vygen wrote: > On Monday 16 March 2009 15:13:51 Scott Marlowe wrote: >> On Mon, Mar 16, 2009 at 7:15 AM, Janning Vygen wrote: >> > Hi, >> > >> > Why does default_statistic_target default to 10? >> > >> > I suggest setting it to 100 by default: >> >> Al

Re: [GENERAL] large table starting sequence scan because of default_statistic_target

2009-03-16 Thread Janning Vygen
On Monday 16 March 2009 15:13:51 Scott Marlowe wrote: > On Mon, Mar 16, 2009 at 7:15 AM, Janning Vygen wrote: > > Hi, > > > > Why does default_statistic_target default to 10? > > > > I suggest setting it to 100 by default: > > Already done in 8.4 GREAT! Sorry for not searching the archives or

Re: [GENERAL] large table starting sequence scan because of default_statistic_target

2009-03-16 Thread Scott Marlowe
On Mon, Mar 16, 2009 at 7:15 AM, Janning Vygen wrote: > Hi, > > Why does default_statistic_target default to 10? > I suggest setting it to 100 by default: Already done in 8.4

[GENERAL] large table starting sequence scan because of default_statistic_target

2009-03-16 Thread Janning Vygen
Hi, we are running a large 8.3 database and had some trouble with a default statistic target. We had set it to one special table some time ago, when we got a problem with a growing table starting with sequence scans. Last week we did manually cluster this table (create table as ... order by; d

Re: [GENERAL] large table vacuum issues

2008-01-06 Thread Usama Dar
On Jan 5, 2008 5:38 AM, Ed L. <[EMAIL PROTECTED]> wrote: > We need some advice on how to handle some large table autovacuum > issues. One of our 8.1.2 autovacuums is launching a DB-wide > vacuum on our 270GB database to prevent xid wrap-around, but is > getting hung up and/or bogged down for hour

Re: [GENERAL] large table vacuum issues

2008-01-05 Thread Bill Moran
"Ed L." <[EMAIL PROTECTED]> wrote: > > On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote: > > On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote: > > > We need some advice on how to handle some large table > > > autovacuum issues. One of our 8.1.2 > > > > First of all, update your 8.1 in

Re: [GENERAL] large table vacuum issues

2008-01-04 Thread Scott Marlowe
On Jan 4, 2008 7:29 PM, Ed L. <[EMAIL PROTECTED]> wrote: > On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote: > > > > Have you tried adjusting the > > > > #vacuum_cost_delay = 0 # 0-1000 milliseconds > > #vacuum_cost_page_hit = 1 # 0-1 credits > > #vacuum_cos

Re: [GENERAL] large table vacuum issues

2008-01-04 Thread Joshua D. Drake
Ed L. wrote: On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote: On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote: We need some advice on how to handle some large table autovacuum issues. One of our 8.1.2 First of all, update your 8.1 install to 8.1.10. Failing to keep up with bug

Re: [GENERAL] large table vacuum issues

2008-01-04 Thread Scott Marlowe
On Jan 4, 2008 7:41 PM, Ed L. <[EMAIL PROTECTED]> wrote: > On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote: > > On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote: > > > We need some advice on how to handle some large table > > > autovacuum issues. One of our 8.1.2 > > > > First of all

Re: [GENERAL] large table vacuum issues

2008-01-04 Thread Ed L.
On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote: > On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote: > > We need some advice on how to handle some large table > > autovacuum issues. One of our 8.1.2 > > First of all, update your 8.1 install to 8.1.10. Failing to > keep up with bug f

Re: [GENERAL] large table vacuum issues

2008-01-04 Thread Ed L.
On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote: > > Have you tried adjusting the > > #vacuum_cost_delay = 0 # 0-1000 milliseconds > #vacuum_cost_page_hit = 1 # 0-1 credits > #vacuum_cost_page_miss = 10 # 0-1 credits > #vacuum_cost_page_dirt
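For reference, the cost-based vacuum throttling knobs quoted above live in postgresql.conf. A sketch of the kind of change being discussed; the non-zero delay value is illustrative, not a recommendation from the thread:

```
# postgresql.conf -- cost-based vacuum throttling (8.1-era parameters)
vacuum_cost_delay = 10        # ms to sleep once the cost limit is reached
                              # (default 0 disables throttling entirely)
vacuum_cost_page_hit = 1      # cost charged per page found in shared buffers
vacuum_cost_page_miss = 10    # cost charged per page read from disk
vacuum_cost_page_dirty = 20   # cost charged per page dirtied by vacuum
vacuum_cost_limit = 200       # accumulated cost that triggers the sleep
```

The trade-off is that a higher delay or lower limit reduces vacuum's I/O impact on concurrent queries, at the price of the vacuum itself taking longer, which matters when racing an xid wrap-around deadline as in this thread.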

Re: [GENERAL] large table vacuum issues

2008-01-04 Thread Scott Marlowe
On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote: > We need some advice on how to handle some large table autovacuum > issues. One of our 8.1.2 First of all, update your 8.1 install to 8.1.10. Failing to keep up with bug fixes is negligent. Who knows, you might be getting bitten by a bug

[GENERAL] large table vacuum issues

2008-01-04 Thread Ed L.
We need some advice on how to handle some large table autovacuum issues. One of our 8.1.2 autovacuums is launching a DB-wide vacuum on our 270GB database to prevent xid wrap-around, but is getting hung up and/or bogged down for hours on a 40GB table and taking the server performance down with

Re: [GENERAL] large table problem

2007-05-03 Thread Jason Nerothin
Thanks for the redirect... After profiling my client memory usage and using the built-in cursor functionality I discovered that another part of my program was causing the memory overflow and that the ResultSet iteration was doing exactly what it should have all along. On 4/21/07, Kris Jurka <[EMA

Re: [GENERAL] large table problem

2007-04-21 Thread Kris Jurka
On Fri, 20 Apr 2007, Jason Nerothin wrote: I'm trying to work my way around a large query problem. Not too unexpectedly, the app server (EJB3/JPA) is choking on the queries which are "unnamed native queries" in Java parlance. Work-around attempt 1 was to call directly to the JDBC driver, but

Re: [GENERAL] large table problem

2007-04-20 Thread Tom Lane
"Jason Nerothin" <[EMAIL PROTECTED]> writes: > Attempt number 2, now underway, is to pass > LIMIT and OFFSET values to the query which Postgres handles quite > effectively as long as the OFFSET value is less than the total number of > rows in the table. When the value is greater than , the query >

[GENERAL] large table problem

2007-04-20 Thread Jason Nerothin
I'm trying to work my way around a large query problem. In my system, I've created a number of large materialized views that are the output of some computationally expensive stored procedures on other large tables in my system. They are intended to serve as staging tables for the next phase of co

[GENERAL] Large Table Performance

2005-10-22 Thread David Busby
List, I've got a problem where I need to make a table that is going to grow by an average of 230,000 records per day. There are only 20 columns in the table, mostly char and integer. It's FK'd in two places to another table for import/export transaction id's and I have a serial primary key an

Re: [GENERAL] Large Table Performance

2005-10-21 Thread Alex Stapleton
On 22 Oct 2005, at 01:25, Edoceo Lists wrote: List, I've got a problem where I need to make a table that is going to grow by an average of 230,000 records per day. There are only 20 columns in the table, mostly char and integer. It's FK'd in two places to another table for import/expor

Re: [GENERAL] Large Table Performance

2005-10-21 Thread Michael Fuhr
On Fri, Oct 21, 2005 at 05:25:22PM -0700, Edoceo Lists wrote: [summary of situation] > Some queries take more than five minutes to complete and I'm sad > about that. How can I make this faster? You might get more help on pgsql-performance, which is specifically for discussions of performance is

[GENERAL] Large Table Performance

2005-10-21 Thread Edoceo Lists
List, I've got a problem where I need to make a table that is going to grow by an average of 230,000 records per day. There are only 20 columns in the table, mostly char and integer. It's FK'd in two places to another table for import/export transaction id's and I have a serial primary key an

Re: [GENERAL] Large table search question[Scanned]

2004-06-01 Thread Stijn Vanroye
Thanks for the reply. I was afraid it would come down to testing each individual situation. The table I mentioned (6 million+ records) actually is a phonebook. And searching and filtering is possible on almost any combination of fields. So there's an index on each individual field now and that'

Re: [GENERAL] Large table search question

2004-06-01 Thread Richard Huxton
Stijn Vanroye wrote: I don't want to butt in, I just find this an interesting discussion and would like to add my 2 cents: I have read this in the manual: (PostgreSQL 7.4beta4 documentation, Chapter 11.3 Multicolumn Indexes) Quote: "Multicolumn indexes should be used sparingly. Most of the time, an

Re: [GENERAL] Large table search question

2004-06-01 Thread Stijn Vanroye
Regards, Stijn Vanroye -Original Message- From: Richard Huxton [mailto:[EMAIL PROTECTED] Sent: Tuesday, 1 June 2004 10:44 To: John Wells Cc: [EMAIL PROTECTED] Subject: Re: [GENERAL] Large table search question[Scanned] John Wells wrote: > Guys, > > I have a general question about desig

Re: [GENERAL] Large table search question

2004-06-01 Thread Richard Huxton
John Wells wrote: Guys, I have a general question about designing databases for large data sets. I was speaking with a colleague about an application we're preparing to build. One of the application's tables will potentially contain 2 million or more names, containing (at least) the fields first_n

Re: [GENERAL] Large table search question

2004-05-30 Thread Tom Lane
"John Wells" <[EMAIL PROTECTED]> writes: > A common lookup the application will require is the full name, so prefix + > first_name + middle_name + last_name. > My friend's suggestion was to create a "lookup field" in the table itself, > which would contain a concatenation of these fields created d

[GENERAL] Large table search question

2004-05-30 Thread John Wells
Guys, I have a general question about designing databases for large data sets. I was speaking with a colleague about an application we're preparing to build. One of the application's tables will potentially contain 2 million or more names, containing (at least) the fields first_name, last_name,

Re: [GENERAL] Large table load (40 millon rows) - WAL hassles

2001-07-12 Thread Bruce Momjian
> Version: Postgres 7.1.2 > > A product we are developing requires frequent loading > of a large number of rows into a table. We are using > the "copy file" command, but with WAL we are effectively > doubling the amount of disk writing we are doing. > > After the rows are loaded we do a "create

[GENERAL] Large table load (40 millon rows) - WAL hassles

2001-07-10 Thread simon lai
Version: Postgres 7.1.2 A product we are developing requires frequent loading of a large number of rows into a table. We are using the "copy file" command, but with WAL we are effectively doubling the amount of disk writing we are doing. After the rows are loaded we do a "create index". Is the