> Date: Mon, 22 Sep 2014 12:46:21 -0700
> From: pie...@hogranch.com
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] large table
>
> On 9/22/2014 12:33 PM, Luke Coldiron wrote:
> >
> > It is possible and that is part of what I am trying to discover
On Mon, 22 Sep 2014 12:46:21 -0700
John R Pierce wrote:
> On 9/22/2014 12:33 PM, Luke Coldiron wrote:
> >
> > It is possible and that is part of what I am trying to discover
> > however I am very familiar with the system / code base and in this
> > case there is a single process updating the ti
On Mon, 22 Sep 2014 12:15:27 -0700
Luke Coldiron wrote:
> > > I'd guess that some other process held a transaction open for a
> > > couple of week, and that prevented any vacuuming from taking
> > > place.
> > >
> Interesting idea; on the surface I'm not sure how this would have
> happened in th
On 9/22/2014 12:33 PM, Luke Coldiron wrote:
It is possible and that is part of what I am trying to discover
however I am very familiar with the system / code base and in this
case there is a single process updating the timestamp and a single
process reading the timestamp. There are no other u
> Date: Mon, 22 Sep 2014 14:38:52 -0400
> From: wmo...@potentialtech.com
> To: lukecoldi...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] large table
>
> On Mon, 22 Sep 2014 11:17:05 -0700
> Luke Coldiron wrote:
>
> > I am trying to fi
> > From: ahodg...@simkin.ca
> > To: pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] large table
> > Date: Mon, 22 Sep 2014 11:34:45 -0700
> >
> > On Monday, September 22, 2014 11:17:05 AM Luke Coldiron wrote:
> > > The actual size of the tab
On Mon, 22 Sep 2014 11:17:05 -0700
Luke Coldiron wrote:
> I am trying to figure out what would have caused a table in a PostgreSQL
> 8.4.16 database to get into a state where there is only 1 live tuple, and has
> only ever had 1 tuple, but the size of the table is huge.
>
> CREATE TABLE public.myTabl
On Monday, September 22, 2014 11:17:05 AM Luke Coldiron wrote:
> The actual size of the table is around 33 MB.
> The myFunc function is called every 2.5 seconds and the wasUpdated function
> every 2 seconds by separate processes. I realize that running a VACUUM FULL
> or CLUSTER command on the tabl
I am trying to figure out what would have caused a table in a PostgreSQL 8.4.16
database to get into a state where there is only 1 live tuple, and has only ever
had 1 tuple, but the size of the table is huge.
CREATE TABLE public.myTable( myColumn timestamp with time zone NOT NULL);
Note: there is no
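A back-of-the-envelope check of the blocked-vacuum theory raised in this thread: if something (such as a long-open transaction) kept VACUUM from reclaiming dead rows while the single timestamp was updated every 2.5 seconds, the dead row versions alone account for a table of roughly the reported size. The per-tuple overhead and the ~4-week window below are assumed round numbers, not measured values.

```python
SECONDS_PER_DAY = 86_400
UPDATE_INTERVAL_S = 2.5          # myFunc runs every 2.5 s (from the post)
DAYS_BLOCKED = 28                # assumed: vacuum unable to reclaim for ~4 weeks

dead_tuples = DAYS_BLOCKED * SECONDS_PER_DAY / UPDATE_INTERVAL_S

BYTES_PER_TUPLE = 36             # assumed: ~24 B header + 8 B timestamptz + 4 B line pointer
PAGE_SIZE = 8192
PAGE_HEADER = 24
tuples_per_page = (PAGE_SIZE - PAGE_HEADER) // BYTES_PER_TUPLE

table_mb = dead_tuples / tuples_per_page * PAGE_SIZE / 1024**2
print(f"{dead_tuples:.0f} dead tuples -> ~{table_mb:.0f} MB")
```

With those assumptions the arithmetic lands in the same ballpark as the 33 MB reported later in the thread, which is consistent with (though of course not proof of) the blocked-vacuum explanation.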
On Mon, Mar 16, 2009 at 8:24 AM, Janning Vygen wrote:
> On Monday 16 March 2009 15:13:51 Scott Marlowe wrote:
>> On Mon, Mar 16, 2009 at 7:15 AM, Janning Vygen wrote:
>> > Hi,
>> >
>> > Why does default_statistics_target default to 10?
>> >
>> > I suggest setting it to 100 by default:
>>
>> Al
On Monday 16 March 2009 15:13:51 Scott Marlowe wrote:
> On Mon, Mar 16, 2009 at 7:15 AM, Janning Vygen wrote:
> > Hi,
> >
> > Why does default_statistics_target default to 10?
> >
> > I suggest setting it to 100 by default:
>
> Already done in 8.4
GREAT! sorry for not searching the archives or
On Mon, Mar 16, 2009 at 7:15 AM, Janning Vygen wrote:
> Hi,
>
> Why does default_statistics_target default to 10?
> I suggest setting it to 100 by default:
Already done in 8.4
Hi,
we are running a large 8.3 database and had some trouble with the default
statistics target. We had set it on one special table some time ago, when a
growing table started getting sequential scans.
Last week we manually clustered this table (create table as ... order by;
d
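For reference, the per-column override discussed in this thread looks like the following on 8.3 (table and column names here are placeholders, not from the thread; 100 is the value 8.4 adopted as its global default):

```sql
-- postgresql.conf (global default; 8.4 raised this from 10 to 100)
-- default_statistics_target = 100

-- per-column override for the one problem table, then re-analyze
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 100;
ANALYZE mytable;
```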
On Jan 5, 2008 5:38 AM, Ed L. <[EMAIL PROTECTED]> wrote:
> We need some advice on how to handle some large table autovacuum
> issues. One of our 8.1.2 autovacuums is launching a DB-wide
> vacuum on our 270GB database to prevent xid wrap-around, but is
> getting hung-up and/or bogged down for hour
"Ed L." <[EMAIL PROTECTED]> wrote:
>
> On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote:
> > On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote:
> > > We need some advice on how to handle some large table
> > > autovacuum issues. One of our 8.1.2
> >
> > First of all, update your 8.1 in
On Jan 4, 2008 7:29 PM, Ed L. <[EMAIL PROTECTED]> wrote:
> On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote:
> >
> > Have you tried adjusting the
> >
> > #vacuum_cost_delay = 0 # 0-1000 milliseconds
> > #vacuum_cost_page_hit = 1 # 0-10000 credits
> > #vacuum_cos
Ed L. wrote:
On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote:
On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote:
We need some advice on how to handle some large table
autovacuum issues. One of our 8.1.2
First of all, update your 8.1 install to 8.1.10. Failing to
keep up with bug
On Jan 4, 2008 7:41 PM, Ed L. <[EMAIL PROTECTED]> wrote:
> On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote:
> > On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote:
> > > We need some advice on how to handle some large table
> > > autovacuum issues. One of our 8.1.2
> >
> > First of all
On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote:
> On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote:
> > We need some advice on how to handle some large table
> > autovacuum issues. One of our 8.1.2
>
> First of all, update your 8.1 install to 8.1.10. Failing to
> keep up with bug f
On Friday 04 January 2008 6:21 pm, Scott Marlowe wrote:
>
> Have you tried adjusting the
>
> #vacuum_cost_delay = 0 # 0-1000 milliseconds
> #vacuum_cost_page_hit = 1 # 0-10000 credits
> #vacuum_cost_page_miss = 10 # 0-10000 credits
> #vacuum_cost_page_dirty = 20 # 0-10000 credits
On Jan 4, 2008 6:38 PM, Ed L. <[EMAIL PROTECTED]> wrote:
> We need some advice on how to handle some large table autovacuum
> issues. One of our 8.1.2
First of all, update your 8.1 install to 8.1.10. Failing to keep up
with bug fixes is negligent. who knows, you might be getting bitten
by a bug
We need some advice on how to handle some large table autovacuum
issues. One of our 8.1.2 autovacuums is launching a DB-wide
vacuum on our 270GB database to prevent xid wrap-around, but is
getting hung-up and/or bogged down for hours on a 40gb table and
taking the server performance down with
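Scott's suggestion to adjust the vacuum_cost_* settings refers to cost-based vacuum delay: with vacuum_cost_delay left at 0 the throttle is off, and a vacuum of a 40 GB table can saturate I/O exactly as described above. A toy model of how the throttle paces the work (deliberately simplified; not the actual PostgreSQL implementation):

```python
def vacuum_sleep_ms(page_costs, cost_limit=200, cost_delay_ms=10):
    """Simplified model of cost-based vacuum delay: each page touched
    adds its cost to a running balance; once the balance reaches
    vacuum_cost_limit the backend naps for vacuum_cost_delay and the
    balance resets.  Returns total sleep time in milliseconds."""
    balance, naps = 0, 0
    for cost in page_costs:
        balance += cost
        if balance >= cost_limit:
            naps += 1
            balance = 0
    return naps * cost_delay_ms

# 1,000 cache-hit pages at cost 1 each -> 5 naps -> 50 ms of sleep
print(vacuum_sleep_ms([1] * 1000))
```

The trade-off the thread is weighing: a nonzero delay spreads the same vacuum over more wall-clock time, so it stops crushing foreground queries but takes even longer to finish.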
Thanks for the redirect... After profiling my client memory usage and using
the built-in cursor functionality I discovered that another part of my
program was causing the memory overflow and that the ResultSet iteration was
doing exactly what it should have all along.
On 4/21/07, Kris Jurka <[EMA
On Fri, 20 Apr 2007, Jason Nerothin wrote:
I'm trying to work my way around a large query problem.
Not too unexpectedly, the app server (EJB3/JPA) is choking on the queries
which are "unnamed native queries" in Java parlance. Work-around attempt 1
was to call directly to the JDBC driver, but
"Jason Nerothin" <[EMAIL PROTECTED]> writes:
> Attempt number 2, now underway, is to pass
> LIMIT and OFFSET values to the query which Postgres handles quite
> effectively as long as the OFFSET value is less than the total number of
> rows in the table. When the value is greater than that, the query
>
I'm trying to work my way around a large query problem.
In my system, I've created a number of large materialized views that are the
output of some computationally expensive stored procedures on other large
tables in my system. They are intended to serve as staging tables for the
next phase of co
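One standard way around the degrading OFFSET behaviour described in this thread is keyset ("seek") pagination: remember the last key of the previous page and filter on it, instead of making the server count past all the skipped rows. A small client-side sketch of the idea (the SQL in the docstring is illustrative, not from the thread):

```python
import bisect

def keyset_page(sorted_keys, after, limit):
    """Keyset pagination sketch: instead of OFFSET n, fetch keys
    strictly greater than the last one seen -- the client-side
    analogue of  WHERE key > %(after)s ORDER BY key LIMIT %(limit)s.
    `sorted_keys` must already be sorted."""
    start = bisect.bisect_right(sorted_keys, after)
    return sorted_keys[start:start + limit]

print(keyset_page(list(range(100)), 41, 3))  # [42, 43, 44]
```

The cost of each page then depends only on the page size, not on how deep into the result set it sits, which is precisely the property the LIMIT/OFFSET approach loses.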
List,
I've got a problem where I need to make a table that is going to grow by an average of 230,000 records per day.
There are only 20 columns in the table, mostly char and integer. It's FK'd in two places to another table for
import/export transaction id's and I have a serial primary key an
On 22 Oct 2005, at 01:25, Edoceo Lists wrote:
List,
I've got a problem where I need to make a table that is going to
grow by an average of 230,000 records per day. There are only 20
columns in the table, mostly char and integer. It's FK'd in two
places to another table for import/expor
On Fri, Oct 21, 2005 at 05:25:22PM -0700, Edoceo Lists wrote:
[summary of situation]
> Some queries take more than five minutes to complete and I'm sad
> about that. How can I make this faster?
You might get more help on pgsql-performance, which is specifically
for discussions of performance is
List,
I've got a problem where I need to make a table that is going to grow by an average of 230,000 records per day. There
are only 20 columns in the table, mostly char and integer. It's FK'd in two places to another table for import/export
transaction id's and I have a serial primary key an
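For scale, the growth rate quoted above works out as follows (the bytes-per-row figure is a guess for 20 mostly char/integer columns plus tuple overhead, not a number from the thread):

```python
ROWS_PER_DAY = 230_000      # from the post
BYTES_PER_ROW = 150         # hypothetical average row width

rows_per_year = ROWS_PER_DAY * 365
gb_per_year = rows_per_year * BYTES_PER_ROW / 1024**3
print(rows_per_year, round(gb_per_year, 1))
```

Roughly 84 million rows and on the order of 12 GB of heap per year before indexes, which is the scale at which the indexing and partitioning strategies discussed in the replies start to matter.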
Thanks for the reply. I was afraid it would come down to testing each individual
situation.
The table I mentioned (6 million+ records) actually is a phonebook. And searching and
filtering is possible on almost any combination of fields. So there's an index on each
individual field now and that'
Stijn Vanroye wrote:
I don't want to but in, I just find this an interesting discussion
and would like to add my 2 cents:
I have read this in the manual (PostgreSQL 7.4beta4 documentation,
Chapter 11.3 Multicolumn Indexes): "Multicolumn indexes should
be used sparingly. Most of the time, an
Regards,
Stijn Vanroye
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 1 June 2004 10:44
To: John Wells
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Large table search question[Scanned]
John Wells wrote:
> Guys,
>
> I have a general question about desig
John Wells wrote:
Guys,
I have a general question about designing databases for large data sets.
I was speaking with a colleague about an application we're preparing to
build. One of the application's tables will potentially contain 2 million
or more names, containing (at least) the fields first_n
"John Wells" <[EMAIL PROTECTED]> writes:
> A common lookup the application will require is the full name, so prefix +
> first_name + middle_name + last_name.
> My friend's suggestion was to create a "lookup field" in the table itself,
> which would contain a concatenation of these fields created d
Guys,
I have a general question about designing databases for large data sets.
I was speaking with a colleague about an application we're preparing to
build. One of the application's tables will potentially contain 2 million
or more names, containing (at least) the fields first_name, last_name,
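The "lookup field" idea debated in this thread can be sketched as a normalization function. The function name and the normalization rules here are hypothetical; in the database the same concatenation could be maintained by a trigger, or served instead by an expression index over the concatenation:

```python
def full_name_key(prefix, first, middle, last):
    """Hypothetical 'lookup field': concatenate the name parts into one
    normalized string (lower-cased, blank parts dropped) so a single
    equality index can serve full-name lookups."""
    parts = (prefix, first, middle, last)
    return " ".join(p.strip().lower() for p in parts if p and p.strip())

print(full_name_key("Mr", "John", "", "Wells"))  # mr john wells
```

The design question the thread raises is whether the denormalized column is worth maintaining, versus indexing the component columns and letting the planner combine them.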
> Version: Postgres 7.1.2
>
> A product we are developing requires frequent loading
> of a large number of rows into a table. We are using
> the "copy file" command, but with WAL we are effectively
> doubling the amount of disk writing we are doing.
>
> After the rows are loaded we do a "create
Version: Postgres 7.1.2
A product we are developing requires frequent loading
of a large number of rows into a table. We are using
the "copy file" command, but with WAL we are effectively
doubling the amount of disk writing we are doing.
After the rows are loaded we do a "create index".
Is the
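The "doubling" complaint above can be put in numbers. This is only an illustrative model of the poster's claim (each loaded byte written once to the WAL and once to the heap), not a measurement of Postgres 7.1 behaviour:

```python
def load_write_bytes(rows, bytes_per_row, wal=True):
    """Model of bulk-load write volume: with WAL on, each loaded byte
    is written roughly twice (log + heap); figures are illustrative."""
    heap = rows * bytes_per_row
    return heap * 2 if wal else heap

# loading 1M rows of 100 bytes: ~200 MB written with WAL, ~100 MB without
print(load_write_bytes(1_000_000, 100), load_write_bytes(1_000_000, 100, wal=False))
```

Deferring the "create index" until after the load, as the poster does, at least keeps per-row index writes out of this total during the bulk phase.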