On 27/09/2011, at 8:29 PM, Marti Raudsepp wrote:
> 1. First things first: vacuum cannot delete tuples that are still
> visible to any old running transactions. You might have some very long
> queries or transactions that prevent it from cleaning properly:
>
> select * from pg_stat_activity where
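The quoted query is cut off in this archive preview. A minimal sketch of the kind of check being suggested, using the 8.x column names (procpid and current_query; later releases call these pid and query), might look like:

-- sessions whose transaction has been open for a long time
select procpid, usename, xact_start, current_query
from pg_stat_activity
where xact_start < now() - interval '1 hour'
order by xact_start;

Backends sitting in "<IDLE> in transaction" with an old xact_start are the usual culprits that prevent vacuum from removing dead rows.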
On 27/09/2011, at 2:21 PM, Tom Lane wrote:
> Royce Ausburn writes:
>> I have a problem with autovacuum apparently not doing the job I need it to
>> do.
>
> Hm, I wonder whether you're getting bit by bug #5759, which was fixed
> after 8.3.12.
If this were the c
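The reply preview is cut off here. Whether a given server already carries that fix is simply a matter of checking the running minor version:

select version();
-- 8.3.12 or earlier predates the fix Tom refers to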
Hi all,
I have a problem with autovacuum apparently not doing the job I need it to do.
I have a table named datasession that is frequently inserted into, updated, and
deleted from. Typically the table will have a few thousand rows in it. Each
row typically survives a few days and is updated every 5
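The preview is truncated at this point. To see whether autovacuum is actually keeping up with that churn, the per-table statistics view is the first thing to check (the table name is taken from the post above):

select relname, n_live_tup, n_dead_tup, last_autovacuum, last_analyze
from pg_stat_user_tables
where relname = 'datasession';

If n_dead_tup keeps climbing even though last_autovacuum is recent, the dead rows are most likely being held visible by an old transaction (or by a bug such as the #5759 mentioned elsewhere in the thread) rather than autovacuum simply not running.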
Sorry all - this was a duplicate from another of my addresses =( Thanks to all
that have helped out on both threads.
On 21/09/2011, at 8:44 AM, Royce Ausburn wrote:
> Hi all,
>
> It looks like I've been hit with this well known issue. I have a complicated
> query th
Hi all,
It looks like I've been hit with this well known issue. I have a complicated
query that is intended to run every few minutes, I'm using JDBC's
Connection.prepareStatement() mostly for nice parameterisation, but postgres
produces a suboptimal plan due to its lack of information when the
On 21/09/2011, at 9:39 AM, Craig Ringer wrote:
> On 21/09/2011 7:27 AM, Royce Ausburn wrote:
>> Hi all,
>>
>> It looks like I've been hit with this well known issue. I have a
>> complicated query that is intended to run every few minutes, I'm using
Hi all,
It looks like I've been hit with this well known issue. I have a complicated
query that is intended to run every few minutes, I'm using JDBC's
Connection.prepareStatement() mostly for nice parameterisation, but postgres
produces a suboptimal plan due to its lack of information when the
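The preview cuts off mid-sentence, but the issue described is the classic one where a server-side prepared statement is planned once, without sight of the actual parameter values. A rough way to reproduce it from psql, with a made-up table and columns for illustration:

prepare report_stmt (int, int, int) as
  select source, sum(bytes)
  from sample
  where source = $1 and starttime between $2 and $3
  group by source;

explain execute report_stmt(42, 1287493200, 1290171599);
-- on releases of that era the plan is built without knowing the parameter
-- values, so a selective value can get a much worse plan than the literal query

On the JDBC side the usual workaround at the time was to keep the driver on the unnamed statement (for example by setting prepareThreshold=0), so each execution is planned with the real parameter values.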
> On Wed, Feb 2, 2011 at 7:00 PM, Craig Ringer wrote:
>> Whatever RAID controller you get, make sure you have a battery backup
>> unit (BBU) installed so you can safely enable write-back caching.
>> Without that, you might as well use software RAID - it'll generally be
>> faster (and cheaper) t
On 17/12/2010, at 9:20 PM, Pierre C wrote:
>
>> fc=# explain analyse select collection, period, tariff, sum(bytesSent),
>> sum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600 as
>> startchunk from sample_20101001 where starttime between 1287493200 and
>> 1290171599 and
On 17/12/2010, at 8:27 PM, Filip Rembiałkowski wrote:
>
> 2010/12/17 Royce Ausburn
> Hi all,
>
> I have a table that in the typical case holds two-minute sample data for a
> few thousand sources. Often we need to report on these data for a particular
> source over a p
Hi all,
I have a table that in the typical case holds two-minute sample data for a few
thousand sources. Often we need to report on these data for a particular
source over a particular time period and we're finding this query tends to get
a bit slow.
The structure of the table:
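The table definition is cut off in this preview, but for a query that filters on a source identifier plus a starttime range and then aggregates, the usual first step is a composite index with the equality column leading (column names below are taken from the quoted query and are partly guesswork):

create index sample_20101001_collection_starttime
  on sample_20101001 (collection, starttime);

That lets the planner range-scan just the matching source's rows for the requested window instead of filtering the whole table; whether it actually helps depends on the real schema, which this preview does not show.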
Thanks guys - interesting.
On 14/12/2010, at 5:59 AM, Josh Berkus wrote:
> On 12/12/10 6:43 PM, Royce Ausburn wrote:
>> Hi all,
>>
>> I notice that when restoring a DB on a laptop with an SSD, typically
>> postgres is maxing out a CPU - even during a COPY. I w
Hi all,
I notice that when restoring a DB on a laptop with an SSD, typically postgres
is maxing out a CPU - even during a COPY. I wonder, what is postgres usually
doing with the CPU? I would have thought the disk would usually be the
bottleneck in the DB, but occasionally it's not. We're emb