On PostgreSQL 9.1.1, I'm experiencing extremely slow/inefficient min/max
queries against a partitioned table, despite the recent improvements made in
version 9.1. I haven't seen this issue discussed since 9.1 was released, so I
wanted to provide an example of the inefficient execution plan in ca
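For reference, a minimal sketch of the schema and query shape in question
(all names are invented for illustration); the hope on 9.1 is a MergeAppend
over per-partition index scans rather than a full scan of every child:

CREATE TABLE events (id bigserial, created timestamptz);
CREATE TABLE events_2012_01
    (CHECK (created >= '2012-01-01' AND created < '2012-02-01'))
    INHERITS (events);
CREATE TABLE events_2012_02
    (CHECK (created >= '2012-02-01' AND created < '2012-03-01'))
    INHERITS (events);
CREATE INDEX ON events_2012_01 (created);
CREATE INDEX ON events_2012_02 (created);

-- 9.1 can satisfy this from the partition indexes; the complaint is
-- about plans where it does not:
EXPLAIN SELECT max(created) FROM events;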
On 29/02/12 06:06, David Kerr wrote:
On 02/27/2012 12:08 AM, Reuven M. Lerner wrote:
Hi, everyone. I wanted to thank you again for your help on the huge
delete problem that I was experiencing.
After a lot of trial and error, we finally came to the conclusion that
deleting this much data in the
On 02/27/2012 07:14 AM, Shaun Thomas wrote:
On 02/27/2012 08:59 AM, Reuven M. Lerner wrote:
From what I understand, the issue isn't one of current disk space,
but rather of how quickly the disk space is being used up.
Noted. Just keep in mind that dead rows are not free. In the case of sequen
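The point about sequential scans can be shown with a small sketch (table
name invented): deleted rows still occupy heap pages that every sequential
scan must read until VACUUM makes the space reusable.

CREATE TABLE t (x int);
INSERT INTO t SELECT generate_series(1, 100000);
DELETE FROM t;           -- the rows are now dead, but not gone
SELECT count(*) FROM t;  -- still reads all the pages the table ever used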
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: Wed 2/29/2012 7:32
>
> On Mon, Feb 6, 2012 at 6:05 AM, Marc Mamin wrote:
> > without analyze: http://explain.depesz.com/s/6At
> > with analyze: http://explain.depesz.com/s/r3B
...
> The problem seems to be that the cost estimator
Do you see any performance difference between the following approaches? The
assumption is that most of the rows in the query will be requested often
enough.
1. SQL function.
CREATE OR REPLACE FUNCTION X(IN a_id uuid, IN b_id uuid)
RETURNS int
STABLE
AS $$
    SELECT count(1)::int   -- cast needed: count() returns bigint
    FROM t                 -- the original message is truncated here; the
    WHERE a = a_id         -- table name and WHERE clause are illustrative
      AND b = b_id
$$ LANGUAGE sql;
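For completeness, the call shape (the uuid literals are placeholders):

SELECT X('00000000-0000-0000-0000-000000000000'::uuid,
         '11111111-1111-1111-1111-111111111111'::uuid);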
On Wed, Feb 8, 2012 at 2:59 PM, Ofer Israeli wrote:
> The settings we used were not in the postgresql.conf file, but rather came
> from an update of the pg_autovacuum table, where we set vac_cost_limit to
> 2000. The reason being that we wanted this definition only for the big
> (TOASTed) ta
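For reference, a sketch of the kind of per-table override being described
('big_table' is a placeholder). On 8.3 and earlier this is a row in the
pg_autovacuum catalog, where -1 means "use the default"; on 8.4 and later
the catalog is gone and the equivalent is a storage parameter:

-- 8.3 and earlier:
INSERT INTO pg_autovacuum
    (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit,
     freeze_min_age, freeze_max_age)
VALUES ('big_table'::regclass, true, -1, -1, -1, -1, -1, 2000, -1, -1);

-- 8.4 and later:
ALTER TABLE big_table SET (autovacuum_vacuum_cost_limit = 2000);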
On Sun, Feb 5, 2012 at 12:29 PM, Saurabh wrote:
> My intention in keeping autovacuum off is bulk loading only. I was
> thinking that after the bulk load I will change it.
>
> I changed wal_buffers from 5MB to 16MB but got the same performance as
> with 5MB (even slightly worse).
Does it help if you create the
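For what it's worth, the usual ordering for a bulk load looks like this
sketch (table, column, and file names are invented):

-- load first (autovacuum off; maintenance_work_mem raised for the
-- index builds below; wal_buffers is set at server start)
CREATE TABLE big_table (id bigint, payload text);
COPY big_table FROM '/path/to/data.csv' CSV;

-- build indexes only after the data is in, then analyze
-- (and re-enable autovacuum)
CREATE INDEX big_table_id_idx ON big_table (id);
ANALYZE big_table;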
On Mon, Feb 6, 2012 at 6:05 AM, Marc Mamin wrote:
> without analyze: http://explain.depesz.com/s/6At
> with analyze: http://explain.depesz.com/s/r3B
I think this is the same issue complained about here:
http://archives.postgresql.org/message-id/4ed68eec.9030...@krogh.cc
And here:
http://arc
2012/2/29 Stefan Keller :
> 2012/2/29 Jeff Janes :
>>> It's quite possible the vacuum full is thrashing your disk cache due
>>> to maintenance_work_mem. You can overcome this issue with the tar
>>> trick, which is more easily performed as:
>>>
>>> tar cf /dev/null $PG_DATA/base
>>
>> But on many i
2012/2/29 Jeff Janes :
>> It's quite possible the vacuum full is thrashing your disk cache due
>> to maintenance_work_mem. You can overcome this issue with the tar
>> trick, which is more easily performed as:
>>
>> tar cf /dev/null $PG_DATA/base
>
> But on many implementations, that will not work.
On Wed, Feb 29, 2012 at 12:16 PM, Jeff Janes wrote:
> But on many implementations, that will not work. tar detects the
> output is going to the bit bucket, and so doesn't bother to actually
> read the data.
Really? Getting smart on us?
Shame on it. Who asked it to be smart?
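For what it's worth, piping the archive through another process, e.g.
tar cf - $PG_DATA/base | cat > /dev/null, defeats that optimization,
since tar can no longer tell where the bytes end up. An in-database
alternative is to force a sequential read of the tables you want warm;
a sketch that generates one statement per user table:

-- run the generated statements to pull each table's heap pages
-- through the OS cache
SELECT 'SELECT count(*) FROM '
       || quote_ident(schemaname) || '.' || quote_ident(tablename) || ';'
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');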
On Tue, Feb 28, 2012 at 2:41 PM, Claudio Freire wrote:
> On Tue, Feb 28, 2012 at 5:48 PM, Stefan Keller wrote:
>> P.S. And yes, the database is effectively 'read-only': it is truncated
>> and re-populated from scratch every night. fsync is off, so I don't care
>> about ACID. After the indexes on name, hstore
Hello all,
I am trying to help the Django project by investigating whether there
should be some default batch size limits for insert and delete queries.
This is related to a couple of tickets which deal with SQLite's inability
to handle more than 1000 parameters in a single query. That backend
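For illustration, the arithmetic behind such a limit (using the
1000-parameter figure above): a multi-row INSERT binds rows x columns
parameters, so the largest safe batch for a 3-column table is
floor(1000 / 3) = 333 rows. The parameterized statement shape, sketched:

-- each row contributes one bound parameter per column; under a
-- 1000-parameter cap this statement can carry at most 333 rows
INSERT INTO t (a, b, c) VALUES
    (?, ?, ?),
    (?, ?, ?),
    (?, ?, ?);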