I wrote:
> I will take another look at it now that you have posted the
> EXPLAIN ANALYZE results
Could you run this?
set work_mem = '50MB';
set effective_cache_size = '3GB';
EXPLAIN ANALYZE
begin transaction;
drop index node_comment_statistics_node_comment_timestamp_idx;
EXPLAIN ANALYZE
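Spelled out, the suggested experiment would look something like this. The
query itself was cut off above, so the SELECT below is only a plausible
stand-in, with table and column names guessed from the index name:

    set work_mem = '50MB';
    set effective_cache_size = '3GB';

    EXPLAIN ANALYZE
    SELECT nid, comment_count, last_comment_timestamp
      FROM node_comment_statistics
     ORDER BY last_comment_timestamp DESC
     LIMIT 10;

    begin transaction;
    drop index node_comment_statistics_node_comment_timestamp_idx;

    EXPLAIN ANALYZE
    SELECT nid, comment_count, last_comment_timestamp
      FROM node_comment_statistics
     ORDER BY last_comment_timestamp DESC
     LIMIT 10;

    rollback;  -- DDL is transactional, so the dropped index comes back

Note that DROP INDEX holds an exclusive lock on the table until the
rollback, so this is better tried on a test copy than on a live site.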
Jean-Michel Pouré wrote:
> [no postgresql.conf changes except]
> shared_buffers = 24MB.
That's part of your problem. (Well, that's understating it; we don't
have any real evidence that you have any performance problems *not*
resulting from failure to do normal configuration.) If you download
the
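For scale, a commonly recommended starting point on a dedicated machine
with a few GB of RAM is on the order of the following; these are generic
rules of thumb rather than values from this thread:

    shared_buffers = 1GB          # ~25% of RAM; the 24MB default is far too small
    effective_cache_size = 3GB    # estimate of what the OS file cache can hold
    work_mem = 32MB               # per sort/hash, per backend, so keep it modest

Changing shared_buffers requires a server restart (and on some systems a
larger kernel SHMMAX).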
On Wed, Aug 26, 2009 at 6:29 PM, Tom Lane wrote:
g...@pilotsystems.net (Gaël Le Mignot) writes:
>> So it seems it was quite wrong about the estimated matching rows (192 predicted,
>> 10222 real).
>
> Yup. What's even more interesting is that it seems the real win would
> have been to use just the 'claude & duviau' condition (which appa
2009/8/26 Tom Lane:
>> How does a float ("REAL") compare in terms of SUM()s?
>
> Casting to float or float8 is certainly a useful alternative if you
> don't mind the potential for roundoff error. On any non-ancient
> platform those will be considerably faster than numeric. BTW,
> I think that 8
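The difference is easy to measure on a scratch table built just for the
comparison:

    CREATE TEMP TABLE sums AS
      SELECT (random() * 1000)::numeric(12,4) AS amount
        FROM generate_series(1, 1000000);

    \timing on
    SELECT sum(amount) FROM sums;           -- exact numeric arithmetic, slower
    SELECT sum(amount::float8) FROM sums;   -- faster, but subject to roundoff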
Hrishikesh (हृषीकेश मेहेंदळे) writes:
> 2009/8/26 Tom Lane
>> Do the data columns have to be bigint, or would int be enough to hold
>> the expected range?
> For the 300-sec tables I probably can drop it to an integer
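As a sanity check on the range: 2^31-1 bytes accumulated over a 300-second
bucket works out to roughly 57 Mbit/s sustained per counter, so integer is
safe unless one element can average more than that in an interval. The
narrowing itself is one statement (names made up here), though it rewrites
the table:

    ALTER TABLE stats_300s
      ALTER COLUMN bytes_in  TYPE integer,
      ALTER COLUMN bytes_out TYPE integer;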
Hi Tom,
Thanks for your quick response.
2009/8/26 Tom Lane
> Hrishikesh (हृषीकेश मेहेंदळे) writes:
> > In my timing tests, the performance of PG is quite a lot worse than the
> > equivalent BerkeleyDB implementation.
>
> Are you actually comparing apples to apples? I don't recall that BDB
> has any built-in aggregation fun
Hrishikesh (हृषीकेश मेहेंदळे) writes:
> In my timing tests, the performance of PG is quite a lot worse than the
> equivalent BerkeleyDB implementation.
Are you actually comparing apples to apples? I don't recall that BDB
has any built-in aggregation fun
Hi All,
We are improving our network appliance monitoring system, and are evaluating
using PostgreSQL as the back-end traffic statistics database (we're
currently running a home-grown Berkeley-DB based statistics database).
We log data from various network elements (it's mainly in/out bytes and
p
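The schema being described is presumably something like the sketch below;
all of the names are guesses, since the message is truncated here:

    CREATE TABLE traffic_stats (
        element_id  integer      NOT NULL,
        bucket_ts   timestamptz  NOT NULL,  -- start of the sampling interval
        bytes_in    bigint       NOT NULL,
        bytes_out   bigint       NOT NULL,
        PRIMARY KEY (element_id, bucket_ts)
    );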
g...@pilotsystems.net (Gaël Le Mignot) writes:
> So it seems it was quite wrong about the estimated matching rows (192 predicted,
> 10222 real).
Yup. What's even more interesting is that it seems the real win would
have been to use just the 'claude & duviau' condition (which appa
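When the estimate is off by this much, the usual first step is to gather
more detailed statistics on the column behind the full-text condition and
re-ANALYZE; the names below are placeholders for the actual table and
tsvector column:

    ALTER TABLE posts ALTER COLUMN body_tsv SET STATISTICS 1000;
    ANALYZE posts;

If that doesn't help, restructuring the query so the planner can apply the
selective 'claude & duviau' condition on its own is the other lever.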
Hello Guillaume!
Sun, 23 Aug 2009 14:49:05 +0200, you wrote:
> Hi Gaël,
> On Fri, Aug 21, 2009 at 3:37 PM, Gaël Le Mignot wrote:
>> With 8.3 ::
>>
>> Limit (cost=752.67..752.67 rows=1 width=24)
>> (11 rows)
>>
>> With 8.4 ::
>> (8 rows)
> Could you provide us with the EXPLAIN *ANALYZE*
Jean-Michel Pouré wrote:
> Details, query plan and database:
> http://drupal.org/node/559986
That still has EXPLAIN output rather than EXPLAIN ANALYZE output.
Without the "actual" information, it's much harder to tell where
things might be improved.
-Kevin
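The distinction matters because plain EXPLAIN prints only the planner's
estimates, while EXPLAIN ANALYZE executes the statement and reports the
measured row counts and timings next to them, e.g. (the query is just an
illustration):

    EXPLAIN ANALYZE
    SELECT count(*) FROM node WHERE status = 1;
    -- each plan node now shows both "rows=" (estimated) and
    -- "actual time=... rows=..." (measured)

Since EXPLAIN ANALYZE really runs the statement, wrap it in BEGIN ...
ROLLBACK if the query modifies data.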
Dear friends,
I am contacting you at the request of the PostgreSQL hackers list.
I am running into a systemic problem using Drupal under PostgreSQL 8.4.
Drupal relies heavily on a domain derived from int:
CREATE DOMAIN int_unsigned
AS integer
CONSTRAINT int_unsigned_check CHECK ((VALUE >= 0));
Analysing slow queries
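The domain behaves exactly like an integer column with a CHECK constraint
attached; a minimal illustration (the table is made up):

    CREATE TABLE hit_counts (hits int_unsigned NOT NULL);
    INSERT INTO hit_counts VALUES (42);   -- accepted
    INSERT INTO hit_counts VALUES (-1);   -- rejected by int_unsigned_check

Because the domain is based on integer, indexes on such a column work the
same way as on a plain int column.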
On Tue, 25 Aug 2009, Fred Janon wrote:
Asking the Performance people as well, since I didn't get any answer from
General...
I have been unable to create a multi-column index with 2 integers as well,
the same error as the one I get with 2 dates.
ERROR: data type date has no default operator class
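That error normally means CREATE INDEX asked for a non-default access
method, typically USING gist, on types that have no GiST operator class
out of the box. A plain btree multicolumn index on two dates or two
integers works directly; for GiST, the btree_gist contrib module supplies
the missing operator classes (table and index names below are
hypothetical):

    -- btree handles multiple date/integer columns natively
    CREATE INDEX events_dates_idx ON events (start_date, end_date);

    -- for GiST, install contrib/btree_gist first
    -- (CREATE EXTENSION btree_gist on 9.1 and later)
    CREATE INDEX events_dates_gist_idx
        ON events USING gist (start_date, end_date);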