On 29/12/2013 19:51, Jeff Janes wrote:
> On Thursday, December 19, 2013, Marc Cousin wrote:
>
>
>
> Yeah, I had forgotten to set it up correctly on this test environment
> (its value is correctly set in production environments). Putting it to a
> few gigabyt
On 19/12/2013 21:36, Kevin Grittner wrote:
Marc Cousin wrote:
Then we insert missing paths. This is one of the plans that fail
insert into path (path)
select path from batch
where not exists
(select 1 from path where path.path=batch.path)
group by path;
I know you
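For readers skimming the archive, the quoted insert-missing-rows pattern can be reproduced in a tiny self-contained sketch (sqlite3 here as a stand-in for PostgreSQL; the path/batch names follow the quoted query, the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal stand-ins for the "path" and "batch" tables in the quoted query.
cur.execute("CREATE TABLE path (path TEXT PRIMARY KEY)")
cur.execute("CREATE TABLE batch (path TEXT)")
cur.executemany("INSERT INTO path VALUES (?)", [("/a",), ("/b",)])
cur.executemany("INSERT INTO batch VALUES (?)",
                [("/a",), ("/b",), ("/c",), ("/c",), ("/d",)])

# Insert only the paths not already present, deduplicated by GROUP BY --
# the same anti-join shape as the quoted INSERT ... SELECT.
cur.execute("""
    INSERT INTO path (path)
    SELECT path FROM batch
    WHERE NOT EXISTS
        (SELECT 1 FROM path WHERE path.path = batch.path)
    GROUP BY path
""")
conn.commit()
print(sorted(r[0] for r in cur.execute("SELECT path FROM path")))
```

The GROUP BY both deduplicates the batch and keeps the statement idempotent against duplicate input rows.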
On 19/12/2013 19:33, Jeff Janes wrote:
> QUERY PLAN
>
> --
> Nested Loop (cost=0.56..4001768.1
Hi,
I'm running into what feels like a limitation of the optimizer (or
something I don't understand :) ).
Sorry, this is a rather long mail.
I have a workaround for the problem below, but I don't like cheating the
optimizer for no good reason.
First a little bit of context, because the
On Tue, 27 Sep 2011 19:05:09 +1000,
anthony.ship...@symstream.com wrote:
> On Tuesday 27 September 2011 18:54, Marc Cousin wrote:
> > The thing is, the optimizer doesn't know if your data will be in
> > cache when you run your query… if you are sure most of your
>
On Tue, 27 Sep 2011 12:45:00 +1000,
anthony.ship...@symstream.com wrote:
> On Monday 26 September 2011 19:39, Marc Cousin wrote:
> > Because Index Scans are sorted, not Bitmap Index Scans, which
> > builds a list of pages to visit, to be then visited by the Bitmap
>
On Mon, 26 Sep 2011 16:28:15 +1000,
anthony.ship...@symstream.com wrote:
> In Mammoth Replicator (PG 8.3) I have a table described as
>
> Table "public.tevent_cdr"
>  Column | Type | Modifiers
> --------+------+-----------
On Tuesday 01 March 2011 16:33:51, Tom Lane wrote:
> Marc Cousin writes:
> > On Tuesday 01 March 2011 07:20:19, Tom Lane wrote:
> >> It's worth pointing out that the only reason this effect is dominating
> >> the runtime is that you don't have any statistic
On Tuesday 01 March 2011 07:20:19, Tom Lane wrote:
> Marc Cousin writes:
> > On Monday 28 February 2011 16:35:37, Tom Lane wrote:
> >> Could we see a concrete example demonstrating that? I agree with Heikki
> >> that it's not obvious what you are testing that
On Monday 28 February 2011 16:35:37, Tom Lane wrote:
> Marc Cousin writes:
> > On Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote:
> >> Testing here with a table with 1000 columns and 100 partitions, about
> >> 80% of the planning time is looking up
On Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote:
> On 28.02.2011 11:38, Marc Cousin wrote:
> > I've been facing a very large (more than 15 seconds) planning time in a
> > partitioned configuration. The amount of partitions wasn't completely
> >
On Friday 04 June 2010 15:59:05, Tom Lane wrote:
> Marc Cousin writes:
> > I hope I'm not going to expose an already known problem, but I couldn't
> > find it in the mailing list archives (I only found
> > http://archives.postgresql.org/pgsql-hackers/2009-12/msg01543
Hi.
I hope I'm not going to expose an already known problem, but I couldn't find
it in the mailing list archives (I only found
http://archives.postgresql.org/pgsql-hackers/2009-12/msg01543.php).
On one of my (non production) machines, I've just seen a very big performance
regression (I was doing a ve
The few 'obvious' things I see:
ID and POLLID aren't of the same type (numeric vs bigint).
TTIME isn't indexed.
And as a general matter, you should stick to native datatypes if you don't
need numeric.
But as said in the other answer, maybe you should redo this schema and use
more consistent d
> It really has very little impact. It only affects index scans, and
> even then only if effective_cache_size is less than the size of the
> table.
>
> Essentially, when this kicks in, it models the effect that if you are
> index scanning a table much larger than the size of your cache, you
> migh
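The effect being described can be illustrated with a deliberately simplified model (this is only the intuition; PostgreSQL's actual index-scan costing uses a different formula, and the function and parameter names below are made up for the sketch):

```python
def toy_page_reads(probes, table_pages, cache_pages):
    """Toy model of uniform index probes against a table partially
    held in cache. Illustrative only -- not PostgreSQL's costing."""
    if table_pages <= cache_pages:
        # The whole table fits in cache: at worst each page is read once.
        return min(probes, table_pages)
    # Otherwise, once the cache is warm, a probe misses with
    # probability (1 - cache_pages / table_pages).
    miss_fraction = 1 - cache_pages / table_pages
    return cache_pages + probes * miss_fraction

# Table smaller than the cache: reads are capped at the table size.
print(toy_page_reads(10_000, 1_000, 8_000))   # 1000
# Table 10x the cache: most probes become real page reads.
print(toy_page_reads(10_000, 80_000, 8_000))  # 17000.0
```

This is why raising effective_cache_size only matters for tables larger than it: below that threshold the model already assumes the pages stay cached.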
On Thursday 16 July 2009 23:54:54, Kevin Grittner wrote:
> Marc Cousin wrote:
> > to sum it up, should I keep these values (I hate doing this :) ) ?
>
> Many people need to set the random_page_cost and/or seq_page_cost to
> reflect the overall effect of caching on the act
On Thursday 16 July 2009 22:07:25, Kevin Grittner wrote:
> Marc Cousin wrote:
> > the hot parts of these 2 tables are extremely likely to be in the
> > database or linux cache (buffer hit rate was 97% in the example
> > provided). Moreover, the first two queries of
On Thursday 16 July 2009 07:20:18 Marc Cousin wrote:
> On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote:
> > Marc Cousin wrote:
> > > This mail contains the asked plans :
> > > Plan 1
> > > around 1 million records to insert,
On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote:
> Marc Cousin wrote:
> > This mail contains the asked plans :
> > Plan 1
> > around 1 million records to insert, seq_page_cost 1, random_page_cost 4
> >
> > -> Hash (cost=425486.72..425486.72
On Wednesday 15 July 2009 15:45:01, Alvaro Herrera wrote:
> Marc Cousin wrote:
> > There are other things I am thinking of: maybe it would be better to
> > have sort space on another (and not DRBD'd) RAID set? We have quite a
> > cheap setup right now for the d
This mail contains the asked plans:
I've done them with the different configurations, as I had gone to the effort
of setting up the whole thing :)
Stats were updated between all runs. Each time is the first run of the query
(that's what we have in production with bacula)
And I added the executor s
On Tuesday 14 July 2009 10:23:25, Richard Huxton wrote:
> Marc Cousin wrote:
> > Temporarily I moved the problem to somewhat larger batch sizes by
> > changing random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel
> > like a sorcerer's apprentice with this, as I
On Tuesday 14 July 2009 10:15:21, you wrote:
> Marc Cousin wrote:
> >> Your effective_cache_size is really small for the system you seem to
> >> have - it's the size of the IO caching your OS is doing and uses no
> >> resources itself. And 800MB of that on a system
>
> While this is not your questions, I still noticed you seem to be on 8.3 -
> it might be a bit faster to use GROUP BY instead of DISTINCT.
It didn't make a big difference; I had already tried that before for this query.
Anyway, as you said, it's not the query having problems :)
> Your effective_cac
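The DISTINCT-versus-GROUP BY advice is about plan shape only: on an 8.3-era planner, GROUP BY could use a HashAggregate while SELECT DISTINCT always sorted. Semantically the two forms return the same rows, as this small stand-in shows (sqlite3, not PostgreSQL; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (2,), (3,), (3,)])

# Both forms deduplicate; only the plans the optimizer may choose differ.
distinct = cur.execute("SELECT DISTINCT x FROM t ORDER BY x").fetchall()
grouped = cur.execute("SELECT x FROM t GROUP BY x ORDER BY x").fetchall()
print(distinct == grouped, [r[0] for r in distinct])
```

So rewriting DISTINCT as GROUP BY is a safe mechanical change to try when the sort dominates.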
We regularly run all of dbcheck. This is our real configuration; there really
are lots of servers and lots of files (500 million files backed up every
month).
But thanks for mentioning that.
The thing is we're trying to improve bacula with postgresql in order to make
it able to bear with this
with bacula ...
effective_cache_size = 800MB
default_statistics_target = 1000
PostgreSQL is 8.3.5 on Debian Lenny
I'm sorry for this very long email; I tried to be as precise as I could, but
don't hesitate to ask me for more.
Thanks for helping.
Marc Cousin
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
It's not that trivial with Oracle either. I guess you had to use shared
servers to get to that number of sessions. Most of the time they're not
activated by default (dispatchers is at 0).
Granted, they are part of the 'main' product, so you just have to set up
dispatchers, shared servers, circu
On Sunday 30 November 2008 19:45:11, tmp wrote:
> I am struggling with the following query, which fetches a random subset
> of 200 questions that matches certain tags within certain languages.
> However, the query takes forever to evaluate, even though I have a
> "limit 200" appended. An
On Wednesday 11 July 2007 22:35:31, Tom Lane wrote:
> Marc Cousin <[EMAIL PROTECTED]> writes:
> > Nevertheless, shouldn't the third estimate be smaller than or equal to
> > the sum of the two others?
>
> The planner's estimation for subplan conditi
Hi,
I'm having a weird problem with a query:
I've simplified it to get the significant part (see end of message).
The point is I've got a simple
SELECT field FROM table WHERE 'condition1'
Estimated returned rows : 5453
Then
SELECT field FROM table WHERE 'condition2'
Estimated returned rows : 705
Th
Hi...
Bacula doesn't use transactions right now, so every insert is done separately
with autocommit.
Moreover, the insert loop for the main table is done by several individual
queries to insert data into several tables (filename, dir, then file), so this
is slow.
There's work underway to speed that up,
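The speed-up being worked on boils down to wrapping the per-row inserts in one transaction instead of one autocommit per statement. A generic sketch (sqlite3 as a stand-in; bacula's real schema and insert loop differ, and the table here is invented):

```python
import sqlite3

# Invented rows standing in for a batch of file records.
rows = [(i, f"/backup/file-{i}") for i in range(10_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file (id INTEGER PRIMARY KEY, name TEXT)")

# One transaction for the whole batch: on a real disk this replaces
# one fsync-bound commit per row with a single commit at the end.
with conn:  # BEGIN ... COMMIT around the block
    conn.executemany("INSERT INTO file VALUES (?, ?)", rows)

print(conn.execute("SELECT count(*) FROM file").fetchone()[0])
```

With autocommit, each row pays a synchronous commit; batched, the whole load pays one.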
I hope I'm not going to say stupid things, but here's what I know (or I think
I know :) ) about bacula + postgresql.
If I remember correctly (I already discussed this with Kern Sibbald a while
ago), bacula does each insert in its own transaction: that's how the program
is done, and of course i
> On 12/13/05, Pandurangan R S <[EMAIL PROTECTED]> wrote:
> > Did you set constraint_exclusion = true in postgresql.conf file?
> >
> > On 12/13/05, Marc Cousin <[EMAIL PROTECTED]> wrote:
> > > Hi,
> > >
> > > I've been workin
Hi,
I've been working on trying to partition a big table (I've never partitioned a
table in any other database till now).
Everything went ok, except one query that didn't work afterwards.
I've put the partition description, indexes, etc ..., and the explain plan
attached.
The query is extremel
On Wednesday 02 November 2005 14:54, Magnus Hagander wrote:
> > > I've done the tests with rc1. This is still as slow on windows ... about
> > > 6-10 times slower than linux (via IP socket), depending on using
> > > prepared queries, etc...
> > >
> > > By the way,
On Tuesday 06 September 2005 19:11, Merlin Moncure wrote:
> > This makes me wonder if we are looking in the wrong place. Maybe the
> > problem is coming from psql? More results to follow.
>
> problem is not coming from psql.
>
> One thing I did notice that in a 250k insert transaction the insert
>
> In my experience win32 is par with linux generally with a few gotchas on
> either side. Are your times with fsync=no? It's much harder to give
> apples-apples comparison with fsync=on for various reasons.
It is with fsync=off on Windows and fsync=on on Linux.
>
> Are you running stats_command_str
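For a like-for-like benchmark, the durability settings should match on both platforms. A minimal postgresql.conf fragment (parameter names as in the 8.x era; values are for benchmarking only, not production):

```ini
# Compare both platforms with the same durability setting.
fsync = on
# 8.0/8.1-era statistics parameter; it records the currently
# executing command per backend (later renamed track_activities).
stats_command_string = on
```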
Hi,
I usually use PostgreSQL coupled with Linux, but I have to use Windows for a
particular project.
So I wanted to do some tests to know if the performance will be acceptable (I
don't need PostgreSQL to be as fast on Windows as on Linux, but it has to be
usable...).
I started with tryin
On Wednesday 08 September 2004 16:56, you wrote:
> Marc Cousin <[EMAIL PROTECTED]> writes:
> > The query has been generated by business objects ... I'll try to suggest
> > to the developers to remove this constant (if they can)...
> > The fields used by the
ember 2004 16:40, you wrote:
> Marc Cousin <[EMAIL PROTECTED]> writes:
> > I'm having trouble with a (quite big) query, and can't find a way to make it
> > faster.
>
> Seems like it might help if the thing could use a HashAggregate instead
> of sort/gro
Hi. I hope I'm not asking too trivial a question here...
I'm having trouble with a (quite big) query, and can't find a way to make it
faster.
Here is the information :
Tables :
sces_vte -> 2753539 rows
sces_art -> 602327
sces_fsf -> 8126
sces_frc -> 7763
sces_tps ->