Brian Cox wrote:
> Kevin Grittner [kevin.gritt...@wicourts.gov] wrote:
>> Is there any correlation between ts_id and ts_interval_start_time?
> only vaguely: increasing ts_interval_start_time implies increasing
> ts_id but there may be many rows (100,000's) with the same
> ts_interval_start_time
>
Kevin Grittner [kevin.gritt...@wicourts.gov] wrote:
Is there any correlation between ts_id and ts_interval_start_time?
only vaguely: increasing ts_interval_start_time implies increasing ts_id
but there may be many rows (100,000's) with the same ts_interval_start_time
Perhaps if you tried min…
Brian Cox wrote:
> cemdb=# explain select min(ts_id) from
> ts_stats_transet_user_interval a
> where 0=0 and a.ts_interval_start_time >= '2009-6-16 01:00' and
> a.ts_interval_start_time < '2009-6-16 02:00';
> seems like max() shouldn't take any longer than min(), and certainly
> not 10 times as long
ts_stats_transet_user_interval has ~48M rows. ts_id is the PK and there
is an index on ts_interval_start_time. I reindexed it and ran vacuum
analyze. Only SELECTs have been done since these operations.
cemdb=# explain select min(ts_id) from ts_stats_transet_user_interval a
where 0=0 and a.ts_i…
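A workaround sometimes suggested for the asymmetric min()/max() case discussed above is to stop the planner from rewriting the aggregate into a backward primary-key index scan, forcing it to use the time-range index instead. A sketch against the thread's table, with the `OFFSET 0` subquery acting as an optimization fence (this is a hedged illustration, not the thread's confirmed fix):

```sql
-- Compute max(ts_id) over the filtered rows without the min/max
-- index-scan rewrite; OFFSET 0 discourages the planner from flattening
-- the subquery.
SELECT max(ts_id)
FROM (SELECT ts_id
      FROM ts_stats_transet_user_interval
      WHERE ts_interval_start_time >= '2009-06-16 01:00'
        AND ts_interval_start_time <  '2009-06-16 02:00'
      OFFSET 0) AS sub;
```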
Hi Tom,
We have managed to improve significantly on the speed of this query. We
did this by clustering the table on the domain index, which significantly
reduced the page reads required to perform the query.
Also, to find this we turned on log_st…
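For reference, the clustering step described above might look like the following; the table and index names are hypothetical, and note that CLUSTER rewrites the whole table in index order under an exclusive lock:

```sql
-- On 8.4+ the syntax is CLUSTER table USING index
-- (older releases use CLUSTER index ON table).
CLUSTER ts_stats_transet_user_interval USING ts_interval_start_time_idx;
ANALYZE ts_stats_transet_user_interval;  -- refresh planner statistics afterwards
```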
Peter Alban wrote:
> Why is there such a big difference ?
>
> i.e. off-peak times a simple select with where (on indexed column)
> and limit takes *40 ms*; during peak times it took *2 seconds* - 50
> times slower!
With your high work_mem setting you may have been causing the OS to
discard cac…
What's still bothering me is the performance when there is no load, or
when load is significantly lower than at peak times.
Why is there such a big difference?
i.e. off-peak times a simple select with where (on indexed column) and limit
takes *40 ms*; during peak times it took *2 seconds* - 50 times slower!
On Thu, Jun 18, 2009 at 09:42:47PM +0200, Peter Alban wrote:
> So Ken,
>
> What do you reckon it should be? What is the rule of thumb here?
>
> cheers,
> Peter
>
It really depends on your query mix. The key to remember is that
multiples (possibly many) of the work_mem value can be allocated…
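The point above — that work_mem is allocated per sort/hash operation, not per connection — is why a modest server-wide default with per-session overrides is the usual pattern. A hedged sketch (values are illustrative, not from the thread):

```sql
-- Dozens of concurrent queries can each allocate several multiples of
-- work_mem, so keep the default conservative:
SET work_mem = '8MB';     -- session default
-- ...raise it only just before a known large sort or hash join:
SET work_mem = '256MB';
-- RESET work_mem;        -- return to the configured default
```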
Tom Lane wrote:
> "Kevin Grittner" writes:
>> With all the optimizer options on, and the from_collapse_limit and
>> join_collapse_limit values both set to 100, run an EXPLAIN (no
>> ANALYZE) on your big problem query. Let us know how long the
>> EXPLAIN runs. If it gets any errors, copy and pa…
So Ken,
What do you reckon it should be? What is the rule of thumb here?
cheers,
Peter
On Thu, Jun 18, 2009 at 8:30 PM, Kenneth Marshall wrote:
> On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:
> > Hi All,
> >
> > We are having a reasonably powerful machine for supporting about…
"Kevin Grittner" writes:
> With all the optimizer options on, and the from_collapse_limit and
> join_collapse_limit values both set to 100, run an EXPLAIN (no
> ANALYZE) on your big problem query. Let us know how long the EXPLAIN
> runs. If it gets any errors, copy and paste all available info…
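The experiment Kevin describes above could be run as follows; the `\timing` meta-command and the placeholder query are illustrative, since the actual problem query is not shown in the thread:

```sql
-- Raise both collapse limits for this session only, then time how long
-- a plain EXPLAIN (planning only, no execution) takes.
SET from_collapse_limit = 100;
SET join_collapse_limit = 100;
\timing on              -- psql meta-command: print elapsed time per statement
EXPLAIN SELECT ...;     -- substitute the real problem query here
```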
Alberto Dalmaso wrote:
> P.S.: to understand what the query has to do (and 80% of the views
> have these to do): a lot of time is spent pivoting a table with
> a structure like
> identifier, description_of_value, numeric value
> that has to be transformed into
> identifier, description_1, desc…
On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:
> Hi All,
>
> We are having a reasonably powerful machine for supporting about 20
> databases, but in total they're no more than 4 GB in size.
>
> The machine is 2-processor, 8-core with 8 GB of RAM, so I would expect
> that PG should cach…
Hi All,
We are having a reasonably powerful machine for supporting about 20
databases, but in total they're no more than 4 GB in size.
The machine is 2-processor, 8-core with 8 GB of RAM, so I would expect
that PG should cache the whole db into memory. Well, actually it doesn't.
What is more strange…
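On the caching expectation above: PostgreSQL never loads a database eagerly; pages enter shared_buffers and the OS page cache only as they are touched. Illustrative settings for an 8 GB box with a ~4 GB data set (example values, not from the thread):

```sql
-- In postgresql.conf (examples only):
--   shared_buffers = 2GB          -- PostgreSQL's own buffer cache
--   effective_cache_size = 6GB    -- planner's estimate of total caching
--                                 -- (shared_buffers + OS page cache)
-- Check the current values from psql:
SHOW shared_buffers;
SHOW effective_cache_size;
```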
Brian Cox writes:
> these queries are still running now 27.5 hours later... These queries
> are generated by some java code and in putting it into a test program so
> I could capture the queries, I failed to get the id range correct --
> sorry for wasting your time with bogus data. Below is the…
On Thu, Jun 18, 2009 at 6:16 PM, Brian Cox wrote:
> Grzegorz Jakiewicz [gryz...@gmail.com] wrote:
>
>> this might be quite a bogus question, just a hint - but what is your
>> work_mem set to?
>> Guys, doesn't postgresql give a huge cost when it can't sort in memory?
>>
> work_mem = 64MB
>
try incr…
Grzegorz Jakiewicz [gryz...@gmail.com] wrote:
this might be quite a bogus question, just a hint - but what is your
work_mem set to?
Guys, doesn't postgresql give a huge cost when it can't sort in memory?
work_mem = 64MB
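Whether a sort actually overflows the 64MB work_mem above is visible directly in EXPLAIN ANALYZE output; a sketch (the query is a placeholder for the real one):

```sql
EXPLAIN ANALYZE SELECT ... ORDER BY some_column;  -- substitute the real query
-- Look for:  Sort Method: quicksort  Memory: ...kB      (fit in work_mem)
-- versus:    Sort Method: external merge  Disk: ...kB   (spilled to disk)
```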
On Thu, Jun 18, 2009 at 6:06 PM, Brian Cox wrote:
> these queries are still running now 27.5 hours later... These queries are
> generated by some java code and in putting it into a test program so I could
> capture the queries, I failed to get the id range correct -- sorry for
> wasting your time…
Tom Lane [...@sss.pgh.pa.us] wrote:
Yup. So according to those stats, all ts_id values fall in the range
61 .. 6000250068. It's no wonder it's not
expecting to find anything between 0 and 10. I think maybe you
forgot to re-analyze after loading data ... although thi…
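The re-analyze step Tom suggests, plus a way to inspect what the planner currently believes about ts_id, could look like this (table name from the thread; the pg_stats query is a standard catalog lookup):

```sql
-- Refresh statistics after the bulk load:
ANALYZE ts_stats_transet_user_interval;
-- Then inspect the planner's view of the ts_id distribution:
SELECT histogram_bounds
FROM pg_stats
WHERE tablename = 'ts_stats_transet_user_interval'
  AND attname = 'ts_id';
```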
On Wed, Jun 17, 2009 at 6:06 PM, Whit Armstrong wrote:
> I have a column which only has six states or values.
>
> Is there a size advantage to using an enum for this data type?
> Currently I have it defined as a character(1).
>
> This table has about 600 million rows, so it could wind up making a…
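A six-state enum like the one asked about could be declared as below (labels invented for illustration). Note the likely answer on size: an enum value occupies 4 bytes on disk, while character(1) is roughly 2 bytes (a 1-byte short header plus 1 byte of payload on 8.3+), so the enum's benefit is type safety and readable labels rather than disk savings:

```sql
-- Hypothetical six-state enum; label names are placeholders.
CREATE TYPE my_state AS ENUM ('a', 'b', 'c', 'd', 'e', 'f');
-- enum: 4 bytes per value; character(1): ~2 bytes per value.
```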
P.S.: to understand what the query has to do (and 80% of the views have
these to do): a lot of time is spent pivoting a table with a
structure like
identifier, description_of_value, numeric value
that has to be transformed into
identifier, description_1, description_2, ..., description_n
where n…
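One way to express the pivot described above is an aggregate over CASE branches, one per output column (all table and column names here are assumed from the description):

```sql
SELECT identifier,
       max(CASE WHEN description_of_value = 'description_1'
                THEN numeric_value END) AS description_1,
       max(CASE WHEN description_of_value = 'description_2'
                THEN numeric_value END) AS description_2
       -- ...one CASE branch per description, up to description_n
FROM source_table
GROUP BY identifier;
```

With many descriptions this gets verbose, which is consistent with the author's report that these views dominate the query time.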