On Tue, Feb 15, 2011 at 6:19 PM, Thomas Pöhler wrote:
> Hi list,
>
> See ganglia: http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg
>
What is the bottom graph? queries/minute? Looks like Your database is
just getting hammered.
Maybe there is a really badly coded page somewhere (a query for each
On Fri, Mar 4, 2011 at 4:18 PM, Landreville wrote:
> create temporary table deltas on commit drop as
> select * from get_delta_table(p_switchport_id, p_start_date,
> p_end_date);
>
> select round(count(volume_id) * 0.95) into v_95th_row from deltas;
> select in_rate into v_record.
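Presumably the proc then fetches the row at that offset; a sketch of the common pattern (the ordering column and the offset arithmetic are my guesses, not the original code):

select in_rate into v_record
from deltas
order by in_rate
offset v_95th_row - 1  -- OFFSET is 0-based, so row N is at offset N-1
limit 1;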
> warehouse-# WHERE e.event_date > now() - interval '2 days'
Try explicitly querying:
WHERE e.event_date > '2006-06-11 20:15:00'
In my understanding 7.4 does not precalculate this timestamp value for the
purpose of choosing a plan.
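A quick way to see the difference (events is an assumed table name, only the alias e appears above; this also assumes an index on event_date):

-- with a literal the planner can estimate selectivity and pick the index:
explain analyze
select *
from events e
where e.event_date > '2006-06-11 20:15:00';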
Greetings
Marcin
> Is there anything I'm missing that is preventing it from using the index?
> It just seems weird to me that other joins like this work fine and fast
> with indexes, but this one won't.
Did You consider clustering both tables on the dsiacctno index?
I just checked that for a 4M rows table even
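Something like this, in 8.3+ syntax (mytable and the index name are placeholders, only the column is named above):

-- rewrites the table in index order; takes an exclusive lock, so do it
-- in a maintenance window, and re-ANALYZE afterwards:
cluster mytable using mytable_dsiacctno_idx;
analyze mytable;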
> So the question is: why, on a relatively simple proc, am I getting a query
> performance delta between 3549ms and 7ms?
What version of PG is it?
I had such problems in a pseudo-realtime app I use here with Postgres, and
they went away when I moved to 8.1 (from 7.4). I guess it is better shared
buffer management.
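A classic cause of such a delta inside a proc is a cached generic plan; in the 7.4/8.1 era the usual workaround was dynamic SQL in the function body, which is replanned on every call. A sketch with hypothetical names (my_table, p_id, v_val):

-- plpgsql, 8.1+; EXECUTE ... INTO replans per call instead of reusing
-- the generic plan cached at the first execution:
execute 'select val from my_table where id = ' || quote_literal(p_id::text)
into v_val;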
> Ie: select FeatureID from SIMPLE_TABLE where FeatureChromosomeName like
> 'chrX' and StartPosition > 1000500 and EndPosition < 200;
How about this (it assumes that StartPosition <= EndPosition):
select FeatureID
from SIMPLE_TABLE
where FeatureChromosomeName like 'chrX'
and StartPosition >
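The point is the standard trick: since StartPosition <= EndPosition, a condition EndPosition < X implies StartPosition < X, so a redundant bound on StartPosition turns the open-ended scan into a finite index range. A sketch with illustrative bounds (not the thread's actual numbers):

select FeatureID
from SIMPLE_TABLE
where FeatureChromosomeName like 'chrX'
and StartPosition > 1000500
and StartPosition < 2000000  -- redundant, implied by the next condition
and EndPosition < 2000000;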
> Just as a question to Tom and team,
maybe it's time for asktom.postgresql.org? Oracle has it :)
> So the bottom line here is just that the estimated n_distinct is too
> low. We've seen before that the equation we use tends to do that more
> often than not. I doubt that consistently erring on the high side would
> be better though :-(. Estimating n_distinct from a limited sample of
> the po
> I hit an interesting paper on n_distinct calculation:
>
> http://www.pittsburgh.intel-research.net/people/gibbons/papers/distinct-values-chapter.pdf
>
> the PCSA algorithm described there requires O(1) calculation per
> value. Page 22 describes what to do with update streams.
>
> This I think (
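As an aside, not from this thread: since PostgreSQL 9.0 a known-bad estimate can simply be pinned per column, which sidesteps the sampling problem for the columns that matter (t and col are placeholder names):

-- positive = absolute distinct count, negative = fraction of reltuples;
-- takes effect at the next ANALYZE:
alter table t alter column col set (n_distinct = 1000000);
analyze t;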
There is one thing I don't understand:
->  Nested Loop  (cost=0.00..180564.28 rows=1806 width=37) (actual time=0.192..60.214 rows=3174 loops=1)
      ->  Index Scan using visitors_userid_index2 on visitors v  (cost=0.00..2580.97 rows=1300 width=33) (actual time=0.052..2
>>> max_connections = 500 # (change requires restart)
>>> work_mem = 256MB # min 64kB
>>
>> Not that it has to do with your current problem, but this combination
>> could bog down your server if enough clients run sorted queries
>> simultaneously. You prob
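The arithmetic makes the risk concrete: work_mem is a per-sort/per-hash limit, not a per-server one, so with every connection running just one sort the worst case is already

500 connections x 256 MB = 128 GB

and a single query with several sort or hash nodes can use a multiple of work_mem per connection.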
On Tue, Nov 24, 2009 at 12:49 AM, Faheem Mitha wrote:
>
> Yes, sorry. I'm using Postgresql 8.4. I guess I should go through diag.pdf
> and make sure all the information is current. Thanks for pointing out my
> error.
>
excellent report!
about the copy problem: You seem to have created the primar
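If the point is the usual one - build the primary key after the bulk load, not before it, since per-row index maintenance dominates the load - a hedged sketch looks like this (t, t_pkey, id and the file path are placeholders):

alter table t drop constraint t_pkey;      -- drop before the load
copy t from '/path/to/data.csv' with csv;
alter table t add primary key (id);        -- one bulk index build afterwards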
On Tue, Nov 24, 2009 at 2:37 PM, Luca Tettamanti wrote:
> ->  HashAggregate  (cost=1031681.15..1033497.20 rows=181605 width=8) (actual time=571807.575..610178.552 rows=26185953 loops=1)
This is Your problem. The system's estimate for the number of distinct
annotation_ids in t2 is way off (181605 estimated vs. 26185953 actual).
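The usual first knob for a bad distinct estimate, as a sketch (t2 and annotation_id come from the quoted plan; the target value is illustrative):

-- larger ANALYZE sample for the offending column:
alter table t2 alter column annotation_id set statistics 1000;
analyze t2;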
ion on the ctid column, to not have to
use the index on ID, and process the rows in physical order. First
make sure that newly inserted production data has the correct value in
the new column, and add 'where new_column is null' to the conditions.
But I have never tried this, use at Your own risk.
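A minimal sketch of the 'where new_column is null' pass described above (big_table, new_column and compute_new_value are placeholder names, not from the thread):

-- safe to re-run: rows that already got a value are skipped
update big_table
set new_column = compute_new_value(id)
where new_column is null;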
On Thu, Jan 14, 2010 at 8:17 PM, Carlo Stonebanks wrote:
> . 48 GB RAM
> 2) Which Windows OS would you recommend? (currently 2008 x64 Server)
There is no 64-bit Windows build right now - You would be limited to
shared_buffers of about a gigabyte. Choose Linux.
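For comparison, a rough sketch of where a 48 GB Linux box typically lands (rule-of-thumb numbers of mine, not from this thread):

shared_buffers = 8GB          # no ~1 GB ceiling on a 64-bit build
effective_cache_size = 36GB   # roughly what the OS page cache will hold
work_mem = 32MB               # per sort/hash - keep modest with many connections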
Greetings
Marcin Mańk