On Wed, May 12, 2010 at 1:45 AM, venu madhav wrote:
> [Venu] Yes, autovacuum is running every hour; I can see it in the log
> messages. All the autovacuum settings are disabled except the one that
> makes it run every hour. This application runs on an embedded box, so I
> can't change the parameters.
On Wed, May 12, 2010 at 7:26 PM, Kevin Grittner wrote:
> venu madhav wrote:
>
> >> > If the records are more in the interval,
> >>
> >> How do you know that before you run your query?
> >>
> > I calculate the count first.
>
> This and other comments suggest that the data is totally static
> while this application is running. Is that correct?
On Wed, May 12, 2010 at 5:25 PM, Kevin Grittner wrote:
> venu madhav wrote:
>
> >>> AND e.timestamp >= '1270449180'
> >>> AND e.timestamp < '1273473180'
> >>> ORDER BY
> >>> e.cid DESC,
> >>> e.cid DESC
> >>> limit 21
> >>> offset 10539780
>
> > The second column acts as a secondary key for sorting if the
> > primary sorting key is a different column.
On Wed, May 12, 2010 at 3:20 AM, Kevin Grittner wrote:
> venu madhav wrote:
>
> > When I try to get the last twenty records from the database, it
> > takes around 10-15 mins to complete the operation.
>
> Making this a little easier to read (for me, at least) I get this:
>
> select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
>   e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
>   e.wifi_addr_2, e.view_status, bssid
> from event e, signature s
> where s.sig_id = e.signature
>   and e.timestamp >= '1270449180'
>   and e.timestamp < '1273473180'
> order by e.cid desc, e.cid desc
> limit 21 offset 10539780
> From: pgsql-performance-ow...@postgresql.org [mailto:
> pgsql-performance-ow...@postgresql.org] On Behalf Of venu madhav
> Sent: Tuesday, May 11, 2010 2:18 PM
> To: pgsql-performance@postgresql.org
On Wed, May 12, 2010 at 3:17 AM, Jorge Montero <
jorge_mont...@homedecorators.com> wrote:
> First, are you sure you are getting autovacuum to run hourly? Autovacuum
> will only vacuum when certain configuration thresholds are reached. You can
> set it to only check for those thresholds every so often, but no
> vacuuming or analyzing will be done unless they are hit.
venu madhav wrote:
> Kevin Grittner wrote:
>> > I calculate the count first.
>>
>> This and other comments suggest that the data is totally static
>> while this application is running. Is that correct?
>>
> No, the data gets added when the application is running. As I've
> mentioned before it could be as …
On 5/12/10 4:55 AM, Kevin Grittner wrote:
venu madhav wrote:
> we display in sets of 20/30 etc. The user also has the option to
> browse through any of those records, hence the limit and offset.
Have you considered alternative techniques for paging? You might
use values at the edges of the page to position the next page's
query, instead of a large OFFSET.
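The edge-value ("keyset") technique Kevin suggests can be sketched in a few lines. This is a minimal, self-contained illustration using a synthetic in-memory table; the table and column names (`event`, `cid`) follow the query discussed in this thread, but the data and helper functions are invented for the demo:

```python
# Keyset pagination sketch: seek past the last cid seen instead of using OFFSET.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (cid INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO event (cid, payload) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 101)])

PAGE = 20

def first_page(conn):
    # Newest first, same direction as the ORDER BY e.cid DESC in the thread.
    return conn.execute(
        "SELECT cid, payload FROM event ORDER BY cid DESC LIMIT ?",
        (PAGE,)).fetchall()

def next_page(conn, last_cid):
    # Instead of a growing OFFSET, an indexed seek past the previous page's
    # last cid; cost stays flat no matter how deep the user browses.
    return conn.execute(
        "SELECT cid, payload FROM event WHERE cid < ? "
        "ORDER BY cid DESC LIMIT ?",
        (last_cid, PAGE)).fetchall()

page1 = first_page(conn)               # cids 100..81
page2 = next_page(conn, page1[-1][0])  # cids 80..61
print(page1[0][0], page1[-1][0], page2[0][0], page2[-1][0])  # → 100 81 80 61
```

The trade-off versus LIMIT/OFFSET is that "jump to page N" needs extra work, but next/previous paging no longer scans and discards millions of rows.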
venu madhav wrote:
>> > If the records are more in the interval,
>>
>> How do you know that before you run your query?
>>
> I calculate the count first.
This and other comments suggest that the data is totally static
while this application is running. Is that correct?
> If I generate all the …
venu madhav wrote:
>>> AND e.timestamp >= '1270449180'
>>> AND e.timestamp < '1273473180'
>>> ORDER BY
>>> e.cid DESC,
>>> e.cid DESC
>>> limit 21
>>> offset 10539780
> The second column acts as a secondary key for sorting if the
> primary sorting key is a different column. For this query both keys
> are e.cid.
> select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
> e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE
> s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp
> < '1273473180' ORDER BY e.cid DESC, e.cid DESC LIMIT 21 OFFSET 10539780
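An OFFSET of ~10.5 million forces the server to generate, sort past, and discard all of the preceding rows on every page fetch. A keyset variant of the query above, a sketch only, assuming `e.cid` is unique per row and `:last_cid` is a hypothetical placeholder for the smallest cid shown on the previous page, would be:

```sql
-- Sketch: seek on the sort key instead of skipping 10.5M rows.
SELECT e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
       e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
       e.wifi_addr_2, e.view_status, bssid
FROM event e
JOIN signature s ON s.sig_id = e.signature
WHERE e.timestamp >= '1270449180'
  AND e.timestamp <  '1273473180'
  AND e.cid < :last_cid      -- edge value from the previous page
ORDER BY e.cid DESC
LIMIT 21;
```

With an index on `(cid)` (or `(timestamp, cid)`), each page then costs roughly the same regardless of how deep the user has paged.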
Subject: [PERFORM] Performance issues when the number of records are around 10 Million
Hi all,
In my database application, I've a table whose records can reach 10M, and
insertions can happen at a fast rate, around 100 insertions per second in
the peak times.
First, are you sure you are getting autovacuum to run hourly? Autovacuum will
only vacuum when certain configuration thresholds are reached. You can set it
to only check for those thresholds every so often, but no vacuuming or
analyzing will be done unless they are hit, regardless of how often a
check runs.
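For reference, the thresholds Jorge describes are controlled by these postgresql.conf settings; the values shown are the stock defaults, for illustration only, not recommendations for this workload:

```ini
# A table is vacuumed once, roughly:
#   dead rows > autovacuum_vacuum_threshold
#               + autovacuum_vacuum_scale_factor * reltuples
autovacuum = on
autovacuum_naptime = '1min'           # how often workers *check*, not how often they vacuum
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.2  # 20% of the table; often lowered for large tables
autovacuum_analyze_scale_factor = 0.1
```

On a 10M-row table the default scale factor means roughly 2M dead rows must accumulate before a vacuum fires, which is why "runs every hour" and "actually vacuums" are not the same thing.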
venu madhav wrote:
> When I try to get the last twenty records from the database, it
> takes around 10-15 mins to complete the operation.
Making this a little easier to read (for me, at least) I get this:
select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
  e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
  e.wifi_addr_2, e.view_status, bssid
from event e, signature s
where s.sig_id = e.signature
  and e.timestamp >= '1270449180'
  and e.timestamp < '1273473180'
order by e.cid desc, e.cid desc
limit 21 offset 10539780
Hi all,
In my database application, I've a table whose records can reach 10M,
and insertions can happen at a fast rate, around 100 insertions per second
in the peak times. I configured Postgres to run autovacuum on an hourly
basis. I have a frontend GUI application in CGI which displays the data
from this table.
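Since the GUI runs a count before every page fetch, one common workaround, a sketch whose accuracy depends on how recently the table was vacuumed or analyzed, is to use the planner's estimate from the catalog instead of an exact `count(*)`:

```sql
-- Approximate row count from catalog statistics: cheap, but only as
-- fresh as the last VACUUM/ANALYZE. Table name 'event' is from this thread.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'event';
```

An exact `count(*)` over 10M rows must scan the table (or an index) every time, so avoiding it, or caching it, usually matters as much as the paging query itself.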