> * select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
> e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE
> s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp
> < '1273473180' ORDE
> Sure, but bulk load + random selects is going to *guarantee*
> fragmentation on a COW system (like ZFS, BTRFS, etc.) as the selects
> start to write out all the hint-bit-dirtied blocks in random order...
>
> i.e. it doesn't take long to make an originally nicely contiguous block
> layout random
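One common mitigation (a hedged sketch, assuming the event table from the
quoted query) is to vacuum right after the bulk load, so the hint bits are
set and the dirtied blocks are written back in one sequential pass instead
of by later random SELECTs:

-- Run immediately after the bulk load; table name assumed from the query above.
VACUUM ANALYZE event;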
Venu,
For starters,
1) You have used e.cid twice in the ORDER BY clause.
2) If you want the last twenty records in the table matching the
timestamp criteria, why do you need the OFFSET?
3) Do you have indexes on the sig_id, signature, and timestamp fields?
(See the sketch below.)
If you do not get a good response after that
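As a rough sketch of what points 1) to 3) amount to (assuming the table and
column names from the quoted query, and that sig_id is already the primary
key of signature), the indexes and the rewritten query could look like this:

-- Hedged sketch; adjust names and types to the real schema.
CREATE INDEX event_timestamp_idx ON event (timestamp);
CREATE INDEX event_signature_idx ON event (signature);

-- Last twenty matching rows, newest first, with no OFFSET and no duplicated
-- ORDER BY column; the timestamp index lets the scan stop after 20 rows.
SELECT e.cid, e.timestamp, s.sig_class, s.sig_priority, s.sig_name,
       e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
       e.wifi_addr_2, e.view_status, bssid
FROM event e
JOIN signature s ON s.sig_id = e.signature
WHERE e.timestamp >= '1270449180'
  AND e.timestamp < '1273473180'
ORDER BY e.timestamp DESC
LIMIT 20;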
First, are you sure you are getting autovacuum to run hourly? Autovacuum will
only vacuum when certain configuration thresholds are reached. You can set it
to only check for those thresholds every so often, but no vacuuming or
analyzing will be done unless they are hit, regardless of how often a
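A quick way to check is the statistics view below; the threshold formula in
the comment is the stock one, and the table name is only assumed from the
earlier messages:

-- Autovacuum fires only once dead tuples exceed roughly
-- autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples.
SELECT relname, last_autovacuum, last_autoanalyze, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'event';  -- assumed table name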
venu madhav wrote:
> When I try to get the last twenty records from the database, it
> takes around 10-15 mins to complete the operation.
Making this a little easier to read (for me, at least) I get this:
select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
e.sniff_ip, e.sn
Hi all,
In my database application, I have a table whose row count can reach 10M,
and insertions can happen at a fast rate, around 100 per second at peak
times. I configured Postgres to do autovacuum on an hourly basis. I have a
frontend GUI application in CGI which displays the data fro
Hi Andy!
Thanks a lot for your hints!
Indeed the problem was on my side. Some Hibernate tuning solved the
problem (and I was able to speed up the query using IN). The real
problem was that Hibernate uses unprepared queries if you create a
native query, but prepares the query if you use JP-QL (ver
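For reference, "prepared" here is roughly the server-side equivalent of the
PREPARE/EXECUTE form; a minimal sketch with hypothetical table and column
names, just to show the prepared + IN/ANY pattern:

-- Hypothetical names; the statement is planned once and reused.
PREPARE pick_items (int[]) AS
    SELECT * FROM items WHERE id = ANY ($1);

EXECUTE pick_items(ARRAY[1, 2, 3]);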
On Fri, May 7, 2010 at 8:09 PM, Josh Berkus wrote:
> Jignesh, All:
>
> Most of our Solaris users have been, I think, following Jignesh's advice
> from his benchmark tests to set ZFS page size to 8K for the data zpool.
> However, I've discovered that this is sometimes a serious problem for
> some
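For context, the 8K in question is PostgreSQL's data page size, which a
session can confirm directly:

-- Reports 8192 on a default build; the ZFS recordsize advice is about
-- matching the filesystem record size to this page size.
SHOW block_size;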
Hi all!
We moved from MySQL to Postgresql for some of our projects. So far
we're very impressed with the performance (especially INSERTs and
UPDATEs), except for a strange problem with the following bulk delete
query:
DELETE FROM table1 WHERE table2_id = ?
I went through these Wiki pages, trying
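A hedged first check for a delete like that, assuming the table and column
names from the query: make sure the filter column is indexed, then look at
the real plan (keeping in mind that other tables with foreign keys pointing
at table1 also need indexes on their referencing columns, since those are
checked per deleted row).

-- Index on the WHERE column, if it does not already exist.
CREATE INDEX table1_table2_id_idx ON table1 (table2_id);

-- EXPLAIN ANALYZE executes the DELETE, so wrap it and roll back.
BEGIN;
EXPLAIN ANALYZE DELETE FROM table1 WHERE table2_id = 42;  -- placeholder id
ROLLBACK;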
Thanks Scott,
This is almost always due to caching. The first time, the data aren't in the
cache; the second time, they are.
<<
I had assumed that it was caching, but given the inexplicable delay I don't
know where it happens. Hardware? The O/S (Linux)? The DB? The function, which
is IMMUTABLE?
I am co
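One way to narrow that down (hedged; the BUFFERS option needs PostgreSQL 9.0
or later) is to run the statement with buffer accounting and compare the two
runs:

-- "shared hit" blocks come from PostgreSQL's own buffer cache; "read" blocks
-- are requested from the OS, which may still serve them from its page cache.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table WHERE some_column = 'some_value';  -- stand-in query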
On 11/05/10 13:32, Carlo Stonebanks wrote:
> Hello all,
>
> A query run twice in succession performs VERY poorly the first time as
> it iterates through the nested loop. The second time, it rips. Please
> see SQL, SLOW PLAN and FAST PLAN below.
I haven't looked at the details, but the comment you
On Mon, May 10, 2010 at 11:32 PM, Carlo Stonebanks
wrote:
> Hello all,
>
> A query run twice in succession performs VERY poorly the first time as it
> iterates through the nested loop. The second time, it rips. Please see SQL,
> SLOW PLAN and FAST PLAN below.
This is almost always due to caching.