Yes. I am looking for the justified full table scans.
If bigger tables are getting scanned, I would like to know the percentage of
rows scanned versus the percentage of rows returned as output.
If the query needs 80% of the rows as the output, then a full table scan is
always better.
I believe there is a possibility for this
Thanks Kevin !!
I will have a look at the source tree.
Regards
VB
On Tue, Sep 27, 2011 at 10:45 PM, Kevin Grittner <
kevin.gritt...@wicourts.gov> wrote:
> Venkat Balaji wrote:
>
> > I would like to know the difference between "n_tup_upd" and
> > "n_tup_hot_upd".
>
> A HOT update is used when none of the updated columns are used in an
> index and there is room for the new tuple (version of the row) on the
> same page as the old tuple.
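The "percentage of rows scanned versus rows needed" check can be approximated from the cumulative statistics views. A minimal sketch, assuming ANALYZE keeps `pg_class.reltuples` reasonably fresh:

```sql
-- Average rows read per sequential scan, as a percentage of the table's
-- estimated row count: high percentages suggest the full scan is
-- justified; low ones suggest a missing or unused index.
SELECT s.relname,
       s.seq_scan,
       c.reltuples::bigint AS est_rows,
       round((100.0 * s.seq_tup_read / s.seq_scan / c.reltuples)::numeric, 1)
           AS pct_rows_per_seq_scan
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE s.seq_scan > 0
  AND c.reltuples > 0
ORDER BY pct_rows_per_seq_scan DESC;
```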
On Tuesday 27 September 2011 19:22, Mark Kirkwood wrote:
> > The query that I've shown is one of a sequence of queries with the
> > timestamp range progressing in steps of 1 hour through the timestamp
> > range. All I want PG to do is find the range in the index, find the
> > matching records in th
On 09/28/2011 12:26 AM, Venkat Balaji wrote:
Thanks a lot Kevin !!
Yes. I intended to track full table scans first to ensure that only
small tables, or tables with very few pages, are (as you said) getting
scanned in full.
It can also be best to do a full table scan of a big table for some
queries.
On 09/27/2011 12:54 PM, Ben Chobot wrote:
On Sep 26, 2011, at 10:52 AM, Maria L. Wilson wrote:
Our first try to solve this problem has been to convert these triggers
into a constraint trigger which allows for DEFERRABLE INITIALLY
DEFERRED flags. This, we are finding, is forcing the trigger func
Venkat Balaji wrote:
> I would like to know the difference between "n_tup_upd" and
> "n_tup_hot_upd".
A HOT update is used when none of the updated columns are used in an
index and there is room for the new tuple (version of the row) on
the same page as the old tuple. This is faster for a number of reasons.
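On the monitoring side, the HOT-to-total update ratio per table can be read straight from `pg_stat_user_tables`; a sketch:

```sql
-- Fraction of updates that were HOT. A low percentage on a heavily
-- updated table may point at updates touching indexed columns, or at
-- pages too full to hold the new tuple version.
SELECT relname, n_tup_upd, n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / n_tup_upd, 1) AS hot_update_pct
FROM pg_stat_user_tables
WHERE n_tup_upd > 0
ORDER BY n_tup_upd DESC;
```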
Hi Sam,
The purpose of this (framework-generated) code is to find out if there is at
least one row that has one of the selected hts_code_ids. We don't care
about anything that's returned other than whether at least one row exists or
not (rewriting the query with EXISTS generates the same plan).
Actually, thinking about this a little more, what we really want the planner
to do is to consider the codes one at a time until it finds one that exists.
If we write that out explicitly, we get a good plan whether the ids
select many rows or none.
=> explain analyze
select 1 from (
select * from
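Written out explicitly, the "one code at a time" idea might look like the following sketch. The table and column names here are placeholders, since the original schema isn't shown; the point is that each EXISTS can probe an index on hts_code_id, and OR evaluation stops at the first branch that finds a row:

```sql
-- Placeholder schema: line_items(hts_code_id int, ...)
SELECT 1
WHERE EXISTS (SELECT 1 FROM line_items WHERE hts_code_id = 101)
   OR EXISTS (SELECT 1 FROM line_items WHERE hts_code_id = 102)
   OR EXISTS (SELECT 1 FROM line_items WHERE hts_code_id = 103);
```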
I would like to know the difference between "n_tup_upd" and "n_tup_hot_upd".
Thanks
VB
On Tue, Sep 27, 2011 at 9:56 PM, Venkat Balaji wrote:
> Thanks a lot Kevin !!
>
> Yes. I intended to track full table scans first to ensure that only small
> tables, or tables with very few pages, are (as you said) getting scanned in full.
Hello Everyone,
I am implementing a PostgreSQL performance monitoring system (to monitor the
items below) which would help us understand the database behavior -
1. Big Full Table Scans
2. Table with high IOs (hot tables)
3. Highly used Indexes
4. Tables undergoing high DML with index scans 0 (with unused indexes)
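All four items can be fed from the cumulative statistics views; a starting-point sketch (the LIMITs are arbitrary):

```sql
-- 1. Big full table scans: most rows read sequentially
SELECT relname, seq_scan, seq_tup_read
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC LIMIT 10;

-- 2. Hot tables: most heap block reads and hits
SELECT relname, heap_blks_read, heap_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read + heap_blks_hit DESC LIMIT 10;

-- 3. Highly used indexes
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC LIMIT 10;

-- 4. Tables with heavy DML whose indexes are never scanned
SELECT t.relname, t.n_tup_ins + t.n_tup_upd + t.n_tup_del AS dml,
       i.indexrelname
FROM pg_stat_user_tables t
JOIN pg_stat_user_indexes i ON i.relid = t.relid
WHERE i.idx_scan = 0
ORDER BY dml DESC;
```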
Thanks a lot Kevin !!
Yes. I intended to track full table scans first to ensure that only small
tables, or tables with very few pages, are (as you said) getting scanned
in full.
I am yet to identify slow running queries. Will surely hit back with them in
future.
Thanks
VB
On Tue, Sep 27, 2011 at
Thank you, Marti,
Is there any comprehensive survey of (at least most, if not all)
modern operating system features, for example I/O scheduling,
extent-based filesystems, etc.?
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
On Tue, 27 Sep 2011 19:05:09 +1000,
anthony.ship...@symstream.com wrote:
> On Tuesday 27 September 2011 18:54, Marc Cousin wrote:
> > The thing is, the optimizer doesn't know if your data will be in
> > cache when you will run your query… if you are sure most of your
> > data is in the cache most of the time, you could try to tune
> > random_page_cost (lower it) to reflect that
Venkat Balaji wrote:
> I am preparing a plan to track the tables that undergo Full Table
> Scans most often.
>
> If I track seq_scan from the pg_stat_user_tables, will that help
> (considering the latest analyzed ones) ?
Well, yeah; but be careful not to assume that a sequential scan
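Kevin's caveat can be built into the tracking query itself by filtering out small tables, for example on `pg_class.relpages` (the threshold below is arbitrary):

```sql
-- seq_scan counts alone mislead: small lookup tables are cheapest to
-- scan fully. Only large tables with many sequential scans need a look.
SELECT s.relname, s.seq_scan, c.relpages
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE c.relpages > 1000          -- skip small tables (roughly 8 MB and under)
ORDER BY s.seq_scan DESC;
```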
On 27/09/2011, at 8:29 PM, Marti Raudsepp wrote:
> 1. First things first: vacuum cannot delete tuples that are still
> visible to any old running transactions. You might have some very long
> queries or transactions that prevent it from cleaning properly:
>
> select * from pg_stat_activity where xact_start < now() - interval '10 minutes';
On Tue, Sep 27, 2011 at 7:49 AM, Tom Lane wrote:
> Royce Ausburn writes:
>> Since sending this first email I've up'd the autovacuum log level and I've
>> noticed that the same tables seem to be auto vacuum'd over and over again…
>> Some of the tables are a bit surprising in that they're updated
Royce Ausburn writes:
> Since sending this first email I've up'd the autovacuum log level and I've
> noticed that the same tables seem to be auto vacuum'd over and over again…
> Some of the tables are a bit surprising in that they're updated
> semi-regularly, but not enough (I'd think) to war
Hello Everyone,
I am preparing a plan to track the tables that undergo Full Table Scans
most often.
If I track seq_scan from the pg_stat_user_tables, will that help
(considering the latest analyzed ones) ?
Please help!
Thanks
VB
Forgot to mention -
Kevin,
CLUSTER seems to be a very interesting concept to me.
I am thinking of testing CLUSTER on our production tables according to the
index usage on each table.
Will let you know once I get the results.
Regards,
VB
On Tue, Sep 27, 2011 at 5:59 PM, Venkat Balaji wrote:
>
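For reference, a CLUSTER test of the kind described might look like this sketch (the table and index names are placeholders). Note that CLUSTER takes an ACCESS EXCLUSIVE lock and rewrites the whole table, so it belongs in a maintenance window:

```sql
-- Physically reorder the table on disk to match the most-used index.
CLUSTER my_table USING my_table_created_at_idx;
-- Refresh planner statistics after the rewrite.
ANALYZE my_table;
```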
We had performed VACUUM FULL on our production and performance has improved
a lot !
I started using pg_stattuple and pg_freespacemap for tracking freespace in
the tables and Indexes and is helping us a lot.
Thanks for all your inputs and help !
Regards,
VB
On Thu, Sep 22, 2011 at 12:11 AM, Kevi
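For anyone following along, a minimal pg_stattuple / pg_freespacemap session might look like this sketch (the table name is a placeholder; CREATE EXTENSION is 9.1+, earlier releases load the contrib SQL scripts instead):

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- Dead-tuple percentage and free space for one table
SELECT table_len, dead_tuple_percent, free_space
FROM pgstattuple('my_table');

-- Pages with reusable space according to the free space map
SELECT count(*) AS pages_with_free_space
FROM pg_freespace('my_table')
WHERE avail > 0;
```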
1. First things first: vacuum cannot delete tuples that are still
visible to any old running transactions. You might have some very long
queries or transactions that prevent it from cleaning properly:
select * from pg_stat_activity where xact_start < now()-interval '10 minutes';
2. On 8.3 and ear
On 27/09/11 22:05, anthony.ship...@symstream.com wrote:
What I really want is to just read a sequence of records in timestamp order
between two timestamps. The number of records to be read may be in the
millions, totalling more than 1GB of data, so I'm trying to read them a slice
at a time but I c
On Tuesday 27 September 2011 18:54, Marc Cousin wrote:
> The thing is, the optimizer doesn't know if your data will be in cache
> when you will run your query… if you are sure most of your data is in
> the cache most of the time, you could try to tune random_page_cost
> (lower it) to reflect that d
On Tue, 27 Sep 2011 12:45:00 +1000,
anthony.ship...@symstream.com wrote:
> On Monday 26 September 2011 19:39, Marc Cousin wrote:
> > Because Index Scans are sorted, not Bitmap Index Scans, which
> > builds a list of pages to visit, to be then visited by the Bitmap
> > Heap Scan step.
> >
> > Ma
24 matches