Re: [PERFORM] COPY with high # of clients, partitioned table locking issues?

2011-03-31 Thread Emanuel Calvo
Your message was dropped into my Spam label :S


2011/3/30 Strange, John W :
> Just some information on our setup:
>
> - HP DL585 G6
> - 4 x AMD Opteron 8435 (24 cores)
> - 256GB RAM
> - 2 FusionIO 640GB PCI-SSD (RAID0)
> - dual 10GB ethernet.
>
> - we have several tables that we store calculated values in.
> - these are inserted by a compute farm that calculates the results and stores 
> them into a partitioned schema (schema listed below)
> - whenever we do a lot of inserts we seem to get exclusive locks.
>
> Is there something we can do to improve the performance around locking when 
> doing a lot of parallel inserts with COPY?  We are not I/O bound; what 
> happens is that the copies start to slow down while more continue to come 
> in, causing the client to swap.  We hit a point where 800+ COPYs were in a 
> waiting state, which forced us to start paging heavily and created an issue. 
> If we can figure out the locking issue, the copies should clear faster and 
> require less memory.
>
> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 LOG:  
> process 14405 still waiting for ExclusiveLock on extension of relation 470273 
> of database 16384 after 5001.894 ms
> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 CONTEXT:  
> COPY reportvalues_part_1931, line 1: "660250      41977959        11917   
> 584573.43642105709"
> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 
> STATEMENT:  COPY reportvalues_part_1931 FROM stdin USING DELIMITERS '       '
> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e LOG:  
> process 7294 still waiting for ExclusiveLock on extension of relation 470606 
> of database 16384 after 5062.968 ms
> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e CONTEXT:  
> COPY reportvalues_part_1932, line 158: "660729     41998839        887     
> 45000.0"
> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e STATEMENT: 
>  COPY reportvalues_part_1932 FROM stdin USING DELIMITERS '        '
> [ 2011-03-30 15:54:56.077 EDT ] 25781 [local] asgprod:4d938556.64b5 LOG:  
> process 25781 still waiting for ExclusiveLock on extension of relation 470606 
> of database 16384 after 5124.463 ms
>

But you are using stdin for COPY! It's better to use files. You may also want
to review your postgresql.conf settings, especially the WAL configuration.
How often do you run this procedure, and how much data is involved?
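
For reference, these are the WAL-related postgresql.conf settings most often
reviewed for heavy bulk loading on a 9.0-era server. The values below are
only illustrative starting points, not recommendations for this particular
box:

    wal_buffers = 16MB                   # the 9.0 default is only 64kB
    checkpoint_segments = 64             # the default of 3 forces very frequent checkpoints
    checkpoint_completion_target = 0.9   # spread checkpoint writes out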




-- 
              Emanuel Calvo
              Helpame.com

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow deleting tables with foreign keys

2011-03-31 Thread Bob Lunney
Jeremy,

Does table_revision have a unique index on id?  Also, I doubt these two indexes 
ever get used:

CREATE INDEX idx_crs_action_expired_created
  ON table_version.bde_crs_action_revision
  USING btree
  (_revision_expired, _revision_created);

CREATE INDEX idx_crs_action_expired_key
  ON table_version.bde_crs_action_revision
  USING btree
  (_revision_expired, audit_id);
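
One quick way to verify that is the statistics collector: idx_scan in
pg_stat_user_indexes stays at zero for an index the planner never chooses.
A sketch:

    select indexrelname, idx_scan
      from pg_stat_user_indexes
     where relname = 'bde_crs_action_revision'
       and indexrelname in ('idx_crs_action_expired_created',
                            'idx_crs_action_expired_key');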

Bob Lunney

--- On Wed, 3/30/11, Jeremy Palmer  wrote:

> From: Jeremy Palmer 
> Subject: [PERFORM] Slow deleting tables with foreign keys
> To: "pgsql-performance@postgresql.org" 
> Date: Wednesday, March 30, 2011, 10:16 PM
> Hi All,
> 
> I'm trying to delete one row from a table and it's taking
> an extremely long time. This parent table is referenced by
> other tables' foreign keys, but the particular row I'm
> trying to delete is not referenced by any rows in the
> referencing tables. This table has the following structure:
> 
> CREATE TABLE revision
> (
>   id serial NOT NULL,
>   revision_time timestamp without time zone NOT NULL
> DEFAULT now(),
>   start_time timestamp without time zone NOT NULL
> DEFAULT clock_timestamp(),
>   schema_change boolean NOT NULL,
>   "comment" text,
>   CONSTRAINT revision_pkey PRIMARY KEY (id)
> )
> WITH (
>   OIDS=FALSE
> );
> 
> This table is referenced by foreign keys from 130-odd other
> tables. The total number of rows in these referencing
> tables runs into the hundreds of millions. Each of these
> tables was automatically created by a script and has the
> same _revision_created and _revision_expired fields, foreign
> keys and indexes. Here is an example of one:
> 
> CREATE TABLE table_version.bde_crs_action_revision
> (
>   _revision_created integer NOT NULL,
>   _revision_expired integer,
>   tin_id integer NOT NULL,
>   id integer NOT NULL,
>   "sequence" integer NOT NULL,
>   att_type character varying(4) NOT NULL,
>   system_action character(1) NOT NULL,
>   audit_id integer NOT NULL,
>   CONSTRAINT
> "pkey_table_version.bde_crs_action_revision" PRIMARY KEY
> (_revision_created, audit_id),
>   CONSTRAINT
> bde_crs_action_revision__revision_created_fkey FOREIGN KEY
> (_revision_created)
>       REFERENCES table_version.revision (id)
> MATCH SIMPLE
>       ON UPDATE NO ACTION ON DELETE NO
> ACTION,
>   CONSTRAINT
> bde_crs_action_revision__revision_expired_fkey FOREIGN KEY
> (_revision_expired)
>       REFERENCES table_version.revision (id)
> MATCH SIMPLE
>       ON UPDATE NO ACTION ON DELETE NO
> ACTION
> )
> WITH (
>   OIDS=FALSE
> );
> ALTER TABLE table_version.bde_crs_action_revision OWNER TO
> bde_dba;
> ALTER TABLE table_version.bde_crs_action_revision ALTER
> COLUMN audit_id SET STATISTICS 500;
> 
> 
> CREATE INDEX idx_crs_action_audit_id
>   ON table_version.bde_crs_action_revision
>   USING btree
>   (audit_id);
> 
> CREATE INDEX idx_crs_action_created
>   ON table_version.bde_crs_action_revision
>   USING btree
>   (_revision_created);
> 
> CREATE INDEX idx_crs_action_expired
>   ON table_version.bde_crs_action_revision
>   USING btree
>   (_revision_expired);
> 
> CREATE INDEX idx_crs_action_expired_created
>   ON table_version.bde_crs_action_revision
>   USING btree
>   (_revision_expired, _revision_created);
> 
> CREATE INDEX idx_crs_action_expired_key
>   ON table_version.bde_crs_action_revision
>   USING btree
>   (_revision_expired, audit_id);
> 
> 
> All of the tables have been analysed before I tried to run
> the query.
> 
> The fact that all of the foreign keys have a covering index
> makes me wonder why this delete is taking so long.
> 
> The EXPLAIN output for
> 
> delete from table_version.revision where id = 1003
> 
> 
> Delete  (cost=0.00..1.02 rows=1 width=6)
>   ->  Seq Scan on revision  (cost=0.00..1.02 rows=1 width=6)
>         Filter: (id = 100)
> 
> I'm running PostgreSQL 9.0.2 on Ubuntu 10.04
> 
> Cheers
> Jeremy
> __
> 
> This message contains information, which is confidential
> and may be subject to legal privilege. 
> If you are not the intended recipient, you must not peruse,
> use, disseminate, distribute or copy this message.
> If you have received this message in error, please notify
> us immediately (Phone 0800 665 463 or i...@linz.govt.nz)
> and destroy the original message.
> LINZ accepts no responsibility for changes to this email,
> or for any attachments, after its transmission from LINZ.
> 
> Thank you.
> __
> 
> -- 
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
> 




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] COPY with high # of clients, partitioned table locking issues?

2011-03-31 Thread Maciek Sakrejda
> But you are using stdin for COPY! It's better to use files.

I've never heard this before, and I don't see how reading from files
could possibly help. Can you clarify?
---
Maciek Sakrejda | System Architect | Truviso

1065 E. Hillsdale Blvd., Suite 215
Foster City, CA 94404
(650) 242-3500 Main
www.truviso.com

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Why is it using/not using an index scan?

2011-03-31 Thread Laszlo Nagy

For this query:

select pp.id,pp.product_id,pp.selling_site_id,pp.asin
from product_price pp
where
(pp.asin is not null and pp.asin<>'')
and (pp.upload_status_id<>1)
and pp.selling_site_id in (8,7,35,6,9)
and (pp.last_od < 'now'::timestamp - '1 week'::interval )
limit 5000

Query plan is:

"Limit  (cost=9182.41..77384.80 rows=3290 width=35)"
"  ->  Bitmap Heap Scan on product_price pp  (cost=9182.41..77384.80 
rows=3290 width=35)"
"Recheck Cond: ((last_od < '2011-03-24 
13:05:09.540025'::timestamp without time zone) AND (selling_site_id = 
ANY ('{8,7,35,6,9}'::bigint[])))"
"Filter: ((asin IS NOT NULL) AND (asin <> ''::text) AND 
(upload_status_id <> 1))"
"->  Bitmap Index Scan on idx_product_price_last_od_ss  
(cost=0.00..9181.59 rows=24666 width=0)"
"  Index Cond: ((last_od < '2011-03-24 
13:05:09.540025'::timestamp without time zone) AND (selling_site_id = 
ANY ('{8,7,35,6,9}'::bigint[])))"


For this query:

select pp.id,pp.product_id,pp.selling_site_id,pp.asin
from product_price pp
where
(pp.asin is not null and pp.asin<>'')
and (pp.upload_status_id<>1)
and pp.selling_site_id in (8,7,35,6,9)
and (pp.last_od + '1 week'::interval < 'now'::timestamp )
limit 5000

Query plan is:

"Limit  (cost=0.00..13890.67 rows=5000 width=35)"
"  ->  Seq Scan on product_price pp  (cost=0.00..485889.97 rows=174898 
width=35)"
"Filter: ((asin IS NOT NULL) AND (asin <> ''::text) AND 
(upload_status_id <> 1) AND ((last_od + '7 days'::interval) < 
'2011-03-31 13:06:17.460013'::timestamp without time zone) AND 
(selling_site_id = ANY ('{8,7,35,6,9}'::bigint[])))"



The only difference is this: instead of (pp.last_od < 'now'::timestamp - 
'1 week'::interval ) I have used (pp.last_od + '1 week'::interval < 
'now'::timestamp )


The first query, with the index scan, returns in 440 ms. The second, with 
the seq scan, takes about 22 seconds, so the first one is about 50x faster.


My concern is that we are working on a huge set of applications that use 
thousands of different queries against the database. Some of these programs 
were written years ago. The database structure is continuously changing: we 
are adding new indexes and columns, and of course we upgrade PostgreSQL when 
a new stable version comes out. There are cases when a change in a table 
affects 500+ queries in 50+ programs. I really did not think that I had to 
be THAT CAREFUL when writing conditions in SQL. Do I really have to manually 
analyze all those queries and "correct" conditions like this?


If so, then at least I would like to know whether there is documentation or 
a wiki page where I can learn "how not to write conditions". I just figured 
out that I need to put constant expressions on one side of any comparison, 
if possible. But there are probably other rules I wouldn't think of.
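
To illustrate on this table: a plain index on (last_od) can only be matched 
while last_od stands alone on one side of the comparison. If the shifted 
form has to stay, an expression index on the shifted value should match it 
instead (just a sketch; the index name is made up):

    -- matched by a plain index on (last_od):
    --   pp.last_od < 'now'::timestamp - '1 week'::interval
    -- matched by the expression index below:
    --   pp.last_od + '1 week'::interval < 'now'::timestamp
    create index idx_product_price_last_od_shifted
        on product_price ((last_od + '1 week'::interval));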


Might it be possible to change the optimizer so that it tries to collect 
constant expressions onto one side of the comparison in the first place? 
That couldn't be bad, right?


Thanks,

   Laszlo


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Calculating 95th percentiles

2011-03-31 Thread Landreville
On Sat, Mar 5, 2011 at 7:34 PM, marcin mank  wrote:
> Is this fast enough on a slave:
>
>
> with deltas as (select * from get_delta_table(...)),
> p95 as(select round(count(volume_id) * 0.95) as p95v from deltas)
> select
> (select in_rate from deltas, p95 where
> in_rate_order = p95v),
> (select out_rate from deltas, p95 where
> out_rate_order = p95v)
> etc..
> Greetings
> Marcin
>

I really didn't know you could use a WITH statement on a read-only
database -- I don't think I even knew the WITH statement existed in
Postgres (is it documented somewhere?). I will try this out.

I am also looking at Pierre's suggestion of calculating the delta
value on insert. To do this I am going to update all the rows
currently in the partitioned tables. Does anyone know whether this
will still use constraint exclusion in the correlated subquery, or
will it scan every partitioned table for each updated row?

update volume
set in_delta  = volume.in_octets  - vprev.in_octets,
    out_delta = volume.out_octets - vprev.out_octets
from volume vprev
where vprev.insert_timestamp =
      (select max(insert_timestamp) from volume v
        where v.switch_port_id = volume.switch_port_id
          and v.insert_timestamp < volume.insert_timestamp);

I suppose I can check the plan with an EXPLAIN before I execute it (I
still have to alter the table to add the delta columns).
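
Concretely, something like this (the delta column types are my guess;
plain EXPLAIN only plans the statement, so nothing is modified):

    alter table volume
        add column in_delta bigint,
        add column out_delta bigint;

    explain
    update volume
    set in_delta  = volume.in_octets  - vprev.in_octets,
        out_delta = volume.out_octets - vprev.out_octets
    from volume vprev
    where vprev.insert_timestamp =
          (select max(insert_timestamp) from volume v
            where v.switch_port_id = volume.switch_port_id
              and v.insert_timestamp < volume.insert_timestamp);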

Thanks,

Landreville

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow deleting tables with foreign keys

2011-03-31 Thread Jeremy Palmer
Hi Bob,

The "table_version.revision" ("revision" is the same) table has a primary key 
on id because of the PK "revision_pkey". Actually at the moment there are only 
two rows in the table table_version.revision!

Thanks for the tips about the indexes. I'm still in the development and tuning 
process, so I will do some analysis of the index stats to see if they are 
indeed redundant.
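
I can also check where the delete time actually goes: EXPLAIN ANALYZE on a 
DELETE reports the time spent in each foreign-key trigger at the bottom of 
its output, and wrapping it in a transaction keeps the row (a sketch):

    begin;
    explain analyze delete from table_version.revision where id = 1003;
    rollback;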

Cheers,
Jeremy

From: Bob Lunney [bob_lun...@yahoo.com]
Sent: Friday, 1 April 2011 3:54 a.m.
To: pgsql-performance@postgresql.org; Jeremy Palmer
Subject: Re: [PERFORM] Slow deleting tables with foreign keys

Jeremy,

Does table_revision have a unique index on id?  Also, I doubt these two indexes 
ever get used:

CREATE INDEX idx_crs_action_expired_created
  ON table_version.bde_crs_action_revision
  USING btree
  (_revision_expired, _revision_created);

CREATE INDEX idx_crs_action_expired_key
  ON table_version.bde_crs_action_revision
  USING btree
  (_revision_expired, audit_id);

Bob Lunney
__

This message contains information, which is confidential and may be subject to 
legal privilege. 
If you are not the intended recipient, you must not peruse, use, disseminate, 
distribute or copy this message.
If you have received this message in error, please notify us immediately (Phone 
0800 665 463 or i...@linz.govt.nz) and destroy the original message.
LINZ accepts no responsibility for changes to this email, or for any 
attachments, after its transmission from LINZ.

Thank you.
__

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] index usage on queries on inherited tables

2011-03-31 Thread Joseph Shraibman
When I query a table that has child tables, pg uses indexes for certain
queries but not for others. Why does this happen? For example:


[local]:playpen=> explain analyze select * from vis where id > 10747 ;
                                                              QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Result  (cost=4.29..115.11 rows=325 width=634) (actual time=0.063..0.116 rows=5 loops=1)
   ->  Append  (cost=4.29..115.11 rows=325 width=634) (actual time=0.053..0.090 rows=5 loops=1)
         ->  Bitmap Heap Scan on vis  (cost=4.29..23.11 rows=5 width=948) (actual time=0.051..0.058 rows=5 loops=1)
               Recheck Cond: (id > 10747)
               ->  Bitmap Index Scan on vis_pkey  (cost=0.00..4.29 rows=5 width=0) (actual time=0.037..0.037 rows=5 loops=1)
                     Index Cond: (id > 10747)
         ->  Seq Scan on vis_for_seg_1_2011_03 vis  (cost=0.00..11.50 rows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)
               Filter: (id > 10747)
         ->  Seq Scan on vis_for_seg_4_2011_03 vis  (cost=0.00..11.50 rows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)
               Filter: (id > 10747)
         ->  Seq Scan on vis_for_seg_66_2011_03 vis  (cost=0.00..11.50 rows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)
               Filter: (id > 10747)
         ->  Seq Scan on vis_for_seg_69_2011_03 vis  (cost=0.00..11.50 rows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)
               Filter: (id > 10747)
         ->  Seq Scan on vis_for_seg_79_2011_03 vis  (cost=0.00..11.50 rows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)
               Filter: (id > 10747)
         ->  Seq Scan on vis_for_seg_80_2011_03 vis  (cost=0.00..11.50 rows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)
               Filter: (id > 10747)
         ->  Seq Scan on vis_for_seg_82_2011_03 vis  (cost=0.00..11.50 rows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)
               Filter: (id > 10747)
         ->  Seq Scan on vis_for_seg_87_2011_03 vis  (cost=0.00..11.50 rows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)
               Filter: (id > 10747)
 Total runtime: 0.724 ms
(23 rows)

Time: 5.804 ms
[local]:playpen=> explain analyze select * from vis where id = 10747 ;

                                                                        QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
 Result  (cost=0.00..74.41 rows=9 width=664) (actual time=0.060..0.503 rows=1 loops=1)
   ->  Append  (cost=0.00..74.41 rows=9 width=664) (actual time=0.053..0.493 rows=1 loops=1)
         ->  Index Scan using vis_pkey on vis  (cost=0.00..8.27 rows=1 width=948) (actual time=0.051..0.055 rows=1 loops=1)
               Index Cond: (id = 10747)
         ->  Index Scan using vis_for_seg_1_2011_03_pkey on vis_for_seg_1_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual time=0.122..0.122 rows=0 loops=1)
               Index Cond: (id = 10747)
         ->  Index Scan using vis_for_seg_4_2011_03_pkey on vis_for_seg_4_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual time=0.043..0.043 rows=0 loops=1)
               Index Cond: (id = 10747)
         ->  Index Scan using vis_for_seg_66_2011_03_pkey on vis_for_seg_66_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual time=0.041..0.041 rows=0 loops=1)
               Index Cond: (id = 10747)
         ->  Index Scan using vis_for_seg_69_2011_03_pkey on vis_for_seg_69_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual time=0.041..0.041 rows=0 loops=1)
               Index Cond: (id = 10747)
         ->  Index Scan using vis_for_seg_79_2011_03_pkey on vis_for_seg_79_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual time=0.043..0.043 rows=0 loops=1)
               Index Cond: (id = 10747)
         ->  Index Scan using vis_for_seg_80_2011_03_pkey on vis_for_seg_80_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual time=0.041..0.041 rows=0 loops=1)
               Index Cond: (id = 10747)
         ->  Index Scan using vis_for_seg_82_2011_03_pkey on vis_for_seg_82_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual time=0.049..0.049 rows=0 loops=1)
               Index Cond: (id = 10747)
         ->  Index Scan using vis_for_seg_87_2011_03_pkey on vis_for_seg_87_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual time=0.043..0.043 rows=0 loops=1)
               Index Cond: (id = 10747)
 Total runtime: 1.110 ms
(21 rows)

[local]:playpen=> select version();
 
version  
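
For what it's worth, every child table above shows the same rows=40 estimate
while actually returning zero rows, which is what never-analyzed tables tend
to report. Whether the children have real statistics can be checked from
pg_class (a sketch):

    select relname, reltuples, relpages
      from pg_class
     where relname like 'vis_for_seg_%';

If they really are tiny or empty, an ANALYZE on the children may change which
plan the range query gets.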

[PERFORM] table constraint checks only happen in planner phase

2011-03-31 Thread Joseph Shraibman
Is there a reason that table constraints are only checked during query
planning and not during execution?  I end up making two round trips to
the database to get around this.
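
(One way to at least fold those two round trips into a single server-side
call is sketched below; the function is made up to match the examples that
follow. EXECUTE builds the statement with the literal value, so the plan is
made after the value is known and constraint exclusion can apply.)

    create or replace function count_vis_for_key(p_key integer)
    returns bigint language plpgsql as $$
    declare
        v_seg   integer;
        v_count bigint;
    begin
        select seg into v_seg from an where key = p_key;
        -- dynamic SQL: the planner sees "seg = <literal>" and can
        -- exclude partitions whose check constraints rule it out
        execute 'select count(*) from vis where seg = ' || v_seg
           into v_count;
        return v_count;
    end;
    $$;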

All of these queries should produce the same output:

 
[local]:playpen=>  explain analyze select count(*) from vis where seg = 69;

                                                            QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=857.51..857.52 rows=1 width=0) (actual time=16.551..16.553 rows=1 loops=1)
   ->  Append  (cost=72.70..849.62 rows=3155 width=0) (actual time=0.906..12.754 rows=3154 loops=1)
         ->  Bitmap Heap Scan on vis  (cost=72.70..838.12 rows=3154 width=0) (actual time=0.903..6.346 rows=3154 loops=1)
               Recheck Cond: (seg = 69)
               ->  Bitmap Index Scan on vis_seg_firstevent_idx  (cost=0.00..71.91 rows=3154 width=0) (actual time=0.787..0.787 rows=3154 loops=1)
                     Index Cond: (seg = 69)
         ->  Seq Scan on vis_for_seg_69_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.004..0.004 rows=0 loops=1)
               Filter: (seg = 69)
 Total runtime: 16.702 ms
(9 rows)

Time: 27.581 ms
[local]:playpen=>
[local]:playpen=> explain analyze select count(*) from vis where seg =
(select seg from an where key = 471);
   
                                                            QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=713.50..713.51 rows=1 width=0) (actual time=16.721..16.722 rows=1 loops=1)
   InitPlan 1 (returns $0)
     ->  Index Scan using an_pkey on an  (cost=0.00..8.27 rows=1 width=4) (actual time=0.037..0.041 rows=1 loops=1)
           Index Cond: (key = 471)
   ->  Append  (cost=10.92..704.35 rows=352 width=0) (actual time=0.970..13.024 rows=3154 loops=1)
         ->  Bitmap Heap Scan on vis  (cost=10.92..612.35 rows=344 width=0) (actual time=0.967..6.470 rows=3154 loops=1)
               Recheck Cond: (seg = $0)
               ->  Bitmap Index Scan on vis_seg_firstevent_idx  (cost=0.00..10.83 rows=344 width=0) (actual time=0.862..0.862 rows=3154 loops=1)
                     Index Cond: (seg = $0)
         ->  Seq Scan on vis_for_seg_1_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.004..0.004 rows=0 loops=1)
               Filter: (seg = $0)
         ->  Seq Scan on vis_for_seg_4_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)
               Filter: (seg = $0)
         ->  Seq Scan on vis_for_seg_66_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)
               Filter: (seg = $0)
         ->  Seq Scan on vis_for_seg_69_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)
               Filter: (seg = $0)
         ->  Seq Scan on vis_for_seg_79_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)
               Filter: (seg = $0)
         ->  Seq Scan on vis_for_seg_80_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)
               Filter: (seg = $0)
         ->  Seq Scan on vis_for_seg_82_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)
               Filter: (seg = $0)
         ->  Seq Scan on vis_for_seg_87_2011_03 vis  (cost=0.00..11.50 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)
               Filter: (seg = $0)
 Total runtime: 17.012 ms
(26 rows)

Time: 24.147 ms
[local]:playpen=>
[local]:playpen=> explain analyze select count(vis.*) from vis, an where
vis.seg = an.seg and an.key = 471;
  
                                                            QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=726.72..726.73 rows=1 width=29) (actual time=30.061..30.062 rows=1 loops=1)
   ->  Nested Loop  (cost=10.92..725.65 rows=424 width=29) (actual time=0.999..26.118 rows=3154 loops=1)
         Join Filter: (public.vis.seg = an.seg)
         ->  Index Scan using an_pkey on an  (cost=0.00..8.27 rows=1 width=4) (actual time=0.024..0.032 rows=1 loops=1)
               Index Cond: (key = 471)
         ->  Append  (cost=10.92..701.09 rows=1304 width=36) (actual time=0.938..18.488 rows=3154 loops=1)
               ->  Bitmap Heap Scan on vis  (cost=10.92..611.49 rows=344 width=36) (actual time=0.936..11.753 rows=3154 loops=1)
                     Recheck Cond: (public.vis.seg

Re: [PERFORM] good old VACUUM FULL

2011-03-31 Thread Craig Ringer

On 23/03/2011 12:24 PM, Chris wrote:


You'll need to reindex that table now - vacuum full can bloat your
indexes which will affect your other queries.


It doesn't seem to matter much for a one-off. Index bloat problems have 
mainly been encountered where people are running VACUUM FULL as part of 
routine maintenance - for example, from a nightly cron job.
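
If there's doubt about whether a given index actually bloated, the contrib 
pgstattuple module can report leaf density directly (a sketch; the index 
name is illustrative, and on 9.0 the module is installed from its contrib 
SQL script rather than CREATE EXTENSION):

    select avg_leaf_density, leaf_fragmentation
      from pgstatindex('some_index_name');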


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] COPY with high # of clients, partitioned table locking issues?

2011-03-31 Thread Greg Smith

On 03/30/2011 04:56 PM, Strange, John W wrote:

[ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 LOG:  
process 14405 still waiting for ExclusiveLock on extension of relation 470273 
of database 16384 after 5001.894 ms
[ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e LOG:  
process 7294 still waiting for ExclusiveLock on extension of relation 470606 of 
database 16384 after 5062.968 ms
[ 2011-03-30 15:54:56.077 EDT ] 25781 [local] asgprod:4d938556.64b5 LOG:  
process 25781 still waiting for ExclusiveLock on extension of relation 470606 
of database 16384 after 5124.463 ms
   


When you insert something new into the database, sometimes it has to 
grow the size of the underlying file on disk to add it.  That's called 
"relation extension"; basically the table gets some number of 8K blocks 
added to the end of it.  If your workload tries to push new blocks into 
a table with no free space, every operation will become serialized 
waiting on individual processes grabbing the lock for relation extension.
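
You can watch that pile-up directly in pg_locks, where relation-extension 
waits show up with locktype 'extend' (a quick diagnostic sketch):

    select relation::regclass, mode, granted, count(*)
      from pg_locks
     where locktype = 'extend'
     group by 1, 2, 3
     order by count(*) desc;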


The main reasonable way around this from a high level is to make the 
extension code allocate significantly more space when it gets into this 
situation than it does right now.  Don't just extend by one block; extend 
by a large number instead, if you believe you're in this sort of 
situation.  That's probably going to take a low-level code change inside 
PostgreSQL to actually fix, though.



--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance