[PERFORM] Determine dead tuples size

2007-03-15 Thread Alexey Romanchuk
Hello guys,

is it possible to determine the dead tuple size for a table?

-- 
Alexey Romanchuk






Re: [PERFORM] Determine dead tuples size

2007-03-15 Thread Daniel Cristian Cruz

Try the contrib module pgstattuple.
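
For example, something along these lines (a sketch; the table name is
hypothetical, and the contrib functions must be loaded into the database
first -- the path below is illustrative):

-- Load the contrib functions once per database, e.g.:
--   psql -d mydb -f .../contrib/pgstattuple.sql
SELECT * FROM pgstattuple('mytable');

Calling it as SELECT * FROM ... rather than SELECT pgstattuple(...)
expands the result into named columns, including dead_tuple_count and
dead_tuple_len.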

2007/3/15, Alexey Romanchuk <[EMAIL PROTECTED]>:

Hello guys,

is it possible to determine the dead tuple size for a table?

--
Alexey Romanchuk



--
Daniel Cristian Cruz
Systems Analyst



Re: [PERFORM] Determine dead tuples size

2007-03-15 Thread Michael Fuhr
On Thu, Mar 15, 2007 at 01:58:47PM +0600, Alexey Romanchuk wrote:
> is it possible to determine the dead tuple size for a table?

See contrib/pgstattuple.

-- 
Michael Fuhr



Re: [PERFORM] Postgres batch write very slow - what to do

2007-03-15 Thread femski

I tried maxpages = 0 and full_page_writes = off, and it seemed to take
forever. I have already tried all the other tricks.

At this point I am wondering if it's a JDBC client-side issue - I am using
the latest 8.1 (as I said in an earlier post, I am using addBatch with a
batch size of 100). But just in case, maybe I am missing something.

If 17k records/sec is right around what's expected, then I must say I am a
little disappointed in the "most advanced open source database".

thanks for all your help.

-Sanjay


Merlin Moncure-2 wrote:
> 
> On 3/15/07, femski <[EMAIL PROTECTED]> wrote:
> well, what sql is the jdbc driver creating exactly?  It is probably
> running inserts in a transaction.  your load is about 17k inserts/sec,
> which is about right for postgres on your hardware.  you have the
> following options to increase insert performance:
> 
> * tweak postgresql.conf
>   fsync: off if it is not already
>   checkpoint_segments: bump to at least 24 or so
>   maintenance_work_mem: if you create keys after the insert, bump this
> high (it speeds up create index)
>   bgwriter settings: you can play with these; try disabling the bgwriter
> first (maxpages=0)
>   full_page_writes=off might help; not 100% sure about this
> 
> * distribute load
>   make the load app multi-threaded.
> 
> * use copy for bulk load
>   [is there a way to make jdbc driver do this?]
> 
> * use multi-line inserts (at least 10 rows/insert)...nearly as fast as
> copy
> 
> * if jdbc driver is not already doing so, prepare your statements and
> execute.
> 
> merlin
> 
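
To make the options above concrete, a rough sketch (table, column, and
file names are hypothetical; multi-row VALUES lists need server 8.2 or
later):

-- postgresql.conf (use with care; fsync = off risks data loss on crash):
--   fsync = off
--   checkpoint_segments = 24
--   maintenance_work_mem = 256MB

-- Many rows per INSERT statement, one round trip:
INSERT INTO events (id, payload) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');

-- Server-side prepared statement (a JDBC driver may do this for you):
PREPARE ins (int, text) AS INSERT INTO events VALUES ($1, $2);
EXECUTE ins(4, 'd');

-- COPY, the fastest bulk path; the file must be readable by the server
-- backend, or stream it via COPY ... FROM STDIN from the client:
COPY events (id, payload) FROM '/tmp/events.csv' WITH CSV;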





[PERFORM] Dispatch-Merge pattern

2007-03-15 Thread James Riordan

Howdy-

I am currently using PostgreSQL to store and process a high-bandwidth
event stream.  I do not need old events, but the DELETE and VACUUM do
not terminate due to the large number of events being inserted (it
just pushes me past the tipping point at which the machine can keep up
with the events).

I ended up implementing a scheme where a trigger is used to redirect
the events (round robin based on time) to a series of identically
structured tables.  I can then TRUNCATE older tables rather than
DELETE and VACUUM them (which is a significant speedup).

It worked out pretty well, so I thought I'd post the idea to find out
whether

- it is a stupid way of doing things and there is a proper database
  abstraction for doing this

or

- it is a reasonable way of solving this problem and might be of use
  to other folks using RDBMSs for event processing.

I then use a view to merge the tables.  Obviously UPDATE would be a
problem, but for my purposes (and, I suppose, for a lot of event
processing) it isn't an issue.
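
A minimal sketch of the scheme (hypothetical names; a two-way round
robin for brevity):

-- Identically structured tables plus a routing target:
CREATE TABLE events   (ts timestamptz NOT NULL, payload text);
CREATE TABLE events_a (ts timestamptz NOT NULL, payload text);
CREATE TABLE events_b (ts timestamptz NOT NULL, payload text);

-- Trigger function: route each insert to a table chosen by time.
CREATE OR REPLACE FUNCTION route_event() RETURNS trigger AS $$
BEGIN
    IF extract(hour FROM NEW.ts)::int % 2 = 0 THEN
        INSERT INTO events_a VALUES (NEW.*);
    ELSE
        INSERT INTO events_b VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- suppress the insert into the routing table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_event_trg BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE route_event();

-- Merge view; the idle table can then be TRUNCATEd cheaply.
CREATE VIEW all_events AS
    SELECT * FROM events_a UNION ALL SELECT * FROM events_b;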

Either way, details are at:
 
http://unsyntax.net/james/blog/tools+and+programming/2007/03/08/Dispatch-Merge-Database-Pattern

Cheers,
James



Re: [PERFORM] Dispatch-Merge pattern

2007-03-15 Thread Steve Atkins


On Mar 13, 2007, at 6:36 AM, James Riordan wrote:


[... the message quoted in full above: a trigger redirects events round
robin into identically structured tables, old tables are TRUNCATEd
instead of DELETEd and VACUUMed, and a view merges the pieces ...]

Either way, details are at:
http://unsyntax.net/james/blog/tools+and+programming/2007/03/08/Dispatch-Merge-Database-Pattern


I can't reach that URL, but from what you say it sounds like you've
re-invented table partitioning.

http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html

If you do it the PostgreSQL way, you can also take advantage of
constraint exclusion to speed up some selects on the set of
partitioned tables, and inheritance means you don't have to
maintain the union view yourself.
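
A minimal sketch of the built-in approach (hypothetical names;
constraint_exclusion needs 8.1 or later):

-- Parent plus one child per time slice; the CHECK constraint tells
-- the planner what each child can contain.
CREATE TABLE events (ts timestamptz NOT NULL, payload text);
CREATE TABLE events_2007_03 (
    CHECK (ts >= '2007-03-01' AND ts < '2007-04-01')
) INHERITS (events);

-- With constraint exclusion on, selects against the parent skip
-- children whose CHECK constraint rules them out:
SET constraint_exclusion = on;
SELECT count(*) FROM events WHERE ts >= '2007-03-15';

-- Retiring old data is still a cheap TRUNCATE of one child:
TRUNCATE events_2007_03;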

Cheers,
  Steve




Re: [PERFORM] Postgres batch write very slow - what to do

2007-03-15 Thread Tom Lane
femski <[EMAIL PROTECTED]> writes:
> If 17k records/sec is right around what's expected, then I must say I am
> a little disappointed in the "most advanced open source database".

Well, the software is certainly capable of much more than that;
for instance, on a not-too-new Dell x86_64 machine:

regression=# \timing
Timing is on.
regression=# create table t1(f1 int);
CREATE TABLE
Time: 3.614 ms
regression=# insert into t1 select * from generate_series(1,1000000);
INSERT 0 1000000
Time: 3433.483 ms

which works out to something a little shy of 300K rows/sec.  Of course
the main difference from what I think you're trying to do is the lack of
any per-row round trips to the client code.  But you need to look into
where the bottleneck is, not just assume it's insoluble.

regards, tom lane



Re: [PERFORM] Determine dead tuples size

2007-03-15 Thread Alexey Romanchuk
Hello, Michael.

> On Thu, Mar 15, 2007 at 01:58:47PM +0600, Alexey Romanchuk wrote:
>> is it possible to determine the dead tuple size for a table?

> See contrib/pgstattuple.

Thanks, I installed the contrib modules and tried to analyze the result
of the pgstattuple function, and I found it strange.

Here is its output:
 pgstattuple
--
 (233242624,1186804,206555428,88.56,20707,3380295,1.45,13896816,5.96)
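
For reference, those positional values line up with pgstattuple's named
columns, which you can see by calling it as a table function (table name
hypothetical):

SELECT * FROM pgstattuple('mytable');
-- columns, in order: table_len, tuple_count, tuple_len, tuple_percent,
-- dead_tuple_count, dead_tuple_len, dead_tuple_percent,
-- free_space, free_percent

So here table_len = 233242624, tuple_len = 206555428 (live),
dead_tuple_len = 3380295, and free_space = 13896816.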

When I sum all the sizes (live, dead, and free), the sum does not equal
the total size.  For this table, 206555428 + 3380295 + 13896816 =
223832539.  The difference between the total and the sum is 9410085,
which is about 4%.

Is this expected?

-- 
Alexey Romanchuk


