Re: [PERFORM] stats collector process high CPU utilization

2007-02-09 Thread Alvaro Herrera
Tom Lane wrote: > [EMAIL PROTECTED] writes: > > While exploring this problem, I've noticed that one of the frequent insert > > processes creates a few temporary tables to do post-processing. Is it > > possible that the stats collector is getting bloated with stats from these > > short-lived temporary tables? …
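
The workload being described might look something like the sketch below (hypothetical table and column names; the poster's actual schema isn't shown). Each short-lived temp table gets its own entry in the stats collector's hash, which is the suspected source of the bloat.

    BEGIN;
    -- Short-lived temp table used for post-processing; dropped at commit.
    CREATE TEMP TABLE staging_batch (id integer, payload text) ON COMMIT DROP;
    INSERT INTO staging_batch VALUES (1, 'example row');
    -- ... post-processing queries against staging_batch go here ...
    COMMIT;
    -- Repeated thousands of times a day, each such table briefly adds
    -- an entry to the collector's stats file.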

Re: [PERFORM] stats collector process high CPU utilization

2007-02-09 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes: > Regarding temp tables, I'd think that the pgstat entries should be > getting dropped at some point in both releases. Maybe there's a bug > preventing that in 8.2? Hmmm ... I did rewrite the backend-side code for that just recently for performance reasons …

Re: [PERFORM] stats collector process high CPU utilization

2007-02-09 Thread Tom Lane
I wrote: > Alvaro Herrera <[EMAIL PROTECTED]> writes: >> Regarding temp tables, I'd think that the pgstat entries should be >> getting dropped at some point in both releases. Maybe there's a bug >> preventing that in 8.2? > Hmmm ... I did rewrite the backend-side code for that just recently for > performance reasons …

Re: [PERFORM] stats collector process high CPU utilization

2007-02-09 Thread Benjamin Minshall
Tom Lane wrote: I wrote: Alvaro Herrera <[EMAIL PROTECTED]> writes: Regarding temp tables, I'd think that the pgstat entries should be getting dropped at some point in both releases. Maybe there's a bug preventing that in 8.2? Hmmm ... I did rewrite the backend-side code for that just recently …

Re: [PERFORM] stats collector process high CPU utilization

2007-02-09 Thread Tom Lane
Benjamin Minshall <[EMAIL PROTECTED]> writes: > When I checked on the server this morning, the huge stats file has > returned to a normal size. I set up a script to track CPU usage and > stats file size, and it appears to have decreased from 90MB down to > about 2MB over roughly 6 hours last night …
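
The tracking script itself isn't shown in the archive; one way to sample the stats file's size from inside the database is pg_stat_file() (superuser only, available since 8.1), assuming the 8.2-era layout where the collector writes $PGDATA/global/pgstat.stat:

    -- Sample the on-disk size of the stats collector's file.
    -- Path is relative to the data directory; 8.2 location assumed.
    SELECT size, modification
      FROM pg_stat_file('global/pgstat.stat');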

[PERFORM] Recreate big table

2007-02-09 Thread Gábriel Ákos
Hi, We have a quite big table, which is heavily used by our online clients. The table has several indexes, but no other relation to other tables. We have an import process, which can fill/update this table. The full import takes 1 hour, which is very long. We are thinking of doing the full import …
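
The usual answer to this is to run the full import into a shadow table and swap it in with a rename, so online clients only see a brief exclusive lock at the moment of the swap. A sketch with hypothetical names (the poster's schema isn't given):

    -- Hypothetical stand-in for the real table:
    CREATE TABLE items (id integer PRIMARY KEY, payload text);

    -- Build the replacement offline; LIKE copies columns and defaults.
    CREATE TABLE items_new (LIKE items INCLUDING DEFAULTS);
    -- ... run the full import into items_new, then build its indexes ...
    CREATE UNIQUE INDEX items_new_pk ON items_new (id);

    -- Swap the tables in one short transaction.
    BEGIN;
    ALTER TABLE items RENAME TO items_old;
    ALTER TABLE items_new RENAME TO items;
    COMMIT;
    DROP TABLE items_old;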

Re: [PERFORM] stats collector process high CPU utilization

2007-02-09 Thread Benjamin Minshall
Tom Lane wrote: Well, that's pretty interesting. What are your vacuuming arrangements for this installation? Could the drop in file size have coincided with VACUUM operations? Because the ultimate backstop against bloated stats files is pgstat_vacuum_tabstat(), which is run by VACUUM and arranges …
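
Given that backstop, a scheduled plain VACUUM (no FULL required) is enough to give the collector its cleanup pass; a minimal sketch:

    -- A plain database-wide VACUUM also runs pgstat_vacuum_tabstat(),
    -- which drops collector entries for tables that no longer exist.
    VACUUM;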

Re: [PERFORM] Recreate big table

2007-02-09 Thread Daniel Cristian Cruz
Gábriel, You could use table inheritance, as used for table partitioning in the manual: http://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html Kind regards, Daniel Cristian On 2/9/07, Gábriel Ákos <[EMAIL PROTECTED]> wrote: Hi, We have a quite big table, which is heavily used …
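
A minimal sketch of that 8.2-era inheritance scheme (hypothetical names; the linked manual section has the full recipe). The import can fill a fresh child table offline and attach it by creating it with the right CHECK constraint, leaving the parent queryable throughout:

    -- Parent holds no rows itself; children carry the data.
    CREATE TABLE measurements (logdate date NOT NULL, value numeric);
    CREATE TABLE measurements_2007_02 (
        CHECK (logdate >= DATE '2007-02-01' AND logdate < DATE '2007-03-01')
    ) INHERITS (measurements);
    -- Let the planner skip children whose CHECK excludes the query range.
    SET constraint_exclusion = on;
    SELECT * FROM measurements WHERE logdate = DATE '2007-02-09';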

[PERFORM] cube operations slower than geo_distance() on production server

2007-02-09 Thread Mark Stosberg
With the help of some of this list, I was able to successfully set up and benchmark a cube-based replacement for geo_distance() calculations. On a development box, the cube-based variations benchmarked consistently running in about 1/3 of the time of the geo_distance() equivalents. After setting …
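
For readers who haven't followed the earlier thread, the cube/earthdistance approach under discussion looks roughly like this (hypothetical names; assumes contrib/cube and contrib/earthdistance are installed; the cube "contains" operator is spelled @ in releases of this era, @> later):

    -- Indexable radius search with cube/earthdistance.
    CREATE TABLE places (id integer, lat float8, lon float8);
    CREATE INDEX places_earth_idx ON places
        USING gist (ll_to_earth(lat, lon));

    -- Rows within 10 km of a point: the earth_box test is satisfied by
    -- the GiST index; earth_distance rechecks the exact radius.
    SELECT id
      FROM places
     WHERE earth_box(ll_to_earth(45.0, -93.0), 10000.0) @ ll_to_earth(lat, lon)
       AND earth_distance(ll_to_earth(45.0, -93.0),
                          ll_to_earth(lat, lon)) < 10000.0;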

Re: [PERFORM] Can anyone make this code tighter? Too slow, Please help!

2007-02-09 Thread Mark Stosberg
[EMAIL PROTECTED] wrote: > I have this function in my C#.NET app that goes out to find the > business units for each event and returns a string (for my report). > I'm finding that for larger reports it takes too long and times out. > > Does anyone know how I can speed this process up? Is this code …
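
A common fix for this shape of problem is to replace the per-event round trips with one set-based query and let the database build the strings; a hypothetical sketch (the poster's real schema and C# code aren't shown):

    -- Hypothetical schema for the report.
    CREATE TABLE events (event_id integer PRIMARY KEY);
    CREATE TABLE business_units (event_id integer, unit_name text);

    -- One query returns the concatenated unit list per event,
    -- replacing a separate client-side query per row.
    SELECT e.event_id,
           array_to_string(ARRAY(SELECT bu.unit_name
                                   FROM business_units bu
                                  WHERE bu.event_id = e.event_id
                                  ORDER BY bu.unit_name), ', ') AS units
      FROM events e;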

[PERFORM] Is there an equivalent for Oracle's user_tables.num_rows

2007-02-09 Thread Virag Saksena
All, I am looking to automate ANALYZE in my application. I have some insert-only tables which I need to analyze as data grows. Since the inserts are application controlled, I can choose to run ANALYZE when I determine the data has grown more than x% since the last ANALYZE …

Re: [PERFORM] Is there an equivalent for Oracle's user_tables.num_rows

2007-02-09 Thread Tom Lane
"Virag Saksena" <[EMAIL PROTECTED]> writes: > Does someone know of a way of telling what the optimizer believes the = > number of rows are ? You're looking in the wrong place; see pg_class.relpages and reltuples. But note that in recent releases neither one is taken as gospel. Instead the planner

Re: [PERFORM] Is there an equivalent for Oracle's user_tables.num_rows

2007-02-09 Thread Virag Saksena
Thanks, that is exactly what I was looking for. I know that the number of rows may not be the best indicator, but it is a heuristic that can be tracked easily, triggering ANALYZE for the first x insert events, and then only when an insert event causes total rows to exceed y% of the optimizer estimate …

Re: [PERFORM] Is there an equivalent for Oracle's user_tables.num_rows

2007-02-09 Thread Simon Riggs
On Fri, 2007-02-09 at 19:45 -0500, Tom Lane wrote: > "Virag Saksena" <[EMAIL PROTECTED]> writes: > > Does someone know of a way of telling what the optimizer believes the > > number of rows are? > > You're looking in the wrong place; see pg_class.relpages and reltuples. > > But note that in recent releases …

Re: [PERFORM] cube operations slower than geo_distance() on production server

2007-02-09 Thread Merlin Moncure
On 2/10/07, Mark Stosberg <[EMAIL PROTECTED]> wrote: With the help of some of this list, I was able to successfully set up and benchmark a cube-based replacement for geo_distance() calculations. On a development box, the cube-based variations benchmarked consistently running in about 1/3 of the time of the geo_distance() equivalents …