Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > While exploring this problem, I've noticed that one of the frequent insert
> > processes creates a few temporary tables to do post-processing. Is it
> > possible that the stats collector is getting bloated with stats from these
> > short-lived temporary tables? […]

Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Regarding temp tables, I'd think that the pgstat entries should be
> getting dropped at some point in both releases. Maybe there's a bug
> preventing that in 8.2?

Hmmm ... I did rewrite the backend-side code for that just recently for
performance reasons […]

I wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> Regarding temp tables, I'd think that the pgstat entries should be
>> getting dropped at some point in both releases. Maybe there's a bug
>> preventing that in 8.2?
> Hmmm ... I did rewrite the backend-side code for that just recently for
> performance reasons […]

Tom Lane wrote:
> I wrote:
>> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>>> Regarding temp tables, I'd think that the pgstat entries should be
>>> getting dropped at some point in both releases. Maybe there's a bug
>>> preventing that in 8.2?
>> Hmmm ... I did rewrite the backend-side code for that just recently […]

Benjamin Minshall <[EMAIL PROTECTED]> writes:
> When I checked on the server this morning, the huge stats file has
> returned to a normal size. I set up a script to track CPU usage and
> stats file size, and it appears to have decreased from 90MB down to
> about 2MB over roughly 6 hours last night. […]
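A minimal sketch of the kind of monitoring script Benjamin describes — sampling a file's size at a fixed interval and logging it to CSV. The function name, path, and interval below are hypothetical; the stats file's location under $PGDATA varies by PostgreSQL version.

```python
import csv
import os
import time

def track_file_size(path, interval_s, samples, out_csv):
    """Sample a file's size at a fixed interval and append it to a CSV log.

    A stand-in for a stats-file monitoring script; pair it with `top` or
    `sar` output if CPU usage should be correlated with file size.
    """
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "bytes"])
        for _ in range(samples):
            size = os.path.getsize(path) if os.path.exists(path) else 0
            writer.writerow([time.time(), size])
            f.flush()  # keep the log current even if the script is killed
            time.sleep(interval_s)

# Hypothetical usage (path depends on the installation):
# track_file_size("/var/lib/pgsql/data/global/pgstat.stat", 60, 360, "stats_size.csv")
```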
Hi,

We have a quite big table, which is heavily used by our online clients.
The table has several indexes, but no relationships to other tables.
We have an import process which can fill/update this table.
The full import takes 1 hour, which is too long for us.
We are thinking of doing the full import […]

Tom Lane wrote:
Well, that's pretty interesting. What are your vacuuming arrangements
for this installation? Could the drop in file size have coincided with
VACUUM operations? Because the ultimate backstop against bloated stats
files is pgstat_vacuum_tabstat(), which is run by VACUUM and arranges […]

Gábriel,

You could use table inheritance, as in the table-partitioning scheme explained in the manual:
http://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html

Kind regards,
Daniel Cristian

On 2/9/07, Gábriel Ákos <[EMAIL PROTECTED]> wrote:
> Hi,
> We have a quite big table, which is heavily used […]
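A sketch of the inheritance-based partitioning the manual page describes, under 8.2-era syntax; the table and column names here are invented for illustration and not from the thread.

```sql
-- Hypothetical parent table; children inherit its columns.
CREATE TABLE measurements (
    id        bigint NOT NULL,
    logged_at timestamp NOT NULL,
    payload   text
);

-- One child per month; the CHECK constraint lets constraint_exclusion
-- skip irrelevant partitions when a query filters on logged_at.
CREATE TABLE measurements_2007_02 (
    CHECK (logged_at >= DATE '2007-02-01' AND logged_at < DATE '2007-03-01')
) INHERITS (measurements);

-- A full import can then be loaded into a fresh child table and attached,
-- instead of updating one huge table (and its indexes) in place.
SET constraint_exclusion = on;
SELECT * FROM measurements
WHERE logged_at >= DATE '2007-02-10' AND logged_at < DATE '2007-02-11';
```

Insert routing into the right child (via a rule or trigger) is covered on the same manual page.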
With the help of some of this list, I was able to successfully set up
and benchmark a cube-based replacement for geo_distance() calculations.
On a development box, the cube-based variations benchmarked consistently,
running in about 1/3 of the time of the geo_distance() equivalents.
After setting […]
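The idea behind the cube-based approach can be sketched outside the database: project each lat/long pair onto a 3-D point, use the cheap, indexable straight-line (chord) distance for searching, and recover the true great-circle distance from the chord when needed. The function names and the mean-radius constant below are illustrative assumptions, not the thread's actual schema.

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean radius for the sketch

def to_xyz(lat_deg, lon_deg):
    """Project a lat/long pair onto a 3-D point on the sphere's surface."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (EARTH_RADIUS_KM * math.cos(lat) * math.cos(lon),
            EARTH_RADIUS_KM * math.cos(lat) * math.sin(lon),
            EARTH_RADIUS_KM * math.sin(lat))

def chord_km(a, b):
    """Straight-line (through-the-earth) distance between two 3-D points.

    This is what a cube index can search on cheaply.
    """
    return math.dist(to_xyz(*a), to_xyz(*b))

def great_circle_km(a, b):
    """Surface distance recovered from the chord: d = 2R * asin(chord / 2R)."""
    ratio = min(1.0, chord_km(a, b) / (2 * EARTH_RADIUS_KM))  # clamp rounding
    return 2 * EARTH_RADIUS_KM * math.asin(ratio)
```

The chord never exceeds the surface distance, so a chord-based index scan can safely over-fetch candidates that an exact recheck then filters.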
[EMAIL PROTECTED] wrote:
> I have this function in my C#.NET app that goes out to find the
> business units for each event and returns a string (for my report).
> I'm finding that for larger reports it takes too long and times out.
>
> Does anyone know how I can speed this process up? Is this code […]

All,

I am looking to automate ANALYZE in my application.
I have some insert-only tables which I need to analyze as the data grows.
Since the inserts are application-controlled, I can choose to run ANALYZE
when I determine the data has grown more than x% since the last ANALYZE.

"Virag Saksena" <[EMAIL PROTECTED]> writes:
> Does someone know of a way of telling what the optimizer believes the
> number of rows are?

You're looking in the wrong place; see pg_class.relpages and reltuples.

But note that in recent releases neither one is taken as gospel.
Instead the planner […]

Thanks, that is exactly what I was looking for.

I know that the number of rows may not be the best indicator, but it is a
heuristic that can be tracked easily: run ANALYZE for the first x insert
events, and then only when an insert event causes the total row count to
exceed y% of the optimizer's […]
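The thresholding Virag describes can be sketched as a small decision class. The class name and default thresholds are hypothetical; in a real deployment the planner's estimate would be refreshed from the catalog after each ANALYZE, e.g. `SELECT reltuples FROM pg_class WHERE relname = 'events';`.

```python
class AnalyzeTrigger:
    """Decide when an insert-only table is due for ANALYZE.

    Always analyze for the first `warmup` insert events, then only when
    the tracked row count has grown more than `growth_pct` percent past
    the planner's last estimate (pg_class.reltuples).
    """

    def __init__(self, warmup=3, growth_pct=20.0):
        self.warmup = warmup
        self.growth_pct = growth_pct
        self.insert_events = 0
        self.rows = 0
        self.estimated_rows = 0  # the planner's view of the row count

    def record_insert(self, n_rows):
        """Record one insert event; return True if ANALYZE should run now."""
        self.insert_events += 1
        self.rows += n_rows
        if self.insert_events <= self.warmup:
            return self._analyzed()
        grown = self.rows - self.estimated_rows
        if self.estimated_rows and 100.0 * grown / self.estimated_rows > self.growth_pct:
            return self._analyzed()
        return False

    def _analyzed(self):
        # Pretend ANALYZE ran, so reltuples now matches reality.
        self.estimated_rows = self.rows
        return True
```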
On Fri, 2007-02-09 at 19:45 -0500, Tom Lane wrote:
> "Virag Saksena" <[EMAIL PROTECTED]> writes:
> > Does someone know of a way of telling what the optimizer believes the
> > number of rows are?
>
> You're looking in the wrong place; see pg_class.relpages and reltuples.
>
> But note that in recent releases […]

On 2/10/07, Mark Stosberg <[EMAIL PROTECTED]> wrote:
> With the help of some of this list, I was able to successfully set up
> and benchmark a cube-based replacement for geo_distance() calculations.
> On a development box, the cube-based variations benchmarked consistently,
> running in about 1/3 of the time of the geo_distance() equivalents. […]