Duan Ligong wrote:
> Would you be so kind as to answer the following questions:
>
> - Is there any way to control the number of clog files and xlog files?
> I am running into an issue where there are too many clog files under the
> pg_clog/ directory, and they occupy more space than I can endure.
Hi,
Would you be so kind as to answer the following questions:
- Is there any way to control the number of clog files and xlog files?
I am running into an issue where there are too many clog files under the
pg_clog/ directory, and they occupy more space than I can endure.
- What determines the
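For reference, a rough sketch of the settings that bound those two directories on a recent 8.x release; the numbers below are approximations from the documentation, not values taken from this thread:

-- pg_clog stores 2 bits of commit status per transaction and is only
-- truncated by (auto)vacuum, so its size is roughly bounded by
-- autovacuum_freeze_max_age / 4 bytes (about 50 MB at the default of
-- 200 million transactions). Lowering the setting lets vacuum truncate
-- pg_clog sooner, at the cost of more frequent anti-wraparound vacuums.
SHOW autovacuum_freeze_max_age;

-- pg_xlog normally holds no more than about
-- (2 + checkpoint_completion_target) * checkpoint_segments + 1
-- segment files of 16 MB each, so checkpoint_segments bounds its size.
SHOW checkpoint_segments;
SHOW checkpoint_completion_target;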
> What do your various logs (pgsql, application, etc...) have to say?
There
is hardly anything helpful in the pgsql log. The application log
doesn't mention anything either. We log a great deal of information in
our application, but there's nothing out of the ordinary there,
although there's of co
> If the select returns a lot of data and you haven't enabled cursors (by
> calling setFetchSize), then the entire SQL response will be loaded in
> memory at once, so there could be an out-of-memory condition on the
> client.
I hear you. This is absolutely not the case though. There is no other ex
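For anyone who does hit the client-side memory issue described above: what setFetchSize enables under the hood is batched fetching through a server-side cursor. The plain-SQL equivalent looks roughly like this (the table name is just a placeholder):

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM some_large_table;
FETCH 1000 FROM big_cur;   -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;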
On Tue, Aug 26, 2008 at 5:53 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Frank Joerdens" <[EMAIL PROTECTED]> writes:
>> It seems to me that the planner makes a very poor decision with this
>> particular query:
>
> Try increasing the stats on woouser. You need it to make a smaller
> estimate of the
On Tue, Aug 26, 2008 at 10:44 AM, henk de wit <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> We're currently having a problem with queries on a medium sized table. This
> table is 22GB in size (via select
> pg_size_pretty(pg_relation_size('table'));). It has 7 indexes, which bring
> the total size of th
Thanks for the help. The solution we applied follows. We will be taking a
number of maintenance steps to manage these very high-update tables,
which I will summarize later, as I suspect we are not the only ones with
this challenge.
http://www.postgresql.org/docs/current/interactive/routine-vacuum
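The specific steps aren't spelled out in this message, but for a high-update table the routine-vacuum recipe generally boils down to something like the following sketch (the table name is a placeholder):

-- Plain VACUUM does not take an exclusive lock, so it can run while the
-- application is online; ANALYZE keeps planner statistics current.
VACUUM ANALYZE some_high_update_table;

-- Optional: check remaining dead-tuple bloat (needs contrib/pgstattuple).
-- SELECT * FROM pgstattuple('some_high_update_table');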
On Tue, 2008-08-26 at 18:44 +0200, henk de wit wrote:
> Hi,
>
> We're currently having a problem with queries on a medium sized table. This
> table is 22GB in size (via select
> pg_size_pretty(pg_relation_size('table'));). It has 7 indexes, which bring
> the total size of the table to 35 GB (me
On Tue, Aug 26, 2008 at 10:45:31AM -0600, Jerry Champlin wrote:
> This makes sense. What queries can I run to see how close to the limit
> we are? We need to determine if we should stop the process which
> updates and inserts into this table until after the critical time this
> afternoon wh
Eh, there was a spurious join in that query, created by an ORM, which
apparently messed things up. Sorry for the noise. This abstracted
version of the original query, which does the same thing, is fast:
woome=> EXPLAIN ANALYZE
SELECT *
FROM webapp_invite i
INNER JOIN webapp_person p ON (i.id = p.id
"Frank Joerdens" <[EMAIL PROTECTED]> writes:
> It seems to me that the planner makes a very poor decision with this
> particular query:
Try increasing the stats on woouser. You need it to make a smaller
estimate of the number of matching rows here:
> -> Index Scan Backward using
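For reference, raising the per-column statistics target looks like this; webapp_person is only a guess at which table owns the woouser column, since it isn't shown in the excerpt:

-- "webapp_person" is an assumption; substitute the table that owns woouser.
ALTER TABLE webapp_person ALTER COLUMN woouser SET STATISTICS 1000;
ANALYZE webapp_person;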
This makes sense. What queries can I run to see how close to the limit
we are? We need to determine if we should stop the process which
updates and inserts into this table until after the critical time this
afternoon when we can perform the required maintenance on this table.
hubert depesz l
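The checks the documentation suggests for how close you are to the wraparound threshold are roughly these:

-- Age, in transactions, of the oldest unfrozen XID per database; compare
-- against autovacuum_freeze_max_age (default 200 million) and the hard
-- wraparound limit of about 2 billion.
SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;

-- The same idea per table, inside the current database:
SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY 2 DESC
LIMIT 20;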
Hi,
We're currently having a problem with queries on a medium sized table. This
table is 22GB in size (via select pg_size_pretty(pg_relation_size('table'));).
It has 7 indexes, which bring the total size of the table to 35 GB (measured
with pg_total_relation_size).
On this table we're inserti
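For reference, the heap-only and total (heap plus indexes plus TOAST) sizes quoted above can be pulled in one query; 'table' is the same placeholder name used in the message:

SELECT pg_size_pretty(pg_relation_size('table'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('table')) AS total_size;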
It seems to me that the planner makes a very poor decision with this
particular query:
--- snip ---
woome=> explain analyze SELECT "webapp_invite"."id",
"webapp_invite"."person_id", "webapp_invite"."session_id",
"webapp_invite"."created", "webapp_invite"."text",
"webapp_invite"."subject", "webapp_
On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:
>
> Does anyone know what will cause this behavior for autovacuum?
You're probably approaching the wraparound limit in some database.
If you think you can't afford the overhead when users are accessing
the system, when are you vacu
On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:
> Does anyone know what will cause this behavior for autovacuum?
http://www.postgresql.org/docs/current/interactive/runtime-config-autovacuum.html
-> autovacuum_freeze_max_age
depesz
--
Linkedin: http://www.linkedin.com/in/depesz
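In other words, once a database's oldest XID age crosses that setting, an anti-wraparound autovacuum is forced even with autovacuum = off, which is why it "keeps coming back". A quick check, assuming nothing beyond stock 8.3:

SHOW autovacuum_freeze_max_age;   -- default is 200000000

-- Databases that have already crossed the threshold (and will therefore
-- be vacuumed regardless of the autovacuum setting):
SELECT datname, age(datfrozenxid)
FROM pg_database
WHERE age(datfrozenxid) > current_setting('autovacuum_freeze_max_age')::integer;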
We are attempting to turn off autovacuum but it keeps coming back. We
can't afford the performance hit from vacuum while end users are
accessing our system.
PostgreSQL version: 8.3.3
OS: Linux 2.6.18-53.el5PAE #1 SMP
Running PostgreSQL setting:
sspg=# show autovacuum;
autovacuum
-
On Tue, Aug 26, 2008 at 6:50 AM, Matthew Wakeling <[EMAIL PROTECTED]> wrote:
> On Sat, 23 Aug 2008, Loic Petit wrote:
>>
>> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number
>> of sensors. In order to have good
>> performance when querying by timestamp on each sensor, I part
On Sat, 23 Aug 2008, Loic Petit wrote:
I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number of
sensors. In order to have good
performance when querying by timestamp on each sensor, I partitioned my
measures table for each sensor. Thus I create
a lot of tables.
As far as
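On 8.3 there is no declarative partitioning, so per-sensor partitioning of a measures table would typically be done with inheritance plus CHECK constraints; a minimal sketch with made-up table and column names:

-- Parent table; the children carry the actual data.
CREATE TABLE measures (
    sensor_id integer          NOT NULL,
    ts        timestamptz      NOT NULL,
    value     double precision
);

-- One child per sensor, each with a CHECK constraint and its own timestamp index.
CREATE TABLE measures_sensor_1 (
    CHECK (sensor_id = 1)
) INHERITS (measures);
CREATE INDEX measures_sensor_1_ts_idx ON measures_sensor_1 (ts);

-- With constraint_exclusion = on, a query on the parent that filters by
-- sensor_id scans only the matching child table.
SET constraint_exclusion = on;
SELECT * FROM measures WHERE sensor_id = 1 AND ts > now() - interval '1 day';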