Hi Everyone,
I recently saw a crash on one of our databases, and I was wondering whether this
might be a sign that WAL is unexpectedly creating more files than it needs to?
Nov 5 16:18:27 localhost postgres[25092]: [111-1] 2011-11-05 16:18:27.524
PDT [user=slony,db=uk d
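(One rough way to watch the segment count from SQL; a sketch, not from the
original thread: pg_ls_dir() needs superuser, and the path assumes a default
data directory layout.)
-- count the 16MB WAL segment files currently in pg_xlog
SELECT count(*) AS wal_segments
  FROM pg_ls_dir('pg_xlog') AS f
 WHERE f ~ '^[0-9A-F]{24}$';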
On Jul 8, 2010, at 12:50 PM, Kevin Grittner wrote:
> Richard Yen wrote:
>
>> there were moments where 129 WAL files were generated in one
>> minute. Is it plausible that this autovacuum could be responsible
>> for this?
>
> I don't remember seeing your au
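(For anyone correlating WAL spikes with autovacuum, the per-table timestamps
give a rough picture; a sketch, assuming 8.3+ where these columns exist.)
SELECT relname, last_autovacuum, last_autoanalyze
  FROM pg_stat_all_tables
 ORDER BY last_autovacuum DESC NULLS LAST
 LIMIT 10;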
On Jul 8, 2010, at 12:50 PM, Tom Lane wrote:
> Richard Yen writes:
>> My concern is that--as in the original post--there were moments where 129
>> WAL files were generated in one minute. Is it plausible that this
>> autovacuum could be responsible for this?
>
On Jul 8, 2010, at 12:27 PM, Tom Lane wrote:
>
> (Hmm ... but those complaints are logged at level WARNING, which as
> discussed elsewhere is really lower than LOG. Should we change them?)
Hmm, I did a grep for "WARNING" on my log, and the only thing that turns up is
the "WARNING: terminating
On Jul 8, 2010, at 12:04 PM, Kevin Grittner wrote:
> Robert Haas wrote:
>
>> I don't understand how you managed to fill up 37GB of disk with
>> WAL files. Every time you fill up checkpoint_segments * 16MB of
>> WAL files, you ought to get a checkpoint. When it's complete, WAL
>> segments comp
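(For reference, the documentation of that era bounds pg_xlog at roughly
(2 + checkpoint_completion_target) * checkpoint_segments + 1 segments of 16MB
each; a sketch of the arithmetic, with placeholder values rather than the
poster's actual settings.)
SHOW checkpoint_segments;
SHOW checkpoint_completion_target;
-- e.g. with checkpoint_segments = 64 and completion target 0.5:
SELECT ((2 + 0.5) * 64 + 1) * 16 AS approx_peak_wal_mb;  -- => 2576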
On Jul 6, 2010, at 8:25 PM, Scott Marlowe wrote:
> Tell us what you can about your hardware setup.
Sorry, I made the bad assumption that the hardware setup would be
irrelevant--dunno why I thought that.
My hardware setup is 2 FusionIO 160GB drives in a RAID-1 configuration, running
on an HP DL
Sorry, I forgot to mention that archive_mode is "off" and commented out, and
archive_command is '' and commented out.
Thanks for following up!
-- Richard
On Jul 7, 2010, at 1:58, Mark Kirkwood wrote:
> On 07/07/10 13:10, Richard Yen wrote:
>>
>> This l
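(For anyone verifying the same settings, the live values can be read from a
session rather than from the config file.)
SHOW archive_mode;
SHOW archive_command;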
Hi everyone,
I'm running 8.4.2 on a CentOS machine, and postgres recently died with signal 6
because the pg_xlog partition filled up (33GB) on 7/4/10 10:34:23 (perfect
timing, as I was hiking in the mountains in the remotest parts of our country).
I did some digging and found the following:
-
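(A sketch for measuring what the segments themselves occupy on that partition;
pg_ls_dir() and pg_stat_file() need superuser, and the path assumes a default
layout.)
SELECT count(*) AS segments,
       pg_size_pretty(sum((pg_stat_file('pg_xlog/' || f)).size)::bigint) AS on_disk
  FROM pg_ls_dir('pg_xlog') AS f
 WHERE f ~ '^[0-9A-F]{24}$';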
Hi everyone,
I use DBD::Pg to interface with our 8.4.2 database, but for a particular query,
performance is horrible. I'm assuming that the behavior of $dbh->prepare is as
if I did PREPARE foo AS (query), so I ran an EXPLAIN ANALYZE on the command line:
> db_alpha=# prepare foo6 as (SELECT me.id
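(One detail worth reproducing: a driver-level prepare plans the query without
the actual parameter values, so the fair command-line comparison uses a
parameterized PREPARE plus EXPLAIN ANALYZE EXECUTE. A sketch; the table,
column, and value are hypothetical, not from the post.)
PREPARE foo6 (int) AS
  SELECT me.id FROM some_table me WHERE me.owner = $1;
EXPLAIN ANALYZE EXECUTE foo6 (42);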
Hello,
I'm about to embark on a partitioning project to improve read performance on
some of our tables:
db=# select relname,n_live_tup,pg_size_pretty(pg_relation_size(relid)) from
pg_stat_all_tables where schemaname = 'public' order by n_live_tup desc limit
10;
relname
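(A minimal sketch of the 8.4-era partitioning approach: inheritance children
with CHECK constraints so the planner can exclude partitions. All names and
columns are hypothetical.)
CREATE TABLE events (id int, created date);
CREATE TABLE events_2010_07 (
  CHECK (created >= DATE '2010-07-01' AND created < DATE '2010-08-01')
) INHERITS (events);
-- let the planner prove a partition can't match and skip it
SET constraint_exclusion = on;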
Kind of off-topic, but I've found that putting the history table on a separate
spindle (using a separate tablespace) also helps improve performance.
--Richard
On Apr 8, 2010, at 12:44 PM, Robert Haas wrote:
> 2010/4/8 Merlin Moncure :
>> previous to 8.2, to get good performance on zabbix you
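(For reference, the move itself is two statements; a sketch with hypothetical
names. Note that SET TABLESPACE rewrites the table under an exclusive lock.)
CREATE TABLESPACE history_ts LOCATION '/mnt/spindle2/pg_history';
ALTER TABLE history SET TABLESPACE history_ts;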
On Mar 26, 2010, at 5:25 PM, Scott Carey wrote:
> Linux until recently does not account for shared memory properly in its swap
> 'aggressiveness' decisions.
> Setting shared_buffers larger than 35% is asking for trouble.
>
> You could try adjusting the 'swappiness' setting on the fly and seeing
Hi everyone,
We've recently encountered some swapping issues on our CentOS 64GB Nehalem
machine, running postgres 8.4.2. Unfortunately, I was foolish enough to set
shared_buffers to 40GB. I was wondering if anyone would have any insight into
why the swapping suddenly starts, but never recover
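(The usual advice from this era is to keep shared_buffers nearer 25% of RAM
and leave the rest to the OS cache; a sketch of checking the current value.
The 16GB figure below is an illustration, not a recommendation from the thread.)
SHOW shared_buffers;
-- a common starting point on a 64GB box would be shared_buffers = 16GB,
-- set in postgresql.conf (requires a restart)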
Hello,
Wondering what's a good value for effective_io_concurrency when dealing with
FusionIO drives...anyone have any experience with this?
I know that SSDs vary from 10 channels to 30, and that 1 SSD is about as fast
as a 4-drive RAID, but I can't seem to settle on a good value to use for
effect
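(One relevant detail: effective_io_concurrency only influences bitmap heap
scans, and it can be set per session, which makes candidate values easy to
benchmark. A sketch; the table, column, and range are hypothetical.)
SET effective_io_concurrency = 10;
EXPLAIN ANALYZE SELECT * FROM big_table WHERE indexed_col BETWEEN 1 AND 100000;
SET effective_io_concurrency = 30;
EXPLAIN ANALYZE SELECT * FROM big_table WHERE indexed_col BETWEEN 1 AND 100000;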
Hi All,
I encountered an odd issue with check constraints raising violation errors
when they're not actually violated.
For this particular machine, I am running 8.3.7, but on a machine
running 8.3.5, it seems to have succeeded. I also upgraded a third
machine from 8.3.5 to 8.3.7, and the query suc
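(When a constraint behaves differently across minor versions, comparing the
definitions each server actually stores can help; a sketch, with the table
name hypothetical.)
SELECT conname, pg_get_constraintdef(c.oid)
  FROM pg_constraint c
 WHERE conrelid = 'my_table'::regclass  -- hypothetical table name
   AND contype = 'c';                   -- 'c' = check constraints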
Hi All,
Not sure if this is the right pgsql-* "channel" to post to, but I was
hoping maybe someone could answer a question from one of my fellow
developers. Please read below:
So, following the documentation, we wrote a little async version of exec. Here
is the code:
PGresult *PGClient
On Dec 10, 2008, at 4:08 PM, Tom Lane wrote:
> Richard Yen <[EMAIL PROTECTED]> writes:
>> Is there any way to tune this so that for the common last names, the query
>> run time doesn't jump from <1s to >300s?
> Well, as near as I can tell there's a factor of a couple hundred
On Dec 10, 2008, at 11:39 AM, Robert Haas wrote:
>> You guys are right. I tried "Miller" and it gave me the same result. Is
>> there any way to tune this so that for the common last names, the query
>> run time doesn't jump from <1s to >300s?
>> Thanks for the help!
> Can you send the output of EXPLAIN A
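(For anyone following along, the slow case would be exercised with something
like this sketch; the column name is a guess, since the real column list is
not shown in these previews.)
EXPLAIN ANALYZE
SELECT * FROM m_object_paper
 WHERE btrim(lower(last_name)) = 'miller';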
On Dec 10, 2008, at 11:34 AM, Tom Lane wrote:
> Richard Yen <[EMAIL PROTECTED]> writes:
>> You guys are right. I tried "Miller" and it gave me the same result. Is
>> there any way to tune this so that for the common last names, the query
>> run time doesn't jump from <1s to >300s?
On Dec 9, 2008, at 3:27 PM, Tom Lane wrote:
> Richard Yen <[EMAIL PROTECTED]> writes:
>> I've discovered a peculiarity with using btrim in an index and was
>> wondering if anyone has any input.
> What PG version is this?
This is running on 8.3.3
In particular, I'm wondering if
Hi,
I've discovered a peculiarity with using btrim in an index and was
wondering if anyone has any input.
My table is like this:
Table "public.m_object_paper"
Column| Type | Modifiers
-++--
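(A common way to make such an expression indexable is an expression index that
matches the query's expression exactly; a sketch, assuming a hypothetical
last_name text column.)
CREATE INDEX m_object_paper_btrim_idx
    ON m_object_paper (btrim(lower(last_name)));
ANALYZE m_object_paper;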
Hi All,
I've recently run into problems with my kernel complaining that I ran
out of memory, thus killing off postgres and bringing my app to a
grinding halt.
I'm on a 32-bit architecture with 16GB of RAM, under Gentoo Linux.
Naturally, I have to set my shmmax to 2GB because the kernel c