On Thu, 28 Aug 2008, Tom Lane wrote:
[EMAIL PROTECTED] writes:
> On Wed, 27 Aug 2008, Andrew Sullivan wrote:
>> The upshot of this is that postgres tends to be a big target for the
>> OOM killer, with seriously bad effects to your database. So for good
>> Postgres operation, you want to run on a machine
On Wed, 27 Aug 2008, Craig James wrote:
The OOM killer is a terrible idea for any serious database server. I wrote a
detailed technical paper on this almost 15 years ago when Silicon Graphics
had this same feature, and Oracle and other critical server processes
couldn't be made reliable.
Th
> The OOM killer is a terrible idea for any serious database server. I wrote a
> detailed technical paper on this almost 15 years ago when Silicon Graphics had
> this same feature, and Oracle and other critical server processes couldn't be
> made reliable.
The problem with "overallocating memory" as
[EMAIL PROTECTED] writes:
> On Wed, 27 Aug 2008, Andrew Sullivan wrote:
>> The upshot of this is that postgres tends to be a big target for the
>> OOM killer, with seriously bad effects to your database. So for good
>> Postgres operation, you want to run on a machine with the OOM killer
>> disable
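The advice above (run with the OOM killer effectively out of the picture) corresponds to Linux's strict overcommit accounting. A minimal sketch, assuming a 2.6-era kernel:

```
# /etc/sysctl.conf -- with mode 2, the kernel refuses allocations
# beyond swap + overcommit_ratio% of RAM instead of overcommitting,
# so the OOM killer should not fire in normal operation.
vm.overcommit_memory = 2
```

Apply with `sysctl -p`, or write the value to /proc/sys/vm/overcommit_memory directly.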
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Some time ago I found that it was possible to fiddle with a /proc entry
> to convince the OOM to not touch the postmaster. A postmaster with the
> raw IO capability bit set would be skipped by the OOM killer too (this
> is an Oracle tweak AFAIK).
> Thes
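A sketch of the /proc tweak Alvaro mentions, assuming a 2.6-era kernel that exposes /proc/&lt;pid&gt;/oom_adj; the pidfile path is an assumption, adjust it for your installation:

```shell
# Exempt the postmaster from the OOM killer: writing -17 to oom_adj
# marks the process as not killable by the OOM code (requires root).
PIDFILE=${PIDFILE:-/var/lib/pgsql/data/postmaster.pid}
if [ -r "$PIDFILE" ]; then
    PGPID=$(head -n 1 "$PIDFILE")
    printf -- '-17' > "/proc/$PGPID/oom_adj"
    RESULT="adjusted pid $PGPID"
else
    RESULT="postmaster.pid not found"
fi
echo "$RESULT"
```

Note that backends forked after this inherit the setting, so it shields more than just the postmaster.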
[EMAIL PROTECTED] wrote:
> On Wed, 27 Aug 2008, Andrew Sullivan wrote:
>>> separate copies for the separate processes (and if at this time it runs out
>>> of memory it invokes the OOM killer to free some space),
>>
>> . . .it kills processes that are using a lot of memory. Those are not
>> necessa
On Wed, Aug 27, 2008 at 4:22 PM, <[EMAIL PROTECTED]> wrote:
> I disagree with you. I think good Postgres operation is so highly dependent
> on caching as much data as possible that disabling overcommit (and throwing
> away a lot of memory that could be used for cache) is a solution that's as
> bad
On Wed, 27 Aug 2008, Andrew Sullivan wrote:
> On Wed, Aug 27, 2008 at 02:45:47PM -0700, [EMAIL PROTECTED] wrote:
>> with memory overcommit enabled (the default), the kernel recognises that
>> most programs that fork don't write to all the memory they have
>> allocated,
> It doesn't "recognise" it; it "ho
On Wed, Aug 27, 2008 at 02:45:47PM -0700, [EMAIL PROTECTED] wrote:
> with memory overcommit enabled (the default), the kernel recognises that
> most programs that fork don't write to all the memory they have
> allocated,
It doesn't "recognise" it; it "hopes" it. It happens to hope
correctly in
On Wed, 27 Aug 2008, Florian Weimer wrote:
* henk de wit:
> On this table we're inserting records with a relatively low
> frequency of +- 6~10 per second. We're using PG 8.3.1 on a machine
> with two dual core 2.4GHz Xeon CPUs, 16 GB of memory and Debian
> Linux. The machine is completely devoted to P
"Jerry Champlin" <[EMAIL PROTECTED]> writes:
> We have synchronous_commit turned off, commit siblings set to 5,
> commit_delay set to 50,000. With synchronous_commit off does it make
> any sense to be grouping commits?
No. In fact commit_delay is a total no-op in that mode. If it were
doing any
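In configuration terms, a sketch of the settings Jerry quoted, with Tom's point as comments:

```
# postgresql.conf -- with synchronous_commit off, commits are flushed
# in the background by the WAL writer, so the group-commit knobs below
# never fire; they only apply when synchronous_commit is on.
synchronous_commit = off
commit_delay = 50000      # microseconds; inert in this mode
commit_siblings = 5       # inert in this mode
```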
Jerry Champlin wrote:
> If it were implemented in such a way that when the top level pruning
> happens, a set of 3 sub partitions is selected from say 18 total and then at
> the next level it selects the 3 matching sub partitions from each matched
> group of 30 then you are only looking at 18+3*30
If it were implemented in such a way that when the top level pruning
happens, a set of 3 sub partitions is selected from say 18 total and then at
the next level it selects the 3 matching sub partitions from each matched
group of 30 then you are only looking at 18+3*30 = 108 instead of 548 checks
to
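The arithmetic in the proposal checks out; 548 is the flat partition count from the thread, the fan-outs are the poster's hypothetical two-level scheme:

```shell
# Two-level pruning: check 18 top-level partitions, match 3 of them,
# then check 30 subpartitions under each match.
flat=548
two_level=$((18 + 3 * 30))
echo "flat: $flat checks  two-level: $two_level checks"
# prints: flat: 548 checks  two-level: 108 checks
```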
Bill Moran wrote:
> On a side note, what version of PG are you using? If it was in a
> previous email, I missed it.
He mentioned 8.3.1 in the first email.
Although nothing stands out in the 8.3.2 or 8.3.3 fix list (without
knowing his table structure or any contrib modules used) I wonder if
on
On Wed, 27 Aug 2008, Jerry Champlin wrote:
> After it's a day old, there are no longer any updates or inserts and we
> can vacuum it at that point.
A pattern that has worked very well for other people is to have two
separate tables (or partitions). One contains today's data, and the other
contain
Jerry Champlin <[EMAIL PROTECTED]> writes:
> Is there a way to use multi-level inheritance to achieve sub
> partitioning that the query optimizer will recognize?
No, I don't think so. How would that make things any better anyway?
You're still going to end up with the same very large number of
pa
Is there a way to use multi-level inheritance to achieve sub
partitioning that the query optimizer will recognize? With our current
application design, we would need a partition for every other day for
18 months which will not perform well. The reason we need so many
partitions is that we ca
Maybe strace could help you find the problem, but it could add significant
overhead...
"Bill Moran" <[EMAIL PROTECTED]> wrote:
> ...
--
Daniel Cristian Cruz
Database Administrator
Regional Directorate - Information Technology Center
SENAI - SC
Phone: 48-3239-1422 (ext. 1422)
--
Se
Duan Ligong wrote:
> Alvaro, Thanks for your answer.
> It would be very helpful.
>
> > > Would you like to be so kind as to answer the following questions:
> > >
> > > - Is there any way to control the number of clog files and xlog files?
> > > I encounter an issue that there are too many clog
In response to henk de wit <[EMAIL PROTECTED]>:
> > What do your various logs (pgsql, application, etc...) have to say?
>
> There
> is hardly anything helpful in the pgsql log. The application log
> doesn't mention anything either. We log a great deal of information in
> our application, but ther
Alvaro, Thanks for your answer.
It would be very helpful.
> > Would you like to be so kind as to answer the following questions:
> >
> > - Is there any way to control the number of clog files and xlog files?
> > I encounter an issue that there are too many clog files under the
> > pg_clog/ dir
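For what it's worth, on 8.3 neither count is set directly; both are bounded by configuration. A sketch of the relevant knobs (values shown are the 8.3 defaults, for illustration only):

```
# postgresql.conf (8.3-era parameter names)
# pg_clog shrinks as VACUUM advances the freeze horizon; lowering this
# allows old clog segments to be truncated sooner.
autovacuum_freeze_max_age = 200000000
# pg_xlog normally holds no more than about 2 * checkpoint_segments + 1
# WAL segment files.
checkpoint_segments = 3
```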
* henk de wit:
> On this table we're inserting records with a relatively low
> frequency of +- 6~10 per second. We're using PG 8.3.1 on a machine
> with two dual core 2.4GHz Xeon CPUs, 16 GB of memory and Debian
> Linux. The machine is completely devoted to PG, nothing else runs on
> the box.
Hav