On Thu, Nov 6, 2008 at 4:03 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 6, 2008 at 4:04 PM, Kevin Grittner <[EMAIL PROTECTED]> wrote:
>> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
>>> Without write barriers in my file system an fsync request will
>>> be immediately returned true, correct?
On Thu, Nov 6, 2008 at 4:04 PM, Kevin Grittner
<[EMAIL PROTECTED]> wrote:
"Scott Marlowe" <[EMAIL PROTECTED]> wrote:
>> On Thu, Nov 6, 2008 at 3:33 PM, Kevin Grittner
>> <[EMAIL PROTECTED]> wrote:
>> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
I am pretty sure that with no write barriers that even a BBU hardware
caching raid controller cannot guarantee your data.
>>> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 6, 2008 at 3:33 PM, Kevin Grittner
> <[EMAIL PROTECTED]> wrote:
> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
>>> I am pretty sure that with no write barriers that even a BBU hardware
>>> caching raid controller cannot guarantee your data.
> > no table was ever large enough that 256k buffers would ever be filled by
> > the process of vacuuming a single table.
>
> Not 256K buffers -- 256KB, i.e. 32 buffers.
Ok.
> > In addition, when I say "constantly" above I mean that the count
> > increases even between successive SELECTs (of the stat
On Thu, Nov 6, 2008 at 3:33 PM, Kevin Grittner
<[EMAIL PROTECTED]> wrote:
"Scott Marlowe" <[EMAIL PROTECTED]> wrote:
>> I am pretty sure that with no write barriers that even a BBU hardware
>> caching raid controller cannot guarantee your data.
>
> That seems at odds with this:
>
> http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent
>>> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
> I am pretty sure that with no write barriers that even a BBU hardware
> caching raid controller cannot guarantee your data.
That seems at odds with this:
http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent
What evidence do you have that
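The FAQ's point is that barriers only matter when volatile write caches sit
between fsync and the platters; with a battery-backed controller cache the
remaining risk is the caches on the drives themselves. A minimal sketch of
checking and disabling those with hdparm, assuming a plain SATA drive visible
as /dev/sda (the device name is a placeholder; drives behind a hardware RAID
controller are usually managed through the controller's own tool instead):

  # Show whether the on-drive write cache is currently enabled
  hdparm -W /dev/sda

  # Disable it, so only the BBU-protected controller cache buffers writes
  hdparm -W0 /dev/sda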
On Thu, 6 Nov 2008, Peter Schuller wrote:
In order to keep it from using up the whole cache with maintenance
overhead, vacuum allocates a 256K ring of buffers and re-uses ones
from there whenever possible.
no table was ever large enough that 256k buffers would ever be filled by
the process of vacuuming a single table.
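For anyone who wants to see this on their own system, the contrib
pg_buffercache module makes the effect visible; a minimal sketch, assuming the
module is installed in the current database (relation names will be whatever
you have, and a freshly vacuumed large table should stay near the ring size
rather than filling shared_buffers):

  -- How many shared buffers each relation in this database currently occupies
  SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON c.relfilenode = b.relfilenode
   WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
   GROUP BY c.relname
   ORDER BY buffers DESC
   LIMIT 10;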
On Thu, Nov 6, 2008 at 2:05 PM, Scott Carey <[EMAIL PROTECTED]> wrote:
> To others that may stumble upon this thread:
> Note that Write Barriers can be very important for data integrity when power
> loss or hardware failure are a concern. Only disable them if you know the
> consequences are mitigated by other factors (such as a BBU + db using the
> WAL log with sync writes),
>>> "Scott Carey" <[EMAIL PROTECTED]> wrote:
> Note that Write Barriers can be very important for data integrity when power
> loss or hardware failure are a concern. Only disable them if you know the
> consequences are mitigated by other factors (such as a BBU + db using the
> WAL log with sync writes),
To others that may stumble upon this thread:
Note that Write Barriers can be very important for data integrity when power
loss or hardware failure are a concern. Only disable them if you know the
consequences are mitigated by other factors (such as a BBU + db using the
WAL log with sync writes), o
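For reference, the database-side settings that pairing assumes all live in
postgresql.conf; a minimal sketch of the relevant 8.3-era lines (the values
shown are the defaults, not a recommendation, and the wal_sync_method default
varies by platform):

  fsync = on                    # flush WAL to disk at commit
  synchronous_commit = on       # wait for that flush before reporting success
  wal_sync_method = fdatasync   # how the flush is issued (platform-dependent default)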
>>> "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
> On Thu, 2008-11-06 at 13:02 -0600, Kevin Grittner wrote:
>> the new kernel
>> defaulted to using write barriers, while the old kernel didn't. Since
>> we have a BBU RAID controller, we will add nobarrier to the fstab
>> entries. This makes file
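For anyone wanting to make the same change, a minimal sketch of such an fstab
entry, with a hypothetical device, mount point, and option set (XFS spells it
nobarrier; ext3 uses barrier=0; only do this when a battery-backed cache
protects the writes):

  # /etc/fstab
  /dev/sda1   /var/lib/pgsql   xfs   noatime,nobarrier   0 0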
On Thu, 2008-11-06 at 13:02 -0600, Kevin Grittner wrote:
> >>> "Kevin Grittner" <[EMAIL PROTECTED]> wrote:
> > If I find a particular tweak to the background writer or some such is
> > particularly beneficial, I'll post again.
>
> It turns out that it was not the PostgreSQL version which was
> primarily responsible for the performance difference.
>>> "Kevin Grittner" <[EMAIL PROTECTED]> wrote:
> If I find a particular tweak to the background writer or some such is
> particularly beneficial, I'll post again.
It turns out that it was not the PostgreSQL version which was
primarily responsible for the performance difference. We updated the kernel
On Thu, Nov 6, 2008 at 8:07 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 6, 2008 at 8:47 AM, David Rees <[EMAIL PROTECTED]> wrote:
>>
>> In the case of the machines without a BBU on them, they are configured
>> to be in WriteBack, but are actually running in WriteThrough.
>
> I'm pretty sure the LSIs will refuse to actually run in writeback without a BBU.
2008/11/6 Richard Huxton <[EMAIL PROTECTED]>
> Віталій Тимчишин wrote:
> > As you can see from other plans, it does have all the indexes to perform its
> > work fast (when given part by part). It simply does not wish to use them. My
> > question: Is this a configuration problem, or is it that the postgresql
> > optimizer simply can't do such a query rewrite?
As far as I know, if you created the indexes properly and postgres sees that
they will give some improvement, it will use them.
- Look at the page on index creation; we may be forgetting something.
http://www.postgresql.org/docs/8.3/static/indexes.html
I have to go to the hospital now. Tomorro
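For the archives, what that page boils down to for a case like this is one
index per column used in the slow query's WHERE clause, followed by ANALYZE so
the planner sees them; a minimal sketch with hypothetical table and column
names (the real names come from the query behind the attached plans):

  -- Hypothetical names for illustration only
  CREATE INDEX company_name_idx ON company (name);
  CREATE INDEX company_run_id_idx ON company (run_id);
  ANALYZE company;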
Віталій Тимчишин wrote:
> As you can see from other plans, it does have all the indexes to perform its
> work fast (when given part by part). It simply does not wish to use them. My
> question: Is this a configuration problem, or is it that the postgresql
> optimizer simply can't do such a query rewrite?
I must admi
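The usual manual workaround when the planner will not break an OR apart itself
is to rewrite it as a UNION, one indexed predicate per branch; a minimal sketch
with hypothetical table and column names (UNION rather than UNION ALL, so a row
matching both branches is still returned once, as with the OR):

  -- OR across two differently indexed columns; the planner may pick a seq scan
  SELECT * FROM orders WHERE customer_id = 42 OR salesman_id = 42;

  -- Rewritten so each branch can use its own index
  SELECT * FROM orders WHERE customer_id = 42
  UNION
  SELECT * FROM orders WHERE salesman_id = 42;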
2008/11/6 Helio Campos Mello de Andrade <[EMAIL PROTECTED]>
> From what I see in four OR-plan.txt you are doing too much "sequential scan".
> Create some indexes for those tables using the fields that you use and it
> may help you.
>
> OBS: If you already have lots of indexes in your tables it may be a good
> time for you to re-think your strategy because it's not working.
On Thu, Nov 6, 2008 at 8:47 AM, David Rees <[EMAIL PROTECTED]> wrote:
>
> In the case of the machines without a BBU on them, they are configured
> to be in WriteBack, but are actually running in WriteThrough.
I'm pretty sure the LSIs will refuse to actually run in writeback without a BBU.
--
Sen
On Thu, Nov 6, 2008 at 2:21 AM, Peter Schuller
<[EMAIL PROTECTED]> wrote:
>> I also found that my write cache was set to WriteThrough instead of
>> WriteBack, defeating the purpose of having a BBU and that my secondary
>> server apparently doesn't have a BBU on it. :-(
>
> Note also that several RAID controllers will periodically drop the
> write-back mode during battery capacity
From what I see in four OR-plan.txt you are doing too much "sequential scan".
Create some indexes for those tables using the fields that you use and it
may help you.
OBS: If you already have lots of indexes in your tables it may be a good
time for you to re-think your strategy because it's not working.
Richard Huxton writes:
> I'm guessing what you've got is a table that's not being vacuumed
> because you've had a transaction that's been open for weeks.
Or because no vacuuming at all is performed on this table (no
autovacuum and no explicit VACUUM on database or table).
--
Guillaume Cottence
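Both explanations are easy to check from psql; a minimal sketch against the
statistics views (xact_start appears in pg_stat_activity as of 8.3, so on 8.2
query_start is the nearest available column, and the one-day threshold is just
an example):

  -- Sessions whose transaction has been open long enough to block vacuum's cleanup
  SELECT procpid, usename, xact_start, current_query
    FROM pg_stat_activity
   WHERE xact_start < now() - interval '1 day'
   ORDER BY xact_start;

  -- When each table was last vacuumed, manually or by autovacuum
  SELECT relname, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
   ORDER BY relname;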
1. Don't email people directly to start a new thread (unless you have a
support contract with them of course).
2. Not much point in sending to two mailing lists simultaneously. You'll
just split your responses.
brahma tiwari wrote:
> Hi all
>
> My database server db01 is on a linux environment and the size of the base
> folder is increasing very fast unexpectedly.
Hi all
My database server db01 is on a linux environment and the size of the base folder
is increasing very fast unexpectedly (it is creating renamed files of 1 GB in the
base folder, like 1667234568.10).
Details as below:
path of the tablespace/base file:
/opt/appl/pgsql82/data/base/453447624/
[EMAIL PROTECTED] 4
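Those files are not renamed copies of anything; each relation is stored as
segments named after its relfilenode, with .1, .2, ... appended for every
further gigabyte, so 1667234568.10 is roughly the 11th gigabyte of a single
relation. A minimal sketch to see which relation that is (connect to the
database whose OID matches the directory name, 453447624 here):

  -- Map a base-directory file name back to the relation that owns it
  SELECT relname, relkind,
         pg_size_pretty(pg_relation_size(oid)) AS on_disk_size
    FROM pg_class
   WHERE relfilenode = 1667234568;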
My main point is that I can see this in many queries, many times. But
OK, I can present an exact example.
2008/11/5 Jeff Davis <[EMAIL PROTECTED]>
> On Wed, 2008-11-05 at 13:12 +0200, Віталій Тимчишин wrote:
> > For a long time now I have been seeing very poor OR performance in
> > postgres.
> > If
> I also found that my write cache was set to WriteThrough instead of
> WriteBack, defeating the purpose of having a BBU and that my secondary
> server apparently doesn't have a BBU on it. :-(
Note also that several RAID controllers will periodically drop the
write-back mode during battery capacity
Hello,
> At one point I envisioned making it smart enough to try and handle the
> scenario you describe--on an idle system, you may very well want to write
> out dirty and recently accessed buffers if there's nothing else going on.
> But such behavior is counter-productive on a busy system, whi
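For anyone who wants to experiment with it anyway, the 8.3 background writer
only has the LRU-driven knobs left, all in postgresql.conf; a minimal sketch
showing the defaults (listed for orientation, not as a tuning recommendation):

  bgwriter_delay = 200ms            # sleep between background writer rounds
  bgwriter_lru_maxpages = 100       # cap on buffers written per round
  bgwriter_lru_multiplier = 2.0     # scale writes to recent buffer demand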