On Thu, May 16, 2013 at 9:13 PM, Robert Haas wrote:
> On Thu, May 16, 2013 at 10:06 PM, Daniel Farina wrote:
>> Do you have a sketch about mechanism to not encounter that problem?
>
> I didn't until just now, but see my email to Peter. That idea might
> be all wet, but off-hand it seems like it might work...
>
>> However little it may matter, I would like t
On Thu, May 16, 2013 at 10:05 PM, Peter Geoghegan wrote:
>>> I don't think it's bad. I think that we shouldn't be paternalistic
>>> towards our users. If anyone enables a setting like zero_damaged_pages
>>> (or, say, wal_write_throttle) within their postgresql.conf
>>> indefinitely for no good rea
On Thu, May 16, 2013 at 5:43 PM, Robert Haas wrote:
> On Thu, May 16, 2013 at 2:42 PM, Peter Geoghegan wrote:
>> On Thu, May 16, 2013 at 11:16 AM, Robert Haas wrote:
>>> Well, I think it IS a Postgres precept that interrupts should get a
>>> timely response. You don't have to agree, but I think that's
>>> important.
>
> At times, like when the system is under really heavy load? Or at
> times, like depending on what the backend is doing? We can't do a
> whole lot about the fact that it's possible to beat a system to death
> so that, at the OS level, it stops
On Thu, May 16, 2013 at 2:42 PM, Peter Geoghegan wrote:
> On Thu, May 16, 2013 at 11:16 AM, Robert Haas wrote:
>> Well, I think it IS a Postgres precept that interrupts should get a
>> timely response. You don't have to agree, but I think that's
>> important.
>
> Well, yes, but the fact of the matter is that it is taking high single
> digit numbers of seconds t
On Wed, May 15, 2013 at 6:40 PM, Peter Geoghegan wrote:
> On Wed, May 15, 2013 at 3:46 AM, Robert Haas wrote:
>> One possible objection to this line of attack is that, IIUC, waits to
>> acquire a LWLock are non-interruptible. If someone tells PostgreSQL
>> to wait for some period of time before performing each WAL write,
>> other backends that grab the WALW
On Tue, May 14, 2013 at 12:23 AM, Daniel Farina wrote:
> On Mon, May 13, 2013 at 3:02 PM, Peter Geoghegan wrote:
>> Has anyone else thought about approaches to mitigating the problems
>> that arise when an archive_command continually fails, and the DBA must
>> manually clean up the mess?
>
> Notably, the most common problem in this vein suffered at Heroku has
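As context for the archive_command failure modes being discussed, the contract the documentation asks for is that the command refuse to overwrite an existing archive file and return nonzero on any failure, so the server keeps the segment around and retries instead of losing WAL. A small sketch of that contract, with every path and segment name invented for the demo so it runs against throwaway temp directories rather than a real cluster:

```shell
# Demonstration of the "fail rather than overwrite" contract for
# archive_command, using throwaway temp paths (all names here are
# invented for the example).  PostgreSQL expands %p to the segment's
# path and %f to its file name when it invokes the command.
ARCHIVE_DIR=$(mktemp -d)              # stands in for the real archive
WAL_SRC=$(mktemp)                     # stands in for %p
printf 'fake-wal-bytes' > "$WAL_SRC"
SEGMENT=000000010000000000000042      # stands in for %f

archive_one() {
    # Returning nonzero makes the server keep the segment and retry
    # later; silently overwriting could destroy an archived segment.
    if [ -f "$ARCHIVE_DIR/$2" ]; then
        echo "refusing to overwrite $2" >&2
        return 1
    fi
    cp "$1" "$ARCHIVE_DIR/$2"
}

archive_one "$WAL_SRC" "$SEGMENT" && echo "archived $SEGMENT"
archive_one "$WAL_SRC" "$SEGMENT" || echo "second attempt failed, as intended"
```

In a real setup the function body would live in a script invoked as something like `archive_command = 'archive_wal.sh %p %f'`; the point of the thread is what happens when that command keeps returning nonzero for hours.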
The documentation says of continuous archiving:
"While designing your archiving setup, consider what will happen if
the archive command fails repeatedly because some aspect requires
operator intervention or the archive runs out of space. For example,
this could occur if you write to tape without a
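One partial mitigation for the scenario the documentation warns about is simply noticing the failure early: each WAL segment still awaiting archival leaves a `.ready` marker in `pg_xlog/archive_status` (the 9.x-era directory name; it became `pg_wal` in PostgreSQL 10), so a steadily growing count of `.ready` files means archive_command is failing repeatedly. A sketch of such a check, run here against a fabricated directory layout so it is self-contained:

```shell
# Count .ready markers: each one is a WAL segment that archive_command
# has not yet archived successfully.  A fake archive_status layout is
# fabricated here so the check can run standalone; in production PGXLOG
# would point at $PGDATA/pg_xlog.
PGXLOG=$(mktemp -d)
mkdir "$PGXLOG/archive_status"
for seg in 0000000100000000000000A1 0000000100000000000000A2 \
           0000000100000000000000A3; do
    : > "$PGXLOG/archive_status/$seg.ready"    # awaiting archival
done
: > "$PGXLOG/archive_status/0000000100000000000000A0.done"  # archived

READY=$(ls "$PGXLOG/archive_status" | grep -c '\.ready$')
THRESHOLD=2                                    # example alert threshold
if [ "$READY" -gt "$THRESHOLD" ]; then
    echo "WARNING: $READY WAL segments awaiting archival"
fi
```

The threshold and alerting mechanism are placeholders; the useful signal is the trend in the count, since a healthy archiver keeps it near zero.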