On Wed, Feb 28, 2018 at 04:16:53PM -0600, Justin Pryzby wrote:
> On Wed, Feb 28, 2018 at 01:43:11PM -0800, Andres Freund wrote:
> > a significant number of times during investigations of bugs I wondered
> > whether running the cluster with various settings, or various tools
> > could've caused the issue at hand.
On 2018-03-07 23:34:37 -0500, Tom Lane wrote:
> Craig Ringer writes:
> > As I understand it, because we allow multiple Pg instances on a system, we
> > identify the small sysv shmem segment we use by the postmaster's pid. If
> > you remove the DirLockFile (postmaster.pid) you remove the interlock
On 8 March 2018 at 12:34, Tom Lane wrote:
> Craig Ringer writes:
> > As I understand it, because we allow multiple Pg instances on a system, we
> > identify the small sysv shmem segment we use by the postmaster's pid. If
> > you remove the DirLockFile (postmaster.pid) you remove the interlock
Craig Ringer writes:
> As I understand it, because we allow multiple Pg instances on a system, we
> identify the small sysv shmem segment we use by the postmaster's pid. If
> you remove the DirLockFile (postmaster.pid) you remove the interlock
> against starting a new postmaster. It'll think it's
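(For context, the interlock Craig describes works roughly as sketched below. This is a deliberately simplified, hypothetical illustration, not the actual PostgreSQL startup code -- the real logic lives around CreateLockFile() in miscinit.c and also checks the shared memory segment recorded in the lock file; the function name and data directory path here are made up.)

/*
 * interlock_sketch.c -- simplified illustration of the postmaster.pid
 * interlock described above.  NOT PostgreSQL source; names and error
 * handling are hypothetical and heavily condensed.
 */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

static int
another_postmaster_running(const char *datadir)
{
    char    path[1024];
    FILE   *f;
    long    old_pid = 0;

    snprintf(path, sizeof(path), "%s/postmaster.pid", datadir);

    f = fopen(path, "r");
    if (f == NULL)
        return 0;               /* no lock file => no interlock left at all */

    /* the first line of postmaster.pid is the old postmaster's PID */
    if (fscanf(f, "%ld", &old_pid) != 1)
        old_pid = 0;
    fclose(f);

    /*
     * The real check also inspects the shmem segment noted in the lock
     * file, but the essence is: is the recorded PID still alive?
     */
    if (old_pid > 0 && (kill((pid_t) old_pid, 0) == 0 || errno == EPERM))
        return 1;

    return 0;
}

int
main(void)
{
    if (another_postmaster_running("/var/lib/pgsql/data"))
    {
        fprintf(stderr, "postmaster.pid points at a live process, refusing to start\n");
        return 1;
    }
    /* with the lock file deleted, nothing stops starting over live backends */
    printf("no interlock found, starting up\n");
    return 0;
}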
On 8 March 2018 at 10:18, Andres Freund wrote:
>
>
> On March 7, 2018 5:51:29 PM PST, Craig Ringer wrote:
> >My favourite remains an organisation that kept "fixing" an issue by
> >kill -9'ing the postmaster and removing postmaster.pid to make it start up
> >again. Without killing all the leftover backends.
On March 7, 2018 5:51:29 PM PST, Craig Ringer wrote:
>My favourite remains an organisation that kept "fixing" an issue by
>kill -9'ing the postmaster and removing postmaster.pid to make it start up
>again. Without killing all the leftover backends. Of course, the system
>kept getting more unstable
On 8 March 2018 at 04:58, Robert Haas wrote:
> On Wed, Feb 28, 2018 at 8:03 PM, Craig Ringer wrote:
> > A huge +1 from me for the idea. I can't even count the number of black box
> > "WTF did you DO?!?" servers I've looked at, where bizarre behaviour has
> > turned out to be down to the user
On Wed, Feb 28, 2018 at 8:03 PM, Craig Ringer wrote:
> A huge +1 from me for the idea. I can't even count the number of black box
> "WTF did you DO?!?" servers I've looked at, where bizarre behaviour has
> turned out to be down to the user doing something very silly and not saying
> anything about
On Thu, Mar 01, 2018 at 09:12:18AM +0800, Craig Ringer wrote:
> On 1 March 2018 at 06:28, Justin Pryzby wrote:
> > The more fine grained these are the more useful they can be:
> >
> > Running with fsync=off is common advice while loading, so reporting that
> > "fsync=off at some point" is much les
On 1 March 2018 at 09:00, Justin Pryzby wrote:
>
> > > - started in single user mode or with system indices disabled?
> > why?
>
> Some of these I suggested just as a datapoint (or other brainstorms I couldn't
> immediately reject). A cluster where someone has UPDATED pg_* (even
> pg_statistic
On 1 March 2018 at 06:28, Justin Pryzby wrote:
> On Wed, Feb 28, 2018 at 02:18:12PM -0800, Andres Freund wrote:
> > On 2018-02-28 17:14:18 -0500, Peter Eisentraut wrote:
> > > I can see why you'd want that, but as a DBA, I don't necessarily want
> > > all of that recorded, especially in a quasi-permanent way.
On 1 March 2018 at 05:43, Andres Freund wrote:
> Hi,
>
> a significant number of times during investigations of bugs I wondered
> whether running the cluster with various settings, or various tools
> could've caused the issue at hand. Therefore I'd like to propose adding
> a 'tainted' field to pg_control
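(The proposal text is cut off in this preview, so here is a purely hypothetical C sketch of what such a field could look like. The flag names are invented from events mentioned elsewhere in the thread -- fsync=off, single-user mode, manual UPDATEs of pg_* catalogs, crash recovery -- and none of this is the wording of the actual proposal or patch.)

/*
 * Hypothetical sketch only.  The idea: a persistent bitmask in pg_control
 * where each "this happened at some point" event sets a bit that is never
 * cleared for the life of the cluster.
 */
#include <stdint.h>

typedef enum ClusterTaintFlags
{
    TAINT_RAN_WITH_FSYNC_OFF      = 1 << 0, /* server ran with fsync=off */
    TAINT_STARTED_SINGLE_USER     = 1 << 1, /* single-user mode was used */
    TAINT_SYSTEM_CATALOGS_UPDATED = 1 << 2, /* manual UPDATE of pg_* tables */
    TAINT_DID_CRASH_RECOVERY      = 1 << 3  /* crash recovery was performed */
} ClusterTaintFlags;

/* In this sketch the field would sit alongside the other pg_control data. */
typedef struct ControlFileDataSketch
{
    uint32_t    tainted;        /* OR of ClusterTaintFlags, never cleared */
    /* ... the real ControlFileData has many more fields ... */
} ControlFileDataSketch;

/* Recording a taint is a one-way operation on the control data. */
static void
MarkClusterTainted(ControlFileDataSketch *control, uint32_t flag)
{
    if ((control->tainted & flag) == 0)
    {
        control->tainted |= flag;
        /* the real thing would then rewrite pg_control on disk */
    }
}

A fixed-width bitmask like this also sidesteps the objection raised later in the thread that a full history (e.g. every pg_control version ever used on the cluster) would need arbitrarily much space in a fixed-size file.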
On Wed, Feb 28, 2018 at 02:23:19PM -0800, Andres Freund wrote:
> Hi,
>
> On 2018-02-28 16:16:53 -0600, Justin Pryzby wrote:
> > - did recovery (you could use "needed recovery" instead, but then there's the
> > question of how reliable that field would be);
> > + or: timestamp of most
On Wed, Feb 28, 2018 at 02:18:12PM -0800, Andres Freund wrote:
> On 2018-02-28 17:14:18 -0500, Peter Eisentraut wrote:
> > I can see why you'd want that, but as a DBA, I don't necessarily want
> > all of that recorded, especially in a quasi-permanent way.
>
> Huh? You're arguing that we should make it easier for DBAs to hide
> potential causes of corruption?
Hi,
On 2018-02-28 16:16:53 -0600, Justin Pryzby wrote:
Unfortunately your list seems to raise the bar to a place I don't see us
going soon :(
> - pg_control versions used on this cluster (hopefully a full list.. obviously
> not going back before PG11);
That needs arbitrarily much space, that'
Hi,
On 2018-02-28 17:14:18 -0500, Peter Eisentraut wrote:
> I can see why you'd want that, but as a DBA, I don't necessarily want
> all of that recorded, especially in a quasi-permanent way.
Huh? You're arguing that we should make it easier for DBAs to hide
potential causes of corruption? I fail
On Wed, Feb 28, 2018 at 01:43:11PM -0800, Andres Freund wrote:
> a significant number of times during investigations of bugs I wondered
> whether running the cluster with various settings, or various tools
> could've caused the issue at hand. Therefore I'd like to propose adding
> a 'tainted' field to pg_control
On 2018-02-28 23:13:44 +0100, Tomas Vondra wrote:
>
> On 02/28/2018 10:43 PM, Andres Freund wrote:
> > Hi,
> >
> > a significant number of times during investigations of bugs I wondered
> > whether running the cluster with various settings, or various tools
> > could've caused the issue at hand.
On 2/28/18 16:43, Andres Freund wrote:
> a significant number of times during investigations of bugs I wondered
> whether running the cluster with various settings, or various tools
> could've caused the issue at hand. Therefore I'd like to propose adding
> a 'tainted' field to pg_control, that co
On 02/28/2018 10:43 PM, Andres Freund wrote:
> Hi,
>
> a significant number of times during investigations of bugs I wondered
> whether running the cluster with various settings, or various tools
> could've caused the issue at hand. Therefore I'd like to propose adding
> a 'tainted' field to pg_control