Nope, my cvs tree is clean. I only applied those diffs since they are small.

On Fri, Apr 1, 2016 at 10:56 AM, Bob Beck <b...@obtuse.com> wrote:

> I would hazard a guess that if you are running a random diff, the
> problem is with the diff you are running - not those other things.
>
> On Fri, Apr 1, 2016 at 9:30 AM, Amit Kulkarni <amitk...@gmail.com> wrote:
> > I see that the writes are not being done to disk in the case of a
> > simple cvs update, and the machine locks up for a solid couple of
> > minutes afterwards as well. This happens in a dual-CPU config with
> > plenty of free memory, even with stefan, mpi and kettenis' recent
> > diffs. For a curious kernel reader, where could the bug(s) be? In
> > amap, uvm/buffer cache, rthreads???
> >
> > Thanks in advance
> >
> >
> > On Fri, Apr 1, 2016 at 9:06 AM, Bob Beck <b...@obtuse.com> wrote:
> >>
> >> I have more up to date versions of these patches around here.
> >>
> >> The problem with them is that, fundamentally, the WAPBL implementation
> >> as it stands assumes it may steal buffers from the buffer cache
> >> without limit and hold onto them indefinitely - and it assumes it can
> >> always get more buffers when it asks. While the patch may "work" in
> >> the "happy case" on many people's machines, as it sits today it is
> >> dangerous and can lock up your machine and corrupt things in
> >> low-memory situations.
> >>
> >> Basically, in order to progress, WAPBL (renamed "FFS Journalling"
> >> here) needs a mechanism added that allows it to be told "no, you
> >> can't have a buffer" and lets it deal with that correctly. The first
> >> part is done; the latter part is complex.
> >>
> >>
> >> On Sat, Mar 26, 2016 at 1:27 PM, Martijn Rijkeboer <mart...@bunix.org>
> >> wrote:
> >> > Hi,
> >> >
> >> > Just out of curiosity, what has happened with WAPBL? There were
> >> > some patches floating around on tech@ in the last months of 2015,
> >> > but then it became quiet. I'm not complaining, just curious.
> >> >
> >> > Kind regards,
> >> >
> >> >
> >> > Martijn Rijkeboer
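
To illustrate the mechanism Bob describes above, here is a minimal sketch
in C of what a buffer request that can be refused might look like from the
journal's side. This is purely hypothetical: bufcache_try_get(),
journal_commit_and_release() and journal_append() are invented names for
illustration, not the OpenBSD buffer cache or WAPBL interfaces.

/*
 * Hypothetical sketch only.  The point is the shape of the logic: the
 * journal asks the cache for a buffer, and when the cache says "no" it
 * must make progress with what it already holds instead of waiting or
 * stealing more memory.
 */
#include <stdio.h>
#include <stdlib.h>

struct buf {
	size_t size;
};

/* Assumed interface: may refuse and return NULL under memory pressure. */
static struct buf *
bufcache_try_get(size_t size)
{
	struct buf *bp = malloc(sizeof(*bp));	/* stand-in for the real cache */

	if (bp != NULL)
		bp->size = size;
	return bp;
}

/* Assumed journal operation: flush pending entries, give buffers back. */
static void
journal_commit_and_release(void)
{
}

/* Assumed journal operation: stand-in for queueing the buffer for a write. */
static int
journal_append(struct buf *bp)
{
	free(bp);
	return 0;
}

/*
 * The hard part Bob describes: on "no buffer", commit and release what
 * the journal already holds, retry once, and report failure to the
 * caller if memory is still unavailable.
 */
static int
journal_write(size_t size)
{
	struct buf *bp;

	bp = bufcache_try_get(size);
	if (bp == NULL) {
		journal_commit_and_release();
		bp = bufcache_try_get(size);
		if (bp == NULL)
			return -1;
	}
	return journal_append(bp);
}

int
main(void)
{
	printf("journal_write: %d\n", journal_write(8192));
	return 0;
}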
